March 30, 2019
On 3/29/19 9:10 PM, H. S. Teoh wrote:
> It's clear that Walter & Andrei excel at technical expertise, but
> lack in the people skills department.

As Walter tells me on the phone when it comes to the community, "the main problem with our community is it's poorly led" - a very humble, humbling, and positive take.

Speaking for myself, the poor job I've done at leading our community is so obvious to me that admitting it has almost become a cop-out. It's second to none in terms of failure per unit of time invested, so in a way simply admitting I'm just not good at it is the easy way to explain an overwhelmingly complex reality.

This had me revisit the past for similarities and differences. And that inevitably takes me to C++.

In the C++ community the outcome has been quite the opposite. After Modern C++ Design (of which a friend went so far as to say it "showed Bjarne what C++ really was about"), I have inadvertently enjoyed enormous impact and inspirational power, whilst remaining aloof to it. One forum post was enough to prompt Hans Boehm and Herb Sutter to lead the standardization of threads in C++11; off-the-cuff ideas and conversations became N4259 or P0323R1; Mojo, an afternoon's worth of work, may have provided at least part of the impetus for rvalue references. To this day I receive credit for things I'd all but forgotten about. I could burp in a keynote and someone would create a great C++ template out of it. I knew things had come full circle when I got this question following my "The Next Big Thing" talk: "Have you heard of std::conditional?" (meant as a clever comeback to my plea for "static if"). I realized that both the person asking and I had forgotten I'd invented the blessed thing.

And this is most puzzling, because I haven't given C++ the time of day in a long while, so in terms of at-least-somewhat-related-to-leadership impact per unit of effort, my participation in C++ has been quite productive. (And for a good part it's been unwitting and almost unwilling, à la Stepan Trofimovich Verkhovensky in Dostoyevsky's "Demons" - hopefully in a positive way though.)

None of this is to deny that arguably the very best of my work is to be found in D, many miles beyond what I could ever do for C++. But even setting that aside: with me being a common part of the inequation, I can be simplified away. Which leads to the most puzzling question: why have the outcomes across the C++ and D communities been so different?

The answer to this question may unlock the potential of our community.
March 31, 2019
On 31.03.19 01:14, Andrei Alexandrescu wrote:
> Great, so ag0aep6g you're good to go with your PR. I'd approved it yesterday.

Thanks, guys. Glad we managed to work this out.

Just to be clear, I can't merge PRs. It sounds like you might expect me to do that.
March 30, 2019
On 3/30/19 8:19 PM, ag0aep6g wrote:
> On 31.03.19 01:14, Andrei Alexandrescu wrote:
>> Great, so ag0aep6g you're good to go with your PR. I'd approved it yesterday.
> 
> Thanks, guys. Glad we managed to work this out.
> 
> Just to be clear, I can't merge PRs. It sounds like you might expect me to do that.

Did so. Thanks for the work.
March 30, 2019
On 3/30/19 2:49 PM, Jonathan M Davis wrote:
> This would also be a great opportunity to fix some of the issues with shared
> in druntime

The problem here is, of course, that nobody knows exactly what shared is supposed to do. Not even me. Not even Walter.

One monumental piece of work would be DIP 1021 "Define the semantics of shared". Then people who build with -dip1021 know what to do and what to expect. Now that would patch one of the bigger holes mentioned in the other post.
March 31, 2019
On Sunday, 31 March 2019 at 00:18:29 UTC, Andrei Alexandrescu wrote:
> None of this is to deny that arguably the very best of my work is to be found in D, many miles beyond what I could ever do for C++. But even setting that aside: with me being a common part of the inequation, I can be simplified away.

That would assume that your output (not input) to the two communities is equal and, more importantly, separable from the outputs of others. The second assumption is definitely not true.

> Which leads to the most puzzling question: why have the outcomes across the C++ and D communities been so different?

Leadership and vision (or rather the lack thereof) are two of the most critical issues holding D back. Forget quality for the moment; the gap can be well approximated by a simple game of numbers:

D:
    Walter and Andrei
    No vision document for I've forgotten how long
    No regular direction-steering meetings with users (modulo the one ~5 months ago)
    One conference a year
    A few local and regional get-togethers
    DLF

C++:
    Herb, Marshall, Bjarne, ...
    Multiple direction documents / roadmaps for language, library, ecosystem, HPC, ...
    C++ standards committees: 1 week every 3 months
    I don't even know how many conferences per year
    Who knows how many local gatherings
    C++ alliance / C++ standards body

And I'll be blunt: much of your recent leadership has actually been gatekeeping, especially w.r.t. refactoring.

How to remedy this?

Well _I'm_ starting with:
    The DConf AGM (draft agenda to be published soon™).
    DLF quarterly meetings, especially if we can get them to coincide with the various regional quarterly gatherings.
    Greater (corporate) participation in DLF processes and vision outreach.

What are you going to do?
March 31, 2019
On Sunday, 31 March 2019 at 00:41:15 UTC, Andrei Alexandrescu wrote:
> On 3/30/19 2:49 PM, Jonathan M Davis wrote:
>> This would also be a great opportunity to fix some of the issues with shared
>> in druntime
>
> The problem here is, of course, that nobody knows exactly what shared is supposed to do. Not even me. Not even Walter.

If that is the case, then you should not be so hostile to suggestions *cough* Manu *cough* to improve that (maybe that was more Walter than you; I don't remember).

> One monumental piece of work would be DIP 1021 "Define the semantics of shared". Then people who build with -dip1021 know what to do and what to expect. Now that would patch one of the bigger holes mentioned in the other post.

It's in the vision section for the DConf AGM as:
shared (and @safe and shared)

March 30, 2019
On 3/30/2019 6:51 PM, Nicholas Wilson wrote:
> And I'll be blunt: much of your recent leadership has actually been gatekeeping, especially w.r.t. refactoring.

I know we're having differences about what constitutes a good refactoring and what doesn't. I hope that between us at DConf, with the aid of some suds, we can reach a consensus on this.
March 31, 2019
On Saturday, March 30, 2019 6:41:15 PM MDT Andrei Alexandrescu via Digitalmars-d wrote:
> On 3/30/19 2:49 PM, Jonathan M Davis wrote:
> > This would also be a great opportunity to fix some of the issues with shared in druntime
>
> The problem here is, of course, that nobody knows exactly what shared is supposed to do. Not even me. Not even Walter.
>
> One monumental piece of work would be DIP 1021 "Define the semantics of shared". Then people who build with -dip1021 know what to do and what to expect. Now that would patch one of the bigger holes mentioned in the other post.

I confess that I find the amount of confusion over shared to be confusing, though maybe I'm missing something. Yes, it has some details that need to be worked out, but I would have thought that the basics would be well understood by now. For shared to work, it has to prevent code that isn't thread-safe - which means either making any operation that isn't guaranteed to be thread-safe illegal and/or making the compiler insert code that guarantees thread-safety. The latter is very hard if not impossible, if nothing else because it would require that the compiler actually understand the threading and synchronization mechanisms being used in a given situation (e.g. associating a mutex with a shared variable and then somehow guaranteeing that it's always locked and unlocked at the appropriate times).

So, basically, any operations that aren't guaranteed to be thread-safe need to be illegal for shared, and to actually read or modify a shared object, you have to either use atomics, or use whatever synchronization mechanism you want (e.g. a mutex), cast away shared while the object is protected, operate on the object as thread-local, make sure that no thread-local references to it exist when you're done, and then release the mutex. It's exactly what you'd be doing in C++ code, except that you have to worry about casting away shared while actually operating on the object, because the type system is preventing you from shooting yourself in the foot by reading or writing shared objects directly - that's not thread-safe without synchronization mechanisms that aren't part of the type system.
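
In code, that pattern looks roughly like the minimal sketch below (the names are made up, and the association between the mutex and the data it guards is a convention the programmer maintains, not something the type system enforces):

    import core.sync.mutex : Mutex;

    shared int counter;          // the shared data
    __gshared Mutex counterLock; // by convention, guards `counter`

    shared static this()
    {
        counterLock = new Mutex;
    }

    void increment()
    {
        counterLock.lock();
        scope (exit) counterLock.unlock();

        // While the mutex is held, cast away shared and operate on the
        // data as if it were thread-local. No thread-local reference may
        // escape this critical section.
        auto p = cast(int*) &counter;
        ++*p;
    }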

It seems to be confusing for many people, because they expect to actually be able to directly operate on shared objects, but you can't do that and guarantee any kind of thread-safety unless you're calling stuff that takes care of the protection for you (e.g. a type could contain a mutex and lock it in its member functions, thereby encapsulating all of the casting away of shared - which is basically what synchronized classes were supposed to do, just at the outermost layer only, whereas you can do more than that if you're doing it manually and write the code correctly). But shared types in general aren't necessarily going to have any kind of synchronization mechanism built in (e.g. shared(int*) certainly won't).
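
As a sketch of that encapsulation (SafeCounter is a hypothetical type, not something in druntime or Phobos; it assumes the shared overloads of Mutex.lock/unlock in core.sync.mutex - failing that, one more cast on the mutex does the job):

    import core.sync.mutex : Mutex;

    struct SafeCounter
    {
        private int value;
        private Mutex mtx;

        void initialize() shared
        {
            mtx = new shared Mutex();
        }

        void increment() shared
        {
            mtx.lock();
            scope (exit) mtx.unlock();
            // The mutex is held, so it's safe to view the data as
            // thread-local inside the critical section; the cast never
            // leaks out of the member function.
            ++(cast(SafeCounter*) &this).value;
        }

        int get() shared
        {
            mtx.lock();
            scope (exit) mtx.unlock();
            return (cast(SafeCounter*) &this).value;
        }
    }

Callers just declare a shared SafeCounter, call initialize() once, and then use increment() and get() from any thread without ever writing a cast themselves.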

The primary problem that I see with shared as things stand is that it still allows operations which aren't guaranteed to be thread-safe (e.g. copying). It may need some tweaks beyond that, but as far as I can tell, it mostly works as-is. However, it's much harder to use than it should be because the stuff in core.sync has only been partially made to work properly with shared.

I expect that when you get down to all of the nitty-gritty details of making operations that aren't guaranteed to be thread-safe illegal, there could be some hairy things that need to be sorted out, but we've already made _some_ of them illegal, and the addition of copy constructors to the language actually fixes one of the big hurdles, which is making it possible for copying an object to be thread-safe (by having its copy constructor be shared and handle the mutex or atomics or whatnot internally). So, by no means am I claiming that we have it all figured out, but I would have thought that it would primarily be an issue of figuring out how to correctly make operations that aren't guaranteed to be thread-safe illegal (just like finishing @safe by making operations @system when the compiler can't guarantee that they're memory-safe). Some of the details could turn out to be nasty, but I would have thought that it would just be a matter of working through them and that, as things stand, the basic design of shared works.
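
A sketch of that copy-constructor point (Flag is a hypothetical type, and this assumes the recently accepted copy-constructor feature with qualified copy constructors):

    import core.atomic : atomicLoad, atomicStore;

    struct Flag
    {
        private int value;

        // Plain copy for thread-local instances.
        this(ref Flag rhs)
        {
            value = rhs.value;
        }

        // Shared copy constructor: copying a shared Flag goes through
        // atomics, so the copy itself is thread-safe.
        this(ref shared Flag rhs) shared
        {
            atomicStore(value, atomicLoad(rhs.value));
        }
    }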

Maybe I'm missing something here, but it seems to me that it _should_ be pretty straightforward to just move forward from the idea that operations on shared objects have to be thread-safe or be illegal. That may involve more casting than would be ideal, but I don't see how we can avoid that without the language understanding enough to do the casting for you, and the result is basically what you get in C++ - except that the compiler is preventing you from shooting yourself in the foot outside of the @system code where you're handling the casting away of shared. So, it's pretty much the same situation as @safe vs @system in the sense that shared code disallows operations whose safety the compiler can't guarantee, and you have to do @system stuff to guarantee the safety yourself in the cases where you need it.
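
To make the analogy concrete, the cast can be confined to one small @trusted helper (hypothetical, not in druntime); the caller's unchecked obligation is to call it only while the data's lock is held:

    // Strips shared from an int that the caller has already protected
    // with the appropriate mutex. The cast itself is @system in spirit;
    // @trusted records that the caller takes on that responsibility.
    @trusted ref int assumeUnshared(return ref shared int x)
    {
        return *cast(int*) &x;
    }

With something like that, the locking code stays ordinary, and the amount of code a reviewer has to audit shrinks to the helper and its call sites.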

Regardless of the details of what we want to do with shared though, core.sync doesn't handle it properly. Mostly, it doesn't use shared at all, and when it does, it's hacked on. So, at some point here, we really need to replace what's there with a v2 of some sort. And until we do, no matter how well shared works on its own, it's going to be seriously hampered, because the constructs in core.sync are core to writing threaded code.

- Jonathan M Davis



March 31, 2019
On Saturday, March 30, 2019 1:38:46 PM MDT ag0aep6g via Digitalmars-d wrote:
> On 30.03.19 19:32, Jonathan M Davis wrote:
> > RefRange is trying to force reference semantics, which then fundamentally
> > doesn't fit with forward ranges. It sort of half fits right now, because
> > of save, but as with forward ranges that are reference types, it causes
> > problems, and code frequently isn't going to work with it without extra
> > work.
> >
> > As such, I really don't think that RefRange makes sense fundamentally.
>
> You haven't spelled it out, but I suppose you also think that classes don't make sense as forward ranges. And Andrei has stated something to the effect that they wouldn't be supported in his redesign.
>
> That is fine for a range redesign. But with the definitions we have right now, classes can be forward ranges, and Phobos should work with them. And if Phobos works with classes, it can work with RefRange (if we cut out opAssign).
>
> If RefRange really is fundamentally at odds with current ranges, I'd be interested in an example where it causes trouble that isn't due to opAssign (or opSlice() which also shouldn't be there). Maybe I just fail to see the problem.
>
> One issue I'm aware of is that RefRange does a GC allocation on `save`. That isn't pretty, but I wouldn't call it fatal.
>
> I've said before that "we can cut out the bad parts" of RefRange, referring to opAssign, but I think I was wrong. Deprecating assignment would be so limiting that we might as well deprecate the whole thing.
>
> Then again, Walter has made (limited) efforts to support ranges that are non-assignable due to const fields. Nobody acknowledges that those exact changes also work to accommodate RefRange, but they do.

Fundamentally, forward ranges are essentially value types. They've just had their copy operation moved to save in order to accommodate classes. They have to be copyable to work (even if that copying is done via save), and any function that requires a forward range _will_ copy it via save - otherwise it wouldn't require a forward range. Once the range is saved, RefRange is pretty useless, because whatever is done to the copy doesn't affect the original.
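
A small sketch of that last sentence (the function name is made up; refRange and popFrontN come from std.range):

    import std.range;

    void sketch()
    {
        auto slice = [1, 2, 3, 4, 5];
        auto r = refRange(&slice);

        // Anything requiring a forward range is entitled to do this:
        auto copy = r.save;  // allocates a copy of the wrapped slice
        copy.popFrontN(3);   // ...and then do all its work on the copy.

        // The original slice that RefRange was supposed to advance
        // never sees any of it.
        assert(slice.length == 5);
    }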

And to use RefRange, you're basically depending on implementation details of a particular algorithm and when it chooses to call save and what exactly it does with the original before and after calling save - something that is not part of the function's API or any guarantee that it makes. And because the actual semantics of copying a range depend on how it's implemented, algorithms can't rely on whether copying a range will result in an independent copy or not and are free to rearrange when they do or don't copy a range so long as they don't use the original range after it's been copied. They're also free to change when save is called so long as the function's result is the same. It's also assumed that the function can do whatever it pleases with the original so long as it doesn't violate the range API, since in general, you can't use a range again after it's been copied (because doing so would not work the same with all range types).

RefRange, on the other hand, is basically trying to get access to the results of the first part of the algorithm prior to the call to save (and possibly some of what's done to the original after the call to save). It's purposefully setting up the code to reuse a range in circumstances where you can't reuse a range in generic code. The code isn't necessarily generic, which does change things, but if the algorithm you're passing RefRange to is generic, then it wasn't written with RefRange in mind, and there will be no expectation that the caller will do anything with the range that was passed in. If anything, the expectation is that they won't do anything with it except maybe assign the result of the function to it to reuse the variable.

Basically, RefRange is trying to fight how forward ranges work, and while that works in corner cases, in general, it's a mess. And the odds of code that uses RefRange breaking due to changes to algorithms that would otherwise be fine are pretty high, because code using RefRange is depending on the implementation details of any functions that RefRange is passed to.

So, while RefRange might be okay in code that's written specifically for use with RefRange, in general, it's a terrible idea. And I really regret that I got it into Phobos. The fact that I did just shows how much less I understood about ranges at the time.

- Jonathan M Davis



March 31, 2019
On 3/30/19 10:08 PM, Nicholas Wilson wrote:
>> One monumental piece of work would be DIP 1021 "Define the semantics of shared". Then people who build with -dip1021 know what to do and what to expect. Now that would patch one of the bigger holes mentioned in the other post.
> 
> It's in the vision section for the DConf AGM as:
> shared (and @safe and shared)

The necessity to work on shared was also present in the January 2015 vision document (https://wiki.dlang.org/Vision/2015H1): "Nail down fuzzily-defined areas of the language (e.g. shared semantics, @property)."

Writing it down doesn't get it done.

Defining shared properly would take a team with a programming languages expert, a threading expert, and an application expert. (Some roles may be realized within the same person.) I know people within our community with the required expertise. But I don't know any who'd also have the time to embark on this.

I got burned out on writing vision documents (Walter was never a fan, so I did them all) because it was difficult to gauge their impact. Contributors asked for a vision document. So we started one. Then contributors continued doing what they were doing.