April 04, 2019
On Thursday, 4 April 2019 at 04:25:59 UTC, Andrei Alexandrescu wrote:
> On 4/4/19 12:24 AM, Andrei Alexandrescu wrote:
>> On 4/3/19 11:09 PM, Nicholas Wilson wrote:
>>> I don't think we are going to be able to do this without iterating on the design and closing holes and nuisances that we discover. I'm not saying that it is a bad idea to design up front as much as we can, but we shouldn't waste time getting hung up on design when implementation can give gains to users and guidance to the design.
>> 
>> I don't think this works for programming language design. In fact I'm positive it doesn't. It's the way we've done things so far.
>
> Well I'm exaggerating. I mean to say every time we did it that way,

Examples please?

> the result hasn't been good.

e.g. DIP1000 was bad, not because it was iterated upon to fix the holes in it, but because the changes were not communicated properly and not documented. I suggest we don't make those same mistakes again.


April 03, 2019
On Wed, Apr 3, 2019 at 7:10 PM Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On 3/31/19 6:25 PM, Manu wrote:
> > On Sat, Mar 30, 2019 at 5:45 PM Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> >>
> >> On 3/30/19 2:49 PM, Jonathan M Davis wrote:
> >>> This would also be a great opportunity to fix some of the issues with shared in druntime
> >>
> >> The problem here is, of course, that nobody knows exactly what shared is supposed to do. Not even me. Not even Walter.
> >
> > As an immediate stop-gap, shared must have *no read or write access*
> > to data members. Simple. Make that change. Right now.
> > Then shared will at least be useful and not embarrassing, and you can
> > continue to argue about what shared means while I get to work.
> >
> >> One monumental piece of work would be DIP 1021 "Define the semantics of shared". Then people who build with -dip1021 know what to do and what to expect. Now that would patch one of the bigger holes mentioned in the other post.
> >
> > Sure. But in the meantime, fix the objective bug with the current semantics where you can read/write un-protected data freely. Right now. Please for the love of god.
>
> Incidentally Walter and I discussed in the early days the idea that "shared" would offer no access whatsoever aside from a few special functions in druntime such as "atomicRead" and "atomicWrite". Of course, that raised the question how those functions could be implemented - the possible solution being some casts for typing and asm for the needed memory barriers.

Casting away shared seems to be the only reasonable option with respect to this general design for shared.

This applies just fine to atomics; casting away shared effectively
asserts that you have created a thread-local context for some window
of time in which you can perform a thread-safe interaction.
Atomic operations have an effectively zero-length execution window, so
from that perspective, it's correct that you are able to cast away
shared in order to perform a single atomic operation on an int. The
assertion that you have a thread-local lease on the int for the
duration of an atomic operation naturally holds by definition.

> In the end we got scared that there was no precedent in other languages, and we could not predict whether that would have been good or bad. The result is the current semantics, which should be a felony in the 48 contiguous US states.

Okay... so you're effectively saying you had a possibly-good idea, but
instead of trying it, you did something else that's just straight-up
broken?
So, like, how about we actually try the thing before we decide it
didn't work? We have nothing to lose!

> This does not work as a two-stage process, though the "stop the bleeding first, then come up with the new solution" metaphor seems attractive.

You're saying a design was proposed, and it's *almost* implemented in the language. It's been there longer than I've been around, yet we still haven't actually tried the design as it was almost implemented so long ago. It's really weird that we're declaring the design a failure before trying it out! We've had, like... 12 years to try it. Why haven't we?

We're not bleeding from a failed design. If there is a wound in place, it's the fact that the intended design was only half-implemented, and then left that way for over a decade.

Making this change would give us something to work with. If it turns
out it's not a workable solution after all, then it would be good to
know before we eject it into space!
A better design may emerge from understanding how this design failed.
But it hasn't failed yet, because we haven't actually tried it yet...
it's just been sitting in limbo while people scratch their heads and
try to understand what happened here.

That process is surely more useful than the current situation, which is that `shared` means nothing, and nobody quite knows what it's for other than a sort of self-documentation.

I understand you may prefer a strong design proposal, but nobody has
moved the bar in the decade I've been waiting.
We're clearly not gaining ground in the way you prefer, so let's just
actually implement the design that was almost implemented 12(+?) years
ago, and see how terrible it actually is?

I have a fairly strong sense of what will emerge; I suspect it'll be workable and useful.

> The main issue being that when we break code that people got to
> work, we need to offer the alternative as well. Another being that the
> exact kind of things we disable/enable may be dependent on the ultimate
> solution.

I'm fairly sure we're not 'breaking' anything. Any code that breaks
with this change is almost certainly already broken.
The remedy would be to add a cast to their code, and it will behave
exactly as it does now; probably still broken. But it's really not
very disruptive.

> This would be a large effort requiring a strong team. Walter, yourself, and I would be helpful participants but I think between the three of us we don't have the theoretical chops to pull this off. At least I know I don't. We need the likes of Timon Gehr, Johan Engelen, and David Nadlinger (whom I cc'd just in case).

Go for it... but like, maybe first, how about we actually try out the
design you came up with 12 years ago before we declare it a failure
and spend another few years trying to do something else?
We have no evidence it's a failure, only that the half-implementation
of the original design is a failure, and mostly because the
half-semantics just don't mean anything, not because the original
idea is broken.

April 03, 2019
On Wed, Apr 3, 2019 at 9:25 PM Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On 4/3/19 11:09 PM, Nicholas Wilson wrote:
> > On Thursday, 4 April 2019 at 02:05:15 UTC, Andrei Alexandrescu wrote:
> >> On 3/31/19 6:25 PM, Manu wrote:
> >>> Sure. But in the meantime, fix the objective bug with the current semantics where you can read/write un-protected data freely. Right now. Please for the love of god
> >>
> >> This does not work as a two-stage process, though the "stop the bleeding first, then come up with the new solution" metaphor seems attractive. The main issue being that when we break code that people got to work, we need to offer the alternative as well. Another being that the exact kind of things we disable/enable may be dependent on the ultimate solution.
> >
> > Well whatever happens I'll be gobsmacked if it's not behind an opt-in
> > switch.
> > With that in mind, if Manu gets use out of the stopgap of disabling
> > read/write access, then I think we should implement that ASAP and then
> > listen to whatever he complains about next ;)
> >
> >> This would be a large effort requiring a strong team. Walter, yourself, and I would be helpful participants but I think between the three of us we don't have the theoretical chops to pull this off. At least I know I don't. We need the likes of Timon Gehr, Johan Engelen, and David Nadlinger (whom I cc'd just in case).
> >
> > I don't think we are going to be able to do this without iterating on the design and closing holes and nuisances that we discover. I'm not saying that it is a bad idea to design up front as much as we can, but we shouldn't waste time getting hung up on design when implementation can give gains to users and guidance to the design.
>
> I don't think this works for programming language design. In fact I'm positive it doesn't. It's the way we've done things so far.

You say your original design worked how I suggest (I'm not surprised; it's the only thing that makes sense), so... close the circuit! Maybe it was a success, and nobody ever had the chance to demonstrate it. We've waited so long; let's try it out!

April 04, 2019
On 4/4/19 1:50 AM, Nicholas Wilson wrote:
> On Thursday, 4 April 2019 at 04:25:59 UTC, Andrei Alexandrescu wrote:
>> On 4/4/19 12:24 AM, Andrei Alexandrescu wrote:
>>> On 4/3/19 11:09 PM, Nicholas Wilson wrote:
> >>>> I don't think we are going to be able to do this without iterating on the design and closing holes and nuisances that we discover. I'm not saying that it is a bad idea to design up front as much as we can, but we shouldn't waste time getting hung up on design when implementation can give gains to users and guidance to the design.
>>>
>>> I don't think this works for programming language design. In fact I'm positive it doesn't. It's the way we've done things so far.
>>
>> Well I'm exaggerating. I mean to say every time we did it that way,
> 
> Examples please?

Shared itself, the postblit, lazy, properties, alias this - are all "first-order thinking" ideas that are not bad, but fail to take into consideration second-order interactions and their consequences.

(A good read: https://fs.blog/2016/04/second-order-thinking/)

Language design is all about second-order thinking.

>> the result hasn't been good.
> 
> e.g. DIP1000 was bad, not because it was iterated upon to fix the holes in it, but because the changes were not communicated properly and not documented. I suggest we don't make those same mistakes again.

DIP1000 is actually an example of second-order thinking. Walter pored over it for months before writing and implementing it.

So are the recently-introduced copy constructors. We had what we thought was a workable design at probably a dozen points during the process. All had large flaws.

Incrementalism is an anti-pattern in language design.

April 04, 2019
On Thursday, 4 April 2019 at 11:10:09 UTC, Andrei Alexandrescu wrote:
> On 4/4/19 1:50 AM, Nicholas Wilson wrote:
>> e.g. DIP1000 was bad, not because it was iterated upon to fix the holes in it, but because the changes were not communicated properly and not documented. I suggest we don't make those same mistakes again.
>
> DIP1000 is actually an example of second-order thinking. Walter pored over it for months before writing and implementing it.

That it was a good idea does not excuse how sloppily the implementation procedure was handled. I note also that it underwent significant changes post-implementation. That is the iteration I'm talking about (just handled better, i.e. with docs & community engagement).

> Incrementalism is an anti-pattern in language design.

I'm talking about design iteration, not language design by incremental feature addition.
Incremental feature addition is difficult, if not impossible, to undo if it turns out to have been a bad idea. Iterative design, by definition, does not suffer from that problem.


April 04, 2019
On 4/3/19 8:33 AM, Guillaume Piolat wrote:
> I don't know how long you've been around, but this community seems to me indeed unreasonable and often disrespectful (I've been doing it too at times).

I also think we have patterns of negativity. In the recent days I have received a number of private messages also mentioning that as an ongoing difficulty (you know who you are; thank you). I'm not very worried about some heat in casual discussions in the forums, though all of us could use more civility. I am, however, worried about negativity "in production", i.e. on github and in DIP-related and other consequential exchanges in the forums.

My current hypothesis is it has to do with people's lack of time.

Consider modeling a "specialist short on time". For a while I tried to model lack of time as a sort of decrease in IQ. For sure that's happened to me. Whenever I gave myself only a few minutes to make a decision on a pull request, that would definitely count as a net decrease in competence.

However, that model is not very accurate. For example, a specialist short on time would still be able to point out a mistake that would escape a non-specialist.

So a better model is, a specialist short on time in a design or code review has a more negative bent than one with time. This is because software is quintessentially constructive. In math, proving a negative is difficult because the burden is to prove something can't exist. In software, proving a positive is more difficult because it must be constructed. What is easy in code and design reviews is small "proofs" that an existing proposal has a problem. It's also time effective - a threading specialist may see a five-line race pattern immediately. A programming languages expert would see how qualifiers are bad for postblits in a minute.

Most of all, pointing out a negative gives the reviewer undeniable expertise high ground and a sense of doing the right thing. I pointed out a problem, the reasoning goes, thus alerting people that a mistake could be made. The reviewer gets a good jolt of satisfaction and moves on with their day.

The fallacy within is akin to that of broken windows. The first-order reasoning goes: a dangerous mistake is being averted, and correctness, or at least some notion of consistency (people in software design love consistency), is being preserved.

The second-order effects, however, are numerous and pernicious. The main problem is, again, the constructive nature of software: often, a pull request or some other proposal embedding a mistake originates from a genuine need. People need to use malloc and free in pure code. They need reference-counted collections that are also immutable. Then, our archetypal specialist-short-on-time reviewer points out how that is problematic and considers the matter done with. "The problem is solved," - the subtext reads almost triumphantly - "it can't be done". To a non-invested specialist interested in correctness, this is a perfectly valid outcome. To the poor sap who wanted to get work done, that's none too helpful.

Contagion is another second-order effect. People in the community, would-be contributors and reviewers, see specialists who are mostly negatively biased. Clearly they know what they're talking about, so they are admired and set the standard everybody is aspiring to. Soon enough, it becomes "comme il faut" to point out weaknesses in proposed code instead of doing the much more difficult and time-consuming work of helping to improve it, proposing better alternatives, and such. So all reviewers turn negative, whether they are specialists or not.

As soon as I'd put up a pull request that had any informational entropy to it (i.e. not 100% obvious and boring), I might as well put my ear to the ground to hear the sound of shovels digging. The Vickers getting mounted. Barbed wire getting rolled. Reviewers were getting ready for trench warfare. It has sapped all of my creative streak. My students, enthusiastic but scared soldiers, hesitate when they hear my whistle. "They'll kill me if I put this pull request out for review," I heard more than once, put one way or another.

More specialists would help, but many specialists with a short time each would be less than helpful. We need steady energy, not just the occasional jolt of power. Electrical grid, not lightning. Great Work done by invested specialists has the true potential to turn negativity around.

April 04, 2019
On 3/28/19 1:05 PM, Andrei Alexandrescu wrote:
>>
>> Part of that is we've been cagey about defining copy and assignment semantics of ranges in a simple and comprehensive manner. It seems to me going with these is the right thing:
>>
>> * Input ranges are copyable and assignable, and have pointer semantics (all copies refer to the same underlying position, and advancing one advances all others).
>>
>> * Forward ranges are copyable and assignable, but distinct copies refer to distinct positions in the range such that advancing one does not advance the others.
>>
>> * We don't support other semantics.
> 
> Forget to add - no more save(). We just use some sort of flag and simple introspection.

I say "Hooray!" to that!

I've long felt that 'save()' seemed a rather strange duplication of assignment/copying (or perhaps more accurately, as Walter pointed out, copy construction).

That seemed ugly enough in and of itself, but to make matters much worse, it in turn left the semantics for passing and assigning ranges underdefined and, in effect, needing to be treated by algorithms as undefined behavior. And THAT meant that nearly any occurrence of failing to pass ranges by ref (surprisingly common in Phobos, at least at the time I noticed this) was pretty much guaranteed to be a bug factory. (In my experience, this was especially problematic when working with output ranges.)
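As a thought experiment, here is one shape that "no more save(), just a flag and simple introspection" could take. None of these names exist in Phobos; the `independentCopies` flag and the `hasIndependentCopies` trait are invented for illustration. Forward-ness becomes ordinary copyability plus a declared promise that copies advance independently:

```d
import std.range.primitives : isInputRange;

// Hypothetical replacement for save(): an input range is "forward"
// if it is copyable and declares that copies are distinct positions.
template hasIndependentCopies(R)
{
    static if (isInputRange!R && __traits(hasMember, R, "independentCopies"))
        enum hasIndependentCopies = R.independentCopies
            && __traits(compiles, { R a; R b = a; });
    else
        enum hasIndependentCopies = false;
}

struct Iota
{
    int front, end;
    enum independentCopies = true; // copies advance independently

    bool empty() const { return front >= end; }
    void popFront() { ++front; }
}

static assert(hasIndependentCopies!Iota);
```

Plain copying then does what save() used to do, and algorithms can check the trait instead of calling a duplicate primitive.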

April 04, 2019
On 3/28/19 2:19 PM, H. S. Teoh wrote:
> 
> IOW, all ranges will be required to be structs?  And if we need dynamic
> behaviour it will have to be a struct that overrides opAssign to copy
> the state of the underlying range (for forward ranges)?
> 

IIUC, you'd want to use a copy constructor for that, not opAssign.

April 04, 2019
On 3/28/19 2:10 PM, Andrei Alexandrescu wrote:
> 
> Yah, for such we need I think a more primitive notion of UnbufferedRange. An unbuffered range of T has only one API:
> 
> bool fetchNext(ref T);
> 
> Note that the user provides all state, which is interesting. The primitives fills the object and moves to the next one. Returns true on success. Returns persistently false at end of range even if called multiple times. An optional interface (for efficiency's sake) would be:
> 
> size_t fetchNextN(scope T[]);
> 
> which fills a full array and returns the number of elements filled.

Something like that is definitely needed. I remember needing to augment the existing range concept with something like it when I was dealing with crypto hashes. And obviously it'd be very important for I/O streams. Having such a thing standardized in Phobos would be great.

Steve's input on this may be very important, given his work in iopipe.
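The quoted primitive is easy to sketch. `fetchNext`/`fetchNextN` follow the signatures in the post; the concrete range here (a simple countdown) is invented for illustration. Note that the caller owns all element storage:

```d
struct CountDown
{
    private int n;

    // Fills `item` and advances; true on success, persistently false
    // once exhausted, even if called again.
    bool fetchNext(ref int item)
    {
        if (n <= 0) return false;
        item = n--;
        return true;
    }

    // Optional bulk interface: fills the slice, returns elements written.
    size_t fetchNextN(scope int[] buf)
    {
        size_t i;
        while (i < buf.length && fetchNext(buf[i]))
            ++i;
        return i;
    }
}

unittest
{
    auto r = CountDown(3);
    int x;
    int[] got;
    while (r.fetchNext(x)) got ~= x;
    assert(got == [3, 2, 1]);
    assert(!r.fetchNext(x)); // still false at end of range
}
```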

> Then we'd have input ranges, which have a buffer of at least one element, i.e. today's input ranges. Input ranges that are not forward ranges are liable to reuse their buffers, so after a call to front(), a call to popFront() may overwrite the current front. This is because by construction input ranges that are not forward ranges do not iterate objects in memory, but instead they transfer data from somewhere else into memory, chunkwise.

This sounds reasonable, but I would REALLY like it if we could come up with some way to actually *prevent* the user from accessing a stored .front after the next call to .popFront, instead of relying on "programming by convention", because that runs in complete opposition to D philosophy. A static check would be fantastic if possible, or at least an optional run-time check analogous to bounds checking. Not sure how feasible either would be, though.
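For the run-time flavour, one hypothetical shape (every name here is invented, and it is only a sketch): popFront bumps a generation counter, and a front handle taken earlier asserts on access, much like a bounds check:

```d
import std.range.primitives : empty, front, popFront;

struct Checked(R)
{
    R inner;
    size_t gen; // bumped on every popFront

    static struct FrontHandle
    {
        private size_t* rangeGen; // the range's current generation
        private size_t taken;     // generation when this front was taken
        private typeof(R.init.front) value;

        auto get()
        {
            assert(*rangeGen == taken, "front accessed after popFront");
            return value;
        }
        alias get this;
    }

    bool empty() { return inner.empty; }
    auto front() { return FrontHandle(&gen, gen, inner.front); }
    void popFront() { inner.popFront(); ++gen; }
}

unittest
{
    auto r = Checked!(int[])([1, 2, 3]);
    auto f = r.front;
    assert(f.get == 1);
    r.popFront();
    // Accessing f.get here would now fail the assertion, because the
    // underlying buffer may have been reused.
}
```

The handle holds a pointer into the range, so this only works while the range stays put; a production version would need more care, but it shows the kind of check that could be made opt-in.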

April 04, 2019
On 3/28/19 2:30 PM, Andrei Alexandrescu wrote:
> 
> We've been worrying too much about changing things.

Yup. *Very* glad to hear this coming from you of all people ;)


> We have a language with virtually no global scope - a versioning engineer's dream. Yet we are coy to tap into that power. 

Yes, precisely! Hear hear!