January 26, 2019
On Fri, 25 Jan 2019 23:08:52 +0000, kinke wrote:

> On Friday, 25 January 2019 at 19:08:55 UTC, Walter Bright wrote:
>> On 1/25/2019 2:57 AM, kinke wrote:
>>> On Thursday, 24 January 2019 at 23:59:30 UTC, Walter Bright wrote:
>>>> On 1/24/2019 1:03 PM, kinke wrote:
>>>>> (bool __gate = false;) , ((A __pfx = a();)) , ((B __pfy =
>>>>> b();)) , __gate = true , f(__pfx, __pfy);
>>>>
>>>> There must be an individual gate for each of __pfx and __pfy.
>>>> With the rewrite above, if b() throws then __pfx won't be destructed.
>>> 
>>> There is no individual gate, there's just one to rule the caller-destruction of all temporaries.
>>
>> What happens, then, when b() throws?
> 
> `__pfx` goes out of scope, and its dtor expression (cleanup/finally) is run as part of stack unwinding. Rewritten as block statement:

And nested calls are serialized as you'd expect:

int foo(ref S i, ref S j);
S bar(ref S i, ref S j);
S someRvalue(int i);

foo(
    bar(someRvalue(1), someRvalue(2)),
    someRvalue(4));

// translates to something like:
{
    bool __gate1 = false;
    S __tmp1 = void;
    S __tmp2 = void;
    S __tmp3 = void;
    __tmp1 = someRvalue(1);
    try
    {
        __tmp2 = someRvalue(2);
        __gate1 = true;
        __tmp3 = bar(__tmp1, __tmp2);
    }
    finally
    {
        if (!__gate1) __tmp1.__xdtor();
    }
    S __tmp4 = void;
    bool __gate2 = false;
    try
    {
        __tmp4 = someRvalue(4);
        __gate2 = true;
        return foo(__tmp3, __tmp4);
    }
    finally
    {
        if (!__gate2)
        {
            __tmp3.__xdtor();
        }
    }
}
January 25, 2019
On Fri, Jan 25, 2019 at 4:20 PM Neia Neutuladh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Fri, 25 Jan 2019 23:08:52 +0000, kinke wrote:
>
> > On Friday, 25 January 2019 at 19:08:55 UTC, Walter Bright wrote:
> >> On 1/25/2019 2:57 AM, kinke wrote:
> >>> On Thursday, 24 January 2019 at 23:59:30 UTC, Walter Bright wrote:
> >>>> On 1/24/2019 1:03 PM, kinke wrote:
> >>>>> (bool __gate = false;) , ((A __pfx = a();)) , ((B __pfy =
> >>>>> b();)) , __gate = true , f(__pfx, __pfy);
> >>>>
> >>>> There must be an individual gate for each of __pfx and __pfy.
> >>>> With the rewrite above, if b() throws then __pfx won't be destructed.
> >>>
> >>> There is no individual gate, there's just one to rule the caller-destruction of all temporaries.
> >>
> >> What happens, then, when b() throws?
> >
> > `__pfx` goes out of scope, and its dtor expression (cleanup/finally) is run as part of stack unwinding. Rewritten as block statement:
>
> And nested calls are serialized as you'd expect:
>
> int foo(ref S i, ref S j);
> S bar(ref S i, ref S j);
> S someRvalue(int i);
>
> foo(
>     bar(someRvalue(1), someRvalue(2)),
>     someRvalue(4));
>
> // translates to something like:
> {
>     bool __gate1 = false;
>     S __tmp1 = void;
>     S __tmp2 = void;
>     S __tmp3 = void;
>     __tmp1 = someRvalue(1);
>     try
>     {
>         __tmp2 = someRvalue(2);
>         __gate1 = true;
>         __tmp3 = bar(__tmp1, __tmp2);
>     }
>     finally
>     {
>         if (!__gate1) __tmp1.__xdtor();
>     }
>     S __tmp4 = void;
>     bool __gate2 = false;
>     try
>     {
>         __tmp4 = someRvalue(4);
>         __gate2 = true;
>         return foo(__tmp3, __tmp4);
>     }
>     finally
>     {
>         if (!__gate2)
>         {
>             __tmp3.__xdtor();
>         }
>     }
> }

Is this fine?

Given the above example:

int foo(ref S i, ref S j);
S bar(ref S i, ref S j);
S someRvalue(int i);

foo(
    bar(someRvalue(1), someRvalue(2)),
    someRvalue(4));

===>

{
  S __tmp0 = someRvalue(1);
  S __tmp1 = someRvalue(2);
  S __tmp2 = bar(__tmp0, __tmp1);
  S __tmp3 = someRvalue(4);
  foo(__tmp2, __tmp3);
}

Removing the `void` stuff and expanding such that the declaration + initialisation happen at the appropriate moments; any function can throw normally, and the unwinding works naturally?
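For reference, a self-contained sketch of the premise (the struct, values, and output here are only illustrative, not from the DIP): locals that have already been constructed are destructed during unwinding, in reverse order of construction, with no explicit gates needed:

import std.stdio : writeln;

struct S
{
    int i;
    ~this() { writeln("dtor ", i); }
}

S make(int i)
{
    if (i == 4) throw new Exception("boom");
    return S(i);
}

void main()
{
    try
    {
        S a = make(1);
        S b = make(2);
        S c = make(4);   // throws; `a` and `b` are destructed during unwinding
    }
    catch (Exception e)
    {
        writeln(e.msg);
    }
    // prints: "dtor 2", "dtor 1", then "boom"
}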
January 26, 2019
On Fri, 25 Jan 2019 18:14:56 -0800, Manu wrote:
> Removing the `void` stuff and expanding such that the declaration + initialisation happen at the appropriate moments; any function can throw normally, and the unwinding works naturally?

The contention was that, if the arguments are constructed properly, ownership is given to the called function, which is responsible for calling destructors. But if the argument construction fails, the caller is responsible for calling destructors.

I'm not sure what the point of that was. The called function doesn't own its parameters and shouldn't ever call destructors. So now I'm confused.
January 25, 2019
On Fri, Jan 25, 2019 at 6:50 PM Neia Neutuladh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Fri, 25 Jan 2019 18:14:56 -0800, Manu wrote:
> > Removing the `void` stuff and expanding such that the declaration + initialisation happen at the appropriate moments; any function can throw normally, and the unwinding works naturally?
>
> The contention was that, if the arguments are constructed properly, ownership is given to the called function, which is responsible for calling destructors.

No, that was never the intent, and it's certainly not written anywhere. Ownership is assigned to the calling scope that we introduce surrounding the statement. That's where the temporaries are declared; I didn't consider that ownership unclear.
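To illustrate the intent (a rough sketch with made-up names, not text from the DIP): for a call like `fun(makeS())`, the temporary lives in, and is destroyed by, the caller's introduced scope:

struct S { int i; ~this() {} }
S makeS() { return S(1); }
void fun(ref S x) { x.i = 2; }

void main()
{
    // fun(makeS()); lowers to something like the following, where `tmp`
    // stands in for the compiler's hidden temporary:
    {
        S tmp = makeS();   // the temporary belongs to this introduced caller scope
        fun(tmp);          // the callee only ever sees a ref; it never destructs tmp
    }                      // tmp destructed here, by the caller
}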

> I'm not sure what the point of that was. The called function doesn't own its parameters and shouldn't ever call destructors. So now I'm confused.

Correct. You're not confused. The callee does NOT own ref parameters.
January 25, 2019
On Fri, Jan 25, 2019 at 4:00 AM Walter Bright via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On 1/24/2019 11:53 PM, Nicholas Wilson wrote:
> > That the conflation of pass by reference to avoid copying and mutation is not only deliberate but also mitigated by @disable.
>
> The first oddity about @disable is it is attached to the foo(int), not the
> foo(ref int). If I wanted to know if foo(ref int) takes rvalue references,

And right here, I can see our fundamental difference of perspective...

I never said anything about 'rvalue references', and I never meant
anything like that; at least, not in the C++ sense, which you seem to
be alluding to.
In C++, rval references are syntactically distinct and identifiable as
such, for the purposes of implementing move semantics.

If we want to talk about "rvalue references", then we need to be
having a *completely* different conversation.
That said, I'm not sure why you've raised this matter, since it's not
written anywhere in the DIP.

What I'm talking about is "not-rvalue-references accepting rvalues", which if you want to transpose into C++ terms, is like `const T&`.
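Concretely (an illustrative sketch, not wording from the DIP): this is about an ordinary `ref` parameter accepting an rvalue by binding it to a temporary, much as a C++ `const T&` parameter accepts one, and nothing like C++'s `T&&`:

void fun(ref int x) { x += 1; }

void main()
{
    int i;
    fun(i);     // fine today: an lvalue binds to ref
    //fun(10);  // error today; under the DIP a temporary int would be created,
                // passed by ref, and destroyed at the end of the statement
                // (much as a C++ `const int&` parameter accepts `10`)
}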

> There are indeed
> unlikable things about the C++ rules, but the DIP needs to pay more attention to
> how C++ does this, and justify why D differs. Particularly because D will likely
> have to have some mechanism of ABI compatibility with C++ functions that take
> rvalue references.

I'm not paying attention to C++ T&& rules, because this DIP has nothing to do with T&&, and there is no allusion anywhere in it to connecting this to a T&& mechanism. Again, I find that to be a very interesting topic of conversation, but it has nothing to do with this DIP.

> [...]
> Should `s` be promoted to an int temporary, then pass the temporary by
> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. the
> implicit conversion is a type paint and involves no temporary)?

As per the DIP; yes, that is the point.
The text you seek is written: "[...]. The user should not experience
edge cases, or differences in functionality when calling fun(int x) vs
fun(ref int x)."
That text appears at least twice throughout the document as the stated goal.

Don't accept naked ref unless you want these semantics. There is a suite of tools on offer for cases where this behaviour is undesirable. Naked `ref` doesn't do anything particularly interesting in the language today that isn't semantically *identical* to using a pointer and adding a single '&' character at the callsite. This DIP attempts to make `ref` interesting and useful as a feature in its own right. In the discussions while designing this thing, I've come to appreciate the UFCS advantages as the most compelling opportunity, among all the other things that burn me practically every time I write D code.
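A quick illustration of that claim about naked `ref` (the function names here are mine):

void byRef(ref int x) { x = 10; }
void byPtr(int* x)    { *x = 10; }

void main()
{
    int v;
    byRef(v);    // by ref
    byPtr(&v);   // identical semantics; the only difference is one '&' at the call site

    // and the UFCS case the DIP would enable (does not compile today):
    //(v + 1).byRef();   // error: an rvalue cannot bind to ref
}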

> The DIP should not invent its own syntax

I removed it, and replaced it with simpler code (that I think is exception-correct) in my prior post here. It's also a super-trivial amendment.

> It should never have gotten this far without giving a precise explanation of how
> exception safety is achieved when faced with multiple parameters.

I apologise. I've never used exceptions in any code I've ever written,
so it's pretty easy for me to overlook that detail.
Nobody else that did the community reviews flagged it, and that
includes you and Andrei, as members of the community.

> All that criticism aside, I'd like to see rvalue references in D. But the DIP needs significant work.

This is *NOT* an "rvalue-references" DIP; this is a "references" DIP. If you want to see an rvalue references DIP, I agree that's a completely different development, and it's also interesting to me... I had *absolutely no idea* that an rvalue-references DIP was welcome. I thought D was somewhat aggressively proud of the fact that we don't have rvalue-references... apparently I took the wrong impression.

That said, this remains infinitely more important to me than an rvalue-references DIP. It's been killing me for 10 years, and I'm personally yet to feel hindered by our lack of rvalue-reference support.
January 25, 2019
On Fri, Jan 25, 2019 at 7:44 PM Manu <turkeyman@gmail.com> wrote:
>
> On Fri, Jan 25, 2019 at 4:00 AM Walter Bright via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
> >
> > The DIP should not invent its own syntax
>
> I removed it, and replaced it with simpler code (that I think is exception-correct) in my prior post here. It's also a super-trivial amendment.

Incidentally, the reason I invented a syntax in this DIP was that we
have no initialisation syntax in D, despite the language clearly
having the ability to initialise values (when they're declared); we
have an amazingly complex and awkward library implementation of
`emplace`, which is pretty embarrassing really.
The fact that I needed to invent a syntax to perform an initialisation
is a very serious problem in its own right.
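For context, a sketch of what that library route looks like today (the struct and value are illustrative; `emplace` itself is real, from std.conv):

import std.conv : emplace;

struct S
{
    int i;
    this(int i) { this.i = i; }
}

void main()
{
    S s = void;        // raw storage; nothing constructed yet
    emplace(&s, 42);   // the library call that stands in for "initialise s as S(42)"
    assert(s.i == 42);
}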

But forget about that; I removed the need to express initialisation from the rewrite.
January 25, 2019
On 1/25/2019 7:44 PM, Manu wrote:
> I never said anything about 'rvalue references',

The DIP mentions them several times in the "forum threads" section. I see you want to distinguish the DIP from that; I recommend a section clearing that up.

However, my points about the serious problems with @disable syntax remain.

A section comparing with the C++ solution is necessary as well, more than the one sentence dismissal. For example, how C++ deals with the:

    void foo(const int x);
    void foo(const int& x);

situation needs to be understood and compared. Failing to understand it can lead to serious oversights. For example, C++ doesn't require an @disable syntax to make it work.


>> [...]
>> Should `s` be promoted to an int temporary, then pass the temporary by
>> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. the
>> implicit conversion is a type paint and involves no temporary)?
> As per the DIP; yes, that is the point.
> The text you seek is written: "[...]. The user should not experience
> edge cases, or differences in functionality when calling fun(int x) vs
> fun(ref int x)."

I don't see how that addresses implicit type conversion at all.


> Don't accept naked ref unless you want these semantics. There is a
> suite of tools offered to use where this behaviour is undesirable.
> Naked `ref` doesn't do anything particularly interesting in the
> language today that's not *identical* semantically to using a pointer
> and adding a single '&' character at the callsite.

It's not good enough. The DIP needs to specifically address what happens with implicit conversions. The reader should not be left wondering about what is implied. I often read a spec and think yeah, yeah, of course it must be that way. But it is spelled out in the spec, and reading it gives me confidence that I'm understanding the semantics, and it gives me confidence that whoever wrote the spec understood it.

(Of course, writing out the implications sometimes causes the writer to realize he didn't actually understand it at all.)

Furthermore, D has these match levels:

    1. exact
    2. const
    3. conversion
    4. no match

If there are two or more matches at the same level, the decision is made based on partial ordering. How does adding the new ref/value overloading fit into that?


>> It should never have gotten this far without giving a precise explanation of how
>> exception safety is achieved when faced with multiple parameters.
> 
> I apologise. I've never used exceptions in any code I've ever written,
> so it's pretty easy for me to overlook that detail.

It's so, so easy to get that wrong. C++ benefits from decades of compiler bug fixes with that.


> Nobody else that did the community reviews flagged it,

That's unfortunately right. Note that 'alias this' was approved and implemented, and then multiple serious conceptual problems have appeared with it. I don't want a repeat of that.


> and that includes you and Andrei, as members of the community.

The idea was that Andrei & I wouldn't get too involved in the DIPs until they are vetted by the community. I.e. delegation.


> That said, this remains infinitely more important to me than an
> rvalue-references DIP. It's been killing me for 10 years, and I'm
> personally yet to feel hindered by our lack of rvalue-reference
> support.

I look forward to a much improved DIP from you (and anyone else who wishes to help you out with the work!).

January 26, 2019
On Saturday, 26 January 2019 at 06:15:22 UTC, Walter Bright wrote:
> On 1/25/2019 7:44 PM, Manu wrote:
>> I never said anything about 'rvalue references',
>
> The DIP mentions them several times in the "forum threads" section. I see you want to distinguish the DIP from that; I recommend a section clearing that up.
>
> However, my points about the serious problems with @disable syntax remain.
>
> A section comparing with the C++ solution is necessary as well, more than the one sentence dismissal. For example, how C++ deals with the:
>
>     void foo(const int x);
>     void foo(const int& x);
>
> situation needs to be understood and compared. Failing to understand it can lead to serious oversights. For example, C++ doesn't require an @disable syntax to make it work.
>
>
>>> [...]
>>> Should `s` be promoted to an int temporary, then pass the temporary by
>>> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. the
>>> implicit conversion is a type paint and involves no temporary)?
>> As per the DIP; yes, that is the point.
>> The text you seek is written: "[...]. The user should not experience
>> edge cases, or differences in functionality when calling fun(int x) vs
>> fun(ref int x)."
>
> I don't see how that addresses implicit type conversion at all.

Anything that can be implicitly converted for a call to foo(int) can likewise be implicitly converted to an int temporary, with a ref to that temporary passed to foo(ref int). No rules change in this regard. If you don't see how this addresses type conversion, perhaps a code sample might help? The one that was given used short:

void foo(ref int);
void bar(int);

bar( short(10) ); // is ok
foo( short(10) ); // expected to be ok short->int ; ref to temp passed to foo

Just as bar(int) can be passed a short(10), foo(ref int) can be passed a reference to the temporary that was created as well.
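Spelled out, the lowering being described is roughly the following (a sketch; `tmp` stands in for the compiler's hidden temporary):

void foo(ref int x) { x += 1; }

void main()
{
    // foo( short(10) ); under the DIP conceptually becomes:
    {
        int tmp = short(10);   // the usual implicit short -> int conversion
        foo(tmp);              // and it is that temporary which is passed by ref
    }                          // tmp's lifetime ends with the introduced scope
}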


>> Don't accept naked ref unless you want these semantics. There is a
>> suite of tools offered to use where this behaviour is undesirable.
>> Naked `ref` doesn't do anything particularly interesting in the
>> language today that's not *identical* semantically to using a pointer
>> and adding a single '&' character at the callsite.
>
> It's not good enough. The DIP needs to specifically address what happens with implicit conversions. The reader should not be left wondering about what is implied. I often read a spec and think yeah, yeah, of course it must be that way. But it is spelled out in the spec, and reading it gives me confidence that I'm understanding the semantics, and it gives me confidence that whoever wrote the spec understood it.
>
> (Of course, writing out the implications sometimes causes the writer to realize he didn't actually understand it at all.)
>
> Furthermore, D has these match levels:
>
>     1. exact
>     2. const
>     3. conversion
>     4. no match
>
> If there are two or more matches at the same level, the decision is made based on partial ordering. How does adding the new ref/value overloading fit into that?

The DIP goes over this, though not in a lot of detail. All the same rules apply as with the current implementation. Where there would currently be a compiler error from trying to pass an rvalue, the value would instead be forwarded via a temporary.

Effectively what is being implemented is the following (for type matching only):

   void foo( ref int );
   void foo( int value ) { foo( value ); }

Anything that would have been passed to foo(int) is passed to foo(ref int) as a reference to a temporary instead. No rules are changed in this regard for matching; all the same rules apply (as stated in the DIP). It's pretty clear, unless you can give a specific case where this doesn't hold? D is pretty strict about ensuring rvalues aren't passed to refs, and that's what makes this relatively simple to implement without changing the matching rules.

January 26, 2019
On 1/26/2019 8:28 AM, Rubn wrote:
> [...]

The point is, the DIP needs to spell this out in an organized and complete fashion, like any proper spec does.

We all want a better specified language, let's make it happen.
January 28, 2019
On 1/24/19 2:18 AM, Mike Parker wrote:
> Walter and Andrei have declined to accept DIP 1016, "ref T accepts r-values", on the grounds that it has two fundamental flaws that would open holes in the language. They are not opposed to the feature in principle and suggested that a proposal that closes those holes and covers all the bases will have a higher chance of getting accepted.
> 
> You can read a summary of the Formal Assessment at the bottom of the document:
> 
> https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1016.md

Hi everyone, I've followed the responses to this, some conveying frustration about the decision and some about the review process itself. As the person who carried out a significant part of the review, allow me to share a few thoughts of possible interest.

* Fundamentally: a DIP should stand on its own and be judged on its own merit, regardless of rhetoric surrounding it, unstated assumptions, or trends of opinion in the forums. There has been a bit of material in this forum discussion that should have been argued properly as a part of the DIP itself.

* The misinterpretation of the rewrite (expression -> statement vs. statement -> statement) is mine, apologies. (It does not influence our decision and should not be construed as an essential aspect of the review.) The mistake was caused by the informality of the DIP, which shows rewrites as a few simplistic examples instead of a general rewrite rule. Function calls are expressions, so I naturally assumed the path would be to start with the function call expression. Formulating a general rule as a statement rewrite is possible but not easy and fraught with peril, as discussion in this thread has shown. I very much recommend going the expression route (e.g. with the help of lambdas) because that makes it very easy to expand to arbitrarily complex expressions involving function calls. Clarifying what temporaries get names and when in a complex expression is considerably more difficult (probably not impossible but why suffer).
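One possible concrete reading of that lambda-based expression route (a sketch with made-up names, not Andrei's wording): wrap the call in an immediately-invoked function literal whose by-value parameter owns the temporary, so the rewrite stays an expression and composes for nested calls:

struct S { int i; ~this() {} }
S makeS() { return S(1); }
int fun(ref S x) { return x.i; }

void main()
{
    // fun(makeS()) rewritten as a single expression:
    auto r = ((S tmp) => fun(tmp))(makeS());
    assert(r == 1);
    // `tmp` is an lvalue inside the lambda, so it binds to `ref`, and its
    // lifetime is bounded by the lambda call; since the whole rewrite is
    // itself an expression, it nests for arbitrarily complex call expressions.
}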

* Arguments of the form: "You say DIP 1016 is bad, but look at how bad DIP XYZ is!" are great when directed at the poor quality of DIP XYZ. They are NOT good arguments in favor of DIP 1016.

* Arguments of the form "Functions that take ref parameters just for changing them are really niche anyway" should be properly made in the DIP, not in the forums and assumed without being stated in the DIP. Again, what's being evaluated is "DIP", not "DIP + surrounding rhetoric". A good argument would be, e.g., analyzing a number of libraries and assessing that, say, 91% of ref uses are for efficiency purposes, 3% are unclear, and only 6% are for side effects. All preexisting code using ref parameters written under the current rule assumes that only lvalues will be bound to them. A subset of these functions take parameters by ref only in order to change them. The DIP should explain why that's not a problem, or if it is one, why it is a small problem, etc. My point is - the DIP should _approach_ the matter and build an argument about it. One more example from preexisting code for illustration, from the standard library:

// in the allocators API
bool expand(ref void[] b, size_t delta);
bool reallocate(ref void[] b, size_t s);

These primitives modify their first argument in essential ways. The intent is to fill b with the new slice resulting from the expansion/reallocation. Under the current rules, calling these primitives is cumbersome, but usefully so, because the processing done requires extra care if typed data is being reallocated. Under DIP 1016, a call with any T[] will silently "succeed" by converting the slice to void[], passing the temporary to expand/reallocate, then returning as if all is well - yet the original slice has not been changed. The DIP should create a salient argument regarding these situations (and not only this example, but the entire class). It could perhaps argue that:

- Such code is bad to start with, and should not have been written.
- Such code is so rare, we can take the hit. We then have a recommendation for library writers on how to amend their codebase (use @disable or some other mechanisms).
- The advantages greatly outweigh this problem.
- The bugs caused are minor and easy to find.
- ...

Point being: the matter, again, should be _addressed_ by the DIP.
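For concreteness, a sketch of the failure mode above (the allocator instance `alloc` is hypothetical; the `expand` signature is the one shown earlier):

void main()
{
    int[] a = new int[](10);

    // Today this does not compile, because an int[] lvalue will not bind to
    // `ref void[]`; the caller must convert explicitly and copy the result back:
    //     void[] b = a;
    //     if (alloc.expand(b, 64)) a = cast(int[]) b;

    // Under DIP 1016 as written, `alloc.expand(a, 64)` would compile by
    // materialising a void[] temporary from `a`; expand would grow that
    // temporary, and `a` itself would be left unchanged: a silent no-op.
}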

* Regarding our recommendation that the proposal be resubmitted as a distinct DIP as opposed to a patch on the existing DIP: this was not embracing bureaucracy. Instead, we considered that the DIP was too poor to be easily modified into a strong proposal, and recommended that it be rewritten simply because that would be easier and would engender a stronger DIP.

* Regarding the argument "why not make this an iterative process where concerns are raised and incrementally addressed?": We modeled the DIP process after similar processes - conference papers, journal papers, proposals in other languages. There is a proposal by one or more proposers, perfected by a community review, and then submitted for formal review. This encourages building a strong proposal - as strong as can be - prior to submission. Watering that down to a negotiation between the proposers and the reviewers leads to a "worst acceptable proposal" state of affairs, in which proposers are incentivized to submit the least-effort proposal and reactively change it as issues are raised by reviewers. As anyone who has submitted a conference paper knows, that's not how it works, and even if the process is highly frustrating (yes, reviewers in so many cases misunderstand parts of the paper...) it does lead to strong work. There are cases in which papers are "accepted with amends" - those are strong submissions that have a few issues that are easily fixable. With apologies, we do not consider this DIP to be in that category.

This result was frustrating and disheartening on our side, too: a stronger DIP should have resulted after all these years. I encourage interested people to make a DIP that is scientifically argued, clearly formalized, and provides a thorough analysis of the consequences of the proposed design.


Hope this helps,

Andrei