January 28
On 1/24/19 3:01 PM, kinke wrote:
> On Thursday, 24 January 2019 at 09:49:14 UTC, Manu wrote:
>> We discussed and concluded that one mechanism to mitigate this issue
>> was already readily available, and it's just that 'out' gains a much
>> greater sense of identity (which is actually a positive side-effect if
>> you ask me!).
>> You have a stronger motivation to use 'out' appropriately, because it
>> can issue compile errors if you accidentally supply an rvalue.
> 
> `out` with current semantics cannot be used as drop-in replacement for shared in-/output ref params, as `out` params are default-initialized on entry. Ignoring backwards compatibility for a second, I think getting rid of that would actually be beneficial (most args are probably already default-initialized by the callee in the line above the call...) - and I'd prefer an explicitly required `out` at the call site (C# style), to make the side effect clearly visible.
> 
> I'd have otherwise proposed a `@noRVal` param UDA, but redefining `out` is too tempting indeed. ;)

It seems to me that a proposal adding an "@rvalue" attribute in function signatures to each parameter that would accept either an rvalue or an lvalue would be easy to argue for.

- No exposing existing APIs to wrong uses
- The function's writer makes the decision ("I'm fine with this function taking an rvalue")
- Appears in the function's documentation
- Syntax is light and localized where it belongs
- Scales well with number of parameters
- Transparent to callers

Whether existing keyword combinations ("in", "out", "ref" etc) could be used is a secondary point.

The advantage is there's a simple and clear path forward for API definition and use.


Andrei
January 28
On 1/28/19 1:00 PM, Andrei Alexandrescu wrote:
> [...]

One more thought.

The main danger is restricted to a specific conversion: lvalue of type T is converted to ref of type U. That way both the caller and the function writer believe the value gets updated, when in fact it doesn't. Consider:

real modf(real x, ref real i);

Stores integral part in i, returns the fractional part. At this point there are two liabilities:

1. User passes the wrong parameter type:

double integral;
double frac = modf(x, integral);
// oops, integral is always NaN

The call silently converts integral from double to real and passes the resulting temporary into the function. The temporary is filled in and lost, leaving the user's value unchanged.

2. The API gets changed:

// Fine, let's use double
real modf(real x, ref double i);

At this point all correct callers are silently broken - everybody who correctly used a real for the integral part now has their call broken (real implicitly converts to a double temporary, and the change does not propagate to the user's value).

(If the example looks familiar it may be because of https://dlang.org/library/std/math/modf.html.)

So it seems that the real problem is that the participants wrongly believe an lvalue is updated.

But let's say the caller genuinely doesn't care about the integral part. Expressing that is awkward:

real unused;
double frac = modf(x, unused);

That code isn't any better or less dangerous than:

double frac = modf(x, double());

Here the user willingly created an unnamed temporary of type double. Given that there's no doubt the user is not interested in that value after the call, the compiler could (under the proposed semantics) allow the conversion of the unnamed temporary to ref.

TL;DR: it could be argued that the only dangerous conversions are lvalue -> temp rvalue -> ref, so only disable those. The conversion rvalue -> temp rvalue -> ref is not dangerous because the starting value on the caller side could not be inspected after the call anyway.


Andrei
January 28
On 1/28/19 2:58 PM, Andrei Alexandrescu wrote:
> TL;DR: it could be argued that the only dangerous conversions are lvalue -> temp rvalue -> ref, so only disable those. The conversion rvalue -> temp rvalue -> ref is not dangerous because the starting value on the caller side could not be inspected after the call anyway.

I agree with you. It's one thing for the caller to pass in an rvalue that can clearly no longer be accessed. It's another thing to pass in an lvalue and have it "update" a temporary instead.

I already see this kind of bug all the time with alias this.

-Steve
January 28
On 1/28/19 5:23 PM, Steven Schveighoffer wrote:
> I already see this kind of bug all the time with alias this.

Can you please post more detail? It may be of relevance to future work.
January 28
On 1/28/19 9:15 PM, Andrei Alexandrescu wrote:
> On 1/28/19 5:23 PM, Steven Schveighoffer wrote:
>> I already see this kind of bug all the time with alias this.
> 
> Can you please post more detail? It may be of relevance to future work.

Any time you have an alias this, you can get confused when calling a function: you expect the argument to be passed as the original type, but the alias this is used instead. In cases where both are seemingly accepted by overloads, it can be confusing which overload is used. Sometimes it's as simple as the overload you were expecting to be called not compiling, or being deselected by constraints. But the code still compiles and runs; it just does something different from what you expected.

I meant that the effect is similar to what you were noting.

-Steve
January 28
On Mon, Jan 28, 2019 at 12:00 PM Andrei Alexandrescu via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> [...]
>
> TL;DR: it could be argued that the only dangerous conversions are lvalue -> temp rvalue -> ref, so only disable those. The conversion rvalue -> temp rvalue -> ref is not dangerous because the starting value on the caller side could not be inspected after the call anyway.

I started reading this post, and I was compelled to reply with this
same response, and then I realised you got there yourself.
I understand your concern, and it has actually been discussed lightly,
but regardless, you'll find that the issue you describe is not
suggested anywhere in this DIP.
This DIP is about passing rvalues to ref... so the issue you describe,
passing lvalues to ref, does not apply here.
There is no suggestion to change lvalue rules anywhere in this DIP.
January 28
On Fri, Jan 25, 2019 at 10:20 PM Walter Bright via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On 1/25/2019 7:44 PM, Manu wrote:
> > I never said anything about 'rvalue references',
>
> The DIP mentions them several times in the "forum threads" section. I see you want to distinguish the DIP from that; I recommend a section clearing that up.
>
> However, my points about the serious problems with @disable syntax remain.

I think the `@disable` semantic is correct; I understand your
criticism that you have to search for the negative to understand the
restriction, but that perspective presumably arises from an expectation
that inclusion be stated explicitly, which is the opposite of the intent.
The goal is to state exclusion: we are *adding* restrictions (ie,
removing potential calls) from the default, more-inclusive behaviour,
and from that perspective, `@disable` is in the proper place.

> A section comparing with the C++ solution is necessary as well, more than the one sentence dismissal. For example, how C++ deals with the:
>
>      void foo(const int x);
>      void foo(const int& x);
>
> situation needs to be understood and compared. Failing to understand it can lead to serious oversights. For example, C++ doesn't require an @disable syntax to make it work.

C++ doesn't require a `@disable` semantic; it just works as described in this DIP.

Eg:

```c++
void fun(const int& x) {}
void test()
{
    fun(10);
    fun(short(10)); // <- no problem!
}
```

It's the dlang critics of this functionality that demand explicit
controls on functions accepting one kind or the other.
I personally see no value in all that noise, but I added it in due to
popular demand.

> >> [...]
> >> Should `s` be promoted to an int temporary, then pass the temporary by
> >> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. the
> >> implicit conversion is a type paint and involves no temporary)?
> > As per the DIP; yes, that is the point.
> > The text you seek is written: "[...]. The user should not experience
> > edge cases, or differences in functionality when calling fun(int x) vs
> > fun(ref int x)."
>
> I don't see how that addresses implicit type conversion at all.

It explicitly permits it as one of the goals of the DIP. Uniformity in function calling is one of the main goals here.

> > Don't accept naked ref unless you want these semantics. There is a suite of tools offered to use where this behaviour is undesirable. Naked `ref` doesn't do anything particularly interesting in the language today that's not *identical* semantically to using a pointer and adding a single '&' character at the callsite.
>
> It's not good enough. The DIP needs to specifically address what happens with implicit conversions. The reader should not be left wondering about what is implied.

As I said above, it couldn't be stated more clearly in the DIP; it is very explicitly permitted, and stated that "the user should not experience any difference in calling semantics when using ref".

> I often read a spec and think yeah, yeah, of course it must be that
> way. But it is spelled out in the spec, and reading it gives me confidence that
> I'm understanding the semantics, and it gives me confidence that whoever wrote
> the spec understood it.

Okay, but it is spelled out. How could I make it clearer?

> (Of course, writing out the implications sometimes causes the writer to realize he didn't actually understand it at all.)
>
> Furthermore, D has these match levels:
>
>      1. exact
>      2. const
>      3. conversion
>      4. no match
>
> If there are two or more matches at the same level, the decision is made based on partial ordering. How does adding the new ref/value overloading fit into that?

I haven't described this well. I can try to improve it.
Where can I find these existing rules detailed comprehensively? I have
never seen them mentioned in the dlang language reference.
It's hard for me to speak in these terms when I've never seen any
text in the language spec that does so.

Note: this criticism was nowhere to be found in your rejection text,
and it would have been trivial to raise during the community reviews.
I feel like this is a mostly simple revision to make.

> >> It should never have gotten this far without giving a precise explanation of how
> > exception safety is achieved when faced with multiple parameters.
> >
> > I apologise. I've never used exceptions in any code I've ever written, so it's pretty easy for me to overlook that detail.
>
> It's so, so easy to get that wrong. C++ benefits from decades of compiler bug fixes with that.

I think my revision is water-tight (see a few posts up). If the new rewrite were written by hand and had some problem there, then there's a serious problem with the core language.

> > Nobody else that did the community reviews flagged it,
>
> That's unfortunately right. Note that 'alias this' was approved and implemented, and then multiple serious conceptual problems have appeared with it. I don't want a repeat of that.

I couldn't have used D to solve so many problems over the years without `alias this`. If we refuse struct inheritance, then without `alias this`, we're screwed.

> > and that includes you and Andrei, as members of the community.
>
> The idea was that Andrei & I wouldn't get too involved in the DIPs until they are vetted by the community. I.e. delegation.

You're still members of the community. If something slipped through
community review that you could have easily spotted, then the strategy
you describe is no good.
If you had spotted it 4-5 months ago, I would have wasted a LOT
less time... and now you're asking me to sign up for another few
hundred days of do-over from square one.
I'd suggest that feedback at community-review stage would have been
extremely valuable.

> > That said, this remains infinitely more important to me than an rvalue-references DIP. It's been killing me for 10 years, and I'm personally yet to feel hindered by our lack of rvalue-reference support.
>
> I look forward to a much improved DIP from you (and anyone else who wishes to help you out with the work!).

That's good, but telling me to start over rather than revise is unbelievably annoying and de-motivating.

I'd like you and Andrei to revise the response at the end of the
rejected DIP to remove the text that is erroneous. It's not fair to
present a rejection where a good portion of the detailed reasoning is
just plain wrong.
By my reading, there will only be one paragraph left, and then the
motivation for a hard rejection will look very different.
January 29
On Mon, Jan 28, 2019 at 9:25 AM Andrei Alexandrescu via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On 1/24/19 2:18 AM, Mike Parker wrote:
> > Walter and Andrei have declined to accept DIP 1016, "ref T accepts r-values", on the grounds that it has two fundamental flaws that would open holes in the language. They are not opposed to the feature in principle and suggested that a proposal that closes those holes and covers all the bases will have a higher chance of getting accepted.
> >
> > You can read a summary of the Formal Assessment at the bottom of the document:
> >
> > https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1016.md
>
> Hi everyone, I've followed the responses to this, some conveying frustration about the decision and some about the review process itself. As the person who carried a significant part of the review, allow me to share a few thoughts of possible interest.
>
> * Fundamentally: a DIP should stand on its own and be judged on its own merit, regardless of rhetoric surrounding it, unstated assumptions, or trends of opinion in the forums. There has been a bit of material in this forum discussion that should have been argued properly as a part of the DIP itself.
>
> * The misinterpretation of the rewrite (expression -> statement vs. statement -> statement) is mine, apologies. (It does not influence our decision and should not be construed as an essential aspect of the review.) The mistake was caused by the informality of the DIP, which shows rewrites as a few simplistic examples instead of a general rewrite rule. Function calls are expressions, so I naturally assumed the path would be to start with the function call expression. Formulating a general rule as a statement rewrite is possible but not easy and fraught with peril, as discussion in this thread has shown. I very much recommend going the expression route (e.g. with the help of lambdas) because that makes it very easy to expand to arbitrarily complex expressions involving function calls. Clarifying what temporaries get names and when in a complex expression is considerably more difficult (probably not impossible but why suffer).
>
> * Arguments of the form: "You say DIP 1016 is bad, but look at how bad DIP XYZ is!" are great when directed at the poor quality of DIP XYZ. They are NOT good arguments in favor of DIP 1016.
>
> * Arguments of the form "Functions that take ref parameters just for changing them are really niche anyway" should be properly made in the DIP, not in the forums and assumed without stating in the DIP. Again, what's being evaluated is "DIP" not "DIP + surrounding rhetoric". A good argument would be e.g. analyzing a number of libraries and assess that e.g. 91% uses of ref is for efficiency purposes, 3% is unclear, and only 6% is for side-effect purpose. All preexisting code using ref parameters written under the current rule assumes that only lvalues will be bound to them. A subset of these functions take by ref for changing them only. The DIP should explain why that's not a problem, or if it is one it is a small problem, etc. My point is - the DIP should _approach_ the matter and build an argument about it. One more example from preexisting code for illustration, from the standard library:
>
> // in the allocators API
> bool expand(ref void[] b, size_t delta);
> bool reallocate(ref void[] b, size_t s);
>
> These primitives modify their first argument in essential ways. The intent is to fill b with the new slice resulted after expansion/reallocation. Under the current rules, calling these primitives is cumbersome, but usefully so because the processing done requires extra care if typed data is being reallocated. Under DIP 1016, a call with any T[] will silently "succeed" by converting the slice to void[], passing the temporary to expand/reallocate, then return as if all is well - yet the original slice has not been changed. The DIP should create a salient argument regarding these situations (and not only this example, but the entire class). It could perhaps argue that:
>
> - Such code is bad to start with, and should not have been written.
> - Such code is so rare, we can take the hit. We then have a
> recommendation for library writers on how to amend their codebase (use
> @disable or some other mechanisms).
> - The advantages greatly outweigh this problem.
> - The bugs caused are minor and easy to find.
> - ...
>
> Point being: the matter, again should be _addressed_ by the DIP.
>
> * Regarding our recommendation that the proposal is resubmited as a distinct DIP as opposed to a patch on the existing DIP: this was not embracing bureaucracy. Instead, we considered that the DIP was too poor to be easily modified into a strong proposal, and recommended that it be rewritten simply because it would be easier and would engender a stronger DIP.
>
> * Regarding the argument "why not make this an iterative process where concerns are raised and incrementally addressed?" We modeled the DIP process after similar processes - conference papers, journal papers, proposals in other languages. There is a proposal by one or more responsibles, perfected by a community review, and submitted for review. This encourages building a strong proposal - as strong as can be - prior to submission. Washing that down to a negotiation between the proposers and the reviewers leads to a "worst acceptable proposal" state of affairs in which proposers are incentivized to submit the least-effort proposal, reactively change it as issues are raised by reviewers. As anyone who has submitted a conference paper, that's not how it works, and even if the process is highly frustrating (yes, reviewers in so many cases misunderstand parts of the paper...) it does lead to strong work. There are cases in which papers are "accepted with amends" - those are strong submissions that have a few issues that are easily fixable. With apologies, we do not consider this DIP to be in that category.
>
> This result was frustrating and disheartening on our side, too: a stronger DIP should have resulted after all these years. I encourage interested people to make a DIP that is scientifically-argued, clearly formalized, and provides a thorough analysis of the consequences of the proposed design.
>
>
> Hope this helps,
>
> Andrei

A few things in here...

1. All of this is more useful criticism than the official and final
criticism affixed to the rejection, which, when revised to remove the
incorrect criticisms, is basically left with the text "The Language
Maintainers found other issues with the proposal, most of which may
have been remedied through simple revision"
2. All of this criticism could have been given at any point in the
past half a year or so prior to submission, and that would have been
appreciated, rather than wasting our time.
3. "It does not influence our decision and should not be construed as
an essential aspect of the review"  <--  Then why did it feature as
one of just 3 core criticisms in the rejection text, and why was it
supplied as one of the 2 reasons that could not "have been remedied
through simple revision"?
4. "Under DIP 1016, a call with any T[] will silently "succeed" by
converting the slice to void[]"  <--  Do you mean "... with any T[]
rvalue ..."? What would be the aim of that call? Can you suggest a
particularly sinister construction?
5. "and recommended that it be rewritten simply because it would be
easier and would engender a stronger DIP."  <--  I wrote the DIP I
wrote... your official feedback affixed to the bottom of the DIP was
pretty much entirely unhelpful, almost offensively so. I would just
write the same DIP if I started again. I genuinely hope someone can be
bothered to do this. After 10 years on this, I think I'm over it.
6. "This result was frustrating and disheartening on our side, too: a
stronger DIP ..."  <--  I'm sorry you hated it. You could have
reviewed it at any point, made suggestions at any point, written it
yourself, or encouraged someone competent to do it.
7. Your general tone is superior, as usual.

If it's as terrible as you say, after having survived community treatment and approval, then I suggest there's a serious problem in the process. As you've already made clear, I'm not qualified to address this.

> Hope this helps,

Sure.
January 29
On 1/28/2019 10:10 PM, Manu wrote:
>> Furthermore, D has these match levels:
>>
>>       1. exact
>>       2. const
>>       3. conversion
>>       4. no match
>>
>> If there are two or more matches at the same level, the decision is made based
>> on partial ordering. How does adding the new ref/value overloading fit into that?
> 
> I haven't described this well. I can try and improve this.
> Where can I find these existing rules detailed comprehensively? I have
> never seen them mentioned in the dlang language reference.

It's where it should be, under "Function Overloading":

https://dlang.org/spec/function.html#function-overloading


> If you had have spotted it 4-5 months ago, I would have wasted a LOT
> less time... and now you're asking me to sign up for another few
> hundred days of do-over from square-1.

It's pretty clear that just writing the DIP myself will take me less time than point-by-point endlessly arguing about it. But it will take more calendar time, because all this stuff falls to me. It's why I get paid the big bucks.
January 29
On 1/29/19 1:01 AM, Manu wrote:
> This DIP is about passing rvalues to ref... so the issue you describe
> passing lvalues to ref does not apply here.
> There is no suggestion to change lvalue rules anywhere in this DIP.

The problem is with rvalues that arise as temporaries converted from lvalues. As in:

void bump(ref int x) { ++x; }
short y = 42;
bump(y);
assert(y == 43); // oops

This is a smoking gun. If DIP 1016 does not propose allowing the code above, it has caused a gross misunderstanding. Similarly, you mention in a different response:

> 4. "Under DIP 1016, a call with any T[] will silently "succeed" by
> converting the slice to void[]"  <--  Do you mean "... with any T[]
> rvalue ..."? What would be the aim of that call? Can you suggest a
> particularly sinister construction?

If your intent was to NOT allow rvalues resulting from conversions, it didn't come through at all. On the contrary, some examples suggest DIP 1016 _does_ allow it, as in this example:

========
This inconvenience extends broadly to every manner of rvalue passed to functions, including:

...
fun(my_short); // implicit type conversions (ie, short->int promotion)
========

The reader assumes, since the DIP characterizes passing a short lvalue to a ref int as an inconvenience, that the DIP has set out to resolve it.

The lowering rules proposed by DIP 1016 prescribe the following:

void bump(ref int x) { ++x; }
short y = 42;
bump(y);
===>
void bump(ref int x) { ++x; }
short y = 42;
{
    int __temp0 = y;
    bump(__temp0);
}

So... yes, lvalues do (undesirably) get converted to rvalues of a different type according to the DIP.

Are you sure the DIP expresses what you are trying to accomplish?


Andrei

