December 07, 2014
On 12/7/2014 6:12 AM, Dicebot wrote:
> But from existing cases it doesn't seem to work well enough. For example, not
> being able to represent the idiom of `scope ref int foo(scope ref int x) {
> return x; }` seems very limiting.

  scope ref int foo(ref int x);

will do it.
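To spell out the intent under the DIP's proposed notation: the parameter is plain `ref`, not `scope`, so the body may return it; the `scope` on the return type is what keeps the result from escaping the calling expression. A sketch:

    scope ref int foo(ref int x)
    {
        return x;  // allowed: x is not a scope parameter
    }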


> I also don't consider the `ref` design as a storage class any kind of success
> at all, and generally agree with Manu on this topic. At the same time,
> alternative proposals that make it a qualifier (like Marc's) impact the
> existing language much more, and this is no small concern.

My experience with C++ ref as a type qualifier is very, very bad. It's a special case EVERYWHERE. Doing type deduction with it is an exercise in a completely baffling set of rules, with a different rule for every occasion - Scott Meyers has a great piece on this.

There are probably only a handful of people on the planet who actually understand C++ ref. I wished very hard to avoid that with D ref.
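For contrast, a sketch of how D keeps `ref` out of the type system - it may appear only on parameters and returns, never inside a declared type:

    ref int larger(ref int a, ref int b) { return a > b ? a : b; }

    void test()
    {
        int x = 1, y = 2;
        larger(x, y) = 0;  // assigns through the returned reference
        // ref int r = x;  // error: ref is not part of a type, so no ref locals
    }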

December 07, 2014
Nick Treleaven:

> This might also make the proposed 'int[$] = [...];' syntax unnecessary.

Or might not. The [$] proposal is very refined.
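For readers who haven't followed that thread, the proposal is roughly this (hypothetical syntax, not valid D as of this writing):

    int[$] a = [1, 2, 3];  // $ would infer the static array length, here int[3]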

Bye,
bearophile
December 07, 2014
Walter Bright:

> There are probably only a handful of people on the planet who actually understand C++ ref. I wished very hard to avoid that with D ref.

When C++ programmers say that D-style ranges can't do everything C++ iterators can do, they seem to miss that sometimes it's a good idea to adopt a simpler language feature that doesn't cover 100% of usages, if it covers 80-90% of the cases and has simpler syntax and simpler semantics for the programmer to understand.

(The comment above is not about DIP69).
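As one illustration of the simpler feature, a typical D range pipeline has no iterator pairs to keep in sync:

    import std.algorithm : filter, map;
    import std.range : iota;
    import std.stdio : writeln;

    void main()
    {
        iota(10)
            .filter!(x => x % 2 == 0)  // one range value flows through
            .map!(x => x * x)
            .writeln();                // prints [0, 4, 16, 36, 64]
    }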

Bye,
bearophile
December 07, 2014
On 12/7/2014 2:58 PM, bearophile wrote:
> When C++ programmers say that D-style ranges can't do everything C++ iterators
> can do, they seem to miss that sometimes it's a good idea to adopt a simpler
> language feature that doesn't cover 100% of usages, if it covers 80-90% of the
> cases and has simpler syntax and simpler semantics for the programmer to
> understand.

I agree, but it's hard to find that sweet spot. I think Java definitely went too far, and Go went too far for my taste.


> (The comment above is not about DIP69).

Yes, it is :-)

December 08, 2014
On 12/5/14 6:09 PM, Walter Bright wrote:
> On 12/4/2014 1:32 PM, Steven Schveighoffer wrote:
>> On 12/4/14 3:58 PM, Walter Bright wrote:
>>> On 12/4/2014 7:25 AM, Steven Schveighoffer wrote:
>>>> int* bar(scope int*);
>>>> scope int* foo();
>>>>
>>>> bar(foo());           // Ok, lifetime(foo()) > lifetime(bar())
>>>>
>>>> I'm trying to understand how foo can be implemented in any case. It
>>>> has no scope
>>>> ints to return, so where does it get the int from?
>>>
>>> Could be from a global variable. Or a new'd value.
>>
>> Well, OK, but why do that?
>
> Why would a programmer do that? I often ask that question! But the
> language allows it, therefore we must support it.

But you're the programmer that did it, it's YOUR example! :)

However, it's not just that: this is the ONLY example given as to why we support scope returns. "The language allows it, therefore we must support it"? But it's not allowed now, right? Why add it?
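(For reference, a sketch of what such a foo could look like, using the global or new'd value mentioned above, in the DIP's proposed notation:)

    int g;

    scope int* foo()
    {
        return &g;          // a global...
        // return new int;  // ...or a new'd value
    }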

>>> The scope return value does not affect what can be returned. It affects
>>> how that return value can be used. I.e. the return value cannot be used
>>> in such a way that it escapes the lifetime of the expression.
>>
>> I assumed the scope return was so you could do things like:
>>
>> scope int *foo(scope int *x)
>> {
>>     return x;
>> }
>>
>> which would be fine, I assume, right?
>
> No. A scope parameter means the value does not escape the function. That
> means you can't return it.

I think you should eliminate scope returns then. They are not useful. I can't think of a single reason why the return of a newly GC-allocated value or of a global reference should have to be restricted to exist only within the statement. Both have infinite lifetimes.

-Steve
December 08, 2014
On 12/5/14 3:55 PM, Walter Bright wrote:
> On 12/5/2014 7:27 AM, Steven Schveighoffer wrote:
>> Can someone who knows what this new feature is supposed to do give
>> some Ali
>> Çehreli-like description on the feature? Basically, let's strip out
>> the *proof*
>> in the DIP (the how it works and why we have it), and focus on how it
>> is to be
>> used.
>>
>> I still am having a hard time wrapping my head around the benefits and
>> when to
>> use scope, scope ref, why I would use it. I'm worried that we are
>> adding all
>> this complication and it will confuse the shit out of users, to the
>> point where
>> they won't use it.
>
> The tl;dr version is when a declaration is tagged with 'scope', the
> contents of that variable will not escape the lifetime of that declaration.
>
> It means that this code will be safe:
>
>     void foo(scope int* p);
>
>     int* p = cast(int*) malloc(n);
>     foo(p);
>     free(p);
>
> The rest is all the nuts and bolts of making that work.
>

This is not what I was asking for. What I wanted to know is: when I see scope ref, why is it there? When should I use it? When should I use scope? What nasty things are prevented if I use it? Examples would be preferable.

Note, in your example above, marking foo pure solves the problem already.
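A sketch of why pure suffices there (the signature is illustrative):

    void foo(int* p) pure;  // pure: no mutable globals are reachable, so the
                            // function has nowhere to stash p that outlives
                            // the call (and it returns nothing)

    void test()
    {
        import core.stdc.stdlib : malloc, free;
        int* p = cast(int*) malloc(int.sizeof);
        foo(p);   // no live copy of p can remain after the call
        free(p);  // so freeing here is safe
    }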

The rules and statements give an inferred benefit. I'd like that benefit to be more fully explained.

-Steve
December 08, 2014
On Monday, 8 December 2014 at 15:12:51 UTC, Steven Schveighoffer wrote:
> I think you should eliminate scope returns then. They are not useful. I can't think of a single reason why the return of a newly GC-allocated value or of a global reference should have to be restricted to exist only within the statement. Both have infinite lifetimes.

It's for references to objects that are owned by the function (or by the object of which the function is a method). These don't have infinite lifetimes.
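A sketch of such an owner (hypothetical type, using the DIP's proposed notation):

    struct Owner
    {
        private int[] buffer;  // storage owned, reused, and freed by Owner

        // the returned reference must not outlive its use site, so callers
        // can't keep pointers into buffer alive across a reuse of the
        // buffer or the owner's destruction
        scope ref int at(size_t i) { return buffer[i]; }
    }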
December 08, 2014
On 12/7/14 4:29 PM, Walter Bright wrote:
> On 12/7/2014 6:12 AM, Dicebot wrote:
>> But from existing cases it doesn't seem to work well enough. For
>> example, not
>> being able to represent the idiom of `scope ref int foo(scope ref int x) {
>> return x;
>> }` seems very limiting.
>
>    scope ref int foo(ref int x);
>
> will do it.

So:

int x;

foo(x) += 1;

will compile? I was under the impression this would be disallowed.

If you do not connect the scope to the parameter, then the caller has to assume that variable can escape, and you can't use scoped variables as arguments at all.

> My experience with C++ ref as a type qualifier is very, very bad. It's a
> special case EVERYWHERE. Doing type deduction with it is an exercise in
> a completely baffling set of rules, with a different rule for every
> occasion - Scott Meyers has a great piece on this.
>
> There are probably only a handful of people on the planet who actually
> understand C++ ref. I wished very hard to avoid that with D ref.
>

D has so many features that did not exist when ref was created (as inout in D1) that you can ALMOST duplicate ref. The only thing we could not duplicate is the implicit address-taking on construction, which maybe is not such a bad thing.
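A sketch of that near-duplication (hypothetical wrapper; the names are illustrative):

    struct Ref(T)
    {
        private T* ptr;

        // unlike built-in ref, the address-taking happens here, in an
        // explicit construction step
        this(ref T value) { ptr = &value; }

        ref T get() { return *ptr; }
        alias get this;  // forwards reads, writes, and member access
    }

    void test()
    {
        int x = 1;
        auto r = Ref!int(x);
        r = 5;           // writes through to x via alias this
        assert(x == 5);
    }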

-Steve
December 08, 2014
On 12/8/14 10:45 AM, "Marc Schütz" <schuetzm@gmx.net> wrote:
> On Monday, 8 December 2014 at 15:12:51 UTC, Steven Schveighoffer wrote:
>> I think you should eliminate scope returns then. They are not useful.
>> I can't think of a single reason why the return of a newly GC-allocated
>> value or of a global reference should have to be restricted to exist
>> only within the statement. Both have infinite lifetimes.
>
> It's for references to objects that are owned by the function (or object
> of which the function is a method). These don't have infinite lifetimes.

Why not? An object is allocated on the heap, and has infinite lifetime.

-Steve
December 08, 2014
On Sunday, 7 December 2014 at 21:29:50 UTC, Walter Bright wrote:
> On 12/7/2014 6:12 AM, Dicebot wrote:
>> But from existing cases it doesn't seem to work well enough. For example, not
>> being able to represent the idiom of `scope ref int foo(scope ref int x) {
>> return x; }` seems very limiting.
>
>   scope ref int foo(ref int x);
>
> will do it.

This isn't the same: it does not propagate scope, but just restricts the return value. The difference is that it cannot be chained. Let's consider a practical example based on Phobos:

There was an issue with the byLine range: it reused the same buffer internally, which sometimes caught users off guard when they tried to save a slice. It is a natural fit for `scope` - make it return a `scope string` instead, to ensure that no slices get stored.
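A minimal sketch of that hazard (assuming a file named "data.txt"):

    import std.stdio;

    void main()
    {
        char[][] lines;
        foreach (line; File("data.txt").byLine())
        {
            lines ~= line;         // bug: every element aliases one reused buffer
            // lines ~= line.dup;  // the fix: copy out of the buffer
        }
        writeln(lines);  // typically shows the last line repeated
    }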

Two issues immediately pop up:

1) scope is not transitive, thus it doesn't work at all - you can still store a slice of a `scope string`, as only the actual ptr+length struct is protected.

2) even if it worked, the existing definition of a scope return value makes it impossible to use in a typical idiomatic pipeline: `file.byLine.algo1.algo2`. Either algoX is defined to take `scope ref` and thus can't return it, or it is defined to take `ref` and can't take another `scope ref` as an argument.
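Schematically (hypothetical signatures under the DIP's definitions):

    struct Range {}  // stand-in for the pipeline's range type

    Range algo1(scope ref Range r);  // may take a scope argument, but then can't return r
    Range algo2(ref Range r);        // may return, but can't accept a scope ref argument

    // so `file.byLine.algo1.algo2` cannot be typed either way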

At least this is what I get from reading the existing examples in DIP69.

>> I also don't consider the `ref` design as a storage class any kind of success
>> at all, and generally agree with Manu on this topic. At the same time,
>> alternative proposals that make it a qualifier (like Marc's) impact the
>> existing language much more, and this is no small concern.
>
> My experience with C++ ref as a type qualifier is very, very bad. It's a special case EVERYWHERE. Doing type deduction with it is an exercise in a completely baffling set of rules, with a different rule for every occasion - Scott Meyers has a great piece on this.
>
> There are probably only a handful of people on the planet who actually understand C++ ref. I wished very hard to avoid that with D ref.

While there is no argument that C++ ref is screwed, it is rather hard to say whether this is an inherent consequence of ref being a type qualifier or just C++ being C++. I mean, how many C++ type system features in general are understood by more than a handful of people on the planet? For me, `ref` is essentially just a different flavor of `*` - and if the latter can be part of a type, I see no reason why the former can't.