December 08, 2014
Walter Bright <newshound2@digitalmars.com> wrote:
[...]
> and ref's can still be null in C++!

AFAIK only if you dereference a NULL pointer, which is UB. So not really.

[...]

Tobi
December 08, 2014
On 12/8/2014 1:59 PM, Tobias Müller wrote:
> Walter Bright <newshound2@digitalmars.com> wrote:
> [...]
>> and ref's can still be null in C++!
>
> AFAIK only if you dereference a NULL pointer, which is UB. So not really.

Saying it's UB doesn't help in the slightest. You can still have null refs, and no compiler will prevent it.

December 08, 2014
On Saturday, 6 December 2014 at 21:57:03 UTC, Walter Bright wrote:
> On 12/5/2014 3:59 PM, deadalnix wrote:
>> The DIP say nothing about scoped rvalue having different behavior
>> than non scoped ones.
>
> Can you propose some new wording?
>

I did:


An infinite lifetime is a lifetime greater than or equal to any
other lifetime. Expressions of infinite lifetime are:
  - literals
  - GC heap allocated objects
  - statics and enums
  - rvalues of types that do not contain indirections
  - non-scope rvalues


A dereference shares the lifetime of the dereferenced expression
(i.e. infinite lifetime unless the expression is scope). An
address-of expression shares the lifetime of the base expression
and, in addition, gains the scope flag.
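
A rough sketch of how those categories read in code (the comments just restate the proposed classification; nothing here is enforced by today's compiler):

```d
int global;              // static: infinite lifetime
enum limit = 42;         // enum: infinite lifetime

void main()
{
    auto p = new int;    // GC heap allocation: infinite lifetime
    string s = "hello";  // literal: infinite lifetime
    int x = 1 + 2;       // rvalue of a type with no indirections: infinite lifetime

    int local;
    int* q = &local;     // address-of: shares local's lifetime and gains the scope flag
    int y = *q;          // dereference: shares the lifetime of what is dereferenced
}
```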

>>> Are you suggesting transitive scope?
>>
>> For rvalues, yes. Not for lvalues.
>
> I don't think that is workable.

That is the only way. For rvalues, you have to take the lowest
possible lifetime, but for lvalues, you want to consider the
highest possible lifetime.
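
To illustrate with a hypothetical check (DIP69-style enforcement assumed; the rejected lines are commented out because no current compiler rejects them):

```d
int* global;

void f(scope int* p, int* q, bool cond)
{
    // The rvalue (cond ? p : q) may come from either operand, so it has to be
    // given the *lowest* of the two lifetimes, i.e. treated as scope:
    // global = cond ? p : q;   // would be rejected under the proposal

    // An lvalue being written through must be assumed to have the *highest*
    // lifetime that can reach it; `global` lives forever, so storing the
    // scope value p into it is exactly what the check must forbid:
    // global = p;              // would also be rejected
}
```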
December 08, 2014
On Saturday, 6 December 2014 at 23:59:01 UTC, Piotrek wrote:
> On Saturday, 6 December 2014 at 11:06:16 UTC, Jacob Carlborg wrote:
>> On 2014-12-06 10:50, Manu via Digitalmars-d wrote:
>>
>>> I've been over it so many times.
>>
>> I suggest you take the time and write down what your vision of "ref" looks like and the issues with the current implementation. A blog post, a DIP or similar. Then you can easily refer to that in cases like this, and you don't have to repeat yourself so many times. That's what I did when there was a lot of talk about AST macros. I was tired of constantly repeating myself, so I created a DIP. It has already saved me more time than it took to write the actual DIP.
>
> @Manu
> Seconded. Please create even a short one. I couldn't find any example of the use case you are referring to (as you said, I don't use D's "ref" so often). I plan to apply D to embedded systems, so full control is a must. But so far Walter still has the most accurate taste, according to my experience.
>
> BTW. I consider game devs to be the most underpaid programmers, so your perspective is very precious to me.
>
> Cheers
> Piotrek

I'd like to not pollute this thread with the ref topic.

Long story short:
  - it is hard to know if something is ref, which makes it hard to
metaprogram (see the sketch below).
  - you sometimes want to switch ref on and off (for instance, you
may use ref to avoid copies, which is not worthwhile for small
values), and that is complex.
  - auto ref does not cut it.
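
For the first point, a minimal sketch using std.traits as it exists today (byRef/byVal are made-up names for illustration):

```d
import std.traits;

void byRef(ref int x) {}
void byVal(int x) {}

// The parameter *type* is identical in both cases -- ref is not part of the type:
static assert(is(ParameterTypeTuple!byRef[0] == ParameterTypeTuple!byVal[0]));

// To tell them apart you have to inspect the storage classes separately:
static assert(ParameterStorageClassTuple!byRef[0] & ParameterStorageClass.ref_);
static assert(!(ParameterStorageClassTuple!byVal[0] & ParameterStorageClass.ref_));

void main() {}
```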

That is the extra-short version; please start a thread on the subject. Please, please, this one is complicated enough.
December 08, 2014
On Saturday, 6 December 2014 at 12:38:24 UTC, Ola Fosheim Grøstad
wrote:
> On Saturday, 6 December 2014 at 04:31:48 UTC, Sebastiaan Koppe wrote:
>> What about also adding the inverse of scope? Then scope can be inferred. As in:
>>
>> ```
>> void foo(int* p);
>> void free(P)(consume P* p);
>> ```
>
> Yes, this is much better. When I suggested it, it was rejected because D is too concerned about breaking existing code. Which is a not-very-good argument since this breaking change is conservative (you only have to add "consume" or something similar when the compiler complains).
>
> The obvious solution is to do as you suggest and in addition do all @safe analysis on a high level IR layer using dataflow through and through.
>
> Instead D continues down the rather flimsy path of partially addressing these issues in the type system… which will lead to a more complicated and less complete solution where @safe basically continues to be a leaky cauldron…

This is inherently about ownership, and I have a proposal about that.
Scope is about using things without ownership.

Both are linked, but they are different beasts.
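
Roughly, as a sketch (`consume` is taken from the quoted suggestion and is not real D syntax; `scope` parses today but is not enforced):

```d
struct Buffer { ubyte[] data; }

// scope: borrow -- the callee may use b, but may not keep it past the call
void inspect(scope Buffer* b) { /* read b.data here */ }

// hypothetical ownership transfer -- the caller would lose b afterwards:
// void dispose(consume Buffer* b);
```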
December 08, 2014
On Sunday, 7 December 2014 at 05:24:20 UTC, Walter Bright wrote:
>> I appreciate that. That was uncontroversial though; I didn't need to
>> spend months or years trying to justify my claims on that issue. I
>> feel like that was a known item that was just somewhere slightly down
>> the list, and I was able to bring it forward.
>> I was never in a position where I had to argue against Andrei to sell that one.
>
> UDA was controversial, and was one of your initiatives.
>

I don't think UDAs are controversial, but the way they were done
certainly is contested.
December 08, 2014
On Sunday, 7 December 2014 at 21:29:50 UTC, Walter Bright wrote:
> My experience with C++ ref as type qualifier is very, very bad. It's a special case EVERYWHERE. Doing type deduction with it is an exercise in a completely baffling set of rules and a different rule for every occasion - Scott Meyers has a great piece on this.
>
> There are probably only a handful of people on the planet who actually understand C++ ref. I wished very hard to avoid that with D ref.

A type qualifier is the wrong tool for scope.

scope(int)[] does not make any sense: the slice cannot outlive its
content. scope as a flag on expressions/symbols is very useful.
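
As a flag on a declaration it would read like this (a sketch in the spirit of the DIP; the rejection in the comment is what the proposal would do, not what today's compiler does):

```d
int* escaped;

void f(scope int* p)
{
    int x = *p;      // using p without keeping it: fine
    // escaped = p;  // would be rejected: p must not outlive f's frame
}
```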
December 08, 2014
On Monday, 8 December 2014 at 21:12:47 UTC, Walter Bright wrote:
> On 12/8/2014 12:54 PM, Dicebot wrote:
>> struct ByLine
>> {
>>     scope string front();
>>     // ...
>> }
>>
>> auto byLine(File file)
>> {
>>     return ByLine(file);
>> }
>>
>> scope /* ref */ string foo(scope /* ref */ string input)
>> {
>>     return input[1..$];
>> }
>>
>> void main()
>> {
>>     auto r = file.byLine.map!foo;
>>     string s = r.front; // this should not compile
>>     string s2 = r.front.dup; // this should compile
>>
>>     // how should foo's signature look for this to work?
>> }
>
> front() should return a 'scope ref string'.

That seems to contradict your other statement:

> A 'scope ref' parameter may not be returned as a 'ref' or a 'scope ref'.

Please check `foo()` once more: it needs to accept scope (ref) to be able to take ByLine.front as an argument. And it also needs to pass it down the call chain, but returning `input` by reference is illegal according to the above-mentioned rule.
December 08, 2014
On Monday, 8 December 2014 at 16:25:22 UTC, Dicebot wrote:
> This isn't the same as it does not propagate scope but just restricts return value. Difference is that it cannot be chained. Let's consider practical example based on Phobos:
>
> there was an issue with byLine range that it has reused same buffer internally which sometimes caught users off guard when trying to save slice. It is a natural fit for `scope` - make it return `scope string` instead to ensure that no slices get stored.
>

That would only ensure that the slice does not outlive the range.

> Two issues immediately pop up:
>
> 1) scope is not transitive thus it doesn't work at all - you still can store slice of `scope string` as only actual ptr+length struct is protected.
>

Yes, that is my whole point with the scope flag on expressions.
Without it, the proposal needs hacks to work (&(*e)) and is too
restrictive to be useful.
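
Concretely, assuming a DIP69-style non-transitive scope checker (the comments describe what such a checker would and would not catch):

```d
string leaked;

void takeLine(scope string line)
{
    // leaked = line;        // caught: `line` itself carries the scope flag
    leaked = line[1 .. $];   // not caught: a fresh, non-scope slice of the
                             // same buffer, so the data escapes anyway
}
```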

> While there is no argument that C++ ref is screwed, it is rather hard to say if this is an inherent consequence of ref being a type qualifier or just C++ being C++. I mean, how many C++ type system features in general are understood by more than a handful of people on the planet? For me `ref` is essentially just a different flavor of `*` - and if the latter can be part of a type, I see no reason why the former can't

No, a type constructor is not the right tool. Think about it.

scope(int)[] does not make any sense, as the slice cannot safely
outlive its content.

You need a flag on expressions and declarations to make this work.
December 08, 2014
On Monday, 8 December 2014 at 20:54:54 UTC, Dicebot wrote:
> But was there any reason why those traits (alien to type qualifiers) were pursued? What is the problem with `ref` simply meaning `non-null pointer` and allowing non-idempotent ref(ref(int))?

Please no.

When you declare int a; and then use a, you always refer either to
a, the memory storage (the lvalue), or to a, the value stored in
that memory (the rvalue).

When doing ref int a = xxx;

you specify that you don't create new storage for a, but that xxx
must be considered an lvalue, and that the name a is bound to that
same lvalue.

Once you get that, you get why ref(ref(int)) does not make any
sense, and is generally undesirable.
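
A small example of that binding, using a ref parameter since local `ref` declarations are not valid D (the `ref int a = xxx;` above is pseudo-code):

```d
void bump(ref int a)   // a is not new storage; it names the caller's lvalue
{
    a += 1;            // writes through to the original variable
}

void main()
{
    int x = 1;
    bump(x);
    assert(x == 2);    // x itself changed
    // Binding a "ref to a ref" would still just name the same lvalue,
    // which is why ref(ref(int)) adds nothing.
}
```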