December 06, 2014
On Friday, 5 December 2014 at 23:58:41 UTC, Walter Bright wrote:
> On 12/5/2014 8:48 AM, "Marc Schütz" <schuetzm@gmx.net> wrote:
>> 1) Escape detection is limited to `ref`.
>>
>>     T* evil;
>>     ref T func(scope ref T t, ref T u) @safe {
>>       return t; // Error: escaping scope ref t
>>       return u; // ok
>>       evil = &u; // Error: escaping reference
>>     }
>>
>> vs.
>>
>>     T[] evil;
>>     T[] func(scope T[] t, T[] u) @safe {
>>       return t; // Error: cannot return scope
>>       return u; // ok
>>       evil = u; // !!! not good
>
> right, although:
>      evil = t;  // Error: not allowed
>
>>     }
>>
>> As can be seen, `ref T u` is protected from escaping (apart from returning it),
>> while `T[] u` in the second example is not. There's no general way to express
>> that `u` can only be returned from the function, but will not be retained
>> otherwise by storing it in a global variable. Adding `pure` can express this in
>> many cases, but is, of course, not always possible.
>
> As you point out, 'ref' is designed for this.

I wouldn't call it "designed", but "repurposed"...

>>     struct Container(T) {
>>         scope ref T opIndex(size_t index);
>>     }
>>
>>     void bar(scope ref int a);
>>
>>     Container c;
>>     bar(c[42]);            // ok
>>     scope ref tmp = c[42]; // nope
>>
>> Both cases should be fine theoretically; the "real" owner lives longer than
>> `tmp`. Unfortunately the compiler doesn't know about this.
>
> Right, though the compiler can optimize to produce the equivalent.

??? This is a problem on the semantic level, unrelated to optimization:

    // contrived example to illustrate the point
    Container c;
    scope ref x = c[42];   // not
    scope ref y = c[44];   // ...
    scope ref z = c[13];   // allowed
    foo(x, y, z, x+y, y+z, z+x, x+y+z);

    // workaround, but error-prone and has different semantics
    // (opIndex may have side effects, called multiple times)
    foo(c[42], c[44], c[13], c[42]+c[44], c[44]+c[13], c[13]+c[42], c[42]+c[44]+c[13]);

    // another workaround, same semantics, but ugly and unreadable
    (scope ref int x, scope ref int y, scope ref int z) {
        foo(x, y, z, x+y, y+z, z+x, x+y+z);
    }(c[42], c[44], c[13]);

>> 3) `scope` cannot be used for value types.
>>
>> I can think of a few use cases for scoped value types (RC and file descriptors),
>> but they might only be marginal.
>
> I suspect that one can encapsulate such values in a struct where access to them is strictly controlled.

They can, but again at the cost of an indirection (ref).
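
A minimal sketch in current D of the kind of wrapper being suggested (the FileDescriptor type and its members are hypothetical); the cost is the extra hop through `this` and a delegate on every access:

    struct FileDescriptor {
        private int fd = -1;

        @disable this(this);   // the owning wrapper cannot be copied

        // Controlled, non-escaping access to the raw descriptor.
        void use(scope void delegate(int fd) dg) { dg(fd); }
    }

    unittest {
        FileDescriptor f;
        f.use((fd) { assert(fd == -1); });
    }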

>> 4) No overloading on `scope`.
>>
>> This is at least partially a consequence of `scope` inference. I think
>> overloading can be made to work in the presence of inference, but I haven't
>> thought it through.
>
> Right. Different overloads can have different semantic implementations, so what should inference do? I also suspect it is bad style to overload on 'scope'.

It only makes sense with scope value types (see the RC example). For references, I don't know any useful applications.

>> 6) There seem to be problems with chaining.
>>
>>     scope ref int foo();
>>     scope ref int bar1(ref int a) {
>>         return a;
>>     }
>>     scope ref int bar2(scope ref int a) {
>>         return a;
>>     }
>>     ref int bar3(ref int a) {
>>         return a;
>>     }
>>     ref int bar4(scope ref int a) {
>>         return a;
>>     }
>>     void baz(scope ref int a);
>>
>> Which of the following calls would work?
>>
>>     foo().bar1().baz();
>
> yes
>
>>     foo().bar2().baz();
>
> no - cannot return scope ref parameter
>
>>     foo().bar3().baz();
>
> yes
>
>>     foo().bar4().baz();
>
> no, cannot return scope ref parameter
>
>
>> I'm not sure I understand this fully yet, but it could be that none of them work...
>
> Well, you're half right :-)

Ok, so let's drop bar2() and bar4().

    scope ref int foo();
    scope ref int bar1(ref int a) {
        return a;
    }
    ref int bar3(ref int a) {
        return a;
    }
    ref int baz_noscope(/*scope*/ ref int a);

    foo().bar1().baz_noscope();
    foo().bar3().baz_noscope();

And now? In particular, will the return value of `bar3` be treated as if it were `scope ref`?
December 06, 2014
On Saturday, 6 December 2014 at 09:51:15 UTC, Manu via Digitalmars-d wrote:
> I've been over it so many times.
> I'll start over if there is actually some possibility I can convince
> you? Is there?
>
> I've lost faith that I am able to have any meaningful impact on issues
> that matter to me, and I'm fairly sure at this point that my
> compounded resentment and frustration actually discredit my cause, and
> certainly, my quality of debate.
> I'm passionate about these things, but I'm also extremely frustrated.
> To date, whenever I have engaged in a topic that I *really* care
> about, it goes the other direction. To participate has proven to be
> something of a (very time consuming) act of masochism.

That's very sad to hear. I think you have brought up some very important points, especially from a practical point of view.

>
>
> I __absolutely objected__ to 'auto ref' when it appeared. I argued
> that it was a massive mistake, and since its introduction and
> experience with it in the wild, I am more confident in that conviction
> than ever.
>
> As a programmer, I expect control over whether code is a template, or
> not. 'ref' doesn't have anything to do with templates. Confusing these
> 2 concepts was such a big mistake.
> auto ref makes a template out of something that shouldn't be a
> template. It's a particularly crude hack to address a prior mistake.
> ref should have been fixed at that time, not compounded with layers of
> even more weird and special-case/non-uniform behaviours above it.
>
>
> There have been a couple of instances where the situation has been
> appropriate that I've tried to make use of auto ref, but in each case,
> the semantics have never been what I want.
> auto ref presumes to decide for you when something should be a ref or
> not. It prescribes a bunch of rules on how that decision is made, but
> it doesn't know what I'm doing, and it gets it wrong.
> In my experience, auto ref has proven to be, at best, completely
> useless. But in some cases where I've bumped into it in 3rd-party
> code, it's been a nuisance, requiring me to wrap it away.

I tend to agree with this. Here's an example of the dangers:

https://github.com/deadalnix/libd/pull/7

In this case, `auto ref` accepted a class by reference, because it chooses ref-ness based on whether you pass an lvalue or an rvalue. This is not very helpful. I agree that `ref` isn't something that should ever be inferred, because it affects the semantics of the function. Either the function requires `ref` semantics, or not. It doesn't depend on how it is called.
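
A minimal, self-contained illustration of that rule (hypothetical names); `__traits(isRef, ...)` reports which instantiation was chosen:

    import std.stdio;

    void show(T)(auto ref T x) {
        // true only in the instantiation created for an lvalue argument
        writeln(__traits(isRef, x) ? "bound by ref" : "bound by value");
    }

    void main() {
        int i = 1;
        show(i);  // lvalue  -> "bound by ref"
        show(1);  // rvalue  -> "bound by value"
    }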
December 06, 2014
Marc Schütz:

> I agree that `ref` isn't something that should ever be inferred, because it affects the semantics of the function. Either the function requires `ref` semantics, or not. It doesn't depend on how it is called.

(Perhaps this was already said) I think Ada used to have something like "auto ref" and later it was removed from the language.

Bye,
bearophile
December 06, 2014
On 12/5/2014 3:59 PM, deadalnix wrote:
> On Friday, 5 December 2014 at 21:32:53 UTC, Walter Bright wrote:
>>>> I don't believe this is correct. Rvalues can be assigned, just like:
>>>>
>>>>   __gshared int x;
>>>>   { int i; x = i; }
>>>>
>>>> i's scope ends at the } but it can still be assigned to x.
>>>>
>>>
>>> It works even better when i has indirections.
>>
>> I understand what you're driving at, but only a scoped rvalue would not be
>> copyable.
>>
>
> The DIP says nothing about scoped rvalues having different behavior
> than non-scoped ones.

Can you propose some new wording?



>>> I originally had scope only apply to ref, but that made
>>>> having scoped classes
>>>> impossible.
>>>>
>>>
>>> Promoting a scoped class onto the stack is an ownership problem, and out
>>> of scope (!). It makes sense to allow it as an optimization.
>>>
>>> Problem is, the lifetime becomes infinite after an indirection, so I'm
>>> not sure what the guarantee is.
>>
>> The guarantee is there will be no references to the class instance after the
>> scoped class goes out of scope.
>>
>
> Through use of that view. I see it as follows:
>    - When an unconsumed owner goes out of scope, it can (must?) be
> freed automatically.
>    - scope uses do not consume.
>    - When the compiler sees a pair of alloc/free, it can promote it onto
> the stack.
>
> That is a much more useful definition, as it allows for stack
> promotion after inlining:
> class FooBuilder {
>       Foo build() { return new Foo(); }
> }
>
> class Foo {}
>
> void bar() {
>       auto f = new FooBuilder().build();
>       // Use f and do not consume...
> }
>
> This can be reduced in such a way that no allocation happens at all.

Yes, but I think the proposal allows for that.


>>> I cause everything reached through the view to be scope and
>>> obviate the need for things like &(*e) having special meaning.
>>
>> Are you suggesting transitive scope?
>
> For rvalues, yes. Not for lvalues.

I don't think that is workable.
December 06, 2014
On 12/6/2014 1:50 AM, Manu via Digitalmars-d wrote:
>> I didn't fully understand Manu's issue, but it was about 'ref' not being
>> inferred by template type deduction. I didn't understand why 'auto ref' did
>> not work for him. I got the impression that he was trying to program in D
>> the same way he'd do things in C++, and that's where the trouble came in.
> NO!!
> I barely program C++ at all! I basically write C code with 'enum' and
> 'class'. NEVER 'template', and very rarely 'virtual'.
> Your impression is dead wrong.

I apologize for misunderstanding you.


> It's exactly as Marc says, we have the best tools for dealing with
> types of any language I know, and practically none for dealing with
> 'storage class'.
> In my use of meta in D, 'ref' is the single greatest cause of
> complexity, code bloat, duplication, and text mixins. I've been
> banging on about this for years!
>
> I've been over it so many times.
> I'll start over if there is actually some possibility I can convince
> you? Is there?

I know there's no easy way to derive a storage class from an expression. The difficulty in my understanding is why this is a great cause of problems for you in particular (and by implication not for others). There's something about the way you write code that's different.


> I've lost faith that I am able to have any meaningful impact on issues
> that matter to me, and I'm fairly sure at this point that my
> compounded resentment and frustration actually discredit my cause, and
> certainly, my quality of debate.

This is incorrect, you had enormous influence over the vector type in D! And I wish you had more.


>> There have been a couple of instances where the situation has been
> appropriate that I've tried to make use of auto ref, but in each case,
> the semantics have never been what I want.

I don't really know what you want. Well, perhaps a better statement is 'why', not what.


> ref is a bad design. C++'s design isn't fantastic, and I appreciate
> that D made effort to improve on it, but we need to recognise when the
> experiment was a failure. D's design is terrible; it's basically
> orthogonal to the rest of the language. It's created way more
> complicated edge cases for me than C++ references ever have. Anyone
> who says otherwise obviously hasn't really used it much!
> Don't double down on that mistake with scope.

Why does your code need to care so much about to ref or not to ref? That's the central point here, I think.


> My intended reply to this post was to ask you to justify making it a
> storage class, and why the design fails as a type constructor?
> Can we explore that direction to its point of failure?
>
> As a storage class, it runs the risk of doubling up the existing bloat
> caused by ref. As a type constructor, I see no disadvantages.
> It even addresses some of the awkward problems right at the face of
> storage classes, like how/where the attributes actually apply. Type
> constructors use parens; ie, const(T), and scope(T) would make that
> matter a lot more clear.
>
> Apart from the storage class issue, it looks okay, but it gives me the
> feeling that it kinda stops short.
> Marc's proposal addressed more issues. I feel this proposal will
> result in more edge cases than Marc's proposal.
> The major edge case that I imagine is that since scope return values
> can't be assigned to scope locals, that will result in some awkward
> meta, requiring yet more special cases. I think Marc's proposal may be
> a lot more relaxed in that way.

The disadvantages of making it a type qualifier are:

1. far more complexity. Type constructors interact with everything, often in unanticipated ways. We spent *years* working out issues with the 'const' type qualifier, and are still doing so. Kenji just fixed another one.

2. we are never going to get users to use 'scope' qualifiers pervasively. It's been a long struggle to get 'const' used.

3. we added 'inout' as a type qualifier to avoid code duplication engendered by 'const'. It hurts my brain to even think about how that might interact with 'scope' qualifiers.

Yes, I agree unequivocally that 'scope' as a type qualifier is more expressive and more powerful than as a storage class. Multiple inheritance is also more expressive and more powerful than single inheritance. But many times, more power perhaps isn't better than redoing the program design to use something simpler and less complex.

December 06, 2014
On 12/6/2014 5:30 AM, "Marc Schütz" <schuetzm@gmx.net> wrote:
> On Friday, 5 December 2014 at 23:58:41 UTC, Walter Bright wrote:
>> As you point out, 'ref' is designed for this.
> I wouldn't call it "designed", but "repurposed"...

Perhaps, but the original reason to even have 'ref' was so it could be a restricted pointer type that could be passed down a call hierarchy, but not up.


>
>>>     struct Container(T) {
>>>         scope ref T opIndex(size_t index);
>>>     }
>>>
>>>     void bar(scope ref int a);
>>>
>>>     Container c;
>>>     bar(c[42]);            // ok
>>>     scope ref tmp = c[42]; // nope
>>>
>>> Both cases should be fine theoretically; the "real" owner lives longer than
>>> `tmp`. Unfortunately the compiler doesn't know about this.
>>
>> Right, though the compiler can optimize to produce the equivalent.
>
> ??? This is a problem on the semantic level, unrelated to optimization:
>
>      // contrived example to illustrate the point
>      Container c;
>      scope ref x = c[42];   // not
>      scope ref y = c[44];   // ...
>      scope ref z = c[13];   // allowed
>      foo(x, y, z, x+y, y+z, z+x, x+y+z);
>
>      // workaround, but error-prone and has different semantics
>      // (opIndex may have side effects, called multiple times)
>      foo(c[42], c[44], c[13], c[42]+c[44], c[44]+c[13], c[13]+c[42],
> c[42]+c[44]+c[13]);
>
>      // another workaround, same semantics, but ugly and unreadable
>      (scope ref int x, scope ref int y, scope ref int z) {
>          foo(x, y, z, x+y, y+z, z+x, x+y+z);
>      }(c[42], c[44], c[13]);

You are correct, and it remains to be seen if these occur enough to be a problem or not. The workarounds do exist, though. Another workaround:

    scope ref tmp = c[42];

becomes:

    auto scope ref tmp() { return c[42]; }
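
In today's D (leaving `scope` aside, since that part is what's under discussion), the nested-function workaround looks roughly like this; Container and foo are hypothetical stand-ins for the names used above, and note that opIndex still runs on every call to the nested function:

    struct Container {
        int[] data;
        ref int opIndex(size_t index) { return data[index]; }
    }

    void foo(int a, int b, int c) { /* ... */ }

    void main() {
        auto c = Container(new int[100]);

        // Instead of binding `scope ref x = c[42];` (not allowed for a local),
        // wrap each lvalue in a nested function that returns by ref:
        ref int x() { return c[42]; }
        ref int y() { return c[44]; }
        ref int z() { return c[13]; }

        foo(x() + y(), y() + z(), z() + x());
    }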


> Ok, so let's drop bar2() and bar4().
>
>      scope ref int foo();
>      scope ref int bar1(ref int a) {
>          return a;
>      }
>      ref int bar3(ref int a) {
>          return a;
>      }
>      ref int baz_noscope(/*scope*/ ref int a);
>
>      foo().bar1().baz_noscope();
>      foo().bar3().baz_noscope();
>
> And now? In particular, will the return value of `bar3` be treated as if it were
> `scope ref`?

Yes.

December 06, 2014
On Saturday, 6 December 2014 at 11:06:16 UTC, Jacob Carlborg wrote:
> On 2014-12-06 10:50, Manu via Digitalmars-d wrote:
>
>> I've been over it so many times.
>
> I suggest you take the time and write down what your vision of "ref" looks like and the issues with the current implementation. A blog post, a DIP or similar. Then you can easily refer to that in cases like this. Then you don't have to repeat yourself so many times. That's what I did when there was a lot of talk about AST macros. I was tired of constantly repeating myself so I created a DIP. It has already saved me more time than it took to write the actual DIP.

@Manu
Seconded. Please create even a short one. I couldn't find any example of the use case you are referring to (as you said, I don't use D's "ref" so often). I plan to apply D to embedded systems, so full control is a must. But so far Walter still has the most accurate taste, according to my experience.

BTW. I consider game devs to be the most underpaid programmers, so your perspective is very precious to me.

Cheers
Piotrek
December 07, 2014
On 7 December 2014 at 08:15, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 12/6/2014 1:50 AM, Manu via Digitalmars-d wrote:
>>>
>>> I didn't fully understand Manu's issue, but it was about 'ref' not being
>>> inferred by template type deduction. I didn't understand why 'auto ref'
>>> did
>>> not work for him. I got the impression that he was trying to program in D
>>> the same way he'd do things in C++, and that's where the trouble came in.
>>
>> NO!!
>> I barely program C++ at all! I basically write C code with 'enum' and
>> 'class'. NEVER 'template', and very rarely 'virtual'.
>> Your impression is dead wrong.
>
>
> I apologize for misunderstanding you.
>
>
>> It's exactly as Marc says, we have the best tools for dealing with
>> types of any language I know, and practically none for dealing with
>> 'storage class'.
>> In my use of meta in D, 'ref' is the single greatest cause of
>> complexity, code bloat, duplication, and text mixins. I've been
>> banging on about this for years!
>>
>> I've been over it so many times.
>> I'll start over if there is actually some possibility I can convince
>> you? Is there?
>
>
> I know there's no easy way to derive a storage class from an expression. The difficulty in my understanding is why this is a great cause of problems for you in particular (and by implication not for others). There's something about the way you write code that's different.

Perhaps it's the tasks I typically perform with meta?
I'm a fairly conservative user of templates, but one place where they
shine, and I'm always very tempted to use them, is the task of
serialisation, and cross-language bindings.
Those tasks typically involve mountains of boilerplate, and D is the
only language expressive enough to start to really automate that
mechanical mess.
I've done extensive work on bindings to C/C++, Lua, and C#. They're all
subtly different tasks, but share a lot in common, and the major
characteristic which reveals my problems with things like ref, and
auto ref, is that these things aren't open to interpretation. They're
not cases where the compiler can make decisions for you (ie, auto ref
fails), and the ABI + API are strict (ie, ref must match correctly).

There's a lot more to it, but perhaps I'm among the few who have had such extensive need to engage in these tasks?
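
To make the shape of those tasks concrete, here is a minimal sketch (all names are hypothetical) of the kind of introspection binding code does: std.traits can *report* per-parameter ref-ness, but "ref T" can't be carried around as an alias, so re-emitting a matching signature ends up as string building or duplicated declarations.

    import std.traits : Parameters, ParameterStorageClass, ParameterStorageClassTuple;

    // Build a textual parameter list that preserves ref-ness, the way a
    // binding generator has to (via strings, since "ref T" is not a type).
    string describeParams(alias fn)() {
        string s;
        alias stcs  = ParameterStorageClassTuple!fn;
        alias types = Parameters!fn;
        foreach (i, T; types) {
            if (stcs[i] & ParameterStorageClass.ref_)
                s ~= "ref ";
            s ~= T.stringof;
            if (i + 1 < types.length)
                s ~= ", ";
        }
        return s;
    }

    // A hypothetical C++-style API being bound.
    void setTransform(ref float matrix, int flags);

    static assert(describeParams!setTransform == "ref float, int");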


>> I've lost faith that I am able to have any meaningful impact on issues that matter to me, and I'm fairly sure at this point that my compounded resentment and frustration actually discredit my cause, and certainly, my quality of debate.
>
>
> This is incorrect, you had enormous influence over the vector type in D! And I wish you had more.

I appreciate that. That was uncontroversial though; I didn't need to
spend months or years trying to justify my claims on that issue. I
feel like that was a known item that was just somewhere slightly down
the list, and I was able to bring it forward.
I was never in a position where I had to argue against Andrei to sell that one.

I have std.simd sitting here, and I really want to finish it, but I
still don't have the tools to do so.
I need, at least, forceinline to complete it, but that one *is*
controversial - we've talked about this for years.

GDC and LDC both have a forceinline, so I could theoretically support those compilers, but then I can't practically make use of them without some sort of attribute aliasing system; otherwise I need to triplicate the code for each compiler, just to insert a different (compiler-specific) forceinline attribute name. It'd be really great if we agreed on just one.

There is also a practical problem with GCC (perhaps it's an
incompatibility with my std.simd design), where I need to change the
SSE level between functions.
There is a GCC attribute to do this ('target'), but it would rely on,
at least, a few subtle tweaks. In this case, I need to be able to feed
a template argument to a UDA, which doesn't work because UDA
declarations seem to be parsed prior to knowledge of the function's
template args. There may be a further problem with GCC which I
haven't been able to prove yet, though.

The reason for this is that an important design goal for my work was
to be able to semi-automatically generate multiple code paths for
different SSE versions, which can be selected at runtime based on
available hardware.
It's a particularly awkward issue in C/C++, usually requiring you to
have multiple modules which are each built with different compile
flags, and then do some magic while linking to resolve the runtime
selection problem.

Anyway, I have been able to use SIMD directly in my own software with
the support we have, but I haven't been able to complete the library
that I want to produce.
I need a little more language support. I've reached out a few times,
but there hasn't been too much interest in the past. I really do wanna
finish it one of these days though!


>> There have been a couple of instances where the situation has been appropriate that I've tried to make use of auto ref, but in each case, the semantics have never been what I want.
>
>
> I don't really know what you want. Well, perhaps a better statement is 'why', not what.

Because a function is a function. It's a fundamental and ultimately *simple* element of a language, with simple and reliable behaviour: you write a function, and the compiler emits some code with a matching symbol name. There is perhaps nothing more simple or fundamental to the language.

A template is not a function. It may *generate* a function, but now
we've added many extra details which require careful consideration and
handling: where is the code emitted, or not emitted? How many
permutations are there? What about static/dynamic libraries? Calling
with various, or arbitrary, types always required careful
consideration, more so than with an explicit type.
Templates are a powerful (often dangerous) **tool**, which should only
be applied deliberately and very carefully.
Obviously auto-ref gives hard answers to some of those questions, but
it is still a template, and that instantly makes it much more complex.

Making a function that has nothing to do with templates into a template is not what I want... it couldn't be further from what I ever wanted!

The ABI is different; there are 2 functions now (or zero, until one is
called). Which means taking the address of a function with auto-ref is
a complex issue.
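
A minimal sketch of that point (hypothetical names): the plain function always has exactly one symbol and one address, while the auto ref version is a template with zero, one, or two instances depending on how it has been called:

    void plain(ref int x) { }           // an ordinary function: one symbol, fixed ABI
    void flexible()(auto ref int x) { } // a template: zero, one, or two instances

    void main() {
        auto p = &plain;        // fine: the function exists and has an address
        // auto q = &flexible;  // error: no particular instance to take the address of

        int i;
        flexible(i);            // instantiates a by-ref version
        flexible(1);            // instantiates a by-value version
    }
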
ref-ness isn't a decision I ever want the compiler to make for me. I
either want ref, or I don't, and I will state that to the compiler
explicitly.
This is especially true when interacting with other languages; while D
remains in relative infancy, I think that is an overwhelmingly common
situation when a project is modestly large.

In the situation where templates are involved, it would be nice to be
able to make that explicit statement that some type is ref or not at
the point of template instantiation, and the resolution should work
according to the well-defined rules that we are all familiar with.
Within a complex template, if logic should be performed, we have
powerful tools to do that already; nobody ever complains about
Unqual!, or PointerTarget!... these things are very easy to read and
understand. ref should fit right in there with the others, is(x ==
ref), UnRef!T, etc.
Instead, it's practically orthogonal to the language. Wrangling
anything involving storage classes is really, really tedious, and
almost always results in extensive code duplication, or text mixins
when the duplication just gets too much.

This issue really surprised me at the time, because the whole thing
was in response to a topic that I consistently raised and pushed.
Who was the customer welcoming that solution? I could never work it
out; there was nobody else that seemed particularly invested in the
problem, except the usual suite of language enthusiasts that took it
as an interesting intellectual problem to discuss. The person who
cared about that issue the most (afaict) was completely unsatisfied
with the solution. I said at the time that I would have preferred that
no action was taken, than to compound, and further set in stone, an
issue that I was already extremely critical of.


Don't double down on this mistake with scope!


>> ref is a bad design. C++'s design isn't fantastic, and I appreciate
>> that D made effort to improve on it, but we need to recognise when the
>> experiment was a failure. D's design is terrible; it's basically
>> orthogonal to the rest of the language. It's created way more
>> complicated edge cases for me than C++ references ever have. Anyone
>> who says otherwise obviously hasn't really used it much!
>> Don't double down on that mistake with scope.
>
>
> Why does your code need to care so much about to ref or not to ref? That's the central point here, I think.

That same logic could ask why I want to have 'short', or 'int' in some places, but not others. There's no absolute answer. As a software engineer, and particularly, a native software engineer, it's a fundamental part of the type system (sorry, not part of the type system!) that I must have control over.

Why offer 'ref' if you don't intend people to use it?
I put to you, why do you hate 'ref' so much? Remove it from the
language if people aren't meant to use it.

ref is just a pointer with some semantics removed (pointer re-assignment, indexing). Its use facilitates some forms of generic code which would fail otherwise, and also reduces some possibilities for end-user misuse or mistakes when using pointers (ie, the user may intend an assignment, but unsuspectingly re-assign a pointer; the result looks the same, but exposes a bug somewhere else).

It's also used as an optimisation where I require control over the ref-ness of things, but don't want to retrofit the code with a sea of '*' and '&' operators.

It's also used when interfacing with C++, where it appears in APIs extensively.

I'm sure there are many more common and useful cases where ref is a good tool.
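
A minimal sketch of two of those points (hypothetical names): ref is a non-reseatable, non-indexable view of the caller's variable, and in extern(C++) declarations it is what maps onto a C++ `T&` parameter:

    // D's ref maps onto a C++ reference parameter; the C++ side would be
    //     void scale(Vec3& v, float s);
    extern(C++) struct Vec3 { float x = 0, y = 0, z = 0; }
    extern(C++) void scale(ref Vec3 v, float s);

    void byPointer(int* p) {
        *p = 1;
        p = null;   // legal: the pointer itself can be (mis)assigned
    }

    void byRef(ref int r) {
        r = 1;      // always assigns through to the caller's int;
                    // r cannot be reseated or indexed like a pointer
    }

    void main() {
        int i;
        byPointer(&i);
        byRef(i);
        assert(i == 1);
    }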


>> My intended reply to this post was to ask you to justify making it a
>> storage class, and why the design fails as a type constructor?
>> Can we explore that direction to its point of failure?
>>
>> As a storage class, it runs the risk of doubling up the existing bloat
>> caused by ref. As a type constructor, I see no disadvantages.
>> It even addresses some of the awkward problems right at the face of
>> storage classes, like how/where the attributes actually apply. Type
>> constructors use parens; ie, const(T), and scope(T) would make that
>> matter a lot more clear.
>>
>> Apart from the storage class issue, it looks okay, but it gives me the
>> feeling that it kinda stops short.
>> Marc's proposal addressed more issues. I feel this proposal will
>> result in more edge cases than Marc's proposal.
>> The major edge case that I imagine is that since scope return values
>> can't be assigned to scope locals, that will result in some awkward
>> meta, requiring yet more special cases. I think Marc's proposal may be
>> a lot more relaxed in that way.
>
>
> The disadvantages of making it a type qualifier are:
>
> 1. far more complexity. Type constructors interact with everything, often in unanticipated ways. We spent *years* working out issues with the 'const' type qualifier, and are still doing so. Kenji just fixed another one.

What you describe is the opposite of 'complexity'. Uniform
behaviour is what people expect from a language.
I understand complexity may arise as a result of interaction with
other features, but that's a worthwhile issue to explore if you ask
me. That's complexity that can actually be addressed, rather than
pushed to the side.
Edge cases which are orthogonal to the rest of the language (ie, ref)
are a terrible idea. You don't buffer against complexity by creating a
whole new class of complexity, which we have no effective tools to
manage or mitigate.


> 2. we are never going to get users to use 'scope' qualifiers pervasively. It's been a long struggle to get 'const' used.

I don't think scope will be as important as const. I think we should
*try it*, to experimentally see how it plays out.
That said, scope should be able to be inferred quite effectively in
many, if not most, cases. Again, I think we will only be able to measure the
effectiveness of this when we try it.


> 3. we added 'inout' as a type qualifier to avoid code duplication engendered by 'const'. It hurts my brain to even think about how that might interact with 'scope' qualifiers.

inout was an interesting idea (with a terrible name!). I'm still not sure if I think it was a good idea or not, but I have found it very useful.

What's the issue? I don't quite see how inout and scope overlap (no differently than const or immutable?).


> Yes, I agree unequivocally that 'scope' as a type qualifier is more expressive and more powerful than as a storage class. Multiple inheritance is also more expressive and more powerful than single inheritance. But many times, more power perhaps isn't better than redoing the program design to use something simpler and less complex.

I don't think that's a reasonable comparison. Multiple inheritance is (I think quite well agreed) a wildly unpopular and super-complex disaster.

Comparing scope-as-a-storage-class with scope-as-a-type-constructor is nothing like comparing multiple inheritance and interfaces... interfaces aren't orthogonal to the language for a start! And they're well understood, and precedented in other languages.

Don't introduce red herrings which incite unrelated, yet strong emotional response.

My argument is that scope as a storage class is MORE COMPLEX than
scope as a type constructor, in that it is orthogonal to the language. I think
users will also find it extremely unintuitive, when they realise that
all the usual tools for interacting with types in the language are
unavailable in this special case.
I think scope will also prove to be more popular than ref, so, while
you don't hear so much about how much of a disaster ref is, you'll
start to hear all the exact same problems arise when people are trying
to use scope, because it's a far more interesting type qualifier (and
should probably be the default).

...I can't wait for 'auto scope'! ;)
December 07, 2014
On 12/6/2014 4:49 PM, Manu via Digitalmars-d wrote:
> On 7 December 2014 at 08:15, Walter Bright via Digitalmars-d
> There's a lot more to it, but perhaps I'm among the few who have had
> such extensive need to engage in these tasks?

I don't know, but it's a mystery to me what you're doing that there's no reasonable alternative, or why this is such an extensive issue for you.


>> This is incorrect, you had enormous influence over the vector type in D! And
>> I wish you had more.
>
> I appreciate that. That was uncontroversial though; I didn't need to
> spend months or years trying to justify my claims on that issue. I
> feel like that was a known item that was just somewhere slightly down
> the list, and I was able to bring it forward.
> I was never in a position where I had to argue against Andrei to sell that one.

UDA was controversial, and was one of your initiatives.


> I have std.simd sitting here, and I really want to finish it, but I
> still don't have the tools to do so.
> I need, at least, forceinline to complete it, but that one *is*
> controversial - we've talked about this for years.

I proposed a DIP to fix that, but it could not get reasonable consensus.

  http://wiki.dlang.org/DIP56

You did, after all, convince me that we need an "always inline" and a "never inline" method.



> There is also a practical problem with GCC (perhaps it's an
> incompatbility with my std.simd design), where I need to change the
> sse-level between functions.
> There is a GCC attribute to do this ('target'), but it would rely on,
> at least, a few subtle tweaks. In this case, I need to be able to feed
> a template argument to a UDA, which doesn't work because UDA
> declarations seem to be parsed prior to knowledge of functions
> template args. There may be a further problem with GCC though which I
> haven't been able to prove yet though.

I don't know why this needs to be a deduced template argument.


> The reason for this is that an important design goal for my work was
> to be able to semi-automatically generate multiple code paths for
> different SSE versions, which can be selected at runtime based on
> available hardware.

No argument there.


> I need a little more language support. I've reached out a few times,
> but there hasn't been too much interest in the past. I really do wanna
> finish it one of these days though!

I want it done, too :-)


>> I don't really know what you want. Well, perhaps a better statement is
>> 'why', not what.
>
> Because a function is a function. It's a fundamental and ultimately
> *simple* element of a language, with simple and reliable behaviour:
> you write a function, and the compiler emits some code with a matching symbol
> name. There is perhaps nothing more simple or fundamental to the
> language.
>
> A template is not a function. It may *generate* a function, but now
> we've added many extra details which require careful consideration and
> handling: where is the code emitted, or not emitted? How many
> permutations are there? What about static/dynamic libraries? Calling
> with various, or arbitrary, types always required careful
> consideration, more so than with an explicit type.
> Templates are a powerful (often dangerous) **tool**, which should only
> be applied deliberately and very carefully.
> Obviously auto-ref gives hard answers to some of those questions, but
> it is still a template, and that instantly makes it much more complex.

I view a template function as a function that takes both compile-time and run-time arguments, not as a function generator. (The latter is an implementation artifact.)


> Making a function that has nothing to do with templates into a
> template is not what I want... it couldn't be further from what I ever
> wanted!

I don't see any problem with:

   T func()(runtime args ...) { ... }

That idiom has found various uses in practice.
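
One concrete (hypothetical) instance of the idiom: the empty compile-time parameter list makes the function a template, so attributes like @safe/pure/nothrow are inferred from the body instead of having to be spelled out:

    // A zero-parameter template "function": same call syntax, inferred attributes.
    auto clamp01()(double x) {
        return x < 0 ? 0.0 : (x > 1 ? 1.0 : x);
    }

    @safe pure nothrow unittest {
        // Callable here because the instantiation was inferred @safe pure nothrow.
        assert(clamp01(1.5) == 1.0);
    }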


> The ABI is different; there are 2 functions now (or zero, until one is
> called). Which means taking the address of a function with auto-ref is
> a complex issue.
> ref-ness isn't a decision I ever want the compiler to make for me.

But why?

> I either want ref, or I don't, and I will state that to the compiler
> explicitly.
> This is especially true when interacting with other languages; while D
> remains in relative infancy, I think that is an overwhelmingly common
> situation when a project is modestly large.

The set of other languages that support ref parameters is [C++]. There are no others I know of that are accessible from D. Is it commonplace in C++ to have these overloads:

    T foo(S s);
    T foo(S& s);

? Such a practice would at least raise an eyebrow for me.


> In the situation where templates are involved, it would be nice to be
> able to make that explicit statement that some type is ref or not at
> the point of template instantiation, and the resolution should work
> according to the well-defined rules that we are all familiar with.
> Within a complex template, if logic should be performed, we have
> powerful tools to do that already; nobody ever complains about
> Unqual!, or PointerTarget!... these things are very easy to read and
> understand. ref should fit right in there with the others, is(x ==
> ref), UnRef!T, etc.

http://dlang.org/traits.html#isRef

> Instead, it's practically orthogonal to the language. Wrangling
> anything involving storage classes is really, really tedious, and
> almost always results in extensive code duplication, or text mixins
> when the duplication just gets too much.
>
> This issue really surprised me at the time, because the whole thing
> was in response to a topic that I consistently raised and pushed.
> Who was the customer welcoming that solution? I could never work it
> out; there was nobody else that seemed particularly invested in the
> problem, except the usual suite of language enthusiasts that took it
> as an interesting intellectual problem to discuss. The person who
> cared about that issue the most (afaict) was completely unsatisfied
> with the solution. I said at the time that I would have preferred that
> no action was taken, than to compound, and further set in stone, an
> issue that I was already extremely critical of.
>
>
> Don't double down on this mistake with scope!

The thing is, I don't understand *why* you want to wrangle storage classes. What is the coding pattern?


>> Why does your code need to care so much about to ref or not to ref? That's
>> the central point here, I think.
>
> That same logic could ask why I want to have 'short', or 'int' in some
> places, but not others. There's no absolute answer. As a software
> engineer, and particularly, a native software engineer, it's a
> fundamental part of the type system (sorry, not part of the type
> system!) that I must have control over.

I can give practical technical reasons why for 'short' or 'int'.


> Why offer 'ref' if you don't intend people to use it?
> I put to you, why do you hate 'ref' so much? Remove it from the
> language if people aren't meant to use it.

That's the wrong question. I could ask "why do you hate ref as a storage class"?


> ref is just a pointer with some semantics removed (pointer
> re-assignment, indexing). Its use facilitates some forms of generic
> code which would fail otherwise, and also reduces some possibilities
> for end-user misuse or mistakes when using pointers (ie, the user may
> intend an assignment, but unsuspectingly re-assign a pointer; the result
> looks the same, but exposes a bug somewhere else).
>
> It's also used as an optimisation where I require control over the
> ref-ness of things, but don't want to retrofit the code with a sea of
> '*' and '&' operators.
>
> It's also used when interfacing with C++, where it appears in APIs extensively.
>
> I'm sure there are many more common and useful cases where ref is a good tool.

And D's ref works for all that.


>> The disadvantages of making it a type qualifier are:
>>
>> 1. far more complexity. Type constructors interact with everything, often in
>> unanticipated ways. We spent *years* working out issues with the 'const'
>> type qualifier, and are still doing so. Kenji just fixed another one.
>
> What you describe is the opposite of 'complexity'. Uniform
> behaviour is what people expect from a language.
> I understand complexity may arise as a result of interaction with
> other features, but that's a worthwhile issue to explore if you ask
> me. That's complexity that can actually be addressed, rather than
> pushed to the side.
> Edge cases which are orthogonal to the rest of the language (ie, ref)
> are a terrible idea. You don't buffer against complexity by creating a
> whole new class of complexity, which we have no effective tools to
> manage or mitigate.

Uniform behavior is a nice ideal, and sounds good, but it never happens in practice with programming languages. We can't even get 'int' to behave uniformly! (Quick: what is -int.min ?)

You might also consider the "uniformity" of the ref type in C++. It's awful - it's a special case EVERYWHERE in the C++ type system! It just does not fit as a type qualifier.


>> 2. we are never going to get users to use 'scope' qualifiers pervasively.
>> It's been a long struggle to get 'const' used.
>
> I don't think scope will be as important as const. I think we should
> *try it*, to experimentally see how it plays out.
> That said, scope should be able to be inferred quite effectively in
> many, if not most, cases. Again, I think we will only be able to measure the
> effectiveness of this when we try it.

'scope' is so important we may even consider it to be the default.


>> 3. we added 'inout' as a type qualifier to avoid code duplication engendered
>> by 'const'. It hurts my brain to even think about how that might interact
>> with 'scope' qualifiers.
>
> inout was an interesting idea (with a terrible name!). I'm still not
> sure if I think it was a good idea or not, but I have found it very
> useful.
>
> What's the issue? I don't quite see how inout and scope overlap (no
> differently than const or immutable?).

Every type will have scope and non-scope variants, combinatorially with the other type qualifiers.


>> Yes, I agree unequivocally that 'scope' as a type qualifier is more
>> expressive and more powerful than as a storage class. Multiple inheritance
>> is also more expressive and more powerful than single inheritance. But many
>> times, more power perhaps isn't better than redoing the program design to
>> use something simpler and less complex.
>
> I don't think that's a reasonable comparison. Multiple inheritance is
> (I think quite well agreed) a wildly unpopular and super-complex
> disaster.
>
> Comparing scope-as-a-storage-class with scope-as-a-type-constructor is
> nothing like comparing multiple inheritance and interfaces...
> interfaces aren't orthogonal to the language for a start! And they're
> well understood, and precedented in other languages.
>
> Don't introduce red herrings which incite unrelated, yet strong
> emotional response.

Nobody has successfully introduced a type qualifier like scope.


> My argument is that scope as a storage class is MORE COMPLEX than
> scope as a type constructor, in that it is orthogonal to the language. I think
> users will also find it extremely unintuitive, when they realise that
> all the usual tools for interacting with types in the language are
> unavailable in this special case.
> I think scope will also prove to be more popular than ref, so, while
> you don't hear so much about how much of a disaster ref is, you'll
> start to hear all the exact same problems arise when people are trying
> to use scope, because it's a far more interesting type qualifier (and
> should probably be the default).
>
> ...I can't wait for 'auto scope'! ;)

I guess we'll see! I agree that this design is untried, and we don't really know how it will work out.

December 07, 2014
"Walter Bright"  wrote in message news:m60oa4$kd5$1@digitalmars.com...

> > I have std.simd sitting here, and I really want to finish it, but I
> > still don't have the tools to do so.
> > I need, at least, forceinline to complete it, but that one *is*
> > controversial - we've talked about this for years.
>
> I proposed a DIP to fix that, but it could not get reasonable consensus.
>
>    http://wiki.dlang.org/DIP56
>
> You did, after all, convince me that we need an "always inline" and a "never inline" method.

From the DIP:

> If a pragma specifies always inline, whether or not the target function(s) are actually inlined is
> implementation defined, although the implementation will be expected to inline it if practical.
> Implementations will likely vary in their ability to inline.

I expect a proposal in which 'always' means "inline or give an error" would be much better received.

> The thing is, I don't understand *why* you want to wrangle storage classes. What is the coding pattern?

> That's the wrong question. I could ask "why do you hate ref as a storage class"?

I suspect the answer is that sometimes it is useful to ensure some parameters are passed by reference, not by value.  With ref as a type, this would be trivial:

template MyParamType(T)
{
   static if (someCriteria!T)
       alias MyParamType = ref(T);
   else
       alias MyParamType = T;
}

void myFunction(T, U, V)(const MyParamType!T t, const MyParamType!U u, const MyParamType!V v)
{
   ...
} 
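
For contrast, a sketch of roughly what the same intent forces today, with ref as a storage class (someCriteria is the same hypothetical trait as above): the parameter list itself must differ, so the body ends up duplicated or generated with a string mixin.

    enum someCriteria(T) = T.sizeof > 16;   // hypothetical criterion

    void myFunctionToday(T)(ref const T t) if (someCriteria!T)
    {
        // ... body ...
    }

    void myFunctionToday(T)(const T t) if (!someCriteria!T)
    {
        // ... the same body, duplicated ...
    }

    unittest
    {
        static struct Big { long[4] a; }
        Big b;
        myFunctionToday(b);    // large type: picks the ref overload
        myFunctionToday(42);   // small type: picks the by-value overload
    }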
