July 25, 2015
On Saturday, 25 July 2015 at 03:11:59 UTC, Walter Bright wrote:
> On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
>> I confess that I've always thought that QueryInterface was a _horrible_ idea,
>
> Specifying every interface that a type must support at the top of the hierarchy is worse. Once again, Exception Specifications.

Well, in most code, I would think that you should be getting the actual object with its full type and converting that to an interface to use for whatever you're using rather than trying to convert something that's one interface into another. It usually doesn't even make sense to attempt it. So, the only place that has all of the functions is the actual class, which _has_ to have every function that it implements. I certainly wouldn't argue for trying to combine the interfaces themselves in most cases, since they're usually distinct, and combining them wouldn't make sense. But similarly, it doesn't usually make sense to convert one interface to a totally distinct one and have any expectation that that conversion is going to work, because they're distinct.

Most code I've dealt with that is at all clean doesn't cast from a base type to a derived type except in rare circumstances, and converting across the interface hierarchy never happens.

I've never seen QueryInterface used in a way that I wouldn't have considered messy or simply an annoyance that you have to deal with because you're dealing with COM and can't use pure C++ and just implicitly cast from the derived type to the interface/abstract type that a particular section of code is using.

But maybe I'm just not looking at the problem the right way. I don't know.

> I suspect that 3 or 4 years after concepts and traits go into wide use, there's going to be a quiet backlash against them. Where, once again, they'll be implementing D's semantics. Heck, C++17 could just as well be renamed C++D :-) given all the D semantics they're quietly adopting.

Well, even if concepts _were_ where it was at, at least D basically lets you implement them or do something far more lax or ad hoc, because template constraints and static if give you a _lot_ of flexibility. We're not tied down in how we go about writing template constraints or even in using the function level to separate out functionality, because we can do that internally with static if where appropriate. So, essentially, we're in a great place regardless.

On the other hand, if we had built template constraints around concepts or interfaces or anything like that, then our hands would be tied. By making them accept any boolean expression that's known at compile time, we have maximum flexibility. What we do with them then becomes a matter of best practices.

The downside is that it's sometimes hard to decipher why a template constraint is failing, and the compiler is less able to help us with stuff like that, since it's not rigid like it would be with a concept supported directly by the language, but the sheer simplicity and flexibility of it makes it a major win anyway IMHO.
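To illustrate the point about flexibility, here's a minimal sketch (the function name and logic are made up for this example, not from Phobos): the constraint is just an arbitrary compile-time boolean expression, and static if can then vary the implementation inside a single function.

```d
import std.range : isInputRange, isForwardRange;

// Hypothetical example: one loose constraint on the outside, with
// static if selecting the implementation strategy internally.
size_t countElems(R)(R r)
    if (isInputRange!R) // any boolean expression known at compile time
{
    static if (isForwardRange!R)
    {
        // A forward range can be saved and traversed on a copy,
        // leaving the caller's range untouched.
        size_t n;
        for (auto c = r.save; !c.empty; c.popFront())
            ++n;
        return n;
    }
    else
    {
        // An input range can only be consumed once.
        size_t n;
        for (; !r.empty; r.popFront())
            ++n;
        return n;
    }
}

void main()
{
    assert(countElems([1, 2, 3]) == 3); // arrays are forward ranges
}
```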

- Jonathan M Davis
July 25, 2015
On Saturday, 25 July 2015 at 02:55:16 UTC, Bruno Queiroga wrote:
> On Saturday, 25 July 2015 at 01:15:52 UTC, Walter Bright wrote:
>> On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
>>> Could not the compiler just issue a warning
>>
>> Please, no half features. Code should be correct or not.
>>
>  ...
>> ... (consider if bar was passed as an alias)
>
>
> trait S1 { void m1(); }
> trait S2 : S1 { void m2(); }
>

For completeness:

struct Struct1 { void m1(){} void m2(){} void m3(){} void m4(){} }

trait S1 { void m1(); }
trait S2 : S1 { void m2(); }
trait S3 : S2 { void m3(); }
trait S4 { void m4(); } // no "inheritance"

void bar(S : S2)(S s) {
    s.m1(); // ok...
    s.m2(); // ok...
    s.m3(); // ERROR!
    (cast(S3) s).m3();    // OK!  (Struct1 has m3())
    // (cast(S4) s).m4(); // OK!! (Struct1 has m4())
}

template foo(S : S1) {
    static void foo(alias void f(S))(S s) {
        s.m1(); // ok...
        s.m2(); // ERROR: S1 is the base trait of S
        f(s);   // OK! typeof(s) is (compatible with) S of f(S)
                //                  (structurally or nominally)
    }
}

void main(string[] args) {
    Struct1 struct1;

    alias foo!Struct1 fooSt1;
    alias bar!Struct1 barSt1;
	
    fooSt1!barSt1(struct1);
}


Is this reasonable?

July 25, 2015
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
> On 7/23/2015 3:12 PM, Dicebot wrote:
>> On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
>>> OK, I jumped into the middle of this discussion so probably I'm speaking
>>> totally out of context...
>>
>> This is exactly one major advantage of Rust traits I have been trying to
>> explain, thanks for putting it up in much more understandable way :)
>
> Consider the following:
>
>     int foo(T: hasPrefix)(T t) {
>        t.prefix();    // ok
>        bar(t);        // error, hasColor was not specified for T
>     }
>
>     void bar(T: hasColor)(T t) {
>        t.color();
>     }
>
> Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into.
>
> I can see these possibilities:
>
> 1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias).
>
> 2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations.
>
> 3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive.
>
>
> Anyone know how Rust traits and C++ concepts deal with this?

I don't know about this. The problem is that if you don't list everything in the constraint, then the user is going to get an error buried in your templated code somewhere rather than in their code, which is _not_ user friendly and is why we usually try to put everything required in the template constraint. On the other hand, you're very much right in that this doesn't scale if you have enough levels of template constraints, especially if some of the constraints in the functions being called internally change. And yet, the caller needs to know what the requirements of the template or templated function actually are when they pass it something. So, it does kind of need to be at the top level from that aspect of usability as well. So, this is just plain ugly regardless.

One option which would work at least some of the time would be to do something like

void foo(T)(T t)
    if(hasPrefix!T && is(typeof(bar(t))))
{
    t.prefix();
    bar(t);
}

void bar(T)(T t)
    if(hasColor!T)
{
    t.color();
}

then you don't have to care what the current constraint for bar is, and it still gets checked in foo's template constraint.

...

Actually, I just messed around with some of this to see what error messages you get when foo doesn't check for bar's constraints in its template constraint, and it's a _lot_ better than it used to be. This code

void foo(T)(T t)
    if(hasPrefix!T)
{
    t.prefix();
    bar(t);
}

void bar(T)(T t)
    if(hasColor!T)
{
    t.color();
}

struct Both { void prefix() { } void color() { } }

struct OneOnly { void prefix() { } }

enum hasPrefix(T) = __traits(hasMember, T, "prefix");
enum hasColor(T) = __traits(hasMember, T, "color");

void main()
{
    foo(Both.init);
    bar(Both.init);
    foo(OneOnly.init);
}

results in these error messages:

q.d(5): Error: template q.bar cannot deduce function from argument types !()(OneOnly), candidates are:
q.d(8):        q.bar(T)(T t) if (hasColor!T)
q.d(25): Error: template instance q.foo!(OneOnly) error instantiating

It tells you exactly which line in your code is wrong (which it didn't used to when the error was inside the template), and it clearly gives you the template constraint which is failing, whereas if foo tests for bar in its template constraint, you get this

q.d(25): Error: template q.foo cannot deduce function from argument types !()(OneOnly), candidates are:
q.d(1):        q.foo(T)(T t) if (hasPrefix!T && is(typeof(bar(t))))

And that doesn't tell you anything about what bar requires. Actually putting bar's template constraint in foo's template constraint would fix that, but then you wouldn't necessarily know which is failing, and you have the maintenance problem caused by having to duplicate bar's constraint.

So, I actually think that how the current implementation reports errors makes it so that maybe it's _not_ a good idea to put all of the sub-constraints within the top-level constraint, because it actually makes it harder to figure out what you've done wrong.

Unfortunately, it probably requires that you look at the source code of the templated function that you're calling regardless, since the error message doesn't actually make it clear that it's the argument that you passed to foo that's being passed to bar rather than an actual bug in foo (and to make matters more complicated, it could actually be something that came from what you passed to foo rather than actually being what you passed in). So, maybe we could improve the error messages further to make it clear that it was what you passed in or something about where it came from so that you wouldn't necessarily have to look at the source code, and if so, I think that that solves the problem reasonably well.

It would avoid the maintenance problem of having to propagate the constraints, and it would actually give clearer error messages than propagating the constraints. And having overly complicated template constraints is one of the most annoying aspects of dealing with template constraints, because it makes it a lot harder to figure out why they're failing. So, _not_ putting the sub-constraints in the top-level constraint could make it easier to figure out what's gone wrong.

So, honestly, I think that we have the makings here of a far better solution than trying to put everything in the top-level template constraint. This could be a good part of the solution that we've needed to improve error-reporting associated with template constraints.

In any case, looking at this, I have to agree with you that this is the same problem you get with checked exceptions / exceptions specifications - only worse really, because you can't do "throws Exception" and be done with it like you can in Java (as hideous as that is). Rather, you're forced to do the equivalent of listing all of the exception types being thrown and maintain that list as the code changes - i.e. you have to make the top-level template constraint list all of the sub-constraints and keep that list up-to-date as the sub-constraints change, which is a mess, especially with deep call stacks of templated functions.

- Jonathan M Davis
July 25, 2015
On Saturday, 25 July 2015 at 05:27:48 UTC, Walter Bright wrote:
> On 7/24/2015 11:50 AM, H. S. Teoh via Digitalmars-d wrote:
>> The only way to achieve this is to explicitly
>> negate every condition in all other overloads:
>
> Another way is to list everything you accept in the constraint, and then separate out the various implementations in the template body using static if.
>
> It's a lot easier making the documentation for that, too.

I've considered off and on arguing that a function like find should have a top level template that has the constraints that cover all of the overloads, and then either putting each of the individual functions with their own constraints internally or use separate static ifs within a single function (or some combination of the two). That way, you end up with a simple template constraint that the user sees rather than the huge mess that you get now - though if you still have individual functions within that outer template, then that doesn't really fix the overloading problem except insomuch as the common portion of their template constraints (which is then in the outer template's constraint) would then not have to be repeated.
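A rough sketch of what I mean (hypothetical code, not Phobos's actual find; the predicate-based signature and bodies are simplified for illustration): the outer eponymous template carries the user-facing constraint, and the overloads keep their narrower constraints inside.

```d
import std.range : isInputRange, isForwardRange;

// The user only sees the outer template's simple interface; the
// per-overload constraints stay internal.
template find(alias pred)
{
    // Input-range-only path: the range gets consumed as we search.
    auto find(R)(R haystack)
        if (isInputRange!R && !isForwardRange!R)
    {
        for (; !haystack.empty; haystack.popFront())
            if (pred(haystack.front))
                break;
        return haystack;
    }

    // Forward-range path (could save/re-traverse if it needed to).
    auto find(R)(R haystack)
        if (isForwardRange!R)
    {
        for (; !haystack.empty; haystack.popFront())
            if (pred(haystack.front))
                break;
        return haystack;
    }
}

void main()
{
    assert(find!(a => a == 2)([1, 2, 3]) == [2, 3]);
}
```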

However, when anyone has brought up anything like this, Andrei has argued against it, though I think that those arguments had to do primarily with the documentation, because the person suggesting the change was looking for simplified documentation, and Andrei thought that the ddoc generation should be smart enough to be able to combine things for you. So, maybe it wouldn't be that hard to convince him of what I'm suggesting, but I don't know. I haven't tried yet. It's just something that's occurred to me from time to time, and I've wondered if we should change how we go about things in a manner along those lines. It could help with the documentation and understanding the template constraint as well as help reduce the pain with the overloads. Andrei has definitely been against overloading via static if though whenever that suggestion has been made. I think that he thinks that if you do that, it's a failure of template constraints - though if you use an outer template and then overload the function internally, then you're still using template constraints rather than static if, and you get simplified template constraints anyway.

So, maybe we should look at something along those lines rather than proliferating the top-level function overloading like we're doing now.

- Jonathan M Davis
July 25, 2015
On Saturday, 25 July 2015 at 00:28:19 UTC, Walter Bright wrote:
> On 7/24/2015 3:07 PM, Jonathan M Davis wrote:

> D has done a great job of making unit tests the rule, rather than the exception.

Yeah. I wonder what would happen with some of the folks that I've worked with who were anti-unit testing if they were programming in D. It would be more or less shoved in their face at that point rather than having it in a separate set of code somewhere that they could ignore, and it would be so easy to put them in there that it would have to be embarrassing on some level at least if they didn't write them. But they'd probably still argue against them and argue that D was stupid for making them so prominent... :(

I do think that our built-in unit testing facilities are a huge win for us though. It actually seems kind of silly at this point that most other languages don't have something similar given how critical they are to high quality, maintainable code.
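For anyone reading along who hasn't used the feature, this is all it takes (trivial made-up function): a unittest block sits right next to the code it exercises and runs when the module is compiled with -unittest.

```d
// D's built-in unit tests: no framework, no separate test tree.
int square(int x) { return x * x; }

unittest
{
    assert(square(3) == 9);
    assert(square(-2) == 4);
}
```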

>> We should be ashamed when our code is not as close to 100% code coverage as is
>> feasible (which is usually 100%).
>
> Right on, Jonathan!

I must say that this is a rather odd argument to be having though, since normally I'm having to argue that 100% test coverage isn't enough rather than that code needs to have 100% coverage at all (e.g. how range-based algorithms need to be tested with both value type ranges and reference type ranges, which doesn't increase the code coverage at all but does catch bugs with how save is used, and without that, those bugs won't be caught). So, having to argue that all code should have 100% code coverage (or as close to it as is possible anyway) is kind of surreal. I would have thought that that was a given at this point. The real question is how far you need to go past that to ensure that your code works correctly.

- Jonathan M Davis
July 25, 2015
On Saturday, 25 July 2015 at 05:29:39 UTC, Jonathan M Davis wrote:
> Well, even if concepts _were_ where it was at, at least D basically lets you implement them or do something far more lax or ad hoc, because template constraints and static if give you a _lot_ of flexibility. We're not tied down in how we go about writing template constraints or even in using the function level to separate out functionality, because we can do that internally with static if where appropriate. So, essentially, we're in a great place regardless.

The point of having a type system is to catch as many mistakes at compile time as possible. The primary purpose of a type system is to reduce flexibility.

July 25, 2015
On 7/24/2015 11:10 PM, Jonathan M Davis wrote:
> So, maybe we should look at something along those lines rather than
> proliferating the top-level function overloading like we're doing now.

Consider the following pattern, which I see often in Phobos:

    void foo(T)(T t) if (A) { ... }
    void foo(T)(T t) if (!A && B) { ... }

from a documentation (i.e. user) perspective. Now consider:

    void foo(T)(T t) if (A || B)
    {
         static if (A) { ... }
         else static if (B) { ... }
         else static assert(0);
    }

Makes a lot more sense to the user, who just sees one function that needs A or B, and doesn't see the internal logic.

July 25, 2015
On 7/25/2015 12:08 AM, Jonathan M Davis wrote:
> I must say that this is a rather odd argument to be having though, since
> normally I'm having to argue that 100% test coverage isn't enough rather than
> that code needs to have 100% (e.g. how range-based algorithms need to be tested
> with both value type ranges and reference type ranges, which doesn't increase
> the code coverage at all but does catch bugs with how save is used, and without
> that, those bugs won't be caught). So, having to argue that all code should have
> 100% code coverage (or as close to it as is possible anyway) is kind of surreal.
> I would have thought that that was a given at this point. The real question is
> how far you need to go past that to ensure that your code works correctly.

It's still unusual to have 100% coverage in Phobos, and this is not because it is hard. Most of the time, it is easy to do. It's just that nobody checks it.

Although we have succeeded in making unit tests part of the culture, the next step is 100% coverage.

I know that 100% unit test coverage hardly guarantees code correctness. However, since I started using code coverage analyzers in the 1980s, the results are surprising - code with 100% test coverage has at LEAST an order of magnitude fewer bugs showing up in the field. It's surprisingly effective.
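For reference, this is how the built-in coverage analyzer is driven (module name is made up for the example): compile with -cov, run the resulting binary, and dmd writes a .lst file per module with a per-line execution count.

```
# Instrument, run, and inspect coverage (dmd's -main supplies an
# empty main so a library module can run standalone):
dmd -cov -unittest -main mymodule.d
./mymodule
cat mymodule.lst   # each line is prefixed with its execution count;
                   # lines marked 0000000 were never executed
```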

I would have had a LOT more trouble shipping the Warp project if I hadn't gone with 100% coverage from the ground up. Nearly all the bugs it had in the field were due to my misunderstandings of the peculiarities of gpp - the code had worked as I designed it.

This is a huge reason why I want to switch to ddmd. I want to improve the quality of the compiler with unit tests. The various unit tests schemes I've tried for C++ are all ugly, inconvenient, and simply a bitch. It's like trying to use a slide rule after you've been given a calculator.

(I remember the calculator revolution. It happened my freshman year at college. September 1975 had $125 slide rules in the campus bookstore. December they were at $5 cutout prices, and were gone by January. I never saw anyone use a slide rule again. I've never seen a technological switchover happen so fast, before or since.)

July 25, 2015
On Saturday, 25 July 2015 at 08:48:40 UTC, Walter Bright wrote:
> On 7/24/2015 11:10 PM, Jonathan M Davis wrote:
>> So, maybe we should look at something along those lines rather than
>> proliferating the top-level function overloading like we're doing now.
>
> Consider the following pattern, which I see often in Phobos:
>
>     void foo(T)(T t) if (A) { ... }
>     void foo(T)(T t) if (!A && B) { ... }
>
> from a documentation (i.e. user) perspective. Now consider:
>
>     void foo(T)(T t) if (A || B)
>     {
>          static if (A) { ... }
>          else static if (B) { ... }
>          else static assert(0);
>     }
>
> Makes a lot more sense to the user, who just sees one function that needs A or B, and doesn't see the internal logic.

Yeah, though, I believe that Andrei has argued against that every time that someone suggests doing that. IIRC, he wants ddoc to do that for you somehow rather than requiring that we write code that way.

And from a test perspective, it's actually a bit ugly to take function overloads and turn them into static ifs, because instead of having separate functions that you can put unittest blocks under, you have to put all of those tests in a single unittest block or put the unittest blocks in a row with comments on them to indicate which static if branch they go with. It also has the problem that the function can get _way_ too long (e.g. putting all of the overloads of find in one function would be a really bad idea).
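Concretely (a contrived sketch with made-up overloads), separate overloads let each unittest block sit directly under the code it tests, which you lose once everything is folded into one function with static ifs:

```d
// Each overload gets its own adjacent unittest block.
void foo(T)(T t)
    if (is(T == int))
{ /* int-specific implementation */ }

unittest
{
    foo(42); // exercises the int overload
}

void foo(T)(T t)
    if (is(T == string))
{ /* string-specific implementation */ }

unittest
{
    foo("hi"); // exercises the string overload
}
```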

Alternatively, you could do something like

template foo(T)
    if(A || B)
{
    void foo()(T t)
        if(A)
    {}

    void foo()(T t)
        if(B)
    {}
}

which gives you the simplified template constraint for the documentation, though for better or worse, you'd still get the individual template constraints listed separately for each overload - though given how often each overload needs an explanation, that's not necessarily bad.

And in many cases, what you really have is overlapping constraints rather than truly distinct ones. So, you'd have something like

auto foo(alias pred, R)(R r)
    if(testPred!pred && isInputRange!R && !isForwardRange!R)
{}

auto foo(alias pred, R)(R r)
    if(testPred!pred && isForwardRange!R)
{}

and be turning it into something like

template foo(alias pred)
    if(testPred!pred)
{
    auto foo(R)(R r)
        if(isInputRange!R && !isForwardRange!R)
    {}

    auto foo(R)(R r)
        if(isForwardRange!R)
    {}
}

So, part of the template constraint gets factored out completely. And if you want to factor it out more than that but still don't want to use static if because of how it affects the unit tests, or because you don't want the function to get overly large, then you can just forward it to another function. e.g.

auto foo(alias pred, R)(R r)
    if(testPred!pred && isInputRange!R)
{
    return _foo(pred, r);
}

auto _foo(alias pred, R)(R r)
    if(!isForwardRange!R)
{}

auto _foo(alias pred, R)(R r)
    if(isForwardRange!R)
{}

or go for both the outer template and forwarding, and do

template foo(alias pred)
    if(testPred!pred)
{
    auto foo(R)(R r)
        if(isInputRange!R)
    {
        return _foo(pred, r);
    }

    auto _foo(R)(R r)
        if(!isForwardRange!R)
    {}

    auto _foo(R)(R r)
        if(isForwardRange!R)
    {}
}

We've already created wrapper templates for at least some of the functions in Phobos so that you can partially instantiate them - e.g.

alias myMap = map!(a => a.func());

So, we're already partially moving stuff up a level in some cases. We just haven't used it as a method to simplify the main template constraint that the user sees or to simplify overloads.

I do think that it can make sense to put very similar overloads in a single function with static if branches like you're suggesting, but I do think that it's a bit of a maintenance issue to do it for completely distinct overloads - especially if there are several of them rather than just a couple. But it's still possible to combine their template constraints at a higher level and have overloaded functions rather than simply using static ifs.

- Jonathan M Davis
July 25, 2015
On 7/25/2015 2:14 AM, Jonathan M Davis wrote:
> I do think that it can make sense to put very similar overloads in a single
> function with static if branches like you're suggesting, but I do think that
> it's a bit of a maintenance issue to do it for completely distinct overloads -
> especially if there are several of them rather than just a couple. But it's
> still possible to combine their template constraints at a higher level and have
> overloaded functions rather than simply using static ifs.

I also sometimes see:

   void foo(T)(T t) if (A && B) { ... }
   void foo(T)(T t) if (A && !B) { ... }

The user should never have to see the B constraint in the documentation. This should be handled internally with static if.