July 25, 2015
On 7/24/15 2:56 AM, Walter Bright wrote:
> On 7/23/2015 10:49 PM, Tobias Müller wrote:
>> Walter Bright <newshound2@digitalmars.com> wrote:
>>> I know a lot of the programming community is sold on exclusive
>>> constraints (C++ concepts, Rust traits) rather than inclusive ones (D
>>> constraints). What I don't see is a lot of experience actually using
>>> them
>> long term. They may not turn out so well, like Exception Specifications (ES).
>>
>> Haskell has had type classes since ~1990.
>
> Haskell is sometimes described as a bondage-and-discipline language.
> Google it if you don't believe me :-) Such languages have their place
> and adherents, but I don't think D is directed that way.
>
> Exception Specifications were proposed for Java and C++ by smart,
> experienced programmers. They looked great on paper, and in the simple
> examples in the proposals. Their unfit nature only emerged years
> later. Concepts and traits appear to me to suffer from the same fault.

FWIW I think traits are better than concepts. -- Andrei

July 25, 2015
On 7/24/15 2:50 PM, H. S. Teoh via Digitalmars-d wrote:
> Maybe as a Phobos *user* you perceive that overloading with sig
> constraints is nice and clean... But as someone who was foolhardy enough
> once to attempt to sort out the tangled mess that is the sig constraints
> of toImpl overloads, I'm getting a rather different perception of the
> situation.

I think we're in good shape there. -- Andrei

July 25, 2015
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
> On 7/25/2015 12:19 AM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang@gmail.com> wrote:
>> The point of having a type system is to catch as many mistakes at compile time
>> as possible. The primary purpose of a type system is to reduce flexibility.
>
> Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error.
>
> The idea that Rust traits check at compile time and D does not is a total misunderstanding.
>
>
>
> BTW, you might want to remove the UTF-8 characters from your user name. Evidently, NNTP doesn't do well with them.

I think the point is that trait-based constraints force compilation errors to be raised at the call site, rather than from somewhere deep within a template expansion. Template errors are effectively stack traces from duck-typed, compile-time programs. Library authors can't rely on the type checker to catch mistakes that may only show up at expansion time in client programs.
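To make that concrete, here is a minimal sketch (illustrative names, not from
the thread). Both failures are compile-time errors, as Walter says; the
difference is where they are reported:

    import std.range : isInputRange;

    // Unconstrained: duck typing. A bad argument is rejected only when
    // the body is type-checked at instantiation, so the error points
    // deep into the library code.
    void consume(R)(R r)
    {
        while (!r.empty)    // error would be reported on this line
            r.popFront();
    }

    // Constrained: a bad argument fails overload resolution, so the
    // error is reported at the call site instead.
    void consumeChecked(R)(R r) if (isInputRange!R)
    {
        while (!r.empty)
            r.popFront();
    }

    void main()
    {
        struct NotARange {}
        // consume(NotARange());        // error inside consume's body
        // consumeChecked(NotARange()); // error at this call site
    }
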
July 25, 2015
On 7/24/15 3:26 PM, Walter Bright wrote:
> On 7/24/2015 11:39 AM, Jacob Carlborg wrote:
>> Perhaps it might be a good idea to allow setting a predefined version
>> identifier, e.g. setting "linux" on Windows just to see that it compiles.
>> Think of it like the way the "debug" statement can be used as an escape
>> hatch for pure functions.
>
>
> I don't want to encourage "if it compiles, ship it!" I've strongly
> disagreed with the C++ concepts folks on that issue, and they've
> downvoted me to hell on it, too :-)
>
> I get the impression that I'm the only one who thinks exclusive traits
> are more of a problem than a solution. It's déjà vu all over again with
> Exception Specifications. So, one of:
>
> 1. I'm dead wrong.
> 2. I fail to explain my argument properly (not the first time that's
> happened, fer sure).
> 3. People strongly want to believe in traits.
> 4. Smart people say it tastes great and is less filling, so there's a
> bandwagon effect.
> 5. The concepts/traits people have done a fantastic job convincing
> people that the emperor is wearing the latest fashion :-)
>
> It's also clear that traits work very well "in the small", i.e. in
> specifications of the feature, presentation slide decks, tutorials, etc.
> Just like Exception Specifications did. It's the complex hierarchies
> where they fell apart.

It would be a mistake to lump concepts and traits together. Traits have been used at large scale in Scala with great results (my understanding is they're similar to Rust's). Scala-style traits would marginally improve D, but we already have a competing mechanism in the form of template constraints. I consider constraints more powerful; Odersky seems to think the two are about as powerful. -- Andrei
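To illustrate the kind of power meant here (a sketch with illustrative names):
a D constraint is an arbitrary compile-time boolean predicate, so it can
combine structural checks with conditions that a nominal trait bound does not
directly express.

    import std.range : ElementType, isInputRange;
    import std.traits : isNumeric;

    // The constraint below is an arbitrary compile-time predicate: a
    // structural range check combined with a test on the element type.
    auto total(R)(R r)
        if (isInputRange!R && isNumeric!(ElementType!R))
    {
        ElementType!R sum = 0;
        foreach (e; r)
            sum += e;
        return sum;
    }

    unittest
    {
        assert(total([1, 2, 3]) == 6);
    }
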
July 25, 2015
On 7/24/15 6:09 PM, Artur Skawina via Digitalmars-d wrote:
> On 07/24/15 23:32, Jonathan M Davis via Digitalmars-d wrote:
>> On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
>>> The difference is that right now the developer has to write a unit-test per function that uses `hasPrefix`, otherwise the code might not even be verified to compile. 100% unit-test coverage is not going to happen in practice, and just like with docs, making things easier and reducing boilerplate to a minimum would improve the situation dramatically.
>>
>> But you see, this is exactly the wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
>
> How exactly does making it harder to write tests translate into
> having better coverage? Why is requiring the programmer to write
> unnecessary, redundant, and potentially buggy tests preferable?

False choice. -- Andrei

July 25, 2015
Sorry for the somewhat delayed answer - not sure if anyone has answered your questions in the meanwhile.

On Friday, 24 July 2015 at 00:19:50 UTC, Walter Bright wrote:
> On 7/23/2015 2:08 PM, Dicebot wrote:
>> It does not protect against errors in the definition:
>>
>> void foo (R) (R r)
>>      if (isInputRange!R)
>> { r.save(); }
>>
>> unittest
>> {
>>      SomeForwardRange r;
>>      foo(r);
>> }
>>
>> This will compile and show 100% test coverage. Yet when a user tries to use it
>> with a real input range, it will fail.
>
> That is correct. Some care must be taken that the mock types used in the unit tests actually match what the constraint is, rather than being a superset of them.
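For concreteness, such an "exact" mock - one that satisfies isInputRange and
nothing more, and so would catch the stray save() call at test time - might
look roughly like this (a sketch, assuming the foo above):

    import std.range : isForwardRange, isInputRange;

    // Deliberately implements only the input-range primitives; there is
    // no save(), so instantiating foo with it fails to compile.
    struct MinimalInputRange
    {
        bool empty() { return true; }
        int front() { return 0; }
        void popFront() {}
    }

    static assert(isInputRange!MinimalInputRange);
    static assert(!isForwardRange!MinimalInputRange);

    unittest
    {
        MinimalInputRange r;
        // foo(r); // would fail to compile, exposing the bug in foo's body
    }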

This is absolutely impractical. I will never even consider such an attitude as a solution for production projects. If test coverage can't be verified automatically, it is garbage, period. No one will ever manually verify thousands of lines of code after some trivial refactoring just to make sure the compiler does its job.

By that reasoning `-cov` is not necessary at all - you can do the same manually anyway, with the help of a third-party tool. Yet you advertise it as a crucial D feature (and are totally right to do so).

>> There is quite a notable difference in clarity between an error message coming
>> from some arcane part of the function body and referring to wrong usage (or
>> even totally misleading because of UFCS) and a simple and straightforward
>> "Your type X does not implement method Y necessary for trait Z"
>
> I believe they are the same. "method X does not exist for type Y".

Well, the difference is that you "believe" while I actually write the code and read those error messages. They are not the same at all. In D the error message is produced in the context of the function body and is likely to be completely misleading in all but the most trivial methods. For example, if a global UFCS function with the same name but a different argument list is in scope, you will get an error about wrong arguments rather than about a missing method.
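To illustrate the UFCS case (a sketch; the error text is paraphrased):

    import std.range : isInputRange;

    // A free function that happens to share its name with the missing
    // range primitive:
    void save(double precision) {}

    void foo(R)(R r) if (isInputRange!R)
    {
        // R has no .save member, so UFCS rewrites this call to save(r).
        // Instantiating foo then yields something like "function save
        // (double precision) is not callable using argument types (R)"
        // instead of the more useful "no property 'save' for type R".
        r.save();
    }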

>> Coverage does not work with conditional compilation:
>>
>> void foo (T) ()
>> {
>>      import std.stdio;
>>      static if (is(T == int))
>>          writeln("1");
>>      else
>>          writeln("2");
>> }
>>
>> unittest
>> {
>>      foo!int();
>> }
>>
>> $ dmd -cov=100 -unittest -main ./sample.d
>
> Let's look at the actual coverage report:
> ===============================
>        |void foo (T) ()
>        |{
>        |    import std.stdio;
>        |    static if (is(T == int))
>       1|        writeln("1");
>        |    else
>        |        writeln("2");
>        |}
>        |
>        |unittest
>        |{
>       1|    foo!int();
>        |}
>        |
> foo.d is 100% covered
> ============================
>
> I look at these all the time. It's pretty obvious that the second writeln is not being compiled in.

Again, this is impractical. You may be capable of reading at the speed of light, but that is not the normal industry case. Programs are huge, changesets are big, time pressure is real. If something can't be verified in an automated way, at least for basic sanity, it is simply not good enough. This is the whole point of the CI revolution.

In practice I will only look into .cov files when working on adding new tests to improve coverage, and I will never be able to do it more often (unless the compiler notifies me to do so). This is a real-world constraint one needs to deal with, no matter what your personal preferences about a good development process are.

> Now, if I make a mistake in the second writeln such that it is syntactically correct yet semantically wrong, and I ship it, and it blows up when the customer actually instantiates that line of code,
>
>    -- where is the advantage to me? --
>
> How am I, the developer, better off? How does "well, it looks syntactically like D code, so ship it!" pass any sort of professional quality assurance?

?

If the compiler actually showed 0% coverage for non-instantiated lines, then an automated coverage check in CI would complain and the code would never be shipped until it was covered by tests (which check the semantics). You are putting it totally backwards.

July 25, 2015
On 7/24/15 6:12 PM, deadalnix wrote:
> The most intriguing part of this conversation is that the arguments made
> about unittests and complexity are the very same as those for dynamic vs.
> strong typing (and there is hard data that strong typing is better).

No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time; in this discussion the error is only deferred to instantiation time. -- Andrei
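Concretely, "deferred to instantiation time" still means a compile-time error
(a minimal sketch):

    // The body of a template is only fully type-checked against a
    // concrete type when the template is instantiated:
    void broken(T)(T x)
    {
        x.noSuchMethod(); // not an error until broken is instantiated
    }

    void main()
    {
        // broken(42); // uncommenting this fails at compile
        //             // (instantiation) time; it never reaches run time
    }
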
July 25, 2015
On 7/24/15 6:58 PM, Justin Whear wrote:
> I agree that there's really no excuse and think we ought to orient the
> language towards serious professionals who will produce quality code.
> Bad code is bad code, regardless of the language.

YES! Amen to that. -- Andrei

July 25, 2015
On Saturday, 25 July 2015 at 13:37:15 UTC, Andrei Alexandrescu wrote:
> On 7/24/15 6:12 PM, deadalnix wrote:
>> The most intriguing part of this conversation is that the arguments made
>> about unittests and complexity are the very same as those for dynamic vs.
>> strong typing (and there is hard data that strong typing is better).
>
> No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time; in this discussion the error is only deferred to instantiation time. -- Andrei

Runtime errors are a usability problem for users and a maintainability problem for developers. Instantiation-time errors are a maintainability problem for library authors and a usability problem for developers. I would argue that the latter is better than the former, but the poor developer experience of using Phobos is what made me move away from D a couple of years ago.

July 25, 2015
On 7/24/15 9:16 PM, Tofu Ninja wrote:
> Current template types work like duck typing, which works, but it's error
> prone, and your argument about unittests is obviously bad in the context of
> duck typing.

Could you please make the obvious explicit?

> We want a real type system for our template types.

Every time this (or really any apologia for C++ concepts) comes up, the discussion has a similar shape:

1. Concepts are great because they're a type system for the type system! And better error messages! And look at these five-liners! And look at iterators! And other nice words!

2. I destroy them.

3. But we want concepts because they're a type system for the type system! And ... etc. etc.

I have no idea how people can simply ignore the fact that their arguments have been systematically dismantled.


Andrei