July 24, 2015
On Friday, 24 July 2015 at 22:07:14 UTC, Jonathan M Davis wrote:
> On Friday, 24 July 2015 at 21:48:23 UTC, Tofu Ninja wrote:
>> On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis wrote:
>>> This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
>>
>> Because that is 99% of D users...
>
> If so, they have no excuse. D has made it ridiculously easy to unit test your code. And I very much doubt that 99% of D users don't unit test their code.
>
> There are cases where 100% isn't possible - e.g. because of an assert(0) or because you're dealing with UI code or the like where it simply isn't usable without running the program - but even then, the test coverage should be as close to 100% as can be achieved, which isn't usually going to be all that far from 100%.
>
> We should be ashamed when our code is not as close to 100% code coverage as is feasible (which is usually 100%).
>
> - Jonathan M Davis

I meant that 99% don't have 100% unit test coverage, but even close to 100% is still probably not that common. Most D users are hobbyists, I think (though I could be wrong), and hobbyists are lazy.
July 24, 2015
On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
> On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
>> I don't see the difference compared to a regular parameter. If you don't specify
>> any constraints/traits/whatever it like using "Object" for all your parameter
>> types in Java.
>
> So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless.
>
> I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antithetical to writing generic code, and will prove to be more of a nuisance than an asset.
>
> Exactly what sunk Exception Specifications.

In many languages you have an instanceof keyword or something similar. You'd get:

if (foo instanceof X) {
  // You can use the X interface on foo.
}

vs

static if (foo instanceof X) {
  // You can use the X interface on foo.
}

The whole runtime vs compile time is essentially an implementation detail. The idea is the very same.
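Expressed in actual D (a sketch; the interface and class names here are hypothetical, made up to illustrate the parallel):

```d
import std.stdio;

interface X { void useX(); }

class Foo : X {
    void useX() { writeln("using X"); }
}

void runtimeCheck(Object foo)
{
    // Runtime check: the downcast succeeds only if foo implements X.
    if (auto x = cast(X) foo)
        x.useX();
}

void compileTimeCheck(T)(T foo)
{
    // Compile-time check: this branch only exists when T converts to X.
    static if (is(T : X))
        foo.useX();
}

void main()
{
    auto f = new Foo;
    runtimeCheck(f);
    compileTimeCheck(f);
}
```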

The most intriguing part of this conversation is that the arguments made about unit tests and complexity are the very same as those made for dynamic vs static typing (and there is hard data that static typing is better).

Yet, if someone made the very same argument in favor of dynamic typing, neither Walter nor Andrei would give it a second thought (and rightly so). Nowhere, however, is the reason given why this case differs in a way that shifts the cost/benefit ratio. It is simply asserted as such.

July 24, 2015
On Friday, 24 July 2015 at 21:27:09 UTC, Tofu Ninja wrote:
> On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
>> On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
>>> I don't see the difference compared to a regular parameter. If you don't specify
>>> any constraints/traits/whatever it like using "Object" for all your parameter
>>> types in Java.
>>
>> So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless.
>>
>> I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antithetical to writing generic code, and will prove to be more of a nuisance than an asset.
>>
>> Exactly what sunk Exception Specifications.
>
> But that's exactly how normal interfaces work...
>
> eg:
> interface Iface{ void foo(){} }
>
> void func1(Iface x){ func2(x); }
> void func2(Iface x){ func3(x); }
> void func3(Iface x){ x.bar(); } // ERROR no bar in Iface
>
> The only options here are A: update Iface to have bar(), or B: make a new interface and change it on the whole tree. The same "problem" would exist for concepts, but it's the reason why people want them.

C: do a runtime check or downcast.
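For instance, option C might look like this in D (a sketch; the Barable interface and Impl class are made up for illustration):

```d
interface Iface   { void foo(); }
interface Barable { void bar(); }

class Impl : Iface, Barable {
    void foo() {}
    void bar() {}
}

void func3(Iface x)
{
    // Option C: a runtime downcast instead of widening the Iface contract.
    // bar() is only called when the object also implements Barable.
    if (auto b = cast(Barable) x)
        b.bar();
}

void main()
{
    func3(new Impl);
}
```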
July 24, 2015
On Friday, 24 July 2015 at 22:09:24 UTC, Artur Skawina wrote:
> On 07/24/15 23:32, Jonathan M Davis via Digitalmars-d wrote:
>> On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
>>> The difference is that right now the developer has to write a unit-test per function that uses `hasPrefix`, otherwise the code might not even be verified to compile. 100% unit-test coverage is not going to happen in practice, and just like with docs, making things easier and reducing boilerplate to a minimum would improve the situation dramatically.
>> 
>> But you see. This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
>
> How exactly does making it harder to write tests translate into having better coverage? Why is requiring the programmer to write unnecessary, redundant, and potentially buggy tests preferable?

And how are we making it harder to write tests? We're merely saying that you have to actually instantiate your template and test those instantiations. If someone doesn't catch a bug in their template because they didn't try the various combinations of stuff that it supports (and potentially verify that it doesn't compile with stuff that it's not supposed to support), then they didn't test it enough. Having the compiler tell you that you're using a function that you didn't require in your template constraint might be nice, but if the programmer didn't catch that anyway, then they didn't test enough. And if you don't test enough, you're bound to have other bugs. So, the folks this helps are the folks who aren't testing their code sufficiently and thus likely have buggy code anyway.
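As a sketch of what such instantiation tests might look like in D (the twice template and its constraint are hypothetical examples, not anything from the thread):

```d
// A template with a deliberately narrow constraint: T must support addition.
auto twice(T)(T x) if (is(typeof(T.init + T.init)))
{
    return x + x;
}

unittest
{
    // Positive tests: instantiate with the combinations we claim to support.
    assert(twice(3) == 6);
    assert(twice(1.5) == 3.0);

    // Negative test: verify at compile time that unsupported types are rejected.
    static assert(!__traits(compiles, twice(new Object)));
}
```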

- Jonathan M Davis
July 24, 2015
On Fri, 24 Jul 2015 22:07:12 +0000, Jonathan M Davis wrote:

> On Friday, 24 July 2015 at 21:48:23 UTC, Tofu Ninja wrote:
>> On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis wrote:
>>> This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
>>
>> Because that is 99% of D users...
> 
> If so, they have no excuse. D has made it ridiculously easy to unit test your code. And I very much doubt that 99% of D users don't unit test their code.
> 
> There are cases where 100% isn't possible - e.g. because of an assert(0) or because you're dealing with UI code or the like where it simply isn't usable without running the program - but even then, the test coverage should be as close to 100% as can be achieved, which isn't usually going to be all that far from 100%.
> 
> We should be ashamed when our code is not as close to 100% code coverage as is feasible (which is usually 100%).
> 
> - Jonathan M Davis

Commercial (though in-house) D library and tools writer here.  We run
code-coverage as part of our CI process and report results back to Gitlab
(our self-hosted Github-like).  Merge requests all report the code
coverage of the pull (haven't figured out how to do a delta against the
old coverage yet).  I regularly test code to 100% of coverable lines,
where coverable lines are all but:
  - assert(0, ...)
  - test-case lines that aren't supposed to execute (e.g. lambdas in a predSwitch)

I agree that there's really no excuse and think we ought to orient the language towards serious professionals who will produce quality code. Bad code is bad code, regardless of the language.
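A minimal illustration of that workflow might look like this (file name and build command are assumptions; dmd's -cov switch emits a per-line execution report):

```d
// Build with: dmd -unittest -cov app.d && ./app
// This produces app.lst, marking each line's execution count; deliberately
// unreachable lines such as assert(0) show up as uncovered by design.
int classify(int n)
{
    if (n > 0) return 1;
    if (n < 0) return -1;
    return 0;
}

unittest
{
    assert(classify(5)  == 1);
    assert(classify(-5) == -1);
    assert(classify(0)  == 0);   // all three branches covered
}
```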
July 24, 2015
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
> On 7/23/2015 3:12 PM, Dicebot wrote:
>> On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
>>> OK, I jumped into the middle of this discussion so probably I'm speaking
>>> totally out of context...
>>
>> This is exactly one major advantage of Rust traits I have been trying to
>> explain, thanks for putting it up in much more understandable way :)
>
> Consider the following:
>
>     int foo(T: hasPrefix)(T t) {
>        t.prefix();    // ok
>        bar(t);        // error, hasColor was not specified for T
>     }
>
>     void bar(T: hasColor)(T t) {
>        t.color();
>     }
>
> Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into.
>
> I can see these possibilities:
>
> 1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias).
>

Couldn't the compiler just issue a warning for implicit use of properties/functions, like C's implicit function declaration warning? Some form of "cast" (or test) could then be used to make the use explicit:

int foo(T: hasPrefix)(T t) {
   t.prefix();    // ok
   t.color();     // Compiler warning: implicit.
   bar(t);        // Compiler warning: implicit.
}

void bar(T: hasColor)(T t) {
   t.color();
   t.prefix();    // Compiler warning: implicit.
}

-------------

int foo(T: hasPrefix)(T t) {
   t.prefix();                  // ok
   (cast(hasColor) t).color();  // ok: explicit.
   bar(cast(hasColor) t);       // ok: explicit.
}

void bar(T: hasColor)(T t) {
   t.color();
}

This seems enough to avoid bugs.
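For reference, today's D template constraints already require the explicit listing Walter describes: the outer function's constraint must advertise everything the body (transitively) relies on. A sketch of that, with all names hypothetical:

```d
// Detect the needed members with compile-time checks.
enum hasPrefix(T) = __traits(compiles, (T t) => t.prefix());
enum hasColor(T)  = __traits(compiles, (T t) => t.color());

void bar(T)(T t) if (hasColor!T)
{
    t.color();
}

// foo must list both constraints explicitly, even though it only
// uses prefix() directly: bar() needs hasColor.
void foo(T)(T t) if (hasPrefix!T && hasColor!T)
{
    t.prefix();
    bar(t);
}

struct S {
    void prefix() {}
    void color()  {}
}

void main()
{
    foo(S());
}
```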

Note: sorry for the bad English.

Best regards,
Bruno Queiroga.

July 25, 2015
On 7/24/2015 3:07 PM, Jonathan M Davis wrote:
> If so, they have no excuse. D has made it ridiculously easy to unit test your
> code. And I very much doubt that 99% of D users don't unit test their code.

D has done a great job of making unit tests the rule, rather than the exception.

> There are cases where 100% isn't possible - e.g. because of an assert(0) or
> because you're dealing with UI code or the like where it simply isn't usable
> without running the program - but even then, the test coverage should be as
> close to 100% as can be achieved, which isn't usually going to be all that far
> from 100%.
>
> We should be ashamed when our code is not as close to 100% code coverage as is
> feasible (which is usually 100%).

Right on, Jonathan!

July 25, 2015
On 7/24/2015 3:12 PM, deadalnix wrote:
> On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
>> If I have a call tree,
>> and at the bottom I add a call to interface X, then I have to add a constraint
>> that additionally specifies X on each function up the call tree to the root.
>> That is antithetical to writing generic code, and will prove to be more of a
>> nuisance than an asset.
>>
>> Exactly what sunk Exception Specifications.
>
> In many languages you have an instanceof keyword or something similar. You'd get:
>
> if (foo instanceof X) {
>    // You can use the X interface on foo.
> }
>
> vs
>
> static if (foo instanceof X) {
>    // You can use the X interface on foo.
> }
>
> The whole runtime vs compile time is essentially an implementation detail. The
> idea is the very same.
>
> The most intriguing part of this conversation is that the arguments made about
> unit tests and complexity are the very same as those made for dynamic vs static
> typing (and there is hard data that static typing is better).
>
> Yet, if someone made the very same argument in favor of dynamic typing, neither
> Walter nor Andrei would give it a second thought (and rightly so). Nowhere,
> however, is the reason given why this case differs in a way that shifts the
> cost/benefit ratio. It is simply asserted as such.


I don't see how this addresses my point at all. This is very frustrating.
July 25, 2015
On 7/24/2015 2:27 PM, Tofu Ninja wrote:
> But that's exactly how normal interfaces work...

No it isn't. Google QueryInterface(). Nobody lists all the interfaces at the top-level functions, which is what Rust traits and C++ concepts require.


> eg:
> interface Iface{ void foo(){} }
>
> void func1(Iface x){ func2(x); }
> void func2(Iface x){ func3(x); }
> void func3(Iface x){ x.bar(); } // ERROR no bar in Iface
>
> The only options here are A: update Iface to have bar(), or B: make a new interface
> and change it on the whole tree. The same "problem" would exist for concepts,
> but it's the reason why people want them.

Sigh. Nothing I post here is understood.
July 25, 2015
On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
> Could not the compiler just issue a warning

Please, no half features. Code should be correct or not.


> of implicit use of properties/functions like the C's implicit function declaration warning?

C warnings are not part of Standard C. They're extensions only, and vary widely from compiler to compiler.