August 02, 2014
Daniel Murphy:

> This doesn't need a lot of text to explain either: "when possible, the compiler will check preconditions at compile time"

What defines "possible"? A compiler switch that lets you specify how much compile time to spend?


> I feel like you're confusing this pull request with another enhancement. The discussion should be about whether this exact feature is worthwhile, not if some other feature would solve some other problems.

The comparison of the two ideas is useful, because they try to solve the same problem.


> Do you really think the compiler opportunistically checking
> preconditions would be a bad thing for D?

I think that adding an undefined, growing, compiler-dependent list of static checks that can cause unpredictable compilation times is not a good idea. A more general solution has some advantages:
- It allows the user to choose what to run at compile-time and what at run-time;
- You can define _exactly_ what it does in a book, unlike the growing list;
- It will not grow in features and implementation complexity as compilers improve (and I think it's sufficiently simple);
- It's going to be mostly the same in all D compilers;
- It allows you to specify and run arbitrary tests in CTFE instead of a predefined but unknown set of simple tests, so you can use it for SafeIntegers literals and many other future, unpredictable purposes (see the sketch after this list);
- It's explicitly visible in the code, because there is a syntax to denote such static testing parts in the code.
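For example, here is a minimal sketch of the kind of user-chosen static testing I mean, using only today's static assert and CTFE (the function names are just illustrative):

bool isValidByte(int x) { return 0 <= x && x <= 255; }

void takeByte(int x)
{
    assert(isValidByte(x));      // the same arbitrary test, run at run-time
}

static assert(isValidByte(200)); // the test the user chose to force into CTFE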

Implementing a special-cased solution instead of a more general and principled one could make the better solution less likely to ever be implemented :-)

Bye,
bearophile
August 02, 2014
By the way, I will be absent for more than a week. My comments or answers in bug reports and threads will have to wait.

Bye,
bearophile
August 02, 2014
"bearophile"  wrote in message news:nadqefjfnjfzghopeqyf@forum.dlang.org...

> What defines "possible"? A compiler switch that lets you specify how much compile time to spend?

At the moment, it's the criteria I listed in the bug report.  Beyond this, it will be limited by what constant folding is capable of.  The exact limitations of constant folding are an evolving part of the language.

> > I feel like you're confusing this pull request with another enhancement. The discussion should be about whether this exact feature is worthwhile, not if some other feature would solve some other problems.
>
> The comparison of the two ideas is useful, because they try to solve the same problem.

I don't think they do.  If you force evaluation of preconditions at compile time, you force arguments (or properties of arguments) to be known at compile time.  With best-effort checking, you simply get to promote run-time errors to compile-time errors.
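To make the difference concrete, here is a rough sketch; the compile-time rejection shown is hypothetical best-effort behaviour, not something dmd is guaranteed to do, and getInput is a made-up stand-in for run-time input:

int div(int a, int b)
in { assert(b != 0); }
body { return a / b; }

void main()
{
    auto x = div(10, 0); // constant argument: constant folding *may* promote
                         // the failed precondition to a compile-time error;
                         // if not, it still fails at run-time as before
    int n = getInput();  // hypothetical run-time input, unknown statically
    auto y = div(10, n); // can only ever be checked at run-time
}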

> I think that adding an undefined, growing, compiled-dependent list of static checks that can cause unpredictable compilation times is not a good idea. A more general solution has some advantages:
> - It allows the user to choose what to run at compile-time and what at run-time;

Again, these are orthogonal.  I don't care if you define conditions that must be met at compile time; this is about promoting run-time conditions to compile time.

> - You can define _exactly_ what it does in a book, unlike the growing list;

IMO this is only a mild problem.  If they can't be caught at compile time, they will still be caught at run-time.

> - It will not grow in features and implementation complexity as compilers improve (and I think it's sufficiently simple);

I don't understand this.  You don't want static checking to improve over time?  This is simply a type of static checking.

> - It's going to be mostly the same in all D compilers;

Again, it's a promotion.  It doesn't matter if some frontends don't implement it; with lesser frontends you just catch the error at run time.

> - It allows you to specify and run arbitrary tests in CTFE instead of a predefined but unknown set of simple tests, so you can use it for SafeIntegers literals and many other future, unpredictable purposes;
> - It's explicitly visible in the code, because there is a syntax to denote such static testing parts in the code.

This has nothing to do with promoting run-time checks to compile-time.

> Implementing a special-cased solution instead of a more general and principled solution could make the implementation of the better solution more unlikely :-)

Maybe.  But I think you should see this as a stepping stone, not competition.

e.g. all developers like it when the compiler points out guaranteed-wrong code:

ubyte x = 0xFFF;

but many (including me) find it very annoying when the compiler makes you alter code that it thinks _might_ be wrong:

ubyte x = y; // where y is a uint, that you know to be < 256 
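and the only workaround is an explicit cast that merely suppresses the check (a sketch, with a hypothetical source for y):

uint y = smallValue();    // hypothetical: the programmer knows this is < 256
ubyte x = cast(ubyte) y;  // the cast exists only to satisfy the compiler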

August 02, 2014
On Saturday, 2 August 2014 at 05:35:26 UTC, Walter Bright wrote:
> On 8/1/2014 7:13 PM, David Bregman wrote:
>> OK, I think I have an idea how to be more convincing (I wish I'd thought of this
>> earlier):
>>
>> is this
>> http://www.cplusplus.com/reference/cassert/assert/
>>
>> the same as this?
>> http://msdn.microsoft.com/en-us/library/1b3fsfxw.aspx
>>
>> can you see the difference now?
>
> What I see is Microsoft attempting to bring D's assert semantics into C++. :-)

OK, I'm done. It's clear now that you're just being intellectually dishonest in order to "win" what amounts to a trivial argument. So much for professionalism.

> As I've mentioned before, there is inexorable pressure for this to happen, and it will happen.

What will happen is you will find out the hard way whether code breakage, undefined behavior and security holes are a worthy tradeoff for some trivial performance gains which likely can't even be measured except in toy benchmarks.
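To spell out the difference with a sketch (the codegen described is hypothetical, in D syntax):

void f(int* p)
{
    assert(p !is null);     // in release mode the check is removed; if the
                            // compiler *also* treats it as an assumption...
    if (p is null) return;  // ...this guard can be deleted as provably dead,
                            // turning a bad call into undefined behavior
                            // instead of a clean early return
}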
August 02, 2014
On 8/1/2014 4:24 PM, bearophile wrote:
> H. S. Teoh:
>
>> IMO the correct solution is for the compiler to insert preconditions at the
>> calling site,
>
> I have seen this suggestion tens of times, but nothing has happened. (Delegates
> make the management of contract blame more complex.)

There's a bugzilla issue on it, considerable discussion there, and consensus that it's the correct solution. But nobody has implemented it yet.
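Roughly, the idea is this (the call-site rewriting shown is the proposed behaviour, not what dmd currently emits):

void put(int x)
in { assert(x >= 0); }
body { /* ... */ }

// a call put(n) would conceptually be expanded at the call site as:
//
//     assert(n >= 0);  // emitted only if the *caller* is built with checks
//     put(n);
//
// so the contract honors the caller's flags, and blame lands on the caller.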

August 02, 2014
On 8/1/2014 3:58 PM, Jonathan M Davis wrote:
> But even if I
> could think of a better name, I think that we're stuck with -debug at this point.

One thing that people hate even more than breaking existing code is breaking their makefile (or equivalent).

Makefiles typically are cut&pasted from some other project, with the switches copied over without review. Worse, the switches sometimes come in from macros, and scripts are written to drive the makefiles, and it becomes a wholly unmaintainable tangle.

Programmers have gotten a lot better about writing clear, maintainable code, but I can't say the same for the build process. Those are worse than ever.

https://www.youtube.com/watch?v=CHgUN_95UAw
August 02, 2014
Am Sat, 02 Aug 2014 12:34:54 -0700
schrieb Walter Bright <newshound2@digitalmars.com>:

> On 8/1/2014 3:58 PM, Jonathan M Davis wrote:
> > But even if I
> > could think of a better name, I think that we're stuck with -debug
> > at this point.
> 
> One thing that people hate even more than breaking existing code is breaking their makefile (or equivalent).
> 
> Makefiles typically are cut&pasted from some other project, with the switches copied over without review. Worse, the switches sometimes come in from macros, and scripts are written to drive the makefiles, and it becomes a wholly unmaintainable tangle.
> 
> Programmers have gotten a lot better about writing clear, maintainable code, but I can't say the same for the build process. Those are worse than ever.
> 
> https://www.youtube.com/watch?v=CHgUN_95UAw

Wait - are you arguing that we can't change the name of a compiler flag because users would have to do a 'search&replace' in their Makefiles, but that breaking existing code (assert/assume), which will be very hard for users to track down, is OK because 'that code was broken anyway'?


We could also argue that Makefiles using '-debug' are broken anyway, because the name 'debug' doesn't match the behavior of the compiler and users will likely expect a different behavior.
August 03, 2014
On Saturday, 2 August 2014 at 09:46:57 UTC, Marc Schütz wrote:
> On Friday, 1 August 2014 at 21:50:59 UTC, Jonathan M Davis wrote:
>> And why is that a problem? By definition, if an assertion fails, your code is in an invalid state,
>
> Only in an ideal world. In practice, the condition in the assertion could itself be incorrect. It could be a leftover after a refactoring, for instance.

Then it's a bug, and bugs make programs do wrong and invalid things. I certainly see no more reason to avoid optimizing based on assertions, just in case someone screwed up an assertion, than I see to avoid optimizing based on something that's wrong in normal code.

- Jonathan M Davis
August 03, 2014
On Sunday, 3 August 2014 at 02:27:16 UTC, Jonathan M Davis wrote:
> On Saturday, 2 August 2014 at 09:46:57 UTC, Marc Schütz wrote:
>> On Friday, 1 August 2014 at 21:50:59 UTC, Jonathan M Davis wrote:
>>> And why is that a problem? By definition, if an assertion fails, your code is in an invalid state,
>>
>> Only in an ideal world. In practice, the condition in the assertion could itself be incorrect. It could be a leftover after a refactoring, for instance.
>
> Then it's a bug, and bugs make programs do wrong and invalid things. I certainly see no more reason to avoid optimizing based on assertions, just in case someone screwed up an assertion, than I see to avoid optimizing based on something that's wrong in normal code.

Yes, it's a bug. The purpose of asserts is to detect these kinds of bugs, not to make them harder to detect and their effects worse.
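A small sketch of what I mean (the names are made up):

int scale(int x)
{
    assert(x < 100);  // leftover from before a refactoring; callers may
                      // now legitimately pass values up to 200
    return x * 2;
}

With a plain assert the stale condition fires and points straight at the bug. If the optimizer instead assumes x < 100, the failure can surface far away from here, in whatever code was "optimized" based on a false premise.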
August 03, 2014
On Friday, 1 August 2014 at 16:04:21 UTC, Daniel Murphy wrote:
> "Johannes Pfau"  wrote in message news:lrgar7$1vrr$1@digitalmars.com...
>
>> Do you know if linkers actually guarantee that behaviour? AFAICS dmd
>> doesn't do anything special, it always emits weak symbols and just calls
>> gcc to link. The linker usually uses the first symbol it sees, but I
>> have not seen any guarantees for that.
>
> Yes, they do.  Or at least, it's the classic linker behaviour and I expect they all implement it.  It's relied upon for nofloat and a few other things. It's one of the reasons linkers are hard to parallelize.

Though the linker behaviour you describe is not the same as what Johannes described.

>> Also what happens if your main application doesn't use the template,
>> but two libraries use the same template. Then which instances are
>> actually used? I'd guess those from the 'first' library?
>
> Well, the first one that it encounters after getting the undefined reference.

That's just how ld ended up working, and it's known to be pretty stupid in comparison with other linkers. And it's used rather stupidly, e.g. gcc just repeats libs several times to ensure ld sees all symbols it needs.

> eg
> module application
> {
> undef x
> ...
> }
>
>
> lib1:
> module a
> {
> undef tmpl
> def a
> }
> module tmpl
> {
> def tmpl
> }
>
> lib2:
> module x
> {
> undef tmpl
> def x
> }
> module tmpl
> {
> def tmpl
> }
>
> Here nothing that the application references will cause the tmpl in lib1 to get pulled in, but when it processes lib2 it will pull in x, and then tmpl.

lib2:
module x
{
undef tmpl
def x
}

phobos:
module tmpl
{
def tmpl
}

Then tmpl is picked from Phobos instead of from your code. Template emission optimization is a simple tool: if you reliably link against a module that already has an instantiated template, template emission can be skipped, and the linker will simply link whatever it finds.
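For clarity, here is roughly how the duplicate instances arise in the first place (module names are made up):

// mod.d
module mod;
T twice(T)(T x) { return x + x; }

// liba.d, compiled into lib1
module liba;
import mod;
int fa() { return twice(1); }  // emits a weak copy of twice!int into lib1

// libb.d, compiled into lib2
module libb;
import mod;
int fb() { return twice(2); }  // emits another weak copy into lib2

The two weak copies are interchangeable only if they were built identically; the classic linker just keeps the first one it resolves, so which copy you actually run depends on link order.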