August 04, 2014
On 04.08.2014 02:28, Andrei Alexandrescu wrote:
> On 8/3/14, 3:35 PM, Daniel Gibson wrote:
>> On 04.08.2014 00:15, Andrei Alexandrescu wrote:
>>>
>>> That said, should we proceed carefully about realizing this advantage?
>>> Of course; that's a given. But I think it's very important to fully
>>> understand the advantages of gaining an edge over the competition.
>>>
>>
>> Gaining an edge over the competition?
>
> Yes, as I explained.
>
>> "A new DMD release broke my code in a totally unexpected way and people
>> tell me it's because I'm using assert() wrong.
>
> This has been discussed several times, and I agree there's downside. All
> I want to do is raise awareness of the upside, which is solid but
> probably less obvious to some. There's no need to trot out again in
> response the downside that has been mentioned many times already.

Ok, so you agree that there's a downside, and that code (which you consider incorrect, but which most probably exists and has worked fine so far) will *silently* break (as in: the compiler won't tell you about it).
So when should this change be introduced? In 2.x or in D3?
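
To illustrate the kind of silent breakage I mean, here is a hypothetical C-style sketch (the same reasoning applies to the proposed D semantics):

  #include <assert.h>
  #include <stdio.h>

  void store(int *p)
  {
      assert(p != NULL);   /* the optimizer may now take p != NULL for granted */
      if (p == NULL) {     /* defensive fallback that has worked fine so far... */
          fprintf(stderr, "null pointer, bailing out\n");
          return;          /* ...but may now be silently elided as dead code */
      }
      *p = 42;
  }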

I don't really like the idea of introducing a silently breaking change in a minor version - and it erodes trust in future decisions for D.

Cheers,
Daniel
August 04, 2014
On 04.08.2014 02:30, Andrei Alexandrescu wrote:
> On 8/3/14, 4:01 PM, Timon Gehr wrote:
>> On 08/04/2014 12:15 AM, Andrei Alexandrescu wrote:
>>>> I suspect it is one of those ideas of Walter's that has consequences
>>>> that reach further than anyone foresees..... but that's OK, because it
>>>> is fundamentally the correct course of action, it's implications
>>>> foreseen and unforeseen will be correct.
>>>
>>> Agreed.
>>
>> No, please hold on. Walter is not a supernatural being.
>
> There's something to be said about vision and perspective.
>
>>> Walter has always meant assert the way he discusses it today.
>>
>> This argument has no merit. Please stop bringing it up.
>
> Actually it does offer value: for a large fragment of the discussion,
> Walter has been confused that people have a very different understanding
> of assert than his.

Yes, this kinda helps in understanding Walter's point.

But as his point has only been communicated to *you*, not to D users in general, you (and Walter) could be more understanding towards them being surprised and confused by this change of assert()'s semantics.

Instead you insist that your interpretation of what assert() should *mean* is the ultimate truth, even though assert() doesn't *behave* that way in current D or in any popular programming language I know of.
BTW, TCPL ("K&R", Second Edition) defines assert as:
"The assert macro is used to add diagnostics to programs:
  void assert(int expression)
If expression is zero when assert(expression) is executed, the assert macro will print on stderr a message, (...)
It then calls abort to terminate execution. (...)
If NDEBUG is defined at the time <assert.h> is included, the assert macro is ignored."
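
In practice that contract looks like this (a minimal C sketch of the semantics K&R describes):

  #include <assert.h>

  int divide(int a, int b)
  {
      assert(b != 0);   /* normally: message on stderr, then abort(), if b == 0 */
      return a / b;     /* with NDEBUG defined: the check expands to nothing */
  }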

Of course K&R is not the ultimate truth, but it shows that definitions of assert() that contradict yours have been around (written by clever people!) for a long time.

Cheers,
Daniel
August 04, 2014
On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu wrote:
> On 8/3/14, 3:26 PM, David Bregman wrote:
>> On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
>>> One related point that has been discussed only a little is the
>>> competitive aspect of it all. Generating fast code is of paramount
>>> importance for D's survival and thriving in the market. Competition in
>>> language design and implementation is acerbic and only getting more
>>> cutthroat. In the foreseeable future efficiency will become more
>>> important at scale seeing as data is growing and frequency scaling has
>>> stalled.
>>
>> Would you care to address the questions about performance raised in the OP?
>
> I thought I just did.

You made some generic statements about performance being good. This is obvious and undisputed. You did not answer any concerns raised in the OP. I am left to wonder if you even read it.

>
>>> Availing ourselves of a built-in "assert" that has a meaning and
>>> informativeness unachievable to e.g. a C/C++ macro is a very important
>>> and attractive competitive advantage compared to these and other
>>> languages.
>>
>> Not really, you can redefine the C macro to behave exactly as proposed,
>> using compiler specific commands to invoke undefined behavior. Didn't
>> you say in the other thread that you tried exactly that?
>
> That might be possible, but one thing I was discussing with Walter (reverse flow analysis) may be more difficult with the traditional definition of assert. Also I'm not sure whether the C and C++ standards require assert to do nothing in NDEBUG builds.
>
>>> Walter has always meant assert the way he discusses it today. Has he
>>> (and subsequently he and I) been imprecise in documenting it? Of
>>> course, but that just means it's Tuesday.
>>>
>>> That said, should we proceed carefully about realizing this advantage?
>>> Of course; that's a given. But I think it's very important to fully
>>> understand the advantages of gaining an edge over the competition.
>>
>> Please comment on the concerns raised by the OP.
>
> Probably not - there's little motivation to do so. The original post is little more than a self-important rehash of a matter in which everybody has stated their opinion, several times, in an exchange that has long since run its course. Having everyone paste their thoughts once again seems counterproductive.

Wow. Don't pretend like the questions are all "asked and answered". The concerns are legitimate, but the responses so far have been mostly arrogant handwaving. The fact that you believe you answered the performance concerns by merely stating "performance is important to make D competitive" is a case in point.

There has been no evidence presented that there are any nontrivial performance gains to be had by reusing information from asserts.

There has been no case made that the performance gains (if they exist) justify code breakage and other issues.

There has been no effort to determine if there are alternate ways to achieve the goals which satisfy all groups.

I could go on and on, but I refer you back to the OP. I really believe this whole thing could be handled much better; it doesn't have to be a zero-sum game between the two sides of this issue. That's why I bothered to write the post: to try to achieve that.
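
Coming back to the macro redefinition I mentioned in the quote above: it is perfectly doable today. A hypothetical GCC/Clang-flavoured sketch, with my_assert standing in for a redefined assert:

  #include <stdio.h>
  #include <stdlib.h>

  /* assume-style assert: in release builds a failed condition becomes
     undefined behavior that the optimizer is allowed to exploit */
  #ifdef NDEBUG
    #define my_assert(x) do { if (!(x)) __builtin_unreachable(); } while (0)
  #else
    #define my_assert(x) \
        do { \
            if (!(x)) { \
                fprintf(stderr, "assertion failed: %s\n", #x); \
                abort(); \
            } \
        } while (0)
  #endif

MSVC's __assume could serve the same purpose in the NDEBUG branch.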
August 04, 2014
On 8/3/14, 5:40 PM, Daniel Gibson wrote:
> Ok, so you agree that there's a downside, and that code (which you
> consider incorrect, but which most probably exists and has worked fine
> so far) will *silently* break (as in: the compiler won't tell you about it).

Yes, I agree there's a downside. I missed the part where you agreed there's an upside :o).

> So when should this change be introduced? In 2.x or in D3?

More aggressive optimizations should be introduced gradually in future releases of the D compilers. I think your perception of the downside is greater, and that of the upside is lesser, than mine.

> I don't really like the idea of introducing a silently breaking change
> in a minor version - and it erodes trust in future decisions for D.

I understand. At some point there are judgment calls to be made that aren't going to please everybody.


Andrei

August 04, 2014
On 04.08.2014 03:02, Andrei Alexandrescu wrote:
> On 8/3/14, 5:40 PM, Daniel Gibson wrote:
>> Ok, so you agree that there's a downside, and that code (which you
>> consider incorrect, but which most probably exists and has worked fine
>> so far) will *silently* break (as in: the compiler won't tell you about it).
>
> Yes, I agree there's a downside. I missed the part where you agreed
> there's an upside :o).

I see a small upside in the concept of "syntax that tells the compiler it can take something for granted for optimization and that causes an error in debug mode".
For me this kind of optimization is similar to GCC's __builtin_expect() to aid branch-prediction: probably useful to get even more performance, but I guess I wouldn't use it in everyday code.
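
For comparison, the usual __builtin_expect idiom looks roughly like this (a small C sketch):

  /* a pure hint: nothing is checked, it merely steers code layout
     and branch prediction towards the expected path */
  #define likely(x)   __builtin_expect(!!(x), 1)
  #define unlikely(x) __builtin_expect(!!(x), 0)

  int process(int err)
  {
      if (unlikely(err != 0))
          return -1;   /* cold path */
      return 0;        /* hot path */
  }
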
However, I see no upside in redefining an existing keyword (one that had a different meaning, or at least a different behavior, before - and still has in most programming languages) to achieve this.

/Maybe/ an attribute for assert() would also be ok, so we don't need a new keyword:
  @optimize
  assert(x);
or @hint, @promise, @always, @for_real, whatever.

Cheers,
Daniel

August 04, 2014
On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

> 2. Semantic change.
> The proposal changes the meaning of assert(), which will result in breaking existing code. Regardless of philosophizing about whether or not the code was "already broken" according to some definition of assert, the fact is that shipping programs that worked perfectly well before may no longer work after this change.

Subject to the caveat suggesting having two asserts with different names and different meanings, I am in a position to comment on this one from experience.

So assuming we do have a "hard assert" that is used within the standard libraries and a "soft assert" in user code (unless they explicitly choose to use the "hard assert")...

What happens?

Well, I'm the dogsbody who has the job of upgrading the toolchain and handling the fallout of doing so.

So I have been walking multimegaline code bases through every gcc version in the last 15 years.

This is relevant because with every new version they have added stricter warnings and, more importantly, deeper optimizations.

It's especially the deeper optimizations that are interesting here.

They often involve better data-flow analysis, which results in more "insightful" warnings.

So, given that I'm taking megalines of C/C++ code from a warnings-free state on gcc version N to warnings-free on version N+1, I'll make some empirical observations.

* They have _always_ highlighted dodgy / non-portable / non-standards-compliant code.
* They have quite often highlighted existing defects in the code.
* They have quite often highlighted error handling code as "unreachable", because it is... and the only sane thing to do is delete it.
* They have often highlighted the error handling code of "defensive programmers" as opposed to DbC programmers.

Why? Because around 30% of the code of a defensive programmer is error handling crud that has never been executed, not even in development, and hence is untested and unmaintained.

The clean-up effort was often largish, maybe a week or two, but it always resulted in better code.

Customer-impacting defects introduced by the new optimizations have been...

a) Very very rare.
b) Invariably from really bad code that was blatantly defective, non-standards-compliant, and non-portable.

So what do I expect, from experience, of Walter's proposed change?


Another guy in this thread complained about the compiler suddenly relying on thousands of global axioms from the core and standard libraries.

Yup.

Exactly what is going to happen.

As you get...

* more and more optimization passes that rely on asserts,
* in particular pre and post condition asserts within the standard libraries,
* you are going to have flocks of user code that used to compile without warning
* and ran without any known defect...

...suddenly spewing error messages and warnings.

But that's OK.

Because I bet 99.999% of those warnings will be pointing straight at bona fide defects.

And yes, this will be a regular feature of life.

New version of compiler, new optimization passes, new warnings... That's OK, clean 'em up, and a bunch of latent defects won't come back as customer complaints.
August 04, 2014
On 8/3/14, 5:57 PM, David Bregman wrote:
> On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu wrote:
>> On 8/3/14, 3:26 PM, David Bregman wrote:
>>> On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
>>>> One related point that has been discussed only a little is the
>>>> competitive aspect of it all. Generating fast code is of paramount
>>>> importance for D's survival and thriving in the market. Competition in
>>>> language design and implementation is acerbic and only getting more
>>>> cutthroat. In the foreseeable future efficiency will become more
>>>> important at scale seeing as data is growing and frequency scaling has
>>>> stalled.
>>>
>>> Would you care to address the questions about performance raised in
>>> the OP?
>>
>> I thought I just did.
>
> You made some generic statements about performance being good. This is
> obvious and undisputed. You did not answer any concerns raised in the
> OP. I am left to wonder if you even read it.

I did read it. Forgive me, but I don't have much new to answer to it.

It seems you consider the lack of a long answer accompanied by research and measurements offensive, and you also find my previous answers arrogant. This, to continue what I was mentioning in another post, is the kind of stuff I find difficult to answer meaningfully.


Andrei

August 04, 2014
On 8/3/14, 6:17 PM, John Carter wrote:
> Well, I'm the dogsbody who has the job of upgrading the toolchain and
> handling the fallout of doing so.
>
> So I have been walking multimegaline code bases through every gcc
> version in the last 15 years.

Truth. This man speaks it.

Great post, thanks!


Andrei

August 04, 2014
On 04.08.2014 03:17, John Carter wrote:
> As you get...
>
> * more and more optimization passes that rely on asserts,
> * in particular pre and post condition asserts within the standard
> libraries,
> * you are going to have flocks of user code that used to compile without
> warning
> * and ran without any known defect...
>
> ...suddenly spewing error messages and warnings.
>
> But that's OK.
>
> Because I bet 99.999% of those warnings will be pointing straight at
> bona fide defects.
>

Well, that would make the problem more acceptable...
However, it has been argued that it's very hard to warn about code that will be eliminated, because that code often only becomes dead or redundant due to inlining, template instantiation, mixins, etc. - and you can't warn in those cases.
So I doubt that the compiler will warn every time it removes checks that are considered superfluous because of a preceding assert().
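
A small C sketch of why such a warning is impractical (hypothetical, but it mirrors that argument):

  #include <assert.h>

  static inline int clamp_nonneg(int x)
  {
      return x < 0 ? 0 : x;    /* this branch is live for other callers */
  }

  int f(int y)
  {
      assert(y >= 0);          /* assume-semantics: y < 0 is "impossible" here */
      return clamp_nonneg(y);  /* the x < 0 arm only becomes dead after
                                  inlining into f(); warning about it would be
                                  a false positive for every other call site */
  }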

Cheers,
Daniel
August 04, 2014
On Monday, 4 August 2014 at 01:17:36 UTC, Andrei Alexandrescu wrote:
> On 8/3/14, 5:57 PM, David Bregman wrote:
>> On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu wrote:
>>> On 8/3/14, 3:26 PM, David Bregman wrote:
>>>> On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
>>>>> One related point that has been discussed only a little is the
>>>>> competitive aspect of it all. Generating fast code is of paramount
>>>>> importance for D's survival and thriving in the market. Competition in
>>>>> language design and implementation is acerbic and only getting more
>>>>> cutthroat. In the foreseeable future efficiency will become more
>>>>> important at scale seeing as data is growing and frequency scaling has
>>>>> stalled.
>>>>
>>>> Would you care to address the questions about performance raised in
>>>> the OP?
>>>
>>> I thought I just did.
>>
>> You made some generic statements about performance being good. This is
>> obvious and undisputed. You did not answer any concerns raised in the
>> OP. I am left to wonder if you even read it.
>
> I did read it. Forgive me, but I don't have much new to answer to it.
>
> It seems you consider the lack of a long answer accompanied by research and measurements offensive, and you also find my previous answers arrogant. This, to continue what I was mentioning in another post, is the kind of stuff I find difficult to answer meaningfully.

Well, I don't want this to devolve to the ad hominem level. I never used the word "offensive", by the way, though I will admit to being temporarily offended by your description of my carefully constructed post as a self-important rehash :)

Basically, I didn't find your reply useful because, as I said, you were simply stating a generality about performance (which I agree with), and not addressing any concerns at all.

If you don't have time to address this stuff right now, I completely understand, you are an important and busy person. But please don't give a generality or dodge the question, and then pretend the issue is addressed. This is what I call arrogant and it is worse than no reply at all.

W.r.t. the one question about performance justification: I'm not necessarily asking for research papers and measurements, but based on these threads I'm not aware that there is any justification at all. For all I know this is all based on a wild guess that it will help performance "a lot", like someone who optimizes without profiling first. That certainly isn't enough to justify code breakage and massive UB injection, is it? I hope we can agree on that much at least!