The main argument seems to revolve around whether this is actually a change
or not.  In my (and others') view, the treatment of assert as 'assume' is
not a change at all.  It was in there all along; we just needed the wizard
to tell us.


How can there be any question? This is a change in the compiler, a change in the docs, a change in what your program does, a change in the very bytes of the executable. If my program worked before and doesn't now, how is that not a change? This must be considered a change by any reasonable definition of the word.

I don't think I can take seriously this idea that someone's unstated, unmanifested intentions define change more than things that are... you know... actual, real changes.


Yes, sorry, there will be actual consequences if the optimizations are implemented.  What I meant by the somewhat facetious statement was that there is no semantic change: broken code will still be broken, it will just be broken in a different way.

If you subscribe to the idea that a failed assertion indicates all subsequent code is invalid, then subsequent code is broken (and undefined, if the spec says it is undefined).  The change would be clarifying this in the spec, and dealing with the fallout of previously broken-but-still-working code behaving differently under optimization.
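
To make the fallout concrete, here is a hypothetical example of broken-but-still-working code (the function and its names are made up for illustration):

    void process(int* p)
    {
        assert(p !is null);
        if (p is null)   // contradicts the assert; today, under -release,
            return;      //   the assert vanishes and this check still runs
        *p = 42;         // under assert-as-assume, the optimizer may delete
                         //   the check, so a null p now crashes instead of
                         //   returning quietly
    }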

At least, that's how I understand it... I hope I am not mischaracterizing others' positions here (let me know, and I'll shut up).




Well, I think I outlined the issues in the OP. As for solutions, there have been some suggestions in this thread; the simplest is to leave things as they are and introduce the optimizer hint as a separate function, assume().
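
A minimal sketch of what that could look like (the name assume() is just this thread's suggestion, and the body is only one possible way to spell the hint in today's D):

    pragma(inline, true)
    void assume(bool condition) pure nothrow @nogc @safe
    {
        if (!condition)
            assert(0);   // assert(0) halts even in -release builds, so once
                         //   inlined, the optimizer can treat `condition` as
                         //   true on every path past this point
    }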

I don't think there was any argument presented against a separate function besides that Walter couldn't see any difference between the two behaviors, or the intention argument, which doesn't really help us here.


An argument against the separate function: we already have one, and it's called assert.  Those who want the nonoptimizing version (a disable-able 'if false throw' with no wider meaning) should get their own function, darnit.
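
Something like this, say (check is a made-up name, and this assumes the predefined version(assert) identifier, which is set whenever assert checks are being emitted):

    void check(bool condition, string msg = "check failed")
    {
        version (assert)      // compiled out when asserts are disabled,
            if (!condition)   //   just like today's assert under -release
                throw new Error(msg);
    }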
 

I guess the only real argument against it is that pre-existing asserts contain significant optimization information that we can't afford not to reuse.

Yessss.  This gets to the heart of the argument: asserts contain information about the program.  Specifically, a statement about the valid program state at a given point.  So we should treat them as such.
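
For example (hypothetical code, but the pattern shows up everywhere):

    int sumFirstTen(const(int)[] data)
    {
        assert(data.length >= 10);   // a statement about valid state here
        int total = 0;
        foreach (i; 0 .. 10)
            total += data[i];   // treated as an assumption, this would let
                                //   the compiler drop all ten bounds checks
        return total;
    }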
 
But this is a claim I'm pretty skeptical of.

Ah man, thought I had you.
 
Andrei admitted it's just a hunch at this point. Try looking through your code base to see how many asserts would be useful for optimizing.

Ironically, I don't tend to use asserts at all in my code.  I do not want code that will throw or not throw based on a compiler flag.  Why am I even arguing about this stuff?
 
If asserts were used for optimizing, I would look at actually using them more.


(Can we programmatically identify and flag/resolve issues that arise from
a mismatch of expectations?)

I'm not an expert on this, but my guess is that it's possible in theory but would never happen in practice. Such things are very complex to implement, and if Walter won't agree to a simple and easy solution, I'm pretty sure there's no way in hell he would agree to a complex one that takes a massive amount of effort.

If gcc et al. do similar optimizations, how do they handle messaging?