On 5 May 2012 11:42, Alex Rønne Petersen <xtzgzorex@gmail.com> wrote:
On 05-05-2012 10:23, Era Scarecrow wrote:
On Saturday, 5 May 2012 at 07:10:28 UTC, Alex Rønne Petersen wrote:

Right, but the question was whether the language guarantees what I
described. C and C++ don't, and IMO, we should strive to fix that.

I can't see why it wouldn't, unless the compiler adds in checks and
changes its behavior, or the assembler does its own quirky magic. The
bit patterns they end up with are pretty much fixed; it's just a
matter of how we interpret them.
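
For instance (a minimal C sketch of that point; it assumes a 32-bit
two's-complement target, and note that the signed conversion itself is
implementation-defined in C, which is part of the gap being discussed):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t bits = UINT32_MAX;  /* one fixed bit pattern: all ones */

        /* The same 32 bits, read two different ways. */
        printf("unsigned: %" PRIu32 "\n", bits);           /* 4294967295 */
        printf("signed:   %" PRId32 "\n", (int32_t)bits);  /* -1 on two's-complement targets */

        return 0;
    }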

It all depends. GCC (and thus GDC) can target very exotic architectures where such assumptions may not, for whatever reason, hold true. This is a language design issue more than a "how does architecture X or compiler Y work" issue.

An interesting problem with undefined behavior for integer overflow and underflow in C/C++ is that optimizers are basically free to do anything with code that triggers them, and in practice do whatever is most convenient for them.
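
A classic example (exact behavior depends on compiler and optimization
level, but GCC and Clang at -O2 are known to do this): since signed
overflow is undefined, the optimizer may assume it can never happen and
delete a programmer's overflow check outright.

    #include <limits.h>

    /* Intended as an overflow check, but because signed overflow is
       undefined behavior, an optimizer is free to assume x + 1 never
       wraps and fold this whole function to 'return 1;'. */
    int will_not_overflow(int x)
    {
        return x + 1 > x;
    }

    /* The well-defined way to ask the same question: */
    int will_not_overflow_checked(int x)
    {
        return x < INT_MAX;
    }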

With regard to code-gen on such colourful architectures, would stating a defined behaviour for overflow/underflow affect the common case where an over/underflow does not occur?
Short of an architecture that raises a hardware exception on over/underflow, I suspect it would: the compiler would need to generate additional code around every add/sub/etc. to produce the defined overflow behaviour, so on such an architecture every single integer operation would become inefficient.
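
To make that cost concrete, here's a rough C sketch of what a compiler
might conceptually have to emit for every plain 32-bit addition on a
target whose native add traps or saturates rather than wrapping (the
helper names are hypothetical, purely for illustration):

    #include <stdint.h>

    /* Hypothetical lowering of a plain 'a + b' when the language
       guarantees 32-bit wrap-around but the target's native add does
       not already wrap. The unsigned add is well-defined modulo 2^32;
       the branch reinterprets the result as two's-complement without
       relying on an implementation-defined cast. */
    static int32_t wrap32(uint32_t r)
    {
        return (r >= 0x80000000u) ? (int32_t)(r - 0x80000000u) + INT32_MIN
                                  : (int32_t)r;
    }

    static int32_t add_wrap32(int32_t a, int32_t b)
    {
        return wrap32((uint32_t)a + (uint32_t)b);
    }

On a machine whose add already wraps, all of that collapses to a single
instruction; on one that traps or saturates, something like the branch
above has to surround every addition, which is exactly the inefficiency
I suspect.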

I believe this is why C doesn't define the behaviour: C is still effectively a macro language, and shouldn't produce 'unexpected' inefficient code ('unexpected' from the perspective of the architecture you're targeting).

I would personally rather see it remain undefined as in C, but with a convention agreed upon for the common architectures where cross-platform porting is likely to occur.