November 13, 2015
On Friday, 13 November 2015 at 09:09:33 UTC, Don wrote:
> Suppose we made it an error. We'd be in a much better position than C. We could easily add a check for integer overflow into CTFE. We could allow compilers and static analysis tools to implement runtime checks for integer overflow, as well.
> Are we certain that we want to disallow this?

In C, allowing undefined behavior resulted in questionably aggressive optimizations being forced on everyone. That's what is disallowed here.
November 13, 2015
On Friday, 13 November 2015 at 09:09:33 UTC, Don wrote:
> At the very least, we should change the terminology on that page. The word "overflow" should not be used when referring to both signed and unsigned types. On that page, it is describing two very different phenomena, and gives the impression that it was written by somebody who does not understand what they are talking about.
> The usage of the word "wraps" is sloppy.
>
> That page should state something like:
> For any unsigned integral type T, all arithmetic is performed modulo (T.max + 1).
> Thus, for example, uint.max + 1 == 0.
> There is no reason to mention the highly misleading word "overflow".
>
> For a signed integral type T, T.max + 1 is not representable in type T.
> Then, we have a choice of either declaring it to be an error, as C effectively does (signed overflow is undefined behaviour there); or stating that the low bits of the infinitely-precise result will be interpreted as a two's complement value. For example, T.max + 1 will be negative.
>
> (Note that unlike the unsigned case, there is no simple explanation of what happens).
>
> Please let's be precise about this.

I don't understand what you think is so complicated about it?

It's just circular boundary conditions. Unsigned has the boundaries at 0 and 2^n - 1, signed has them at -2^(n-1) and 2^(n-1) - 1.

Less straightforwardly, if you prefer modular arithmetic: after each arithmetic operation, f is applied:
unsigned: f(v) = v mod 2^n
signed: f(v) = ((v + 2^(n-1)) mod 2^n) - 2^(n-1)
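
To make the mapping concrete, here is a minimal D sketch (assuming the wraparound semantics under discussion, with n = 32) that checks both formulas:

import std.stdio;

void main()
{
    uint u = uint.max;
    writeln(u + 1);    // 0: unsigned wraps modulo 2^32

    int s = int.max;
    writeln(s + 1);    // -2147483648: signed wraps to -2^31

    // The signed mapping, spelled out with exact 64-bit intermediates:
    long v = cast(long)int.max + 1;                        // true result
    writeln(((v + (2L ^^ 31)) % (2L ^^ 32)) - (2L ^^ 31)); // -2147483648
}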
November 13, 2015
On Friday, 13 November 2015 at 09:33:51 UTC, John Colvin wrote:
> I don't understand what you think is so complicated about it?
>

It is not that it is complicated, but that signed wraparound is almost always a bug. In C/C++, that results in very questionable optimizations. But defining the behavior as wraparound also prevents it from ever becoming an error. On the other hand, detecting the overflow is expensive on most machines.

I think Don has a point and the spec should say something like:
signed integer overflow is defined as being a runtime error. For performance reasons, the compiler may choose not to emit error-checking code and use wraparound semantics instead.

Or something along these lines.
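
As a sketch of what the checked variant could look like, using druntime's core.checkedint (the error-reporting policy here is an assumption, not actual spec wording):

import core.checkedint : adds;

// What a compiler might emit for a signed `a + b` under such wording.
// Dropping the `if` leaves exactly the two's complement wraparound
// result, so both permitted behaviors fall out of the same codegen.
int checkedAdd(int a, int b)
{
    bool overflow = false;
    int sum = adds(a, b, overflow); // wrapped sum, plus an overflow flag
    if (overflow)
        assert(0, "signed integer overflow"); // hypothetical error policy
    return sum;
}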
November 13, 2015
On 11/13/2015 1:10 AM, Iain Buclaw via Digitalmars-d wrote:
> We are not. For gdc, the -fwrapv flag is enabled by default.

Good!
November 13, 2015
On 11/13/2015 1:09 AM, Don wrote:
> Please let's be precise about this.

I'd be happy if you contributed the precise wording we need!
November 13, 2015
On Friday, 13 November 2015 at 09:37:41 UTC, deadalnix wrote:
> On Friday, 13 November 2015 at 09:33:51 UTC, John Colvin wrote:
>> I don't understand what you think is so complicated about it?

>> After each arithmetic operation, f is applied:
>> signed: f(v) = ((v + 2^(n-1)) mod 2^n) - 2^(n-1)

Complicated in the sense of: when are those semantics useful? The answer, of course, is pretty much never. They are very bizarre.

>
> It is not that it is complicated, but that signed wraparound is almost always a bug. In C/C++, that results in very questionable optimizations. But defining the behavior as wraparound also prevents it from ever becoming an error. On the other hand, detecting the overflow is expensive on most machines.
>
> I think Don has a point and the spec should say something like:
> signed integer overflow is defined as being a runtime error. For performance reasons, the compiler may choose not to emit error-checking code and use wraparound semantics instead.
>
> Or something along these lines.

Oh, I like that! That does seem to be the best of both worlds. Then, as a QOI (quality of implementation) issue, the compiler can try to detect the error. If it does not detect the error, it MUST provide the two's complement result. It is not allowed to do any weird stuff.



November 13, 2015
On Friday, 13 November 2015 at 09:09:33 UTC, Don wrote:
> (Note that unlike the unsigned case, there is no simple explanation of what happens).

Well, negative overflow (wrapping below zero) for unsigned probably should be illegal too. Ada got this right by having:

32-bit signed integers: monotonic
31-bit unsigned integers: monotonic

That way you can transition between unsigned and signed without having negative values turned into positive ones (and vice versa), and violations are detected by the verifier.

In addition, Ada also provides explicit modular integers over user-specified ranges.
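
As a rough illustration (a hypothetical type, not Ada's actual syntax), the 31-bit idea maps onto D like this: keep the domain inside int so conversions are lossless, and trap instead of wrapping:

// Hypothetical sketch of an Ada-style 31-bit monotonic unsigned integer.
struct UInt31
{
    private int value; // invariant: 0 <= value <= int.max

    this(int v)
    {
        assert(v >= 0, "UInt31 out of range");
        value = v;
    }

    UInt31 opBinary(string op : "+")(UInt31 rhs) const
    {
        long r = cast(long)value + rhs.value;    // exact in 64 bits
        assert(r <= int.max, "UInt31 overflow"); // monotonic: no wrap
        return UInt31(cast(int)r);
    }

    int toInt() const { return value; } // always fits in a signed int
}

Because every UInt31 value is representable as an int, mixing it with signed arithmetic can never silently flip a sign, which is the property the verifier relies on.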

November 13, 2015
On Friday, 13 November 2015 at 10:20:53 UTC, Don wrote:
> Oh, I like that! That does seem to be the best of both worlds. Then, as a QOI issue, the compiler can try to detect the error. If it does not detect the error, it MUST provide the two's complement result. It is not allowed to do any weird stuff.

That would be a silly restriction that nobody would need to care about. If the user cannot assume wrapping, then compiler vendors will make more aggressive optimizations available.

November 13, 2015
On Friday, 13 November 2015 at 06:00:08 UTC, Walter Bright wrote:
> It's worth checking how LDC and GDC deal with this deep in their optimizers - do they consider it undefined behavior?

Signed types will wrap around correctly for LDC.

 — David
November 13, 2015
On Friday, 13 November 2015 at 09:37:41 UTC, deadalnix wrote:
> It is not that it is complicated, but that signed wraparound is almost always a bug. In C/C++, that results in very questionable optimizations. But defining the behavior as wraparound also prevents it from ever becoming an error.

What about unsigned integers? Most of the time they are used as positive numbers, and positive-number overflow is the same bug.
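
For concreteness, the classic form of that bug in D (a minimal sketch):

import std.stdio;

void main()
{
    uint[] a = [1, 2, 3];
    // Counting down with an unsigned index: `i >= 0` is always true,
    // and when i reaches 0, i-- wraps to uint.max instead of -1.
    for (uint i = cast(uint)(a.length - 1); i >= 0; i--)
        writeln(a[i]); // prints 3, 2, 1, then the wrapped index trips
                       // D's bounds check (in C this would be UB)
}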