September 17, 2019
On Tuesday, 17 September 2019 at 17:51:59 UTC, Johan Engelen wrote:
> I briefly looked for it, but couldn't find how to do that with GCC/clang (other than #pragma diagnostic push/pop).

It does not appear to me that either GCC* or Clang warns about wrapping/overflow unless you're directly invoking undefined behavior. In that case, of course, the proper solution is to fix the broken code.

*compiling C code
September 17, 2019
On 09/17/2019 10:23 AM, Brett wrote:

> First I'm told that enums are ints

Enums can be ints, in which case the following rather lengthy rules apply (listed after the grammar spec):

  https://dlang.org/spec/lex.html#integerliteral
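
A minimal sketch of how those rules play out for unsuffixed decimal literals (types queried with typeof; the specific values are only examples):

static assert(is(typeof(2_147_483_647) == int));            // fits in int
static assert(is(typeof(2_147_483_648) == long));           // too big for int, becomes long
static assert(is(typeof(100_000_000_000_000_000) == long)); // the literal from this thread
static assert(is(typeof(10 ^^ 17) == int));                 // a computation on two ints stays int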

Ali

September 17, 2019
On 17.09.19 19:34, Brett wrote:
> 
> What's more concerning to me is how many people defend the compilers behavior.
> ...

What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.

> Why
> 
> enum x = 100000000000000000;
> enum y = 10^^17;
> 
> should produce two different results is moronic to me. I realize that 10^^17 is a computation, but at compile time the compiler should use the maximum precision to compute values, since it actually can do this without issue (up to a point).

The reason why you get different results is that someone argued, not unlike you, that the compiler should be "smart" and implicitly promote the 100000000000000000 literal to type 'long'. This is why you now observe this apparently inconsistent behavior. If we really care about the inconsistency you are complaining about, the right fix is to remove 'long' literals without suffix L. Trying to address it by introducing additional inconsistencies in how code is interpreted in CTFE and at runtime is plain stupid. (D currently does things like this with floating point types, and it is annoying.)
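
To make the inconsistency concrete, a minimal sketch (types queried with typeof):

enum x = 100_000_000_000_000_000; // unsuffixed literal too big for int, so it is typed long
enum y = 10 ^^ 17;                // int ^^ int: evaluated in 32-bit int arithmetic, wraps
static assert(is(typeof(x) == long));
static assert(is(typeof(y) == int));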
September 17, 2019
On 17.09.19 18:49, Vladimir Panteleev wrote:
> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>> 10^^16 = 1874919424    ???
>>
>> 10L^^16 is valid, but
>>
>> enum x = 10^^16 gives wrong value.
>>
>> I didn't catch this ;/
> 
> The same can be observed with multiplication:
> 
> // This compiles, but the result is "nonsensical" due to overflow.
> enum n = 1_000_000 * 1_000_000;
> 
> The same can happen with C:
> 
> static const int n = 1000000 * 1000000;
> 
> However, C compilers warn about this:
> 
> gcc:
> 
> test.c:1:30: warning: integer overflow in expression of type ‘int’ results in ‘-727379968’ [-Woverflow]
>      1 | static const int n = 1000000 * 1000000;
>        |                              ^
> 
> clang:
> 
> test.c:1:30: warning: overflow in expression; result is -727379968 with type 'int' [-Winteger-overflow]
> static const int n = 1000000 * 1000000;
>                               ^
> 1 warning generated.
> 
> I think D should warn about any overflows which happen at compile-time too.
> 

It's not the same. C compilers warn about overflows that are UB. They don't complain about overflows that have defined behavior:

static const int n = 1000000u * 1000000u; // no warning

In D, all overflows in operations on basic integer types have defined behavior, not just those operating on unsigned integers.
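
To make "defined" concrete: the same wraparound happens for signed and unsigned int during constant folding/CTFE, and the program still compiles (a sketch):

enum n = 1_000_000 * 1_000_000;   // int arithmetic, wraps
static assert(n == -727_379_968); // the value gcc and clang print in their warnings above
enum m = 1_000_000u * 1_000_000u; // uint arithmetic, also wraps, equally defined
static assert(m == 3_567_587_328u);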
September 17, 2019
On Tuesday, 17 September 2019 at 19:19:46 UTC, Timon Gehr wrote:
> On 17.09.19 19:34, Brett wrote:
>> 
>> What's more concerning to me is how many people defend the compilers behavior.
>> ...
>
> What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.

And this is why compilers should do everything they can to reduce problems... it doesn't just affect one person but everyone who uses the compiler. If the onus is on the programmer, then a very large percentage of people (thousands, tens of thousands, millions) are going to have to deal with it, and, as you've already said, they are lazy, so they won't.

>
>> Why
>> 
>> enum x = 100000000000000000;
>> enum y = 10^^17;
>> 
>> should produce two different results is moronic to me. I realize that 10^^17 is a computation, but at compile time the compiler should use the maximum precision to compute values, since it actually can do this without issue (up to a point).
>
> The reason why you get different results is that someone argued, not unlike you, that the compiler should be "smart" and implicitly promote the 100000000000000000 literal to type 'long'. This is why you now observe this apparently inconsistent behavior. If we really care about the inconsistency you are complaining about, the right fix is to remove 'long' literals without suffix L. Trying to address it by introducing additional inconsistencies in how code is interpreted in CTFE and at runtime is plain stupid. (D currently does things like this with floating point types, and it is annoying.)

No, that is not the right behavior, because you've already said that wrapping is *defined* behavior... and it is not! What if we multiply two numbers together that were generated at CTFE using mixins, or by a complex constant expression that happens to sit near the upper bound, and it overflows? Then what?

You are saying it is OK for undefined behavior to exist in a program, and that is never true! Undefined behavior accounts for 100% of all program bugs. Even a perfectly written program is undefined behavior if it doesn't do what the user or programmer wants.

The compiler can warn us at compile time about ambiguous cases; that is the best solution. Saying it shouldn't, because wrapping is "defined behavior", is the very thing that creates inconsistencies.
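
To make the point concrete: detecting such an overflow is mechanical, even in constant expressions. A sketch using core.checkedint from druntime (the helper name mulOverflows is made up for illustration):

import core.checkedint : muls;

// Returns true if a * b overflows int; usable both in CTFE and at run time.
bool mulOverflows(int a, int b)
{
    bool overflow;
    cast(void) muls(a, b, overflow);
    return overflow;
}

static assert( mulOverflows(1_000_000, 1_000_000)); // the case from this thread
static assert(!mulOverflows(46_340, 46_340));       // 46340^2 = 2_147_395_600 still fits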




September 17, 2019
On Tuesday, 17 September 2019 at 19:22:44 UTC, Timon Gehr wrote:
> It's not the same. C compilers warn about overflows that are UB. They don't complain about overflows that have defined behavior:

I'm not so sure that's the actual distinction.

The error messages do not mention undefined behavior.

The GCC source code for this does not mention undefined behavior:

https://github.com/gcc-mirror/gcc/blob/5fe20025f581fb0c215611434d76696161d4cbd3/gcc/c-family/c-warn.c#L70

The clang source code does not mention anything about undefined behavior:

https://github.com/CyberShadow/llvm-project/blob/6e4932ebe9448b9bab922b225a8012669972ff0c/clang/lib/AST/ExprConstant.cpp#L2310

It seems to me that the more likely explanation is that making the operands unsigned is a method of squelching the warning.

> In D, all overflows in operations on basic integer types have defined behavior, not just those operating on unsigned integers.

Regardless of what other languages do, or the pedantic details involved, it seems to me that warning on detectable overflows would simply be more useful for D users (provided there is a way to squelch the warning). Therefore, D should do it.
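
In the meantime, one opt-in route that exists today is std.experimental.checkedint, which wraps an integer and intercepts overflowing operations instead of silently wrapping. A sketch, assuming the default Abort hook (which terminates the program on overflow):

import std.experimental.checkedint : checked;

void main()
{
    auto n = checked(1_000_000); // Checked!int with the default Abort hook
    n *= 1_000_000;              // overflows int: the hook fires here instead of wrapping
}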

September 17, 2019
On Tuesday, 17 September 2019 at 19:31:49 UTC, Brett wrote:
> On Tuesday, 17 September 2019 at 19:19:46 UTC, Timon Gehr wrote:
>> On 17.09.19 19:34, Brett wrote:
>>> 
>>> What's more concerning to me is how many people defend the compilers behavior.
>>> ...
>>
>> What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.
>
> And this is why compilers should do everything they can to reduce problems... it doesn't just affect one person but everyone who uses the compiler. If the onus is on the programmer, then a very large percentage of people (thousands, tens of thousands, millions) are going to have to deal with it, and, as you've already said, they are lazy, so they won't.
>
>>
>>> Why
>>> 
>>> enum x = 100000000000000000;
>>> enum y = 10^^17;
>>> 
>>> should produce two different results is moronic to me. I realize that 10^^17 is a computation, but at compile time the compiler should use the maximum precision to compute values, since it actually can do this without issue (up to a point).
>>
>> The reason why you get different results is that someone argued, not unlike you, that the compiler should be "smart" and implicitly promote the 100000000000000000 literal to type 'long'. This is why you now observe this apparently inconsistent behavior. If we really care about the inconsistency you are complaining about, the right fix is to remove 'long' literals without suffix L. Trying to address it by introducing additional inconsistencies in how code is interpreted in CTFE and at runtime is plain stupid. (D currently does things like this with floating point types, and it is annoying.)
>
> No, that is not the right behavior, because you've already said that wrapping is *defined* behavior... and it is not! What if we multiply two numbers together that were generated at CTFE using mixins, or by a complex constant expression that happens to sit near the upper bound, and it overflows? Then what?
>
> You are saying it is OK for undefined behavior to exist in a program, and that is never true! Undefined behavior accounts for 100% of all program bugs. Even a perfectly written program is undefined behavior if it doesn't do what the user or programmer wants.
>
> The compiler can warn us at compile time about ambiguous cases; that is the best solution. Saying it shouldn't, because wrapping is "defined behavior", is the very thing that creates inconsistencies.

Brett, read the fine manual. The promotion rules [1] and the usual arithmetic conversions [2] are explained in detail.
The reason the grammar is the way it is has to do with the fact that the language was not defined in a void. One of the goals of D's development is to be a successor to C. To reach that goal, the language has to balance fixing what is wrong with its predecessor against maintaining its legacy (i.e. not estranging developers coming from it by changing rules willy-nilly).
The thing with integer promotion and arithmetic conversions is that there is NO absolutely right or wrong approach. The C developers chose to privilege the approach that tended to maintain the sign when mixing signed and unsigned types; other languages made other choices. One of the goals of the D language, which Walter has stated several times, is that D expressions that are also valid in C behave like C, to minimize the surprise for people coming from C (or C++).


[1]: https://dlang.org/spec/type.html#integer-promotions
[2]: https://dlang.org/spec/type.html#usual-arithmetic-conversions
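
To make [2] concrete, a small sketch of the usual arithmetic conversions behaving as they do in C when signed and unsigned operands meet:

void main()
{
    int  a = -1;
    uint b = 1;
    static assert(is(typeof(a + b) == uint)); // the int operand is converted to uint
    assert(a + b == 0);                       // -1 becomes uint.max; adding 1 wraps to 0
}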
September 17, 2019
On Tuesday, 17 September 2019 at 19:36:14 UTC, Vladimir Panteleev wrote:
> I'm not so sure that's the actual distinction.
>
> The error messages do not mention undefined behavior.

Formally, operations on unsigned integers can never overflow in C, so there is no overflow to warn about. Since the warning can then only occur for signed integers (as observed), any such warning directly implies undefined behavior per the C standard.
September 17, 2019
On Tuesday, 17 September 2019 at 20:13:21 UTC, lithium iodate wrote:
> Formally, operations with unsigned integers can never overflow in C and you can therefore not warn about overflow. Since the warning can then only occur for signed integers (as observed), any such warning directly implies undefined behavior as per the C standard.

No, you're inferring causation from a correlation.

In any case, compiler warnings are not governed by what's defined behavior or not. Compilers can and do warn about many code fragments which are fully defined, and the world is a better place for that. A warning here would be useful, so there should be one.

September 18, 2019
On Tuesday, 17 September 2019 at 19:31:49 UTC, Brett wrote:
> On Tuesday, 17 September 2019 at 19:19:46 UTC, Timon Gehr wrote:
>> On 17.09.19 19:34, Brett wrote:
>>> 
>>> What's more concerning to me is how many people defend the compilers behavior.
>>> ...
>>
>> What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.
>
> And this is why compilers should do everything they can to reduce problems... it doesn't just affect one person but everyone who uses the compiler. If the onus is on the programmer, then a very large percentage of people (thousands, tens of thousands, millions) are going to have to deal with it, and, as you've already said, they are lazy, so they won't.

Carelessly doing everything you can to reduce problems is a good way to create lots of problems. For example, there can be a trade-off between being consistently (and therefore predictably) wrong and being inconsistently right.

> No, that is not the right behavior, because you've already said that wrapping is *defined* behavior... and it is not! What if we multiply two numbers together that were generated at CTFE using mixins, or by a complex constant expression that happens to sit near the upper bound, and it overflows? Then what?
>
> You are saying it is OK for undefined behavior to exist in a program, and that is never true! Undefined behavior accounts for 100% of all program bugs. Even a perfectly written program is undefined behavior if it doesn't do what the user or programmer wants.
>
> The compiler can warn us at compile time about ambiguous cases; that is the best solution. Saying it shouldn't, because wrapping is "defined behavior", is the very thing that creates inconsistencies.

Just to make sure you don't misunderstand:

For better or worse, integer overflow is defined behaviour in D; the wraparound reality of the overwhelming majority of CPU hardware is encoded in the language.

That is using the meaning of the term "defined" as it is used in, e.g., the C standard.
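
A sketch of that definition in practice, both at run time and during constant folding:

void main()
{
    int x = int.max;
    ++x;                                   // defined in D: two's-complement wraparound
    assert(x == int.min);
    static assert(int.max + 1 == int.min); // the same wraparound at compile time
}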