September 17, 2019
On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
> On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
>> On Tuesday, 17 September 2019 at 02:38:03 UTC, jmh530 wrote:
>>> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>>>> 10^^16 = 1874919424	???
>>>>
>>>> 10L^^16 is valid, but
>>>>
>>>> enum x = 10^^16 gives wrong value.
>>>>
>>>> I didn't catch this ;/
>>>
>>> 10 and 16 are ints. The largest int is 2147483647, which is several orders of magnitude below 1e16. So you can think of it as wrapping around multiple times, with the final result being the remainder: 1e16 - (2147483647 + 1) * 4656612 = 1874919424
>>>
>>> Probably more appropriate for the Learn forum.
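>>>
>>> A quick way to see the wrap (a minimal sketch; pragma(msg) prints the value the compiler computed):
>>>
>>> pragma(msg, 10 ^^ 16);  // int ^^ int: wraps to 1874919424
>>> pragma(msg, 10L ^^ 16); // long arithmetic: no wrap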
>>
>>
>> Um, duh, but the problem is: why are they ints?
>>
>> It is a compile-time constant; the size shouldn't matter, since there are (in theory) no limitations on type size at compile time.
>>
>> For it to wrap around silently is error-prone and can introduce bugs into programs.
>>
>> The compiler should always compute with the largest type possible and, if appropriate, cast down; casting down to int is not appropriate for an enum.
>>
>> The issue is not how 32-bit math works BUT that 32-bit math is used by default (and my app is 64-bit).
>>
>> Even if I use ulong as the type, it still computes in 32-bit. It should not do that; that is the point. It's wrong and bad behavior.
>>
>> Otherwise, what is the difference between first calculating it as a long and then casting down and wrapping silently? It's the same problem, yet if I do that in a program it will complain about precision; it does not do that here.
>>
>> Again, just so it is clear, it has nothing to do with 32-bit arithmetic but with the fact that 32-bit arithmetic is used instead of 64-bit. I could potentially live with it in a 32-bit program but not in a 64-bit one, and even then it would be difficult because it is a constant... it's shorthand for writing out the long version; it shouldn't silently wrap. If I write out the long version it craps out, so why not the computation itself?
>>
>>
>> Of course I imagine you still don't get it or believe me, so let me prove it:
>>
>>
>> enum x = 100000000000000000;
>> enum y = 10^^17;
>>
>> void main()
>> {
>>    ulong a = x;
>>    ulong b = y;
>>
>> }
>>
>> What do you think a and b are? Do you think they are the same or different?
>>
>> Do you think they *should* be the same or different?
>
> Integer literals without any suffixes (e.g. L) are typed int or long based on their size. Any arithmetic done after that is done according to the same rules as at runtime.
>
> Roughly speaking:
>
> The process is not:
>     we have an enum, let's work out any and all calculations leading to it with arbitrary-size integers and then infer the type of the enum as the smallest type that fits it.
>
> The process is:
>     we have an enum, let's calculate its value using the same logic as at runtime, and then the type of the enum is the type of the answer.
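>
> A minimal illustration of that process (sketch; the static asserts check the inferred types):
>
> enum y = 10 ^^ 17;            // int ^^ int, evaluated with int rules, so it wraps
> static assert(is(typeof(y) == int));
>
> enum x = 100000000000000000;  // the literal doesn't fit in an int, so it is typed long
> static assert(is(typeof(x) == long));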

It doesn't matter; I've already proved that mathematically equivalent expressions give two different results... your claim that it is an int is unfounded... did you look at the code I gave?

You can make claims about whatever you want but facts are facts.

>> enum x = 100000000000000000;
>> enum y = 10^^17;

For those we should have x == y; no ands, buts, or anything to justify the difference.

No matter how you want to justify the compiler's behavior, it is wrong. It is OK to accept that; it actually makes the world a better place to accept when something is wrong, because that is the only way things can get fixed.
September 17, 2019
On Tuesday, 17 September 2019 at 13:59:54 UTC, jmh530 wrote:
> On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
>> [snip]
>>
>>
>> Um, duh, but the problem is: why are they ints?
>> [snip]
>
> They are ints because that is how enums work in the D language. See 17.3 [1].
>
> [1] https://dlang.org/spec/enum.html#named_enums

Then, given

>> enum x = 100000000000000000;
>> enum y = 10^^17;

why does x store 100000000000000000?

If it were an int it would wrap; it doesn't.

Did you try the code?

import std.stdio;
enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
    ulong xx = x;
    ulong yy = y;
    writeln(x);
    writeln(y);
    writeln(xx);
    writeln(yy);
}

100000000000000000
1569325056
100000000000000000
1569325056

You seem to either make stuff up, misunderstand the compiler, or trust the docs too much. I have code that proves I'm right; why is it so hard for you to accept it?

You can make your claims, but they are meaningless if they are not true.

September 17, 2019
On Tuesday, 17 September 2019 at 16:16:44 UTC, bachmeier wrote:
> On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
>
>> it's shorthand for writing out the long version; it shouldn't silently wrap. If I write out the long version it craps out, so why not the computation itself?
>
> I think you should be using https://dlang.org/phobos/std_experimental_checkedint.html rather than getting into the weeds over language design choices made long ago. My thought is that it's relatively easy to work with long if that's what I want:
>
> 10L^^16
> long(10)^^16
>
> I have to be explicit, but it's not Java levels of verbosity. Using long doesn't solve overflow problems. A different default would be better in your example, but it's not clear to me why that would always be better - the proper default would be checkedint.
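>
> For example, something like this (untested sketch; it assumes std.experimental.checkedint's `checked` helper, whose default Abort hook stops on overflow instead of silently wrapping):
>
> import std.experimental.checkedint;
>
> void main()
> {
>     auto n = checked(1_000_000) * 1_000_000; // overflow is detected at run time rather than wrapping
> }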

Wrong:
import std.stdio;
enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
    ulong xx = x;
    ulong yy = y;
    writeln(x);
    writeln(y);
    writeln(xx);
    writeln(yy);
}

100000000000000000
1569325056
100000000000000000
1569325056

I gave code to prove that I was right; why is it so difficult for people to accept? All I see is people trying to justify the compiler's current behavior rather than thinking for themselves and realizing something is wrong!

This is not a difficult issue.


September 17, 2019
On Tuesday, 17 September 2019 at 16:50:29 UTC, Brett wrote:

> Wrong:
> import std.stdio;
> enum x = 100000000000000000;
> enum y = 10^^17;
>
> void main()
> {
>     ulong xx = x;
>     ulong yy = y;
>     writeln(x);
>     writeln(y);
>     writeln(xx);
>     writeln(yy);
> }
>
> 100000000000000000
> 1569325056
> 100000000000000000
> 1569325056
>
> I gave code to prove that I was right; why is it so difficult for people to accept? All I see is people trying to justify the compiler's current behavior rather than thinking for themselves and realizing something is wrong!
>
> This is not a difficult issue.

That output looks correct to me.

September 17, 2019
On Tuesday, 17 September 2019 at 17:05:33 UTC, bachmeier wrote:
> On Tuesday, 17 September 2019 at 16:50:29 UTC, Brett wrote:
>
>> Wrong:
>> import std.stdio;
>> enum x = 100000000000000000;
>> enum y = 10^^17;
>>
>> void main()
>> {
>>     ulong xx = x;
>>     ulong yy = y;
>>     writeln(x);
>>     writeln(y);
>>     writeln(xx);
>>     writeln(yy);
>> }
>>
>> 100000000000000000
>> 1569325056
>> 100000000000000000
>> 1569325056
>>
>> I gave code to prove that I was right; why is it so difficult for people to accept? All I see is people trying to justify the compiler's current behavior rather than thinking for themselves and realizing something is wrong!
>>
>> This is not a difficult issue.
>
> That output looks correct to me.

enum x = 100000000000000000;
enum y = 10^^17;

Why do you think 10^^17 and 100000000000000000

should be different?

First I'm told that enums are ints and so 10^^17 should wrap... yet 100000000000000000 is not wrapped (yet you say it looks correct)...

Then I'm told I have to use an L suffix to keep it from wrapping, yet

100000000000000000

does not have an L... and it doesn't wrap (so the L is implicit).

So which is it?

Do you not understand that something is going on that makes no sense, and that this creates problems? It doesn't make sense... even if you think it does.

Either the compiler needs to warn, or there has to be consistent behavior, and there clearly is not consistent behavior... Just because it makes sense to you only means that you are going along with the behavior the compiler happens to use; the compiler can be wrong, and hence you would be wrong too.




September 17, 2019
On Tuesday, 17 September 2019 at 16:49:46 UTC, Vladimir Panteleev wrote:
> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>> 10^^16 = 1874919424	???
>>
>> 10L^^16 is valid, but
>>
>> enum x = 10^^16 gives wrong value.
>>
>> I didn't catch this ;/
>
> The same can be observed with multiplication:
>
> // This compiles, but the result is "nonsensical" due to overflow.
> enum n = 1_000_000 * 1_000_000;
>
> The same can happen with C:
>
> static const int n = 1000000 * 1000000;
>
> However, C compilers warn about this:
>
> gcc:
>
> test.c:1:30: warning: integer overflow in expression of type ‘int’ results in ‘-727379968’ [-Woverflow]
>     1 | static const int n = 1000000 * 1000000;
>       |                              ^
>
> clang:
>
> test.c:1:30: warning: overflow in expression; result is -727379968 with type 'int' [-Winteger-overflow]
> static const int n = 1000000 * 1000000;
>                              ^
> 1 warning generated.
>
> I think D should warn about any overflows which happen at compile-time too.

I have no problem with warnings; at least it would then be detected rather than silently falling through, which can make things unsafe.

What's more concerning to me is how many people defend the compiler's behavior.

Why

enum x = 100000000000000000;
enum y = 10^^17;

should produce two different results is moronic to me. I realize that 10^^17 is a computation, but at compile time the compiler should use maximum precision to compute values, since it actually can do this without issue (up to a point).

If enums actually are supposed to be ints then it should give an error about overflow. If enums can scale depending on what the compiler sees fit, then it should use L here, and when the values are used in the program it should then error because they will be too large when stuck into ints.
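
And as far as I can tell, that is what already happens for the literal form (quick check; the exact error wording may differ):

enum x = 100000000000000000; // typed long, since the literal doesn't fit in an int
int i = x;                   // error: cannot implicitly convert from long to int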

Regardless of the behavior, it shouldn't produce silent, undetectable errors, which is what I have seen at least 4 people advocate in here right off the bat, rather than a sane solution that prevents those errors. That is very concerning... why would anyone think allowing undetectable errors is reasonable behavior? I actually don't care how it works, as long as I know how it works. If it forces me to add an L, so be it; not a big deal. If it causes crashes in my application and I have to spend hours trying to figure it out because I made a logical assumption and the compiler made a different logical assumption, but both are equally viable, then that is a problem, and it should be understood as a problem: not my problem, but the compiler's problem. Compilers are supposed to make our lives easier, not harder.




September 17, 2019
On Tuesday, 17 September 2019 at 17:34:18 UTC, Brett wrote:
> Why
>
> enum x = 100000000000000000;
> enum y = 10^^17;
>
> should produce two different results

I think the biggest argument would be that computing an expression at runtime and compile-time should produce the same result, because CTFE is expected to only simulate the effect of running something at run-time.
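
For example (a small sketch; both forms go through the same int rules, and 1569325056 is the wrapped value from your earlier output):

int atRuntime() { return 10 ^^ 17; } // 32-bit int math, wraps at run time
enum atCompileTime = 10 ^^ 17;       // wraps the same way during constant folding/CTFE
static assert(atCompileTime == 1569325056);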

> Regardless of the behavior, it shouldn't produce silent undetectable errors,

I agree, a warning or error for overflows at compile-time would be appropriate. We already have a precedent for a similar diagnostic - out-of-bounds array access where the index is known at compile-time. I suggest filing an enhancement request, if one isn't filed for this already.
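
For reference, the existing diagnostic I mean looks like this (sketch; exact message wording may vary between compiler versions):

void f()
{
    int[4] a;
    int x = a[7]; // rejected at compile time: index 7 is out of bounds for int[4]
}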

September 17, 2019
Calm down Brett :-)
People are only trying to help here, and as far as I can tell they fully understood what you wrote.

On Tuesday, 17 September 2019 at 17:23:06 UTC, Brett wrote:
> 
> enum x = 100000000000000000;
> enum y = 10^^17;
>
> Why do you think 10^^17 and 100000000000000000
>
> should be different?
>
> First I'm told that enum's are ints

That is not what was meant. Enums are not always ints. The type of the initializing expression determines the type of the enum.
Numbers are by default `int`, unless they must be another type.
10 --->  is an `int`
17 --->  is an `int`
100000000000000000 --> cannot be an `int`, so it is a larger type

> and so 10^^17 should wrap...

10^^17 is of the form "number ^^ number". What's the first number? 10. So that's an `int`. What's the second number? 17, also an `int`. `int ^^ int` results in another `int`. Thus the type of the expression "10^^17" is `int` --> the enum that is initialized by 10^^17 will also be an `int`. The wrapping that you see is not the enum wrapping; it is the wrapping of the calculation `int ^^ int`. That wrapped calculation result is then used as the initializer for the enum. Again, the fact that 10^^17 wraps has nothing to do with enum.
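
You can check this directly (a small sketch; pragma(msg) prints the inferred types):

pragma(msg, typeof(10));                 // int
pragma(msg, typeof(17));                 // int
pragma(msg, typeof(10 ^^ 17));           // int  (int ^^ int stays int and wraps)
pragma(msg, typeof(100000000000000000)); // long (the literal doesn't fit in an int)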

-Johan

September 17, 2019
On Tuesday, 17 September 2019 at 17:41:23 UTC, Vladimir Panteleev wrote:
>
> I agree, a warning or error for overflows at compile-time would be appropriate.

Do you have a suggestion for the syntax to write overflowing CTFE code without triggering the warning? What I mean is: how can the programmer tell the compiler that overflow is acceptable in a particular case?
I briefly looked for it, but couldn't find how to do that with GCC/clang (other than #pragma diagnostic push/pop).

-Johan



September 17, 2019
On Tuesday, 17 September 2019 at 17:51:59 UTC, Johan Engelen wrote:
> On Tuesday, 17 September 2019 at 17:41:23 UTC, Vladimir Panteleev wrote:
>>
>> I agree, a warning or error for overflows at compile-time would be appropriate.
>
> Do you have a suggestion for the syntax to write overflowing CTFE code without triggering the warning?

When a bigger type exists that fits the non-overflowed result, the obvious solution is to make one of the operands of that type, then explicitly cast the result back to the smaller one.
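
For example (a sketch of that first option):

enum n = cast(int)(1_000_000L * 1_000_000); // compute in long; the cast makes the narrowing explicit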

When a bigger type does not exist, explicit overflow could be indicated by using binary-and with the type's full bit mask, i.e. `(1_000_000 * 1_000_000) & 0xFFFFFFFF`. This fits with D's existing range propagation logic, i.e. the following is not an error:

	uint i = void;
	ubyte b = i & 0xFF;