Bug in ^^
September 17, 2019
10^^16 = 1874919424	???

10L^^16 is valid, but

enum x = 10^^16 gives wrong value.

I didn't catch this ;/


September 17, 2019
On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
> 10^^16 = 1874919424	???
>
> 10L^^16 is valid, but
>
> enum x = 10^^16 gives wrong value.
>
> I didn't catch this ;/

10 and 16 are ints. The largest int is 2147483647, which is several orders of magnitude below 1e16. So you can think of it as wrapping around multiple times, with 1874919424 being the remainder: 1e16 - (2147483647 + 1) * 4656612 = 1874919424

Probably more appropriate for the Learn forum.
September 17, 2019
On Tuesday, 17 September 2019 at 02:38:03 UTC, jmh530 wrote:
> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>> 10^^16 = 1874919424	???
>>
>> 10L^^16 is valid, but
>>
>> enum x = 10^^16 gives wrong value.
>>
>> I didn't catch this ;/
>
> 10 and 16 are ints. The largest int is 2147483647, which is several orders of magnitude below 1e16. So you can think of it as wrapping around multiple times, with 1874919424 being the remainder: 1e16 - (2147483647 + 1) * 4656612 = 1874919424
>
> Probably more appropriate for the Learn forum.


Um, duh, but the problem is: why are they ints?

It is a compile-time constant; the size shouldn't matter, since there are (in theory) no limitations on type size at compile time.

For it to wrap around silently is error prone and can introduce bugs into programs.

The compiler should always use the largest value possible and, if appropriate, cast down; an enum is not appropriate to cast down to int.

The issue is not how 32-bit math works BUT that it is using 32-bit math by default (and my app is 64-bit).

Even if I use ulong as the type it still computes it in 32-bit. It should not do that; that is the point. It's wrong and bad behavior.

Else, what would be the difference if it first calculated in L and then cast down and wrapped silently? It's the same problem, yet if I do that in a program it will complain about precision, yet it does not do that here.

Again, just so it is clear, it has nothing to do with 32-bit arithmetic but that 32-bit arithmetic is used instead of 64-bit. I could potentially go with it in a 32-bit program but not in 64-bit, but even then it would be difficult because it is a constant... it's shorthand for writing out the long version, and it shouldn't silently wrap. If I write out the long version it craps out, so why not the computation itself?


Of course I imagine you still don't get it or believe me, so I can prove it:


enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
   ulong a = x;
   ulong b = y;

}

What do you think a and b are? Do you think they are the same or different?

Do you think they *should* be the same or different?

September 17, 2019
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
> [snip]
>
>
> Um, duh, but the problem why are they ints?
> [snip]

They are ints because that is how enums work in the D language. See 17.3 [1].

[1] https://dlang.org/spec/enum.html#named_enums
September 17, 2019
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:

> enum x = 100000000000000000;
This is of type long, because the literal is too large to fit in an int.

> enum y = 10^^17;
This is of type int (the default).
The exponentiation operator (like any other operator) produces a result of the same type as its input, so the result is still an int.
if you want long, you should write

enum y = 10L^^17;

You should have a look at the language specification.
D inherits C's bad behaviour of defaulting to int (not even uint),
and even large literals are signed by default (sigh!).

Anyway, nothing can always prevent overflow. What should be the result of

enum z = 10L ^^ 122;

Automatically import BigInt from a library? And even then,
how about 1000_000_000 ^^ 1000_000_000 --> try it and throw some out-of-memory error?

September 17, 2019
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
> [snip]
>
> What do you think a and b are, do you think they are the same or different?
>
> Do you think they *should* be the same or different?

Integer literals without any suffixes (e.g. L) are typed int or long based on their size. Any arithmetic done after that is done according to the same rules as at runtime.

Roughly speaking:

The process is not:
    we have an enum, let's work out any and all calculations leading to it with arbitrary-size integers and then infer the type of the enum as the smallest that fits.

The process is:
    we have an enum, let's calculate its value using the same logic as at runtime, and then the type of the enum is the type of the answer.
September 17, 2019
On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
> The process is:

It might be a good idea to change that process. It hasn't worked as well in practice as we hoped earlier - leading to all kinds of weird stuff.
September 17, 2019
On Tuesday, 17 September 2019 at 14:29:32 UTC, Adam D. Ruppe wrote:
> On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
>> The process is:
>
> It might be a good idea to change that process. It hasn't worked as well in practice as we hoped earlier - leading to all kinds of weird stuff.

It would lead to a strange difference between CTFE and runtime, or a strange difference between the evaluation of some constants and the rest of CTFE.
September 17, 2019
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:

> it's shorthand for writing out the long version, it shouldn't silently wrap, If I write out the long version it craps out so why not the computation itself?

I think you should be using https://dlang.org/phobos/std_experimental_checkedint.html rather than getting into the weeds of language design choices made long ago. My thought is that it's relatively easy to work with long if that's what I want:

10L^^16
long(10)^^16

I have to be explicit, but it's not Java levels of verbosity. Using long doesn't solve overflow problems either. A different default would be better in your example, but it's not clear to me that it would always be better; the proper default would be checkedint.
September 17, 2019
On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
> 10^^16 = 1874919424	???
>
> 10L^^16 is valid, but
>
> enum x = 10^^16 gives wrong value.
>
> I didn't catch this ;/

The same can be observed with multiplication:

// This compiles, but the result is "nonsensical" due to overflow.
enum n = 1_000_000 * 1_000_000;

The same can happen with C:

static const int n = 1000000 * 1000000;

However, C compilers warn about this:

gcc:

test.c:1:30: warning: integer overflow in expression of type ‘int’ results in ‘-727379968’ [-Woverflow]
    1 | static const int n = 1000000 * 1000000;
      |                              ^

clang:

test.c:1:30: warning: overflow in expression; result is -727379968 with type 'int' [-Winteger-overflow]
static const int n = 1000000 * 1000000;
                             ^
1 warning generated.

I think D should warn about any overflow that happens at compile time, too.
