Thread overview: enforce (i > 0) for i = int.min does not throw

Jan 27, 2018  kdevel
Jan 27, 2018  Ali Çehreli
Jan 27, 2018  ag0aep6g
Jan 27, 2018  kdevel
Jan 28, 2018  Steven Schveighoffer
Jan 29, 2018  Seb
Jan 30, 2018  kdevel
Jan 31, 2018  Steven Schveighoffer
Jan 31, 2018  Azi Hassan
Jan 31, 2018  Steven Schveighoffer

January 27, 2018 (kdevel)
I would expect this code

enforce3.d
---
import std.exception;

void main ()
{
   int i = int.min;
   enforce (i > 0);
}
---

to throw an "Enforcement failed" exception, but it doesn't:

$ dmd enforce3.d
$ ./enforce3
[nothing]
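
(For contrast, a minimal sketch: with a negative value other than int.min, enforce behaves as documented and throws an "Enforcement failed" exception.)

---
import std.exception;

void main ()
{
   int i = -1;
   enforce (i > 0);   // throws: Enforcement failed
}
---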


January 27, 2018 (Ali Çehreli)
On 01/27/2018 06:13 AM, kdevel wrote:
> I would expect this code
> 
> enforce3.d
> ---
> import std.exception;
> 
> void main ()
> {
>     int i = int.min;
>     enforce (i > 0);
> }
> ---
> 
> to throw an "Enforcement failed" exception, but it doesn't:
> 
> $ dmd enforce3.d
> $ ./enforce3
> [nothing]
> 
> 

Looks like a major issue to me.

But enforce is a red herring there. This prints true with 2.078 as well:

import std.stdio;

void main ()
{
    int i = int.min;
    writeln(i > 0);    // prints 'true' with 2.078
}

Ali
January 27, 2018 (ag0aep6g)
On 01/27/2018 03:13 PM, kdevel wrote:
> I would expect this code
> 
> enforce3.d
> ---
> import std.exception;
> 
> void main ()
> {
>     int i = int.min;
>     enforce (i > 0);
> }
> ---
> 
> to throw an "Enforcement failed" exception, but it doesn't:
> 
> $ dmd enforce3.d
> $ ./enforce3
> [nothing]
> 
> 

Wow, that looks really bad.

Apparently, dmd implements `i < 0` as `i >> 31`. I.e., it shifts the bits to the right so far that only the sign bit is left. This is ok.

But it implements `i > 0` as `(-i) >> 31`. That would be correct if negation always flipped the sign bit. But it doesn't for `int.min`: `-int.min` is `int.min` again.

So dmd emits wrong code for `i > 0`. O_O

I've filed an issue:
https://issues.dlang.org/show_bug.cgi?id=18315
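
(A minimal sketch of that shortcut spelled out by hand, using the unsigned shift `>>>` so the result reads as 0 or 1; this illustrates the wrap-around and is not dmd's actual generated code:)

import std.stdio;

void main ()
{
    int i = int.min;

    // Two's-complement negation wraps: -int.min is int.min again,
    // so the sign bit stays set.
    writefln("%X", cast(uint)(-i));   // 80000000

    // Hence the shortcut "i > 0  ->  (-i) >> 31" reports "greater than 0"
    // for int.min, even though int.min is clearly negative.
    writeln((-i) >>> 31);             // 1, i.e. "true"
}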
January 27, 2018 (kdevel)
On Saturday, 27 January 2018 at 14:49:52 UTC, Ali Çehreli wrote:
> But enforce is a red herring there. This prints true with 2.078 as well:
>
> import std.stdio;
>
> void main ()
> {
>     int i = int.min;
>     writeln(i > 0);    // prints 'true' with 2.078
> }

test.d
---
import std.stdio;
void main ()
{
   int i = int.min;
   auto b = i > 0;
   b.writeln;
   auto c = int.min > 0;
   c.writeln;
}
---
$ dmd test.d
$ ./test
true
false

January 28, 2018 (Steven Schveighoffer)
On 1/27/18 9:50 AM, ag0aep6g wrote:

> Wow, that looks really bad.
> 
> Apparently, dmd implements `i < 0` as `i >> 31`. I.e., it shifts the bits to the right so far that only the sign bit is left. This is ok.
> 
> But it implements `i > 0` as `(-i) >> 31`. That would be correct if negation always flipped the sign bit. But it doesn't for `int.min`: `-int.min` is `int.min` again.
> 
> So dmd emits wrong code for `i > 0`. O_O
> 
> I've filed an issue:
> https://issues.dlang.org/show_bug.cgi?id=18315

This is insane. i > 0 is used in so many places. The only saving grace appears to be that int.min is just so uncommonly seen in the wild.

I tested all the way back to 2.040, still has the same behavior.

-Steve
January 29, 2018 (Seb)
On Sunday, 28 January 2018 at 19:17:49 UTC, Steven Schveighoffer wrote:
> On 1/27/18 9:50 AM, ag0aep6g wrote:
>
>> Wow, that looks really bad.
>> 
>> Apparently, dmd implements `i < 0` as `i >> 31`. I.e., it shifts the bits to the right so far that only the sign bit is left. This is ok.
>> 
>> But it implements `i > 0` as `(-i) >> 31`. That would be correct if negation always flipped the sign bit. But it doesn't for `int.min`: `-int.min` is `int.min` again.
>> 
>> So dmd emits wrong code for `i > 0`. O_O
>> 
>> I've filed an issue:
>> https://issues.dlang.org/show_bug.cgi?id=18315
>
> This is insane. i > 0 is used in so many places. The only saving grace appears to be that int.min is just so uncommonly seen in the wild.
>
> I tested all the way back to 2.040, still has the same behavior.
>
> -Steve


FYI/OT: If you need to check the behavior of old compilers, the "All" option at run.dlang.io might be helpful.

It starts from 2.060 and works best when the output is consistent across versions (i.e. without stack trace pointers).
Examples:

- https://run.dlang.io/is/IoN3sj (code from the bug report)
- https://run.dlang.io/is/LuxUQ5 (code from the bug report, slightly modified)
- https://run.dlang.io/is/3R4r1U (simple example of "when was the symbol added to Phobos?")

(It's based on Vladimir's regression tester and can be used locally too: https://github.com/dlang-tour/core-dreg)
January 30, 2018 (kdevel)
On Sunday, 28 January 2018 at 19:17:49 UTC, Steven Schveighoffer wrote:
> This is insane. i > 0 is used in so many places. The only saving grace appears to be that int.min is just so uncommonly seen in the wild.

Another saving grace is that it does not happen when compiled with optimization (-O), and that it does not affect all the integer types:

---
import std.stdio;

void foo (T) ()
{
   auto i = T.min;
   writefln ("%12s: %24X %12s", T.stringof, i, i > cast(T) 0);
}

void main ()
{
   foo!byte;
   foo!short;
   foo!int;
   foo!long;
}
---

        byte:                       80        false
       short:                     8000        false
         int:                 80000000         true
        long:         8000000000000000         true

In 32 bit mode:

        byte:                       80        false
       short:                     8000        false
         int:                 80000000         true
        long:         8000000000000000        false

January 31, 2018 (Steven Schveighoffer)
On 1/30/18 3:37 PM, kdevel wrote:
> On Sunday, 28 January 2018 at 19:17:49 UTC, Steven Schveighoffer wrote:
>> This is insane. i > 0 is used in so many places. The only saving grace appears to be that int.min is just so uncommonly seen in the wild.
> 
> Another saving grace is that it does not happen when compiled with optimization (-O), and that it does not affect all the integer types:
> 
> ---
> import std.stdio;
> 
> void foo (T) ()
> {
>     auto i = T.min;
>     writefln ("%12s: %24X %12s", T.stringof, i, i > cast(T) 0);
> }
> 
> void main ()
> {
>     foo!byte;
>     foo!short;
>     foo!int;
>     foo!long;
> }
> ---
> 
>          byte:                       80        false
>         short:                     8000        false
>           int:                 80000000         true
>          long:         8000000000000000         true
>

This is due to integer promotion (https://dlang.org/spec/type.html#usual-arithmetic-conversions). Before any operation between two operands of a type smaller than int, both operands are first promoted to int.

You can see the result here:

https://run.dlang.io/is/RAk9tE
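
(A small sketch of that promotion, using only what the spec guarantees: byte and short operands are widened to int before the comparison, so their .min values are not int.min and even the faulty shortcut happens to give the right answer:)

import std.stdio;

void main ()
{
    byte b = byte.min;                      // -128

    // Usual arithmetic conversions: operands smaller than int are
    // promoted to int before arithmetic and comparisons.
    static assert(is(typeof(b + b) == int));

    // So `b > 0` compares the int -128 against the int 0; -128 is not
    // int.min, so the shifting shortcut still produces `false` here.
    writeln(b > 0);                         // false
}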

> In 32 bit mode:
> 
>          byte:                       80        false
>         short:                     8000        false
>           int:                 80000000         true
>          long:         8000000000000000        false
> 

Most likely, this is because a 32-bit CPU cannot work with 64-bit longs natively, so the compiler can't use the same shifting shortcut that causes the issue in the first place.

-Steve
January 31, 2018 (Azi Hassan)
On Saturday, 27 January 2018 at 14:13:49 UTC, kdevel wrote:
> I would expect this code
>
> enforce3.d
> ---
> import std.exception;
>
> void main ()
> {
>    int i = int.min;
>    enforce (i > 0);
> }
> ---
>
> to throw an "Enforcement failed" exception, but it doesn't:
>
> $ dmd enforce3.d
> $ ./enforce3
> [nothing]

I wonder if it's caused by a comparison between signed and unsigned integers.

import std.stdio;

void main ()
{
    int zero = 0;
    writeln(int.min > 0u);
    writeln(int.min > zero);
}

$ rdmd test.d
true
false

The same behavior can be observed in C:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int zero = 0;
    printf("%d\n", INT_MIN > 0u);
    printf("%d\n", INT_MIN > zero);
    return 0;
}

$ gcc test.c && ./a.out
1
0
January 31, 2018 (Steven Schveighoffer)
On 1/31/18 6:19 PM, Azi Hassan wrote:
> On Saturday, 27 January 2018 at 14:13:49 UTC, kdevel wrote:
>> I would expect this code
>>
>> enforce3.d
>> ---
>> import std.exception;
>>
>> void main ()
>> {
>>    int i = int.min;
>>    enforce (i > 0);
>> }
>> ---
>>
>> to throw an "Enforcement failed" exception, but it doesn't:
>>
>> $ dmd enforce3.d
>> $ ./enforce3
>> [nothing]
> 
> I wonder if it's caused by a comparison between signed and unsigned integers.

No, the answer is that the compiler uses a shortcut optimization. See the discussion elsewhere in this thread.

> 
> import std.stdio;
> 
> void main ()
> {
>      int zero = 0;
>      writeln(int.min > 0u);
>      writeln(int.min > zero);
> }

Note that comparing the literal int.min will get folded into a constant, and do the right thing. You have to assign it to a variable to see the incorrect behavior:

int i = int.min;
writeln(i > 0);
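
(For completeness, a runnable version of that snippet; the literal comparison is folded at compile time, while the variable goes through the affected runtime code path:)

import std.stdio;

void main ()
{
    int i = int.min;
    writeln(int.min > 0);   // false: folded into a constant at compile time
    writeln(i > 0);         // true with affected dmd versions
}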

-Steve