February 03, 2022
On Thursday, 3 February 2022 at 05:50:24 UTC, Walter Bright wrote:
> On 2/2/2022 6:25 PM, Siarhei Siamashka wrote:
>> On Thursday, 3 February 2022 at 01:05:15 UTC, Walter Bright wrote:
>>> As does:
>>>
>>>     ubyte a, b, c;
>>>     a = b | c;
>> 
>> But `a = b + c` is rejected by the compiler.
>
> That's because `b + c` may create a value that does not fit in a ubyte.

And yet:

    int a, b, c;
    a = b + c;

`b + c` may create a value that does not fit in an int, but instead of rejecting the code, the compiler accepts it and allows the result to wrap around.

The inconsistency is the problem here. Having integer types behave differently depending on their width makes the language harder to learn, and forces generic code to add special cases for narrow integers, like this one in `std.math.abs`:

    static if (is(immutable Num == immutable short) || is(immutable Num == immutable byte))
        return x >= 0 ? x : cast(Num) -int(x);
    else
        return x >= 0 ? x : -x;

(Source: https://github.com/dlang/phobos/blob/v2.098.1/std/math/algebraic.d#L56-L59)
February 03, 2022
On Thursday, 3 February 2022 at 05:50:24 UTC, Walter Bright wrote:
> VRP makes many implicit conversions to bytes safely possible.

It also *causes* bugs. When code gets refactored, and the types change, those forced casts may not be doing what is desired, and can do things like unexpectedly truncating integer values.


February 03, 2022

On Thursday, 3 February 2022 at 01:26:04 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 2 February 2022 at 21:42:43 UTC, Dukc wrote:
>> In the opposite case we would have undefined behaviour in @safe code.
>
> People in the D community have the wrong understanding of what "undefined behaviour" means in a standard specification… this is getting tiresome, but to state the obvious: it does not mean that the compiler cannot provide guarantees.

What's your point? Even when a compiler provides more guarantees than the language spec, you should still avoid undefined behaviour if you can. Otherwise you're deliberately making your program non-spec compliant and therefore likely to malfunction with other compilers.

February 03, 2022

On Thursday, 3 February 2022 at 16:41:45 UTC, Dukc wrote:
> What's your point? Even when a compiler provides more guarantees than the language spec, you should still avoid undefined behaviour if you can. Otherwise you're deliberately making your program non-spec compliant and therefore likely to malfunction with other compilers.

If you don't get my point, what can I do about it?

Modular arithmetic doesn't help at all; it makes things worse. It is better to have a conditional correctly removed than to have it wrongly inverted, because the latter is disastrous for correctness.

So no, undefined behaviour is not worse than defined behaviour when the defined behaviour is the kind of behaviour nobody wants!

February 03, 2022

On Thursday, 3 February 2022 at 17:05:01 UTC, Ola Fosheim Grøstad wrote:
> Modular arithmetic doesn't help at all; it makes things worse. It is better to have a conditional correctly removed than to have it wrongly inverted, because the latter is disastrous for correctness.
>
> So no, undefined behaviour is not worse than defined behaviour when the defined behaviour is the kind of behaviour nobody wants!

Oh, now I understand what you're saying. I don't agree, though. With wrap-around overflow you can at least clearly reason about what's happening. If compiled code starts to mysteriously disappear when you have overflows, there is potential for some very incomprehensible bugs.

It probably would not be that bad in the `x < x + 1` example, but in the real world you might have careless multiplication of integers, for instance. Let's say I do this:

    fun(aLongArray[x]);
    x *= 0x10000;

If the array is long enough, with the semantics you're advocating the compiler might reason:

  1. `x` can't overflow, so it must be 0x7FFF at most before the multiplication.
  2. I know `aLongArray` is longer than that, so I can elide the bounds check.

Overflows are much less of an issue than stuff like that.

February 03, 2022

On Thursday, 3 February 2022 at 17:33:35 UTC, Dukc wrote:
> If the array is long enough, with the semantics you're advocating the compiler might reason:
>
>   1. `x` can't overflow, so it must be 0x7FFF at most before the multiplication.
>   2. I know `aLongArray` is longer than that, so I can elide the bounds check.
>
> Overflows are much less of an issue than stuff like that.

I advocated trapping overflows except where you explicitly disable it. I would also advocate having both modular operators and operators that clamp.

February 03, 2022

On Thursday, 3 February 2022 at 18:07:43 UTC, Ola Fosheim Grøstad wrote:
> I advocated trapping overflows except where you explicitly disable it.

I don't know if you meant to do it the C++ way (unsigned wraps around, signed may do anything on overflow), or some other way. Regardless, it is probably a bad idea. We could allow undefined behaviour only in @system code, and realistically, where would you want integers that behave that way? You're supposed to be a bit desperate before you disable safety features for performance, and at that point you have probably already hand-optimised away the code that the compiler could remove for you.

February 03, 2022

On Thursday, 3 February 2022 at 20:56:04 UTC, Dukc wrote:
> We could allow undefined behaviour only in @system code, and realistically,

How exactly is this relevant for @safe?

February 03, 2022

On Thursday, 3 February 2022 at 20:56:04 UTC, Dukc wrote:
> On Thursday, 3 February 2022 at 18:07:43 UTC, Ola Fosheim Grøstad wrote:
>> I advocated trapping overflows except where you explicitly disable it.
>
> I don't know if you meant to do it the C++ way (unsigned wraps around, signed may do anything on overflow), or some other way.

I assume "trapping overflows" means something like GCC's -ftrapv, where integer overflow causes the program to crash.

February 03, 2022

On Thursday, 3 February 2022 at 21:01:30 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 3 February 2022 at 20:56:04 UTC, Dukc wrote:
>> We could allow undefined behaviour only in @system code, and realistically,
>
> How exactly is this relevant for @safe?

We cannot allow undefined behaviour in @safe code. That means that any integer type whose overflow had undefined semantics could not be used in @safe code.

Well, asserting no overflow would be fine. With the -release switch, it'd behave like the C++ signed int, but not otherwise. In fact, this is already doable:

    import core.checkedint;

    bool overflow;
    auto x = mulu(a, b, overflow);  // unchecked multiply; sets the flag on overflow
    assert(!overflow);

Not sure if the compiler will take advantage of overflow being undefined behaviour here in release mode, though.