February 05, 2022

On Saturday, 5 February 2022 at 10:11:48 UTC, Mark wrote:

> Also, I don't think being a library type is a mark of shame. Depending on the language, they can be just as useful and almost as convenient as built-in types. C++'s std::byte was mentioned on this thread - it's a library type.

Is it a library type, though? I am not sure there is a clear distinction between language and library in C++. You can have "library features" that are implemented using intrinsics, which arguably makes them language features if they cannot be implemented within the language in a portable fashion. It is hard to tell sometimes; maybe the huge spec makes it clearer, but it is at least not obvious to me as a programmer.

February 05, 2022

On Saturday, 5 February 2022 at 08:59:22 UTC, Walter Bright wrote:

> On 2/4/2022 6:35 PM, Siarhei Siamashka wrote:
>
>> My suggestion:
>>
>> 1. Implement wrapping_add, wrapping_sub, wrapping_mul intrinsics similar to Rust; this is easy and costs nothing.
>> 2. Implement an experimental -ftrapv option in one of the D compilers (most likely GDC or LDC) to catch both signed and unsigned overflows at runtime. Or maybe add function attributes to enable/disable this functionality with more fine-grained control. Yes, I know that this violates the current D language spec, which requires two's complement wraparound for everything, but that doesn't matter for an experimental option.
>> 3. Run some tests with -ftrapv and check how many arithmetic overflows are actually triggered in Phobos. Replace the affected arithmetic operators with intrinsics where the wrapping behavior is actually intended.
>> 4. In the long run, consider updating the language spec.
>>
>> Benefits: even if -ftrapv turns out to have high overhead, it would still be a useful tool for testing arithmetic overflow safety in applications. Having something is better than having nothing.
>
> I recommend creating a DIP for it.

Thanks for not outright rejecting it. This really means a lot! I'll look into the DIP submission process.

Accidentally or not, it turns out that GDC already supports the -ftrapv option, which works with C/C++ semantics: traps for signed overflows, wraparound for unsigned overflows, and types smaller than int fly under the radar due to integral promotion. Now I need to experiment with it a bit to check how it interacts with Phobos and other D code in practice. Patching the GCC sources to test whether unsigned overflows can also be trapped is going to be interesting too.

But in general, this looks like a very promising feature. It can provide some protection against arithmetic overflow bugs in 32-bit and 64-bit calculations. The practical difficulty of troubleshooting such arithmetic overflows in large and complicated software was one of my primary concerns about the D language.

February 05, 2022
On 2/5/2022 6:54 AM, Timon Gehr wrote:
> I get that the entire x87 design is pretty bad and so there are trade-offs, but as it has now been deprecated, I hope this kind of second-guessing will become a thing of the past entirely. In the meantime, I will avoid using DMD for anything that requires floating-point arithmetic.

I'm not sure how you concluded that. DMD now rounds float calculations to float precision with the x87, despite the cost in speed.

If the CPU has SIMD float instructions, those are used instead of the x87, just like every other compiler does.
February 05, 2022
On 05.02.22 23:01, Walter Bright wrote:
> On 2/5/2022 6:54 AM, Timon Gehr wrote:
>> I get that the entire x87 design is pretty bad and so there are trade-offs, but as it has now been deprecated, I hope this kind of second-guessing will become a thing of the past entirely. In the meantime, I will avoid using DMD for anything that requires floating-point arithmetic.
> 
> I'm not sure how you concluded that.

Maybe my information is outdated. (This has come up many times in the past, and you have traditionally argued in favor of not respecting the specified precision.)

> DMD now rounds float calculations to float with the x87, despite the cost in speed.
> ...

That's great news, but the opposite is still in the spec:
https://dlang.org/spec/float.html

In any case, AFAIK CTFE still relies on this leeway (in all compilers, as it's a frontend feature).

> If the CPU has SIMD float instructions on it, that is used instead of the x87, just like what every other compiler does.

My current understanding is that this can change at any point in time without it being considered a breaking change, and that DMD is more likely to do this than LDC.
February 05, 2022
On 2/5/2022 2:52 PM, Timon Gehr wrote:
> On 05.02.22 23:01, Walter Bright wrote:
>> On 2/5/2022 6:54 AM, Timon Gehr wrote:
>>> I get that the entire x87 design is pretty bad and so there are trade-offs, but as it has now been deprecated, I hope this kind of second-guessing will become a thing of the past entirely. In the meantime, I will avoid using DMD for anything that requires floating-point arithmetic.
>>
>> I'm not sure how you concluded that.
> 
> Maybe my information is outdated. (This has come up many times in the past, and you have traditionally argued in favor of not respecting the specified precision.)
> 
>> DMD now rounds float calculations to float with the x87, despite the cost in speed.
>> ...
> 
> That's great news, but the opposite is still in the spec:
> https://dlang.org/spec/float.html

That'll be fixed.


> In any case, AFAIK CTFE still relies on this leeway (in all compilers, as it's a frontend feature).

I don't think it does, but I'll have to check.


>> If the CPU has SIMD float instructions on it, that is used instead of the x87, just like what every other compiler does.
> 
> My current understanding is that this can change at any point in time without it being considered a breaking change, and that DMD is more likely to do this than LDC.

Highly unlikely. (Neither the C nor the C++ standards require this behavior, either, AFAIK, so you shouldn't use any other compilers, either.)
February 05, 2022
https://issues.dlang.org/show_bug.cgi?id=22740
February 06, 2022
On 06.02.22 00:04, Walter Bright wrote:
> ...
>>
>> My current understanding is that this can change at any point in time without it being considered a breaking change, and that DMD is more likely to do this than LDC.
> 
> Highly unlikely.

Great!

> (Neither the C nor the C++ standards require this behavior, either, AFAIK, so you shouldn't use any other compilers, either.)

In practice, the story is a bit more complicated than this. Besides the C and C++ standards, there is also IEEE 754 and common practice, in particular 32/64 bit IEEE 754. Compilers implement multiple standards, at least with a suitable set of flags, and they explicitly document the guarantees one can expect with each set of flags.

February 06, 2022
On 06.02.22 00:16, Walter Bright wrote:
> https://issues.dlang.org/show_bug.cgi?id=22740

Thanks! One place where this has now actually bitten me is this calculation (DMD on linux):

```d
void main(){
    import std.stdio;
    assert(42*6==252);
    // constant folding, uses extended precision, overall less accurate result due to double rounding:
    assert(cast(int)(4.2*60)==251);
    // no constant folding, uses double precision, overall more accurate result
    double x=4.2;
    assert(cast(int)(x*60)==252);
}
```

4.2 and 60 were named constants, and the program would have worked fine with a result of either 251 or 252; I did not rely on the result being a specific one of those. However, because the result was sometimes 251 and at other times 252, the inconsistency caused a hard-to-track-down bug. I even got one result on Windows and the other on linux when compiling _exactly the same expression_. That was with LDC though; I am not sure whether the platform dependency is reproducible with DMD.

Note that this happened relatively recently, but I had seen it coming for a long time before it actually bit me, which is why I had consistently argued so vehemently against this kind of precision "enhancement".
February 05, 2022
Unfortunately, I ran into a stumbling block. The current 80-bit "emulation" done for Microsoft compatibility doesn't support conversion of 80-bit values to float or double.

I've been considering for a while writing my own 80-bit emulator to resolve these problems once and for all, but it's a bit of a project. It's not that hard; it just takes some careful attention to detail.

A search online showed no Boost-compatible emulators, which kinda surprised me. After all these years, you'd think there would be one.
February 06, 2022

On Saturday, 5 February 2022 at 07:59:21 UTC, Ola Fosheim Grøstad wrote:

> [snip]

Well-written code would use a narrowing cast with checks for debugging, but the type itself is less interesting, so it would be better to have overloading on the return type. It could also be made the default if overflow checks were implemented:

```d
byte x = narrow(expression);
```

If it was the default, you could disable it instead:

```d
byte x = uncheck(expression);
```

In the meantime, the equivalent of the narrow function could be added to std.conv.