January 06, 2021
On Wednesday, 6 January 2021 at 06:50:34 UTC, Walter Bright wrote:
> With programming languages, it does not matter what you think you wrote. What matters is how the language semantics are defined to work.

Yes, this is how it differs from communicating in natural languages, where the receiver of the message can interpret things "the wrong way", or at least not the way you intended.

It's a nice metaphor for what may have happened with that DIP.

Sometimes things are just hard to communicate, so please, guys, try to give the benefit of the doubt.

It's 2021, the year of D! 😉

I have a proposal, though: even before writing a DIP, would it be a good idea to have a summary with concise examples and then run a quick poll?

That way you don't waste time, and you have the opportunity to improve your ideas before doing too much work.

Just an idea. What do you think? 🤔
January 06, 2021
On 06.01.21 07:50, Walter Bright wrote:
> 
>  > I want to execute the code that I wrote, not what you think I should have
>  > instead written, because sometimes you will be wrong.
> 
> With programming languages, it does not matter what you think you wrote. What matters is how the language semantics are defined to work.
The language semantics right now are defined to not work, so people are going to rely on the common sense and/or additional promises of specific backend authors. People are going to prefer that route to the alternative of forking every dependency and adding explicit rounding to every single floating-point operation. (Which most likely does not even solve the problem as you'd still get double-rounding issues.)
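To make the double-rounding remark concrete: rounding a result first to a wider intermediate precision and then to the destination precision can give a different answer than rounding once. A minimal sketch, in Python rather than D only because `fractions.Fraction` provides exact reference arithmetic; the effect is a property of the IEEE formats, not of any particular language:

```python
import struct
from fractions import Fraction

# Exact real value of the sum 1 + 2**-24 + 2**-54.
exact = 1 + Fraction(1, 2**24) + Fraction(1, 2**54)

# Correct single rounding to binary32: the neighbours are 1 and
# 1 + 2**-23, and `exact` lies just above their midpoint 1 + 2**-24,
# so round-to-nearest goes up.
lo, hi = Fraction(1), 1 + Fraction(1, 2**23)
single = hi if exact - lo > hi - exact else lo

# An extended-precision intermediate: binary64 addition rounds the
# 2**-54 bit away, leaving exactly 1 + 2**-24 -- the midpoint.
inter = 1.0 + (2.0**-24 + 2.0**-54)
assert Fraction(inter) == 1 + Fraction(1, 2**24)

# The second rounding, binary64 -> binary32, now hits an exact tie,
# and ties-to-even resolves it downward to 1.0.
double_rounded = struct.unpack('<f', struct.pack('<f', inter))[0]

print(float(single))   # 1.0000001192092896  (correctly rounded)
print(double_rounded)  # 1.0                 (double-rounded)
```

The two results differ, so adding explicit rounding after every operation does not recover strict single-rounding semantics when intermediates are wider.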

> In writing professional numerical code, one must carefully understand it, knowing that it does *not* work like 7th grade algebra.

That's why it's important to have precise control. Besides, a lot of contemporary applications of floating-point computations are not your traditional numerically stable fixed-point iterations. Reproducibility even of explicitly chaotic behavior is sometimes a big deal, for example for artificial intelligence research or multiplayer games.
Also, maybe you don't want your code to change behavior randomly between compiler updates. Some applications need to have a certain amount of backwards compatibility.

> Different languages can and do behave differently, too.
Or different implementations. I'm not going to switch languages due to an issue that's fixed by not using DMD.
January 06, 2021
On Wednesday, 6 January 2021 at 02:30:30 UTC, Walter Bright wrote:
> On 1/5/2021 2:42 AM, 9il wrote:
>> On Tuesday, 5 January 2021 at 09:47:41 UTC, Walter Bright wrote:
>>> On 1/4/2021 11:22 PM, 9il wrote:
>>>> I can't reproduce the same DMD output as you.
>>>
>>> I did it on Windows 32 bit. I tried it on Linux 32, which does indeed show the behavior you mentioned. At the moment I don't know why the different behaviors.
>>>
>>> https://issues.dlang.org/show_bug.cgi?id=21526
>>>
>>>
>>>> It just uses SSE, which I think a good way to go, haha.
>>>
>>> As I mentioned upthread, it will use SSE when SSE is baseline on the CPU target, and it will always round to precision.
>> 
>> Does this mean that DMD Linux 32-bit executables should compile with SSE codes?
>
> The baseline Linux target does not have SSE.
>
>
>> I ask because DMD compiles Linux 32-bit executables with x87 codes when -O is passed and with SSE if no -O is passed. That is very weird.
>
> Example, please?

DMD with the -m32 flag generates:

https://cpp.godbolt.org/z/GMGMra
        assume  CS:.text._D7example1fFffZf
                push    EBP
                mov     EBP,ESP
                sub     ESP,018h
                movss   XMM0,0Ch[EBP]
                movss   XMM1,8[EBP]
                addss   XMM0,XMM1
                movss   -8[EBP],XMM0
                subss   XMM0,XMM1
                movss   -4[EBP],XMM0
                movss   -018h[EBP],XMM0
                fld     float ptr -018h[EBP]
                leave
                ret     8
                add     [EAX],AL

This was already provided in the thread:
https://forum.dlang.org/post/gqzdiicrvtlicurxyvby@forum.dlang.org

January 06, 2021
On Wednesday, 6 January 2021 at 06:50:34 UTC, Walter Bright wrote:
> As far as I can tell, the only algorithms that are incorrect with extended precision intermediate values are ones specifically designed to tease out the roundoff to the reduced precision.

It becomes impossible to write good unit tests for floating-point code if you don't know what the exact results should be.

Anyway, it is OK if this is left up to the compiler vendor, as long as there is a flag you can test for it.

Just get rid of implicit conversions for floating point. Nobody interested in numerics would want those.
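To illustrate the unit-testing point: under strict binary64 semantics the expression below has one well-defined answer that a test can assert bit-exactly, whereas with 80-bit x87 intermediates the same source expression gives a different answer. A sketch in Python, whose floats are strict IEEE binary64 on common platforms:

```python
# With every intermediate rounded to binary64, this expression has one
# well-defined answer that a unit test can assert exactly.
a, b = 1e16, 1.0
r = (a + b) - a

# 1e16 + 1 is an exact tie between 1e16 and 1e16 + 2; ties-to-even
# rounds back down to 1e16, so the strict-double result is 0.0.
assert r == 0.0

# If a compiler instead kept `a + b` in an 80-bit x87 register, the
# sum would be exact and the same expression would evaluate to 1.0 --
# so without knowing the intermediate precision, there is no exact
# expected value to write into the test.
```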


January 06, 2021
On Wednesday, 6 January 2021 at 02:27:25 UTC, Walter Bright wrote:
> On 1/5/2021 5:30 AM, Guillaume Piolat wrote:
>> It would be nice if no excess precision was ever used. It can sometimes gives a false sense of correctness. It has no upside except accidental correctness that can break when compiled for a different platform.
>
> That same argument could be used to always use float instead of double. I hope you see it's fallacious <g>

If I use float and the compiler uses real instead, it might mask precision problems that will only show up when using SSE instead of the FPU.

This happened to me once, on 21 Oct 2015 according to my log.
2-pole IIR filters sounded different in 32-bit and 64-bit Windows builds with DMD.
The biquad delay line was float when it should have been double, but 32-bit DMD was using real for intermediate values, masking the fact that I should have been using double. Going to double lowered the quantization noise.
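A toy sketch of that masking effect (not the actual Dplug biquad code; Python used for illustration, with `struct` emulating a single-precision stored state while the arithmetic itself runs in double, much like a float delay line with wider intermediates):

```python
import struct

def f32(x):
    # Round a binary64 value to the nearest binary32, the way storing
    # the filter state into a `float` variable would.
    return struct.unpack('<f', struct.pack('<f', x))[0]

step, n = 0.001, 100_000
acc32 = 0.0  # state stored back to single precision each step
acc64 = 0.0  # state kept in double precision
for _ in range(n):
    acc32 = f32(acc32 + step)  # wide add, narrow store
    acc64 = acc64 + step

# The single-precision state drifts from the ideal 100.0 by orders of
# magnitude more than the double-precision state does.
print(abs(acc32 - 100.0), abs(acc64 - 100.0))
```

Keeping the state in double, as the fix did, is what shrinks this per-step quantization error, i.e. the lowered quantization noise described above.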

It's really not a problem anymore since switching to LDC, which uses SSE in 32-bit mode too. So the benefits of accidental precision haven't really materialized.

I understand that the issue really is minor nowadays.
January 06, 2021
On Wednesday, 6 January 2021 at 12:12:31 UTC, Guillaume Piolat wrote:
>
> It's really not a problem anymore since using LDC that uses SSE

The other upside being denormals.


January 06, 2021
On Wednesday, 6 January 2021 at 12:15:25 UTC, Guillaume Piolat wrote:
>
> The other upside being denormals.

What has also been true, though, is that with LDC I've not been able to actually get 80-bit precision with FPU instructions, no matter what the FPU control word was. I never really understood why.
January 06, 2021
On Wednesday, 6 January 2021 at 12:12:31 UTC, Guillaume Piolat wrote:
>
> Happened to me once 21 oct 2015 according to my log.

Proof:
https://github.com/AuburnSounds/Dplug/commit/c9a76e024cca4fe7bc94f14c7c1185d854d87947
January 06, 2021
On 2021-01-06 03:30, Walter Bright wrote:

> The baseline Linux target does not have SSE.

Other compilers solve this by having a flag to specify the minimum target CPU.

-- 
/Jacob Carlborg
January 07, 2021
On 12/23/20 10:05 AM, 9il wrote:

> It was a mockery executed by Atila

For those who read the above comment but do not want to read the rest of this long thread, the linked PR discussion does not contain mockery:

> https://github.com/dlang/dmd/pull/9778#issuecomment-498700369

Ali