Thread overview
9999999999999999.0 - 9999999999999998.0
Jan 06, 2019
Samir
Jan 06, 2019
Adam D. Ruppe
Jan 06, 2019
Jesse Phillips
Jan 06, 2019
Samir
January 06, 2019
I saw the following thread[1] today on Hacker News discussing an article that compares how various languages compute 9999999999999999.0 - 9999999999999998.0.  A surprisingly large number of languages return 2 as the answer.  I ran the following, which returned 1:

import std.stdio: writeln;
void main(){
    writeln(cast(double)9999999999999999.0-9999999999999998.0);
}

I don't know anything about IEEE 754[2], which, according to the HN discussion, is the standard for floating point arithmetic, but I was pleasantly surprised to see how D handles this.  Does anyone know why?

Thanks
Samir

[1] https://news.ycombinator.com/item?id=18832155
[2] https://en.wikipedia.org/wiki/IEEE_754
January 06, 2019
On Sunday, 6 January 2019 at 00:20:40 UTC, Samir wrote:
> import std.stdio: writeln;
> void main(){
>     writeln(cast(double)9999999999999999.0-9999999999999998.0);
> }

That's because the subtraction is done at compile time, since both operands are compile-time constants. The compiler evaluates it at the maximum precision available to it, ignoring your request to cast to double (which, btw, annoys some people who value predictability over precision). At different precisions, you get different results.
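
To make the compile-time part visible, here's a minimal sketch (my assumption: DMD on x86, where constant folding happens at 80-bit real precision); an enum is always a compile-time constant, so it shows the folded result directly:

import std.stdio: writeln;
void main(){
    // enum forces the subtraction to be folded by the compiler at its
    // maximum precision; the cast to double does not constrain the math.
    enum folded = cast(double)9999999999999999.0 - 9999999999999998.0;
    writeln(folded); // prints 1 when folding happens at real precision
}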

I suggest breaking it out into a variable to force runtime evaluation instead of the compiler's constant folding:

import std.stdio: writeln;
void main(){
    double d = 9999999999999999.0; // runtime variable, so no constant folding
    writeln(d - 9999999999999998.0); // the subtraction now happens at runtime, on a double
}

This gives 2, the answer most of the languages in that article return. Making it float instead of double, you get 0. With real (which, btw, is higher precision but terribly slow), you get 1, which is the precision the compiler happened to use at compile time.
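
For the curious, the IEEE 754 arithmetic behind those three answers: 9999999999999999 needs 54 significand bits, one more than double's 53, so as a double it rounds to 10000000000000000 (a round-half-to-even tie), while 9999999999999998 is exactly representable, hence 2. A float keeps only 24 significand bits, so both constants round to the same value, hence 0. An 80-bit real has a 64-bit significand and represents both exactly, hence 1. Here's a minimal sketch showing all three at runtime (my assumption: an x86 build where real is the 80-bit x87 type; on other targets real may just be double):

import std.stdio: writefln;
void main(){
    // Runtime variables defeat constant folding, so each subtraction
    // happens in the precision of the variable's type.
    float  f = 9999999999999999.0f;
    double d = 9999999999999999.0;
    real   r = 9999999999999999.0L;

    writefln("float:  %s", f - 9999999999999998.0f); // 0: both constants round to the same float
    writefln("double: %s", d - 9999999999999998.0);  // 2: 9999999999999999 rounds to 1e16
    writefln("real:   %s", r - 9999999999999998.0L); // 1: both exact in 80-bit real
}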
January 06, 2019
On Sunday, 6 January 2019 at 00:20:40 UTC, Samir wrote:
>
> [1] https://news.ycombinator.com/item?id=18832155
> [2] https://en.wikipedia.org/wiki/IEEE_754

Since you got your answer, you may also like
http://dconf.org/2016/talks/clugston.html
January 06, 2019
On Sunday, 6 January 2019 at 01:05:08 UTC, Adam D. Ruppe wrote:
> That's because the subtraction is done at compile time, since both operands are compile-time constants. The compiler evaluates it at the maximum precision available to it, ignoring your request to cast to double (which, btw, annoys some people who value predictability over precision). At different precisions, you get different results.

Thanks for that explanation, Adam!  Very helpful.

On Sunday, 6 January 2019 at 03:33:45 UTC, Jesse Phillips wrote:
> Since you got your answer, you may also like
> http://dconf.org/2016/talks/clugston.html

Thank you for pointing out that talk, Jesse.  I will set aside some time to go through that!

Samir