On Monday, 3 July 2023 at 15:50:43 UTC, Steven Schveighoffer wrote:
> On 7/3/23 4:24 AM, Mehdi wrote:
>> Unfortunately, the assert statement fails in this case. I'm wondering if this discrepancy is due to the different float representations, a bug, or if I'm doing something wrong. Any insights would be greatly appreciated!
> D always performs floating point math at the highest precision possible. C++ does not. So it's a minor difference in representation between double and real.
It’s also a difference in philosophy. Until C++20, C++ was very liberal about what floating point types are and do. Since then, the rule is essentially: if the platform can do IEEE 754, floating point math is IEEE 754 math. I still don’t know whether the standard requires float calculations to be carried out in single precision only, or whether intermediate results may be stored at a higher precision.
D’s take on floating point is (or was when I came to D, when this was discussed a lot) that the result of a floating point calculation may be any value between the one you would get by following IEEE 754 precisely and the one you would get by computing at infinite precision and then rounding to the IEEE 754 type. I guess the reason is that, in practical applications, forcing intermediate results to be rounded down is a loss without any gain: the processor does more work and you get less precision. Of course, when you want to observe rounding errors etc. (e.g. from an academic perspective), D might fool you.
An example would be:
double a = 0.1;
double b = 0.2;
double c = 0.3;
assert(0.1 + 0.2 == c); // passes, 0.1 + 0.2 is evaluated at compile-time
assert(a + b == c); // fails
The only sound reasoning that can lead to this is that the D spec guarantees double calculations are carried out at double precision or better.
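To actually see what happens at double precision, here is a small self-contained sketch of the same example (my own code, just printing the bit patterns with the %a format):

import std.stdio;

void main()
{
    double a = 0.1;
    double b = 0.2;
    double c = 0.3;

    // %a prints the exact hexadecimal representation of the double value.
    writefln("a + b = %a", a + b); // 0x1.3333333333334p-2
    writefln("c     = %a", c);     // 0x1.3333333333333p-2

    // The sum differs from c in the last bit of the significand, so == fails.
    assert(a + b != c);
}

The two values differ only in the last bit, which is exactly the discrepancy the runtime comparison trips over.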
(Also, the fact that floating point values even have an ==/!= comparison is a bad idea.)
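If what one really wants is “equal up to rounding error”, Phobos has std.math.isClose, which compares with a relative/absolute tolerance. A minimal sketch, leaving the tolerances at their defaults:

import std.math : isClose;

void main()
{
    double a = 0.1;
    double b = 0.2;
    double c = 0.3;

    assert(a + b != c);        // exact comparison fails, as above
    assert(isClose(a + b, c)); // tolerance-based comparison passes
}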