May 13, 2016
On 5/12/2016 4:06 PM, Marco Leise wrote:
> Am Mon, 9 May 2016 04:26:55 -0700
> schrieb Walter Bright <newshound2@digitalmars.com>:
>
>>> I wonder what's the difference between 1.30f and cast(float)1.30.
>>
>> There isn't one.
>
> Oh yes, there is! Don't you love floating-point...
>
> cast(float)1.30 rounds twice, first from a base-10
> representation to a base-2 double value and then again to a
> float. 1.30f directly converts to float.

This is one reason why the compiler carries everything internally at 80-bit precision, even if the values are typed at some other precision. It avoids the double rounding.

May 13, 2016
On 13.05.2016 21:25, Walter Bright wrote:
> On 5/12/2016 4:06 PM, Marco Leise wrote:
>> Am Mon, 9 May 2016 04:26:55 -0700
>> schrieb Walter Bright <newshound2@digitalmars.com>:
>>
>>>> I wonder what's the difference between 1.30f and cast(float)1.30.
>>>
>>> There isn't one.
>>
>> Oh yes, there is! Don't you love floating-point...
>>
>> cast(float)1.30 rounds twice, first from a base-10
>> representation to a base-2 double value and then again to a
>> float. 1.30f directly converts to float.
>
> This is one reason why the compiler carries everything internally to 80
> bit precision, even if they are typed as some other precision. It avoids
> the double rounding.
>

IMO the compiler should never be allowed to use a precision different from the one specified.
May 13, 2016
On Friday, 13 May 2016 at 18:16:29 UTC, Walter Bright wrote:
>> Please have the frontend behave such that it operates on the precise
>> datatype expressed by the type... the backend probably does this too,
>> and runtime certainly does; they all match.
>
> Except this never happens anyway.

It should in C++ with the right strict settings, which make the compiler use reproducible floating-point operations. AFAIK it should work out even in modern JavaScript.

May 13, 2016
On 5/13/2016 12:48 PM, Timon Gehr wrote:
> IMO the compiler should never be allowed to use a precision different from the
> one specified.

I take it you've never been bitten by accumulated errors :-)

Reduced precision is only useful for storage formats and increasing speed. If a less accurate result is desired, your algorithm is wrong.
May 13, 2016
On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:
> It should in C++ with the right strict-settings,

Consider what the C++ Standard says, not what the endless switches to tweak the compiler do.

May 13, 2016
On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote:
> On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:
>> It should in C++ with the right strict-settings,
>
> Consider what the C++ Standard says, not what the endless switches to tweak the compiler do.

The C++ standard cannot even require IEEE 754. Nobody relies only on what the C++ standard says in real projects. They rely on what the chosen compiler(s) on concrete platform(s) do.

May 13, 2016
On 5/13/2016 2:42 PM, Ola Fosheim Grøstad wrote:
> On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote:
>> On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:
>>> It should in C++ with the right strict-settings,
>>
>> Consider what the C++ Standard says, not what the endless switches to tweak
>> the compiler do.
>
> The C++ standard cannot even require IEEE754. Nobody relies only on what the C++
> standard says in real projects. They rely on what the chosen compiler(s) on
> concrete platform(s) do.


Nevertheless, C++ is what the Standard says it is. If Brand X compiler does something else, you should call it "Brand X C++".
May 14, 2016
On 13.05.2016 23:35, Walter Bright wrote:
> On 5/13/2016 12:48 PM, Timon Gehr wrote:
>> IMO the compiler should never be allowed to use a precision different
>> from the one specified.
>
> I take it you've never been bitten by accumulated errors :-)
> ...

If that were the case, it would be because I explicitly ask for high precision when I need it.

If the compiler's use or non-use of higher precision magically fixes an actual issue with accumulated errors, that means the correctness of the code depends on something hidden, something you are not aware of, and something that could break at any time, for example at a moment when you really don't have time to track it down.

> Reduced precision is only useful for storage formats and increasing
> speed.  If a less accurate result is desired, your algorithm is wrong.

Nonsense. That might be true for your use cases. Others might actually depend on IEEE 754 semantics in non-trivial ways. Higher precision for temporaries does not imply higher accuracy for the overall computation.

E.g., correctness of double-double arithmetic is crucially dependent on correct rounding semantics for double:
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

Also, it seems to me that for e.g. the Kahan summation algorithm (https://en.wikipedia.org/wiki/Kahan_summation_algorithm), the result can actually be made less accurate by adding casts to higher precision and truncations back to lower precision at appropriate places in the code.

And even if higher precision helps, what good is a "precision-boost" that e.g. disappears on 64-bit builds and then creates inconsistent results?

Sometimes reproducibility/predictability is more important than occasionally making fewer rounding errors. This includes reproducibility between CTFE and runtime.

Just actually comply with the IEEE floating-point standard when using its terminology. There are algorithms that are designed for it and that might stop working if the language does not comply.

Then maybe add built-in types with a given storage size that additionally /guarantee/ a certain amount of extra scratch precision when used for function-local computations.