May 21, 2016
On Saturday, 21 May 2016 at 17:58:49 UTC, Walter Bright wrote:
> On 5/21/2016 2:26 AM, Tobias Müller wrote:
>> On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
>>> On 5/20/2016 5:36 AM, Tobias M wrote:
>>>> Still an authority, though.
>>>
>>> If we're going to use the fallacy of appeal to authority, may I present Kahan
>>> who concurrently designed the IEEE 754 spec and the x87.
>>
>> Actually cited this *because* of you mentioning Kahan several times. And because
>> you said "The people who design these things are not fools, and there are good
>> reasons for the way things are."
>
> I meant two things by this:
>
> 1. Do the homework before disagreeing with someone who literally wrote the book and designed the hardware for it.
>
> 2. Complaining that the x87 is not IEEE compliant, when the guy that designed the x87 wrote the spec at the same time, suggests a misunderstanding the spec. I.e. again, gotta do the homework first.

Sorry but this is a misrepresentation. I never claimed that the x87 doesn't conform to the IEEE standard. That's completely missing the point. Again.

> Dismissing several decades of FP designs, and every programming language, as being "obviously wrong" and "insane" is an extraordinary claim, and so requires extraordinary evidence.
>
> After all, what would your first thought be when a sophomore physics student tells you that Feynman got it all wrong? It's good to revisit existing dogma now and then, and challenge the underlying assumptions of it, but again, you gotta understand the existing dogma all the way down first.
>
> If you don't, you're very likely to miss something fundamental and produce a design that is less usable.

The point is that it IS possible to provide fairly reasonable and consistent semantics within the existing standards (C, C++, IEEE, ...). They provide a certain degree of freedom to accommodate different hardware, but this doesn't mean that software should use this freedom to do arbitrary things.

Regarding the decades of FP design, the initial edition of K&R C contained the following clause:
"Notice that all floats in an expression are converted to double; all floating point arithmethic in C is done in double precision".
That passus was removed quite quickly because users complained about it.
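
To illustrate what that clause meant in practice, here is a minimal sketch (the 2^24 example is mine, not from K&R):

#include <cstdio>
int main()
{
    float x = 16777216.0f; // 2^24, the last consecutive integer a 24-bit mantissa can hold
    float y = 1.0f;
    // Modern semantics (FLT_EVAL_METHOD == 0): float arithmetic is done
    // in float, so 2^24 + 1 rounds back to 2^24.
    float f = x + y;
    // K&R-era semantics: both operands converted to double first,
    // so the sum is exact.
    double d = (double)x + (double)y;
    std::printf("%.1f\n", f); // 16777216.0
    std::printf("%.1f\n", d); // 16777217.0
}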
May 21, 2016
On 21.05.2016 19:58, Walter Bright wrote:
> On 5/21/2016 2:26 AM, Tobias Müller wrote:
>> On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
>>> On 5/20/2016 5:36 AM, Tobias M wrote:
>>>> Still an authority, though.
>>>
>>> If we're going to use the fallacy of appeal to authority, may I
>>> present Kahan
>>> who concurrently designed the IEEE 754 spec and the x87.
>>
>> Actually cited this *because* of you mentioning Kahan several times.
>> And because
>> you said "The people who design these things are not fools, and there
>> are good
>> reasons for the way things are."
>
> I meant two things by this:
>
> 1. Do the homework before disagreeing with someone who literally wrote
> the book and designed the hardware for it.
> ...

Sure.

> 2. Complaining that the x87 is not IEEE compliant, when the guy that
> designed the x87 wrote the spec at the same time, suggests a
> misunderstanding the spec.

Who claimed that the x87 is not IEEE compliant? Anyway, this is easy to resolve. IEEE 754-2008 requires FMA. x87 has no FMA.
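
For concreteness, FMA computes a*b + c with a single rounding, which the x87 cannot do in one instruction. A small sketch (my own constants; assumes strict double evaluation, e.g. SSE):

#include <cmath>
#include <cstdio>
int main()
{
    double a = 1.0 + std::ldexp(1.0, -30); // 1 + 2^-30
    double b = 1.0 + std::ldexp(1.0, -29); // 1 + 2^-29
    double p = a * b;                // product rounded to 53 bits
    double err = std::fma(a, b, -p); // exact residual a*b - p, rounded once
    std::printf("%a\n", err);        // 0x1p-59; the plain expression a*b - p gives 0
}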


Also, in practice it is used in ways that produce non-compliant results: its default mode of operation gives results that differ from the results specified for single- and double-precision sources and destinations.
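
For example (hypothetical values; which result you get depends on the target and flags, e.g. gcc's -mfpmath=387 vs -mfpmath=sse):

#include <cstdio>
int main()
{
    volatile double tiny = 1e-308; // volatile: keep the compiler from constant-folding
    volatile double huge = 1e308;
    double r = tiny * tiny * huge;
    // Strict double semantics: tiny*tiny underflows to 0, so r == 0.
    // x87 default (extended-range intermediates): tiny*tiny ~ 1e-616
    // survives in a register, so r == 1e-308.
    std::printf("%g\n", r);
}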

From IEEE 754-2008:

"1.2 Purpose
1.2.0
This standard provides a method for computation with floating-point numbers that will yield the same result whether the processing is done in hardware, software, or a combination of the two."

I.e. the only stated purpose of IEEE 754 is actually reproducibility.



"shall indicates mandatory requirements strictly to be followed in order to conform to the standard and from which no deviation is permitted (“shall” means “is required to”"

"3.1.2 Conformance
3.1.2 .0
A  conforming  implementation  of  any  supported  format  shall provide means to initialize that format and shall provide conversions between that format and all other supported formats.

A conforming implementation of a supported arithmetic format shall provide all the operations of this standard defined in Clause 5, for that format.

A conforming implementation of a supported interchange format shall provide means to read and write that format using a specific encoding defined in this clause, for that format.

A programming environment conforms to this standard, in a particular radix, by implementing one or more of the basic formats of that radix as both a supported arithmetic format and a supported interchange format."


For the original IEEE 754-1985, the x87 seems to support all of those clauses for float, double and extended (if you use it in the right way, which is inefficient, as you need to spill the result to memory after each operation, and it is not the default way), but it also supports further operations that fulfill similar functions in a non-conforming manner, and compiler implementations use it that way.
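
Concretely, that discipline looks something like this (a sketch with my own example values, not what compilers emit by default; the volatile store forces the 80-bit register value through a 64-bit memory slot):

#include <cstdio>
static double add_rounded(double a, double b)
{
    volatile double t = a + b; // spill: rounds the extended intermediate to double
    return t;
}
int main()
{
    double s = add_rounded(add_rounded(1.0, 1e-18), -1.0);
    std::printf("%g\n", s); // 0 when each step is rounded to double;
                            // ~1e-18 if intermediates stay at 80 bits
}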

Another way to read it is that the x87 conforms by supporting float and double as interchange formats and extended as an arithmetic format.

What is the correct way to interpret the x87 as conforming?


> I.e. again, gotta do the homework first.
>
> Dismissing several decades of FP designs, and every programming
> language, as being "obviously wrong" and "insane"

No need to unduly generalize the scope of my claims.

> is an extraordinary claim,

Not really. There don't seem to be so many popular systems in computing that are not broken in one way or another. (And even when they are, they are a lot more useful than nothing. I'd rather have the x87 than no hardware floating point support at all.)

> and so requires extraordinary evidence.
> ...

There are several decades of experience with the x87 to draw from. SSE does not suffer from those issues anymore. This is because flaws in the design were identified and fixed. Why is this evidence not extraordinary enough?

> After all, what would your first thought be when a sophomore physics
> student tells you that Feynman got it all wrong?

It's an oddly unspecific statement (what about his work is wrong?) and it's a completely different case. Feynman getting everything wrong is not a popular sentiment in the Physics community AFAICT.

> It's good to revisit
> existing dogma now and then, and challenge the underlying assumptions of
> it, but again, you gotta understand the existing dogma all the way down
> first.
> ...

What parts of "existing dogma" justify all the problems that x87 causes?
Also, if e.g. implicit 80-bit precision for CTFE floats truly is mandated by "floating point dogma", then "floating point dogma" clashes with "language design dogma".

> If you don't, you're very likely to miss something fundamental and
> produce a design that is less usable.

I haven't invented anything new either. SSE already fixes the issues.

(But yes, it is probably hard to support systems that only have the x87 in a sane way. Tradeoffs might be necessary. Deliberately copying mistakes that x87 made to other contexts is not the right course of action though.)

May 21, 2016
Reasons have been alleged. What's your final decision?
May 21, 2016
On 5/21/2016 11:36 AM, Tobias M wrote:
> Sorry but this is a misrepresentation. I never claimed that the x87 doesn't
> conform to the IEEE standard.

My point was directed to more than just you. Sorry I didn't make that clear.


> The point is, that is IS possible to provide fairly reasonable and consistent
> semantics within the existing standards (C, C++, IEEE, ...).

That implies what I propose, which is what many C/C++ compilers do, is unreasonable, inconsistent, not Standard compliant, and not IEEE. I.e. that the x87 is not conformant :-)

Read the documentation on the FP switches for VC++, g++, clang, etc. You'll see there are tradeoffs. There is no "obvious, sane" way to do it.

There just isn't.


> They provide a
> certain degree of freedom to accomodate for different hardware, but this doesn't
> mean that software should use this freedom to do arbitrary things.

Nobody is suggesting doing arbitrary things, but to write portable fp, take into account what the Standard says rather than what your version of the compiler does with various default and semi-documented switches.


> Regarding the decades of FP design, the initial edition of K&R C contained the
> following clause:
> "Notice that all floats in an expression are converted to double; all floating
> point arithmethic in C is done in double precision".
> That passus was removed quite quickly because users complained about it.

It was changed to allow floats to be computed as floats, not require it. And the reason at the time, as I recall, was to get faster floating point ops, not because anyone desired reduced precision.
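
(C still permits either choice per implementation; FLT_EVAL_METHOD reports which one is in effect:)

#include <cfloat>
#include <cstdio>
int main()
{
    // 0: operations evaluate in the operand's type
    // 1: float operations are carried out in double (the K&R behavior)
    // 2: everything is carried out in long double (classic x87)
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
}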

May 22, 2016
On 21.05.2016 20:14, Walter Bright wrote:
> On 5/21/2016 10:03 AM, Timon Gehr wrote:
>> Check out section 5 for some convincing examples showing why the x87
>> is horrible.
>
> The polio vaccine winds up giving a handful of people polio, too.
> ...

People don't get vaccinated without consent.

> It's good to list traps for the unwary in FP usage. It's disingenuous to
> list only problems with one design and pretend there are no traps in
> another design.

Some designs are much better than others.
May 21, 2016
On Saturday, 21 May 2016 at 21:56:02 UTC, Walter Bright wrote:
> On 5/21/2016 11:36 AM, Tobias M wrote:
>> Sorry but this is a misrepresentation. I never claimed that the x87 doesn't
>> conform to the IEEE standard.
>
> My point was directed to more than just you. Sorry I didn't make that clear.
>
>
>> The point is, that is IS possible to provide fairly reasonable and consistent
>> semantics within the existing standards (C, C++, IEEE, ...).
>
> That implies what I propose, which is what many C/C++ compilers do, is unreasonable, inconsistent, not Standard compliant, and not IEEE. I.e. that the x87 is not conformant :-)

I'm trying to understand what you want to say here, but I just don't get it. Can you maybe formulate it differently?

> Read the documentation on the FP switches for VC++, g++, clang, etc. You'll see there are tradeoffs. There is no "obvious, sane" way to do it.
>
> There just isn't.

As I see it, the only real trade-off is speed/optimization vs. correctness.

>> They provide a
>> certain degree of freedom to accomodate for different hardware, but this doesn't
>> mean that software should use this freedom to do arbitrary things.
>
> Nobody is suggesting doing arbitrary things, but to write portable fp, take into account what the Standard says rather than what your version of the compiler does with various default and semi-documented switches.

https://gcc.gnu.org/wiki/FloatingPointMath

Seems relatively straightforward and well documented to me...
Dangerous optimizations like reordering expressions are all opt-in.

Sure, it's probably not 100% consistent across implementations/platforms, but it's also not *that* bad. And it's certainly not an excuse to make it even worse.

And yes, I think that in such an underspecified domain like FP, you cannot just rely on the standard but have to take the individual implementations into account.
Again, this is not ideal, but let's not make it even worse.
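
For instance, Kahan summation depends on the exact evaluation order; an opt-in flag like -fassociative-math may legally simplify the compensation term away. A sketch with my own example values:

#include <cstdio>
int main()
{
    double data[] = {1e16, 1.0, 1.0, 1.0};
    double sum = 0.0, c = 0.0; // c accumulates the lost low-order bits
    for (double x : data) {
        double y = x - c;
        double t = sum + y;
        c = (t - sum) - y; // algebraically 0; numerically the rounding error
        sum = t;
    }
    // Naive left-to-right summation yields 1e16: the three 1.0s vanish.
    // Kahan stays within one ulp of the true sum 1e16 + 3, unless the
    // compiler is allowed to "simplify" c to 0.
    std::printf("%.17g\n", sum - c);
}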

>> Regarding the decades of FP design, the initial edition of K&R C contained the
>> following clause:
>> "Notice that all floats in an expression are converted to double; all floating
>> point arithmethic in C is done in double precision".
>> That passus was removed quite quickly because users complained about it.
>
> It was changed to allow floats to be computed as floats, not require it. And the reason at the time, as I recall, was to get faster floating point ops, not because anyone desired reduced precision.

I don't think that anyone has argued that lower precision is better. But a compiler should just do what it is told, not try to be too clever.
June 06, 2016
On Saturday, 21 May 2016 at 22:05:31 UTC, Timon Gehr wrote:
> On 21.05.2016 20:14, Walter Bright wrote:
>> It's good to list traps for the unwary in FP usage. It's disingenuous to
>> list only problems with one design and pretend there are no traps in
>> another design.
>
> Some designs are much better than others.

Indeed. There are actually _only_ problems with D's take on floating point. It even prevents implementing higher-precision double-double and quad-double math libraries via error-correction techniques that give you 106-bit/212-bit mantissas:

C++ w/2 x 64 bits adder and conservative settings
--> GOOD ACCURACY/ ~106 significant bits:

#include <iostream>
int main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    // Error-correction step: (hi, lo) should represent a+b exactly,
    // provided every operation is rounded to double precision.
    double hi = a + b;
    double lo = -((a - ((a+b) - ((a+b) - a))) - (b + ((a+b) - a)));
    std::cout << hi << std::endl;     // 1.23456
    std::cout << lo << std::endl;     // 1.3e-18 SUCCESS!
    std::cout << (hi-a) << std::endl; // 0
}



D w/2 x 64/80 bits adder
--> BAD ACCURACY

import std.stdio;
void main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    // Same error-correction step, but the intermediates may be kept
    // at 80-bit precision, so lo no longer matches the double rounding.
    double hi = a + b;
    double lo = -((a - ((a+b) - ((a+b) - a))) - (b + ((a+b) - a)));
    writeln(hi);   // 1.23456
    writeln(lo);   // 2.60104e-18 FAILURE!
    writeln(hi-a); // 0
}


Add to this that compiler-backed emulation of 128-bit floats is twice as fast as 80-bit floats in hardware... that's how incredibly slow 80-bit floats are on modern hardware.

I don't even understand why this is a topic, as there is not one single rationale for keeping it the way it is. Not one.


August 22, 2016
Sorry, I stopped reading this thread after my last response, as I felt I was wasting too much time on this discussion, so I didn't read your response till now.

On Saturday, 21 May 2016 at 14:38:20 UTC, Timon Gehr wrote:
> On 20.05.2016 13:32, Joakim wrote:
>> Yet you're the one arguing against increasing precision everywhere in CTFE.
>> ...
>
> High precision is usually good (use high precision, up to arbitrary precision or even symbolic arithmetic whenever it improves your results and you can afford it). *Implicitly* increasing precision during CTFE is bad. *Explicitly* using higher precision during CTFE than at running time /may or may not/ be good. In case it is good, there is often no reason to stop at 80 bits.

It is not "implicitly increasing," Walter has said it will always be done for CTFE, ie it is explicit behavior for all compile-time calculation.  And he agrees with you about not stopping at 80 bits, which is why he wanted to increase the precision of compile-time calculation even more.

>>> This example wasn't specifically about CTFE, but just imagine that
>>> only part of the computation is done at CTFE, all local variables are
>>> transferred to runtime and the computation is completed there.
>>
>> Why would I imagine that?
>
> Because that's the most direct way to go from that example to one where implicit precision enhancement during CTFE only is bad.

Obviously, but you still have not said why one would need to do that in some real situation, which is what I was asking for.

>> And if any part of it is done at runtime using the algorithms you gave,
>> which you yourself admit works fine if you use the right
>> higher-precision types,
>
> What's "right" about them? That the compiler will not implicitly transform some of them to even higher precision in order to break the algorithm again? (I don't think that is even guaranteed.)

What's right is that their precision is high enough to possibly give you the accuracy you want, and increasing their precision will only better that.

>> you don't seem to have a point at all.
>> ...
>
> Be assured that I have a point. If you spend some time reading, or ask some constructive questions, I might be able to get it across to you. Otherwise, we might as well stop arguing.

I think you don't really have a point, as your argumentation and examples are labored.

>>>> No, it is intrinsic to any floating-point calculation.
>>>> ...
>>>
>>> How do you even define accuracy if you don't specify an infinitely
>>> precise reference result?
>>
>> There is no such thing as an infinitely precise result.  All one can do
>> is compute using even higher precision and compare it to lower precision.
>> ...
>
> If I may ask, how much mathematics have you been exposed to?

I suspect a lot more than you have.  Note that I'm talking about calculation and computation, which can only be done at finite precision.  One can manipulate symbolic math with all kinds of abstractions, but once you have to insert arbitrarily but finitely precise inputs and _compute_ outputs, you have to round somewhere for any non-trivial calculation.

>> That is a very specific case where they're implementing higher-precision
>> algorithms using lower-precision registers.
>
> If I argue in the abstract, people request examples. If I provide examples, people complain that they are too specific.

Yes, and?  The point of providing examples is to illustrate a general need with a specific case.  If your specific case is too niche, it is not a general need, ie the people you're complaining about can make both those statements and still make sense.

>> If you're going to all that
>> trouble, you should know not to blindly run the same code at compile-time.
>> ...
>
> The point of CTFE is to be able to run any code at compile-time that adheres to a well-defined set of restrictions. Not using floating point is not one of them.

My point is that potentially not being able to use CTFE for floating-point calculation that is highly specific to the hardware is a perfectly reasonable restriction.

>>>> The only mention of "the last bit" is
>>>
>>> This part is actually funny. Thanks for the laugh. :-)
>>> I was going to say that your text search was too naive, but then I
>>> double-checked your claim and there are actually two mentions of "the
>>> last bit", and close by to the other mention, the paper says that "the
>>> first double a_0 is a double-precision approximation to the number a,
>>> accurate to almost half an ulp."
>>
>> Is there a point to this paragraph?
>>
>
> I don't think further explanations are required here. Maybe be more careful next time.

Not required because you have some unstated assumptions that we are supposed to read from your mind?  Specifically, you have not said why doing the calculation of that "double-precision approximation" at a higher precision and then rounding would necessarily throw their algorithms off.

>> But as long as the way CTFE extending precision is
>> consistently done and clearly communicated,
>
> It never will be clearly communicated to everyone and it will also hit people by accident who would have been aware of it.
>
> What is so incredibly awesome about /implicit/ 80 bit precision as to justify the loss of control? If I want to use high precision for certain constants precomputed at compile time, I can do it just as well, possibly even at more than 80 bits such as to actually obtain accuracy up to the last bit.

On the other hand, what is so bad about CTFE-calculated constants being computed at a higher precision and then rounded down?  Almost any algorithm would benefit from that.

> Also, maybe I will need to move the computation to startup at runtime some time in the future because of some CTFE limitation, and then the additional implicit gain from 80 bit precision will be lost and cause a regression. The compiler just has no way to guess what precision is actually needed for each operation.

Another scenario that I find far-fetched.

>> those people can always opt out and do it some other way.
>> ...
>
> Sure, but now they need to _carefully_ maintain different implementations for CTFE and runtime, for an ugly misfeature. It's a silly magic trick that is not actually very useful and prone to errors.

I think the idea is to give compile-time calculations a boost in precision and accuracy, thus improving the constants computed at compile-time for almost every runtime algorithm.  There may be some algorithms that have problems with this, but I think Walter and I are saying they're so few that it's not worth worrying about, ie the benefits greatly outweigh the costs.
August 23, 2016
On 22.08.2016 20:26, Joakim wrote:
> Sorry, I stopped reading this thread after my last response, as I felt I
> was wasting too much time on this discussion, so I didn't read your
> response till now.
> ...

No problem. Would have been fine with me if it stayed that way.

> On Saturday, 21 May 2016 at 14:38:20 UTC, Timon Gehr wrote:
>> On 20.05.2016 13:32, Joakim wrote:
>>> Yet you're the one arguing against increasing precision everywhere in
>>> CTFE.
>>> ...
>>
>> High precision is usually good (use high precision, up to arbitrary
>> precision or even symbolic arithmetic whenever it improves your
>> results and you can afford it). *Implicitly* increasing precision
>> during CTFE is bad. *Explicitly* using higher precision during CTFE
>> than at running time /may or may not/ be good. In case it is good,
>> there is often no reason to stop at 80 bits.
>
> It is not "implicitly increasing,"

Yes it is. I don't state anywhere that I want the precision to increase. The default assumption is that CTFE behaves as closely as reasonably possible to runtime execution.

> Walter has said it will always be
> done for CTFE, ie it is explicit behavior for all compile-time
> calculation.

Well, you can challenge the definition of words I am using if you want, but what's the point?

>  And he agrees with you about not stopping at 80 bits,
> which is why he wanted to increase the precision of compile-time
> calculation even more.
> ...

I'd rather not think of someone reaching that conclusion as agreeing with me.

>>>> This example wasn't specifically about CTFE, but just imagine that
>>>> only part of the computation is done at CTFE, all local variables are
>>>> transferred to runtime and the computation is completed there.
>>>
>>> Why would I imagine that?
>>
>> Because that's the most direct way to go from that example to one
>> where implicit precision enhancement during CTFE only is bad.
>
> Obviously, but you still have not said why one would need to do that in
> some real situation, which is what I was asking for.
> ...

It seems you think your use cases are real, but mine are not, so there is no way to give you a "real" example. I can just hope that Murphy's law strikes and you eventually run into the problems yourself.


>>> And if any part of it is done at runtime using the algorithms you gave,
>>> which you yourself admit works fine if you use the right
>>> higher-precision types,
>>
>> What's "right" about them? That the compiler will not implicitly
>> transform some of them to even higher precision in order to break the
>> algorithm again? (I don't think that is even guaranteed.)
>
> What's right is that their precision is high enough to possibly give you
> the accuracy you want, and increasing their precision will only better
> that.
> ...

I have explained why this is not true. (There is another explanation further below.)


>>>>> No, it is intrinsic to any floating-point calculation.
>>>>> ...
>>>>
>>>> How do you even define accuracy if you don't specify an infinitely
>>>> precise reference result?
>>>
>>> There is no such thing as an infinitely precise result.  All one can do
>>> is compute using even higher precision and compare it to lower
>>> precision.
>>> ...
>>
>> If I may ask, how much mathematics have you been exposed to?
>
> I suspect a lot more than you have.

I would not expect anyone familiar with the real number system to make a remark like "there is no such thing as an infinitely precise result".

> Note that I'm talking about
> calculation and computation, which can only be done at finite precision.

I wasn't, and it was my post pointing out the implicit assumption that floating point algorithms are thought of as operating on real numbers that started this subthread, if you remember. Then you said that my point was untrue without any argumentation, and I asked a very specific question in order to figure out how you reached your conclusion. Then you wrote a comment that didn't address my question at all and was obviously untrue from where I stood. Therefore I suspected that we might be using incompatible terminology, hence I asked how familiar you are with mathematical language, which you didn't answer either.

> One can manipulate symbolic math with all kinds of abstractions, but
> once you have to insert arbitrarily but finitely precise inputs and
> _compute_ outputs, you have to round somewhere for any non-trivial
> calculation.
> ...

You don't need to insert any concrete values to make relevant definitions and draw conclusions. My question was how you define accuracy, because this is crucial for understanding and/or refuting your point. It's a reasonable question that you ought to be able to answer if you use the term in an argument repeatedly.

>>> That is a very specific case where they're implementing higher-precision
>>> algorithms using lower-precision registers.
>>
>> If I argue in the abstract, people request examples. If I provide
>> examples, people complain that they are too specific.
>
> Yes, and?  The point of providing examples is to illustrate a general
> need with a specific case.  If your specific case is too niche, it is
> not a general need, ie the people you're complaining about can make both
> those statements and still make sense.
> ...

I think the problem is that they don't see the general need from the example.

>>> If you're going to all that
>>> trouble, you should know not to blindly run the same code at
>>> compile-time.
>>> ...
>>
>> The point of CTFE is to be able to run any code at compile-time that
>> adheres to a well-defined set of restrictions. Not using floating
>> point is not one of them.
>
> My point is that potentially not being able to use CTFE for
> floating-point calculation that is highly specific to the hardware is a
> perfectly reasonable restriction.
> ...

I meant that the restriction is not enforced by the language definition. I.e. it is not a compile-time error to compute with built-in floating point types in CTFE.

Anyway, it is unfortunate but true that performance requirements might make it necessary to allow the results to be slightly hardware-specific; I agree that some compromises might be necessary. Arbitrarily using higher precision even in cases where the target hardware actually supports all features of IEEE floats and doubles does not seem like a good compromise, though; it's completely unforced.

>>>>> The only mention of "the last bit" is
>>>>
>>>> This part is actually funny. Thanks for the laugh. :-)
>>>> I was going to say that your text search was too naive, but then I
>>>> double-checked your claim and there are actually two mentions of "the
>>>> last bit", and close by to the other mention, the paper says that "the
>>>> first double a_0 is a double-precision approximation to the number a,
>>>> accurate to almost half an ulp."
>>>
>>> Is there a point to this paragraph?
>>>
>>
>> I don't think further explanations are required here. Maybe be more
>> careful next time.
>
> Not required because you have some unstated assumptions that we are
> supposed to read from your mind?

Because anyone with a suitable pdf reader can verify that "the last bit" is mentioned twice inside that pdf document, and that the mention that you didn't see supports my point.

> Specifically, you have not said why
> doing the calculation of that "double-precision approximation" at a
> higher precision and then rounding would necessarily throw their
> algorithms off.
> ...

I did somewhere in this thread. (Using ASCII-art graphics even.)

Basically, the number is represented using two doubles with a non-overlapping mantissa. I'll try to explain using a decimal floating-point type to maybe illustrate it better. E.g. assume that the higher-precision type has a 4-digit mantissa, and the lower-precision type has a 3-digit mantissa:

The computation could have resulted in the double-double (1234e2, 56.78) representing the number 123456.78 (which is the exact sum of the two components.)

If we now round both components to lower precision independently, we are left with (123e3, 56.8), which represents the number 123056.8, which has only 3 accurate mantissa digits.

If, OTOH, we had used the lower-precision type from the start, we would get a more accurate result, such as (123e3, 457), representing the number 123457.

This might be slightly counter-intuitive, but it is not that uncommon for floating point-specific algorithms to actually rely on floating point specifics.

Here, the issue is that the compiler has no way to know the correct way to transform the set of higher-precision floating point numbers to a corresponding set of lower-precision floating point numbers; it does not know how the values are actually interpreted by the program.
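
The same effect can be shown in binary, with float as the low-precision type and double standing in for the higher-precision arithmetic (a sketch; the constants and the helper are mine, and it assumes strict float/double evaluation):

#include <cstdio>

// Knuth's TwoSum error term: exact when evaluated in T's own precision.
template <typename T>
T twoSumError(T a, T b)
{
    T s = a + b;
    T bv = s - a;
    T av = s - bv;
    return (a - av) + (b - bv);
}

int main()
{
    const float a = 1.0f;
    const float b = 1.0f / (1 << 30); // 2^-30 fits in a float; a+b does not

    // Pair computed in float: (hi, lo) represents a+b exactly.
    float hi = a + b;             // 1.0f; the 2^-30 moves into lo
    float lo = twoSumError(a, b); // 2^-30

    // Pair computed in double, components then rounded independently:
    float hi2 = (float)((double)a + (double)b);           // 1.0f
    float lo2 = (float)twoSumError((double)a, (double)b); // 0: the double sum was exact
    std::printf("%g %g\n", hi, lo);   // 1 9.31323e-10
    std::printf("%g %g\n", hi2, lo2); // 1 0 (the 2^-30 is gone)
}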

>>> But as long as the way CTFE extending precision is
>>> consistently done and clearly communicated,
>>
>> It never will be clearly communicated to everyone and it will also hit
>> people by accident who would have been aware of it.
>>
>> What is so incredibly awesome about /implicit/ 80 bit precision as to
>> justify the loss of control? If I want to use high precision for
>> certain constants precomputed at compile time, I can do it just as
>> well, possibly even at more than 80 bits such as to actually obtain
>> accuracy up to the last bit.
>
> On the other hand, what is so bad about CTFE-calculated constants being
> computed at a higher precision and then rounded down?  Almost any
> algorithm would benefit from that.
> ...

Some will subtly break, for some it won't matter, and the others will work for reasons mostly hidden to the programmer and might therefore break later. Sometimes the programmer is aware of the funny language semantics and exploits them cleverly, deliberately using 'float' during CTFE to perform 80-bit computations, confusing readers about the actual precision being used.


>> Also, maybe I will need to move the computation to startup at runtime
>> some time in the future because of some CTFE limitation, and then the
>> additional implicit gain from 80 bit precision will be lost and cause
>> a regression. The compiler just has no way to guess what precision is
>> actually needed for each operation.
>
> Another scenario that I find far-fetched.
> ...

Well, it's not. (The CTFE limitation could be e.g. performance.)

Basically, anytime that a programmer has wrong assumptions about why their code works correctly, this is slightly dangerous. It's not a good thing if the compiler tries to outsmart the programmer, because the compiler is not (supposed to be) smarter than the programmer.

>>> those people can always opt out and do it some other way.
>>> ...
>>
>> Sure, but now they need to _carefully_ maintain different
>> implementations for CTFE and runtime, for an ugly misfeature. It's a
>> silly magic trick that is not actually very useful and prone to errors.
>
> I think the idea is to give compile-time calculations a boost in
> precision and accuracy, thus improving the constants computed at
> compile-time for almost every runtime algorithm.  There may be some
> algorithms that have problems with this, but I think Walter and I are
> saying they're so few that it's not worth worrying about, ie the
> benefits greatly outweigh the costs.

There are no benefits, because I can just explicitly compute at the precision I need myself, and I would prefer others to do the same, such that I have some clues about their reasoning when reading their code. Give me what I ask for. If you think I asked for the wrong thing, give me that wrong thing. If it is truly the wrong thing, I will see it and fix it.

If you still disagree, that's fine, just don't claim that I don't have a point, thanks.

June 29, 2017
On 14.05.2016 02:49, Timon Gehr wrote:
> On 13.05.2016 23:35, Walter Bright wrote:
>> On 5/13/2016 12:48 PM, Timon Gehr wrote:
>>> IMO the compiler should never be allowed to use a precision different
>>> from the one specified.
>>
>> I take it you've never been bitten by accumulated errors :-)
>> ...
> 
> If that was the case it would be because I explicitly ask for high precision if I need it.
> 
> If the compiler using or not using a higher precision magically fixes an actual issue with accumulated errors, that means the correctness of the code is dependent on something hidden, that you are not aware of, and that could break any time, for example at a time when you really don't have time to track it down.
> 
>> Reduced precision is only useful for storage formats and increasing
>> speed.  If a less accurate result is desired, your algorithm is wrong.
> 
> Nonsense. That might be true for your use cases. Others might actually depend on IEEE 754 semantics in non-trivial ways. Higher precision for temporaries does not imply higher accuracy for the overall computation.
> 
> E.g., correctness of double-double arithmetic is crucially dependent on correct rounding semantics for double:
> https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

We finally have someone on D.learn who is running into this exact problem:

http://forum.dlang.org/post/vimvfarzqkcmbvtnznrf@forum.dlang.org