May 13, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr

On 5/13/2016 5:49 PM, Timon Gehr wrote:
> Nonsense. That might be true for your use cases. Others might actually depend on IEEE 754 semantics in non-trivial ways. Higher precision for temporaries does not imply higher accuracy for the overall computation.

Of course it implies it.

An anecdote: a colleague of mine was once doing a chained calculation. At every step, he rounded to 2 digits of precision after the decimal point, because 2 digits of precision was enough for anybody. I carried out the same calculation to the max precision of the calculator (10 digits). He simply could not understand why his result was off by a factor of 2, which was a couple hundred times his individual roundoff error.

> E.g., correctness of double-double arithmetic is crucially dependent on correct rounding semantics for double:
> https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

Double-double has its own peculiar issues, and is not relevant to this discussion.

> Also, it seems to me that for e.g. https://en.wikipedia.org/wiki/Kahan_summation_algorithm, the result can actually be made less precise by adding casts to higher precision and truncations back to lower precision at appropriate places in the code.

I don't see any support for your claim there.

> And even if higher precision helps, what good is a "precision-boost" that e.g. disappears on 64-bit builds and then creates inconsistent results?

That's why I was thinking of putting in 128-bit floats for the compiler internals.

> Sometimes reproducibility/predictability is more important than maybe making fewer rounding errors sometimes. This includes reproducibility between CTFE and runtime.

A more accurate answer should never cause your algorithm to fail. It's like putting better parts in your car causing the car to fail.

> Just actually comply with the IEEE floating point standard when using their terminology. There are algorithms that are designed for it and that might stop working if the language does not comply.

Conjecture. I've written FP algorithms (from Cody+Waite, for example), and none of them degraded when using more precision.

Consider that the 8087 has been operating at 80 bits of precision by default for 30 years. I've NEVER heard of anyone getting actual bad results from this. They have complained that their test suites, which tested for less accurate results, broke. They have complained about the speed of x87. And Intel has been trying to get rid of the x87 forever. Sometimes I wonder if there's a disinformation campaign about more accuracy being bad, because it smacks of nonsense.

BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up.
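To make the failure mode in the anecdote concrete, here is a minimal D sketch of a chained calculation with every intermediate rounded to 2 decimal digits. The particular chain (repeated scaling by 1.007) is made up for illustration; it is not the colleague's actual calculation.

```d
import std.math : round;
import std.stdio : writefln;

// Emulate the colleague's habit: round to 2 digits after the decimal point.
double round2(double x) { return round(x * 100.0) / 100.0; }

void main()
{
    double full = 1.0;     // carried at full double precision
    double chopped = 1.0;  // rounded to 2 decimals at every step
    foreach (i; 0 .. 100)
    {
        full    = full * 1.007;
        chopped = round2(chopped * 1.007);
    }
    // Each rounding step is off by at most 0.005, yet the chained
    // result drifts far beyond any single step's error.
    writefln("full precision: %.6f", full);
    writefln("rounded chain:  %.6f", chopped);
}
```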
May 14, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
> BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up.
He should have been aware of reproducibility, since people use fixed point to achieve it; if he wasn't, then shame on him.
In Java, all compile-time constants are evaluated using strict settings, and it provides the keyword «strictfp» to get strict behaviour for a particular class/function.
In C++, template parameters cannot be floating point; you use std::ratio, so you get an exact rational number instead. This is to avoid inaccuracy problems in the type system.
In interval arithmetic you need to round up and down correctly in the bound computations to get correct results. (It is OK for the interval to be larger than the true result, but the opposite is a disaster; see the sketch at the end of this post.)
With reproducible arithmetic you can do advanced, accurate static analysis of programs using floating-point code.
With reproducible arithmetic you can sync nodes in a cluster based on "time" alone, saving exchanges of data in simulations.
There are lots of reasons to default to well-defined floating-point arithmetic.
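To illustrate the interval-arithmetic point, a minimal D sketch using std.math's FloatingPointControl, assuming a target where the dynamic rounding mode is switchable. The operands are derived from a run-time value because a compiler that constant-folds the divisions would bypass the rounding mode entirely.

```d
import std.math : FloatingPointControl;
import std.stdio : writefln;

void main(string[] args)
{
    // Operands derived from a run-time value, so the divisions below
    // cannot be constant-folded away from the dynamic rounding mode.
    double one   = args.length >= 1 ? 1.0 : 2.0;
    double three = args.length >= 1 ? 3.0 : 4.0;

    FloatingPointControl fpctrl;  // restores the previous mode on scope exit

    fpctrl.rounding = FloatingPointControl.roundDown;
    immutable lo = one / three;   // lower bound, rounded toward -infinity

    fpctrl.rounding = FloatingPointControl.roundUp;
    immutable hi = one / three;   // upper bound, rounded toward +infinity

    // The true 1/3 is guaranteed to lie inside [lo, hi]; the interval
    // may be wider than necessary, never too narrow.
    writefln("1/3 lies in [%.20f, %.20f]", lo, hi);
}
```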
May 14, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
>
> An anecdote: a colleague of mine was once doing a chained calculation. At every step, he rounded to 2 digits of precision after the decimal point, because 2 digits of precision was enough for anybody. I carried out the same calculation to the max precision of the calculator (10 digits). He simply could not understand why his result was off by a factor of 2, which was a couple hundred times his individual roundoff error.
>
>
I'm sympathetic to this. Some of my work deals with statistics, and you see people try to use formulas that are faster but less accurate, and that can really get you into trouble. Var(X) = E(X^2) - E(X)^2 is only true for real numbers, not floating-point arithmetic. It can also lead to weird results when dealing with matrix inverses.
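A hedged sketch of how badly that formula can go wrong in D. The data is made up: a large offset with a tiny spread, so the two expectations agree in nearly all their leading digits and the subtraction cancels them.

```d
import std.stdio : writefln;

// One-pass formula Var(X) = E(X^2) - E(X)^2: algebraically exact, but
// in floating point the two big terms cancel catastrophically when
// the mean is large relative to the spread.
double naiveVariance(const double[] xs)
{
    double sum = 0.0, sumSq = 0.0;
    foreach (x; xs)
    {
        sum   += x;
        sumSq += x * x;
    }
    immutable n = cast(double) xs.length;
    immutable mean = sum / n;
    return sumSq / n - mean * mean;
}

// Two-pass version: compute the mean first, then average the squared
// deviations. Slower, but no cancellation of huge intermediates.
double twoPassVariance(const double[] xs)
{
    immutable n = cast(double) xs.length;
    double mean = 0.0;
    foreach (x; xs) mean += x;
    mean /= n;
    double var = 0.0;
    foreach (x; xs) var += (x - mean) * (x - mean);
    return var / n;
}

void main()
{
    // Offset 1e8, spread 0.01: the true variance is about 6.67e-5,
    // far below the rounding granularity of sums near 1e16.
    auto xs = [1e8 + 0.01, 1e8 - 0.01, 1e8];
    writefln("naive:    %.6g", naiveVariance(xs));
    writefln("two-pass: %.6g", twoPassVariance(xs));
}
```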
I like the idea of a float type that is effectively the largest precision on your machine (the D real type). However, I could be convinced by the argument that you should have to opt in to this and that internal calculations should not implicitly use it, mainly because I'm sympathetic to the people who would prefer speed to precision. Not everybody needs all the precision all the time.
May 14, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On Saturday, 14 May 2016 at 05:46:38 UTC, Ola Fosheim Grøstad wrote:
> In Java, all compile-time constants are evaluated using strict settings, and it provides the keyword «strictfp» to get strict behaviour for a particular class/function.
In Java everything used to be strictfp (and there was no keyword), but it was changed to do non-strict arithmetic by default after a backlash from numeric programmers.
May 14, 2016 Re: Always false float comparisons
Posted in reply to QAston

On Saturday, 14 May 2016 at 09:11:49 UTC, QAston wrote:
> On Saturday, 14 May 2016 at 05:46:38 UTC, Ola Fosheim Grøstad wrote:
>> In Java, all compile-time constants are evaluated using strict settings, and it provides the keyword «strictfp» to get strict behaviour for a particular class/function.
>
> In Java everything used to be strictfp (and there was no keyword), but it was changed to do non-strict arithmetic by default after a backlash from numeric programmers.
Java had a healthy default, but switched in order not to look so bad in comparison to C on the hardware of the day. However, it retained the ability to get stricter floating point.
At the end of the day, there is no end to how far you can go down the road of implementation-defined floating point. Take a look at the ARM instruction set; it makes x86 look high level. You can even choose how many iterations you want for complex instructions (i.e. choose the approximation level for faster execution).
However, these days IEEE 754-2008 is becoming available in hardware, and therefore one is better off choosing the most well-defined semantics for the regular case. It means fewer optimization opportunities unless you specify relaxed semantics, but that isn't such a bad trade-off as long as specifying relaxed semantics is easy.
May 14, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
>> Sometimes reproducibility/predictability is more important than maybe making
>> fewer rounding errors sometimes. This includes reproducibility between CTFE and
>> runtime.
>
> A more accurate answer should never cause your algorithm to fail. It's like putting better parts in your car causing the car to fail.
This is all quite discouraging from a scientific programmer's point of view. Precision is important, more precision is good, but reproducibility and predictability are critical.
Tables of constants that change value if I put a `static` in front of them?
Floating point code that produces different results after a compiler upgrade / with different non-fp-related switches?
Ewwwww.
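The CTFE worry is easy to poke at in D itself. A hedged sketch follows; whether the two lines print the same digits depends on the compiler and target, and the complaint is precisely that they are allowed to differ.

```d
import std.stdio : writefln;

// A few dependent float operations whose intermediates a compiler is
// permitted to keep at higher precision during CTFE than at run time.
float chain(float x)
{
    foreach (i; 0 .. 10)
        x = x * (1.0f / 3.0f) + 0.1f;
    return x;
}

void main()
{
    enum  ctfeResult = chain(1.0f);  // forced compile-time evaluation
    float rtResult   = chain(1.0f);  // ordinary run-time evaluation
    writefln("CTFE:     %.9g", ctfeResult);
    writefln("run time: %.9g", rtResult);
}
```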
May 14, 2016 Re: Always false float comparisons
Posted in reply to John Colvin

On 5/14/2016 3:16 AM, John Colvin wrote:
> This is all quite discouraging from a scientific programmer's point of view. Precision is important, more precision is good, but reproducibility and predictability are critical.

I used to design and build digital electronics out of TTL chips. Over time, TTL chips got faster and faster. The rule was to design the circuit with a minimum signal propagation delay, but never a maximum. Therefore, putting in faster parts will never break the circuit. Engineering is full of things like this. It's sound engineering practice. I've never ever heard of a circuit requiring a resistor with 20% tolerance that would fail if a 10% tolerance one was put in, for another example.

> Tables of constants that change value if I put a `static` in front of them?
>
> Floating point code that produces different results after a compiler upgrade / with different non-fp-related switches?
>
> Ewwwww.

Floating point is not exact calculation. It just isn't. Designing an algorithm that relies on worse answers is absurd to my ears. Results should be tested to have a minimum number of correct bits in the answer, not a maximum number. This is, in fact, how std.math checks the results of the algorithms implemented in it, and how it should be done. This is not some weird crazy idea of mine; as I said, the x87 FPU in every x86 chip has been doing this for several decades.
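That minimum-correct-bits style of test is easy to express, since std.math provides feqrel for exactly this purpose. A minimal sketch; the 2-bit slack is an arbitrary tolerance chosen for illustration.

```d
import std.math : feqrel, sqrt;
import std.stdio : writefln;

void main()
{
    immutable double x = 2.0;
    immutable double y = sqrt(x) * sqrt(x);  // ideally exactly 2.0 again

    // feqrel returns the number of leading mantissa bits on which the
    // two values agree, so the test demands a minimum number of
    // correct bits rather than bit-exact equality.
    immutable bits = feqrel(y, x);
    writefln("agreeing mantissa bits: %s of %s", bits, double.mant_dig);
    assert(bits >= double.mant_dig - 2, "too few correct bits");
}
```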
May 14, 2016 Re: Always false float comparisons
Posted in reply to jmh530

On 5/13/2016 10:52 PM, jmh530 wrote:
> I like the idea of a float type that is effectively the largest precision on
> your machine (the D real type). However, I could be convinced by the argument
> that you should have to opt in to this and that internal calculations should
> not implicitly use it. Mainly because I'm sympathetic to the people who would
> prefer speed to precision. Not everybody needs all the precision all the time.
Speed matters in the generated target program, not in the compiler's internal floating-point calculations, simply because the compiler does very, very few of them.
May 14, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On 5/13/2016 10:46 PM, Ola Fosheim Grøstad wrote:
> On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
>> BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up.
>
> He should have been aware of reproducibility, since people use fixed point to achieve it; if he wasn't, then shame on him.

Kahan designed the x87 and wrote the IEEE 754 standard, so I'd do my homework before telling him he is wrong about basic floating point stuff.

> In Java, all compile-time constants are evaluated using strict settings, and it provides the keyword «strictfp» to get strict behaviour for a particular class/function.

What happened with Java was interesting. The original spec required double arithmetic to be done with double precision. This wound up failing all over the place on x86 machines, which (as I explained) compute temporaries to 80 bits. Forcing the x87 to use doubles for intermediate values made Java run much slower, and Sun was forced to back off on that requirement.
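A small sketch of what those wider intermediates look like from D: the same naive accumulation carried in float, double, and real (the x87 80-bit format with DMD on x86; on other targets real may simply be double). The loop count and the constant 0.1 are arbitrary choices for illustration.

```d
import std.stdio : writefln;

void main()
{
    float  f = 0.0f;
    double d = 0.0;
    real   r = 0.0L;  // 80-bit extended with DMD on x86; target-dependent

    // 0.1 is not exactly representable in binary, so the accumulated
    // error grows at a rate that depends on the working precision.
    foreach (i; 0 .. 10_000_000)
    {
        f += 0.1f;
        d += 0.1;
        r += 0.1L;
    }
    writefln("float : %.8f", f);  // drifts visibly from 1,000,000
    writefln("double: %.8f", d);
    writefln("real  : %.8f", r);
}
```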
May 14, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 18:58:35 UTC, Walter Bright wrote:
> Kahan designed the x87 and wrote the IEEE 754 standard, so I'd do my homework before telling him he is wrong about basic floating point stuff.
You don't have to tell me who Kahan is. I don't see the relevance. You are trying to appeal to authority. Stick to facts. :-)