May 20, 2016 Re: Always false float comparisons
Posted in reply to Joakim

On 20.05.2016 11:14, Joakim wrote:
> On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
>> On 19.05.2016 08:04, Joakim wrote:
>>> On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
>>>> It's not just slightly worse, it can cut the number of useful bits in
>>>> half or more! It is not unusual, I have actually run into those
>>>> problems in the past, and it can break an algorithm that is in Phobos
>>>> today!
>>>
>>> I wouldn't call that broken. Looking at the hex output by replacing %f
>>> with %A in writefln, it appears the only differences in all those
>>> results is the last byte in the significand.
>>
>> Argh...
>>
>> // ...
>>
>> void main(){
>>     //double[] data=[1e16,1,-9e15];
>>     import std.range;
>>     double[] data=1e16~repeat(1.0,100000000).array~(-9e15);
>>     import std.stdio;
>>     writefln("%f",sum(data));         // baseline
>>     writefln("%f",kahan(data));       // kahan
>>     writefln("%f",kahanBroken(data)); // broken kahan
>> }
>>
>> dmd -run kahanDemo.d
>> 1000000000000000.000000
>> 1000000100000000.000000
>> 1000000000000000.000000
>>
>> dmd -m32 -O -run kahanDemo.d
>> 1000000000000000.000000
>> 1000000000000000.000000
>> 1000000000000000.000000
>>
>> Better?
>>
>> Obviously there is more structure in the data that I invent manually
>> than in a real test case where it would go wrong. The problems carry
>> over though.
>
> I looked over your code a bit. If I define sum and c as reals in
> "kahanBroken" at runtime, this problem goes away.

Yes. That's absolutely obvious, and I have noted it before, but thanks. Maybe try to understand why this problem occurs in the first place.

> Since that's what the CTFE rule is actually doing, ie extending all
> floating-point to reals at compile-time, I don't see what you're
> complaining about. Try it, run even your original naive summation
> algorithm through CTFE and it will produce the result you want:
>
> enum double[] ctData=[1e16,1,-9e15];
> enum ctSum = sum(ctData);
> writefln("%f", ctSum);
> ...

This example wasn't specifically about CTFE. Just imagine that only part of the computation is done at CTFE, all local variables are transferred to runtime, and the computation is completed there.

>>> As Don's talk pointed out, all floating-point calculations will see
>>> loss of precision starting there.
>>> ...
>>
>> This is implicitly assuming a development model where the programmer
>> first writes down the computation as it would be correct in the real
>> number system and then naively replaces every operation by the
>> rounding equivalent and hopes for the best.
>
> No, it is intrinsic to any floating-point calculation.
> ...

How do you even define accuracy if you don't specify an infinitely precise reference result?

>> It is a useful rule if that is what you're doing. One might be doing
>> something else. Consider the following paper for an example where the
>> last bit in the significand actually carries useful information for
>> many of the values used in the program.
>>
>> http://www.jaist.ac.jp/~s1410018/papers/qd.pdf
>
> Did you link to the wrong paper? ;)

No. That paper uses multiple doubles per approximated real value to implement arithmetic that is more precise than using just plain doubles. If any bit in the first double is off, this is no better than using a single double.

> I skimmed it and that paper explicitly talks about error bounds all
> over the place.

It is enough to read the abstract to figure out what the problem is.

This demonstrates a non-contrived case where CTFE using enhanced precision throughout can break your program. Compute something as a double-double at compile-time, and when it is transferred to runtime you lose all the nice extra precision, because bits in the middle of the (conceptual) mantissa are lost.

> The only mention of "the last bit" is

This part is actually funny. Thanks for the laugh. :-) I was going to say that your text search was too naive, but then I double-checked your claim and there are actually two mentions of "the last bit", and close by to the other mention, the paper says that "the first double a_0 is a double-precision approximation to the number a, accurate to almost half an ulp."

> when they say they calculated their constants in arbitrary precision
> before rounding them for runtime use, which is ironically similar to
> what Walter suggested doing for D's CTFE also.
> ...

Nothing "ironic" about that. It is sometimes a good idea, and I can do this explicitly and make sure the rounding is done correctly, just like they did. Also, it is a lot more flexible if I can specify the exact way the computation is done and the result is rounded. 80 bits might not be enough anyway. There is no reason for the language to apply potentially incorrect "hand holding" here.

Again, please understand that my point is not that lower precision is better. My point is that doing the same thing in every context and allowing the programmer to specify what happens is better.

>>> In this case, not increasing precision gets the more accurate result,
>>> but other examples could be constructed that _heavily_ favor increasing
>>> precision.
>>
>> Sure. In such cases, you should use higher precision. What is the
>> problem? This is already supported (the compiler is not allowed to use
>> lower precision than requested).
>
> I'm not the one with the problem, you're the one complaining.
> ...

So you see no problem with my requested semantics for the built-in floating point types?

>>> In fact, almost any real-world, non-toy calculation would favor it.
>>>
>>> In any case, nobody should depend on the precision out that far being
>>> accurate or "reliable."
>>
>> IEEE floating point has well-defined behaviour and there is absolutely
>> nothing wrong with code that delivers more accurate results just
>> because it is actually aware of the actual semantics of the operations
>> being carried out.
>
> You just made the case for Walter doing what he did. :)

No.
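[The bodies of sum, kahan and kahanBroken are not quoted anywhere in this excerpt, only main and its output. For readers following along, the following is a plausible reconstruction in D -- an assumption, not the thread's actual kahanDemo.d; only the function names and the input data are taken from the post above.]

import std.array, std.range, std.stdio;

double sum(double[] data)          // naive summation: the 1.0 addends vanish
{
    double s = 0;
    foreach (x; data) s += x;
    return s;
}

double kahan(double[] data)        // compensated (Kahan) summation, all in double
{
    double s = 0, c = 0;
    foreach (x; data)
    {
        double y = x - c;          // apply the running correction
        double t = s + y;          // big + small: low-order bits of y are lost...
        c = (t - s) - y;           // ...and recovered into c under strict double rounding
        s = t;
    }
    return s;
}

double kahanBroken(double[] data)  // same algorithm, intermediates silently widened
{
    double s = 0, c = 0;
    foreach (x; data)
    {
        real y = x - c;            // 80-bit intermediates mimic an implementation that
        real t = s + y;            // computes at higher precision than asked for: the
        c = (t - s) - y;           // recovered error is now (nearly) zero, so the
        s = t;                     // compensation silently disappears
    }
    return s;
}

void main()
{
    double[] data = 1e16 ~ repeat(1.0, 100_000_000).array ~ (-9e15);
    writefln("%f", sum(data));         // baseline
    writefln("%f", kahan(data));       // kahan
    writefln("%f", kahanBroken(data)); // broken kahan
}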
May 20, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On 20.05.2016 08:12, Walter Bright wrote:
> On 5/19/2016 1:26 PM, Timon Gehr wrote:
>> Those two lines producing different results is unexpected, because you
>> are explicitly saying that y is a double, and the first line also does
>> implicit rounding (probably -- on all compilers and targets that will
>> be relevant in the near future -- to double).
>
> [...]
>
>> It's obviously bad language design on multiple levels.
>
> Floating point semantics simply are not that simple,

This is about how the language defines floating point semantics. They can be that simple if the specification says so.

> on any compiler,

gdc/ldc seem to get it right. David, Iain, is this the case?

> language

Anybody can invent a language, and IEEE 754 semantics are possible to support. Recent hardware does it.

> or hardware. I recommend reading what the C/C++ Standards say
> about it, and look at the plethora of compiler switches for
> VC++/g++/clang++.
> ...

I don't really need to look at examples of how to do it wrong, but I will read whatever relevant link you can give me carefully. In D you don't need compiler switches to change the semantics of your code. We have more or less sane conditional compilation.

> The people who design these things are not fools,

I don't know those people, but I would like to know their thoughts about the current discussion.

> and there are good reasons for the way things are.
> ...

It is not unusual that there is some good reason for a design mistake.

> Sorry, I don't agree.
>
>> If you say so. I would like to see an example that demonstrates that
>> the first roundToDouble is required.
>
> That's beside the point.

Absolutely not.

> If there are spots in the program that require rounding, what is wrong
> with having to specify it?

I explained what is wrong with it. What is wrong with having to specify the precision that is /required for your code to run correctly/?

> Why compromise speed & accuracy in the rest of the code

You don't compromise speed or accuracy if the rest of the code just correctly specifies the precision it should be run at.

> that does not require it? (And yes, rounding in certain spots can and
> does compromise both speed and accuracy.)
> ...

Because it is the only sane thing to do, as explained from many different angles in this thread. If it does compromise accuracy, that means the code is bad (it relies on precision enhancements that are not guaranteed to happen, hence it is hardly correct).

>> And if you are not? (I find the standard assumption that
>> counterexamples to wrong statements are one-off special cases
>> tiresome.)
>
> You might, but I've implemented a number of FP algorithms, and none of
> them required rounding to a lower precision.

double x=..., z=...;
double y=roundToDouble(x+z);

Lower than what precision? It is not a 'lower' precision, it is the precision that was specified.

> I've also seen many that failed due to premature rounding.
> ...

Yes. Your results might be garbage more often if you run at too low precision. I and everyone else are very aware of this. I am not arguing against any of this. There seems to be a total communication failure here.

>> Creating useful programs in D shouldn't require knowing (or thinking
>> about) an excess amount of D-specific implicit arcane details of the
>> language semantics. Being easy to pick up and use is a major selling
>> point for D.
>
> As pointed out innumerable times here, the same issues are in C and C++
> (with even more variability!).

As also pointed out here, it does not matter. I'm *not* trying to make the point that C and C++ do it better or anything like that. (But well, in practice, the actual commonly used C and C++ implementations probably do get it right by default.)

> When designing an algorithm that you know requires rounding in a
> certain place, you are already significantly beyond implementing a
> naive algorithm, and it would be time to start paying attention to how
> FP actually works.
>
> I'm curious if you know of any language that meets your requirements.
> (Java 1.0 did, but Sun was forced to abandon that.)

x86_64 assembly language.
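[For context: roundToDouble is the hypothetical rounding intrinsic being argued about in this thread; it does not exist in Phobos or druntime. A rough user-level stand-in might look like the sketch below. Whether it reliably forces the store depends on the compiler and the ABI, so treat it as an illustration of the requested semantics rather than a guaranteed implementation.]

// Hypothetical stand-in for the roundToDouble discussed above.
pragma(inline, false)
double roundToDouble(double x) { return x; }   // passing through a double parameter
                                               // rounds to 64 bits on most ABIs

double example(double x, double z)
{
    // x + z may be evaluated at extended precision; the call discards the extra bits
    double y = roundToDouble(x + z);
    return y;
}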
May 20, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr

On Friday, 20 May 2016 at 11:02:45 UTC, Timon Gehr wrote:
> On 20.05.2016 11:14, Joakim wrote:
>> On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
>>> On 19.05.2016 08:04, Joakim wrote:
>>>> On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
>>>>> It's not just slightly worse, it can cut the number of useful bits in
>>>>> half or more! It is not unusual, I have actually run into those
>>>>> problems in the past, and it can break an algorithm that is in Phobos
>>>>> today!
>>>>
>>>> I wouldn't call that broken. Looking at the hex output by replacing %f
>>>> with %A in writefln, it appears the only differences in all those
>>>> results is the last byte in the significand.
>>>
>>> Argh...
>>>
>>> // ...
>>>
>>> void main(){
>>>     //double[] data=[1e16,1,-9e15];
>>>     import std.range;
>>>     double[] data=1e16~repeat(1.0,100000000).array~(-9e15);
>>>     import std.stdio;
>>>     writefln("%f",sum(data));         // baseline
>>>     writefln("%f",kahan(data));       // kahan
>>>     writefln("%f",kahanBroken(data)); // broken kahan
>>> }
>>>
>>> dmd -run kahanDemo.d
>>> 1000000000000000.000000
>>> 1000000100000000.000000
>>> 1000000000000000.000000
>>>
>>> dmd -m32 -O -run kahanDemo.d
>>> 1000000000000000.000000
>>> 1000000000000000.000000
>>> 1000000000000000.000000
>>>
>>> Better?
>>>
>>> Obviously there is more structure in the data that I invent manually
>>> than in a real test case where it would go wrong. The problems carry
>>> over though.
>>
>> I looked over your code a bit. If I define sum and c as reals in
>> "kahanBroken" at runtime, this problem goes away.
>
> Yes. That's absolutely obvious, and I have noted it before, but thanks.
> Maybe try to understand why this problem occurs in the first place.

Yet you're the one arguing against increasing precision everywhere in CTFE.

>> Since that's what the CTFE rule is actually doing, ie extending all
>> floating-point to reals at compile-time, I don't see what you're
>> complaining about. Try it, run even your original naive summation
>> algorithm through CTFE and it will produce the result you want:
>>
>> enum double[] ctData=[1e16,1,-9e15];
>> enum ctSum = sum(ctData);
>> writefln("%f", ctSum);
>> ...
>
> This example wasn't specifically about CTFE. Just imagine that only
> part of the computation is done at CTFE, all local variables are
> transferred to runtime, and the computation is completed there.

Why would I imagine that? And this whole discussion is about what happens if you change the precision of all variables to real when doing CTFE, so what's the point of giving an example that isn't "specifically" about that? And if any part of it is done at runtime using the algorithms you gave, which you yourself admit work fine if you use the right higher-precision types, you don't seem to have a point at all.

>>>> As Don's talk pointed out, all floating-point calculations will see
>>>> loss of precision starting there.
>>>> ...
>>>
>>> This is implicitly assuming a development model where the programmer
>>> first writes down the computation as it would be correct in the real
>>> number system and then naively replaces every operation by the
>>> rounding equivalent and hopes for the best.
>>
>> No, it is intrinsic to any floating-point calculation.
>> ...
>
> How do you even define accuracy if you don't specify an infinitely
> precise reference result?

There is no such thing as an infinitely precise result. All one can do is compute using even higher precision and compare it to lower precision.

>>> It is a useful rule if that is what you're doing. One might be doing
>>> something else. Consider the following paper for an example where the
>>> last bit in the significand actually carries useful information for
>>> many of the values used in the program.
>>>
>>> http://www.jaist.ac.jp/~s1410018/papers/qd.pdf
>>
>> Did you link to the wrong paper? ;)
>
> No. That paper uses multiple doubles per approximated real value to
> implement arithmetic that is more precise than using just plain
> doubles. If any bit in the first double is off, this is no better than
> using a single double.
>
>> I skimmed it and that paper explicitly talks about error bounds all
>> over the place.
>
> It is enough to read the abstract to figure out what the problem is.
>
> This demonstrates a non-contrived case where CTFE using enhanced
> precision throughout can break your program. Compute something as a
> double-double at compile-time, and when it is transferred to runtime
> you lose all the nice extra precision, because bits in the middle of
> the (conceptual) mantissa are lost.

That is a very specific case where they're implementing higher-precision algorithms using lower-precision registers. If you're going to all that trouble, you should know not to blindly run the same code at compile-time.

>> The only mention of "the last bit" is
>
> This part is actually funny. Thanks for the laugh. :-) I was going to
> say that your text search was too naive, but then I double-checked your
> claim and there are actually two mentions of "the last bit", and close
> by to the other mention, the paper says that "the first double a_0 is a
> double-precision approximation to the number a, accurate to almost half
> an ulp."

Is there a point to this paragraph?

>> when they say they calculated their constants in arbitrary precision
>> before rounding them for runtime use, which is ironically similar to
>> what Walter suggested doing for D's CTFE also.
>> ...
>
> Nothing "ironic" about that. It is sometimes a good idea, and I can do
> this explicitly and make sure the rounding is done correctly, just like
> they did. Also, it is a lot more flexible if I can specify the exact
> way the computation is done and the result is rounded. 80 bits might
> not be enough anyway. There is no reason for the language to apply
> potentially incorrect "hand holding" here.
>
> Again, please understand that my point is not that lower precision is
> better. My point is that doing the same thing in every context and
> allowing the programmer to specify what happens is better.

I understand your point that sometimes the programmer wants more control. But as long as the way CTFE extends precision is consistently applied and clearly communicated, those people can always opt out and do it some other way.

>>>> In this case, not increasing precision gets the more accurate result,
>>>> but other examples could be constructed that _heavily_ favor increasing
>>>> precision.
>>>
>>> Sure. In such cases, you should use higher precision. What is the
>>> problem? This is already supported (the compiler is not allowed to use
>>> lower precision than requested).
>>
>> I'm not the one with the problem, you're the one complaining.
>> ...
>
> So you see no problem with my requested semantics for the built-in
> floating point types?

I think it's an extreme minority use case that may not merit the work, though I don't know how much work it would require. However, I also feel that way about Walter's suggested move to 128-bit CTFE, as dmd is x86-only anyway.
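[For readers unfamiliar with the double-double technique being argued about: a value is stored as an unevaluated sum of two doubles, where the low part holds exactly the rounding error of the high part. The sketch below is a simplified illustration in D, not the paper's actual code (an assumption for exposition; the real routines add renormalization and proven error bounds), and it only works if every intermediate really is rounded to double.]

import std.stdio;

struct DoubleDouble { double hi, lo; }

// Knuth's two-sum: a + b represented exactly as hi + lo, provided each
// intermediate below is rounded to double. Compute the intermediates at 80
// bits instead and e comes out as 0, so the bits that should survive in lo
// are simply lost when s is later rounded down to double.
DoubleDouble twoSum(double a, double b)
{
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);   // the rounding error of a + b
    return DoubleDouble(s, e);
}

void main()
{
    auto r = twoSum(1e16, 1.0);
    // Under strict double evaluation: hi is 1e16 and lo is 1.0 -- the 1.0
    // that hi cannot represent lives entirely in the low part.
    writefln("hi = %f  lo = %f", r.hi, r.lo);
}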
May 20, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Friday, 20 May 2016 at 06:12:44 UTC, Walter Bright wrote:
> On 5/19/2016 1:26 PM, Timon Gehr wrote:
>> Those two lines producing different results is unexpected, because you are
>> explicitly saying that y is a double, and the first line also does implicit
>> rounding (probably -- on all compilers and targets that will be relevant in the
>> near future -- to double).
>
> [...]
>> It's obviously bad language design on multiple levels.
>
> Floating point semantics simply are not that simple, on any compiler, language or hardware. I recommend reading what the C/C++ Standards say about it, and look at the plethora of compiler switches for VC++/g++/clang++.
>
> The people who design these things are not fools, and there are good reasons for the way things are.
>
> Sorry, I don't agree.
Let me cite Prof. John L. Gustafson from his book "The End of Error" (it's worth reading):
---
The lesson taught by all these attempts at clever hidden scratchpad work is that computer users want *consistent* answers, and want to be *permitted the trade-off between speed and accuracy* depending on the situation. There is a subtle arrogance in any computer system that takes the position "You asked for a quick, cheap, approximate calculation, but I know what is good for you and will instead do a better job even if it takes more time and storage and energy and gives you a different answer. You're welcome."
---
May 20, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr

On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
>
> dmd -run kahanDemo.d
> 1000000000000000.000000
> 1000000100000000.000000
> 1000000000000000.000000
>
> dmd -m32 -O -run kahanDemo.d
> 1000000000000000.000000
> 1000000000000000.000000
> 1000000000000000.000000
>
> Better?
Ignore if you think it's not relevant.
I am a bit lost in this thread. I thought this was about floating-point CTFE, but I think what you are seeing here (in the second case) is the DMD optimizer allowing re-association of fp expressions? LDC does not allow that (by default) and produces the 1000000100000000.000000 result even with optimization on.
-Johan
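[A minimal illustration of the re-association Johan mentions, not taken from the thread: the compensation term in Kahan summation is algebraically zero, so an optimizer that is allowed to rewrite floating-point expressions as if they were real-number arithmetic can fold it away entirely.]

import std.stdio;

void main()
{
    double s = 1e16;
    double y = 1.0;
    double t = s + y;        // rounds to 1e16: the 1.0 is dropped here...
    double c = (t - s) - y;  // ...and recovered here: c == -1.0 under strict IEEE
                             // evaluation, but exactly 0 if the compiler simplifies
                             // (t - s) - y using t = s + y as if these were reals
    writefln("t = %f  c = %f", t, c);
}

[Whether DMD's -m32 -O result comes from this kind of rewrite or from 80-bit x87 intermediates is exactly the question raised above.]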
May 20, 2016 Re: Always false float comparisons
Posted in reply to Tobias Müller

On Friday, 20 May 2016 at 12:32:40 UTC, Tobias Müller wrote:
> Let me cite Prof. John L Gustafson
Not "Prof." but "Dr.", sorry about that. Still an authority, though.
May 20, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr

On Friday, 20 May 2016 at 11:22:48 UTC, Timon Gehr wrote:
> On 20.05.2016 08:12, Walter Bright wrote:
>> I'm curious if you know of any language that meets your requirements.
>> (Java 1.0 did, but Sun was forced to abandon that.)
>
> x86_64 assembly language.

Similar discussion for Rust: https://internals.rust-lang.org/t/pre-rfc-dealing-with-broken-floating-point/2673/1
May 20, 2016 Re: Always false float comparisons
Posted in reply to Tobias M

On 5/20/2016 5:36 AM, Tobias M wrote:
> Still an authority, though.
If we're going to use the fallacy of appeal to authority, may I present Kahan who concurrently designed the IEEE 754 spec and the x87.
May 21, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Friday, 20 May 2016 at 06:12:44 UTC, Walter Bright wrote:
>> If you say so. I would like to see an example that demonstrates that
>> the first roundToDouble is required.
>
> That's beside the point. If there are spots in the program that require
> rounding, what is wrong with having to specify it?

Because the programmer has already specified it with a type that requires rounding!

> Why compromise speed & accuracy in the rest of the code that does not
> require it? (And yes, rounding in certain spots can and does compromise
> both speed and accuracy.)

Accuracy and speed won't be compromised if the programmer chooses an appropriate type - 'real' for the highest precision supported by the implementation.

BTW, you need to remove 'hardware' from the definition of 'real' in the spec.
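[A short sketch of the "choose an appropriate type" point, using the data series from earlier in the thread. Assumption: an x86 target, where real is the 80-bit extended type; on other targets real may be no wider than double.]

import std.array, std.range, std.stdio;

double sumDouble(double[] data)  // accumulator at the precision of the inputs
{
    double s = 0;
    foreach (x; data) s += x;
    return s;
}

double sumReal(double[] data)    // extra precision requested explicitly in the type
{
    real s = 0;
    foreach (x; data) s += x;
    return cast(double) s;       // one explicit, visible rounding at the end
}

void main()
{
    double[] data = 1e16 ~ repeat(1.0, 100_000_000).array ~ (-9e15);
    writefln("%f", sumDouble(data)); // the 1.0 terms are lost in a double accumulator
    writefln("%f", sumReal(data));   // the wider significand of real keeps them
}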
May 21, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:
> On 5/20/2016 5:36 AM, Tobias M wrote:
>> Still an authority, though.
>
> If we're going to use the fallacy of appeal to authority, may I present Kahan who concurrently designed the IEEE 754 spec and the x87.
I actually cited this *because* you mentioned Kahan several times, and because you said "The people who design these things are not fools, and there are good reasons for the way things are."
Obviously, the designer of x87 and IEEE 754 (which both explicitly allow such "magic" behind the back of the programmer) thinks this is a good thing.
But after reading the book, I tend to agree with Gustafson. The book elaborates on this in more detail; I just cited the conclusion.