May 19, 2016 Re: Always false float comparisons
Posted in reply to Joakim | On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
> The example he refers to is laughable because it also checks for equality.
With good reason, because it's intended to illustrate the point that two calculations that _look_ identical in code, that intuitively should produce identical results, actually don't, because of under-the-hood precision differences.
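A minimal sketch of the kind of comparison behind the thread title (not the exact code under discussion, just an illustration of how a round trip through a different precision makes a comparison that looks tautological come out false):

import std.stdio;

void main()
{
    float f = 1.30;                 // 1.30 is a double literal, rounded here to float
    writeln(f == 1.30);             // false: f is promoted back to double for the
                                    // comparison, but the rounding already changed it
    writeln(f == cast(float) 1.30); // true: both sides were rounded the same way
}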

May 19, 2016 Re: Always false float comparisons
Posted in reply to Joseph Rushton Wakeling | On Thursday, 19 May 2016 at 12:00:33 UTC, Joseph Rushton Wakeling wrote:
> On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
>> The example he refers to is laughable because it also checks for equality.
>
> With good reason, because it's intended to illustrate the point that two calculations that _look_ identical in code, that intuitively should produce identical results, actually don't, because of under-the-hood precision differences.
And that's my point too: floating point will always subvert your expectations, particularly such simplistic notions of equality, which is why you should always use approxEqual or its analogues.
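For reference, a small sketch of that recommendation using std.math.approxEqual with its default tolerances (the appropriate tolerances depend on the computation at hand):

import std.math : approxEqual;
import std.stdio;

void main()
{
    double a = 0.1 + 0.2;
    double b = 0.3;
    writeln(a == b);            // false: the two values differ in the last bits
    writeln(approxEqual(a, b)); // true: compares within a relative/absolute tolerance
}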

May 19, 2016 Re: Always false float comparisons
Posted in reply to Joakim | On Thursday, 19 May 2016 at 11:33:38 UTC, Joakim wrote:
> Computer scientists are no good if they don't know any science.
Even computer scientists who do not know any science are infinitely better than those who refuse to read papers and debate on a rational level.
Blind D zealotry at its finest.

May 19, 2016 Re: Always false float comparisons
Posted in reply to Joakim | On 19.05.2016 08:04, Joakim wrote:
> On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
>> It's not just slightly worse, it can cut the number of useful bits in
>> half or more! It is not unusual, I have actually run into those
>> problems in the past, and it can break an algorithm that is in Phobos
>> today!
>
> I wouldn't call that broken. Looking at the hex output by replacing %f
> with %A in writefln, it appears the only differences in all those
> results is the last byte in the significand.

Argh...

// ...

void main(){
    //double[] data=[1e16,1,-9e15];
    import std.range;
    double[] data=1e16~repeat(1.0,100000000).array~(-9e15);
    import std.stdio;
    writefln("%f",sum(data));         // baseline
    writefln("%f",kahan(data));       // kahan
    writefln("%f",kahanBroken(data)); // broken kahan
}

dmd -run kahanDemo.d
1000000000000000.000000
1000000100000000.000000
1000000000000000.000000

dmd -m32 -O -run kahanDemo.d
1000000000000000.000000
1000000000000000.000000
1000000000000000.000000

Better?

Obviously there is more structure in the data that I invent manually than in a real test case where it would go wrong. The problems carry over though.

> As Don's talk pointed out,
> all floating-point calculations will see loss of precision starting there.
> ...

This is implicitly assuming a development model where the programmer first writes down the computation as it would be correct in the real number system and then naively replaces every operation by the rounding equivalent and hopes for the best.

It is a useful rule if that is what you're doing. One might be doing something else. Consider the following paper for an example where the last bit in the significand actually carries useful information for many of the values used in the program.

http://www.jaist.ac.jp/~s1410018/papers/qd.pdf

> In this case, not increasing precision gets the more accurate result,
> but other examples could be constructed that _heavily_ favor increasing
> precision.

Sure. In such cases, you should use higher precision. What is the problem? This is already supported (the compiler is not allowed to use lower precision than requested).

> In fact, almost any real-world, non-toy calculation would
> favor it.
>
> In any case, nobody should depend on the precision out that far being
> accurate or "reliable."

IEEE floating point has well-defined behaviour and there is absolutely nothing wrong with code that delivers more accurate results just because it is actually aware of the actual semantics of the operations being carried out.
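For readers trying to run the demo above: the two summation routines it calls are not shown in this post. kahan is quoted in full later in the thread; the sketch below restates it, together with an assumed definition of kahanBroken in which the temporaries are held at real (80-bit) precision, which defeats the compensation step. The baseline sum is std.algorithm.iteration.sum.

import std.algorithm.iteration : sum;   // the "baseline" summation used above

double kahan(double[] arr){
    double sum = 0.0;
    double c = 0.0;
    foreach(x; arr){
        double y = x - c;    // re-inject the error lost on the previous iteration
        double t = sum + y;  // the low bits of y can be lost here...
        c = (t - sum) - y;   // ...and are recovered here, provided t really is a double
        sum = t;
    }
    return sum;
}

// Assumed definition of kahanBroken: the same algorithm, but with the temporaries
// held at real (80-bit) precision, so (t - sum) - y evaluates to 0 and the
// compensation never happens.
double kahanBroken(double[] arr){
    double sum = 0.0;
    double c = 0.0;
    foreach(x; arr){
        real y = x - c;
        real t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
    return sum;
}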

May 19, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr | On 18.05.2016 19:10, Timon Gehr wrote:
> implementation-defined behaviour
Maybe that wasn't the right term (it's worse than that; I think the documentation of the implementation is not even required to tell you precisely what it does).

May 19, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright | On 19.05.2016 09:09, Walter Bright wrote:
> On 5/18/2016 10:10 AM, Timon Gehr wrote:
>> double kahan(double[] arr){
>>     double sum = 0.0;
>>     double c = 0.0;
>>     foreach(x;arr){
>>         double y=x-c;
> double y = roundToDouble(x - c);

Those two lines producing different results is unexpected, because you are explicitly saying that y is a double, and the first line also does implicit rounding (probably -- on all compilers and targets that will be relevant in the near future -- to double). It's obviously bad language design on multiple levels.

>>         double t=sum+y;
> double t = roundToDouble(sum + y);
>>         c = (t-sum)-y;
>>         sum=t;
>>     }
>>     return sum;
>> }
>
> Those are the only two points in the program that depend on rounding.

If you say so. I would like to see an example that demonstrates that the first roundToDouble is required.

Also, in case the compiler chooses to internally use higher precision for all local variables in that program, the 'roundToDouble's you have inserted will reduce precision in comparison to the case where they are not inserted, reducing the magical precision enhancement that would otherwise happen. (I.e., assuming that some of the ideas are valid and it is in fact desirable to have variable precision enhancement depending on target: if I find a precision bug I can fix by adding roundToDouble, this introduces a potential regression on other targets. And as none of it was guaranteed to work in the first place, the resulting mess is all my fault.)

> If you're implementing Kahan,

And if you are not? (I find the standard assumption that counterexamples to wrong statements are one-off special cases tiresome. This is not usually the case, even if you cannot construct other examples right away.)

> you'd know that,

I would, but what often happens in practice is that people don't. For example, because they wouldn't expect roundToDouble to be required anywhere in code that only uses doubles. They are right.

> so there shouldn't be anything difficult about it.
> ...

https://issues.dlang.org/show_bug.cgi?id=13474

Creating useful programs in D shouldn't require knowing (or thinking about) an excess amount of D-specific implicit arcane details of the language semantics. Being easy to pick up and use is a major selling point for D.

I'm not convinced there is nothing difficult about it. It's certainly a generator of incidental complexity. Also, error prone is not the same as difficult. Almost everybody makes some mistakes occasionally.

> There's no reason to make the rest of the program suffer inaccuracies.

Indeed there isn't. I'm not suggesting to allow using a precision lower than the one specified.

If you are talking about programs that "benefit" from automatic extension of precision: do you think their authors are aware that, e.g., their 32-bit release builds with gdc/ldc are producing wrong results?
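To make the underlying mechanism concrete, here is a single Kahan step worked through with double temporaries and with real temporaries; it shows why the rounding of sum + y matters for the compensation. The starting values are chosen purely for illustration, and what this actually prints can itself vary with compiler and flags, which is rather the point of the thread:

import std.stdio;

void main()
{
    // Illustrative starting values for one loop iteration of Kahan summation.
    double sum = 1e16, c = 0.0, x = 1.0;

    // Temporaries kept at double, as written in the source:
    double y = x - c;                  // 1.0
    double t = sum + y;                // rounds to 1e16 (the spacing of doubles there is 2.0)
    writefln("double-only compensation: %f", (t - sum) - y);      // -1.0: the lost 1.0 is remembered

    // The same step with the temporaries widened to real:
    real yr = x - c;
    real tr = sum + yr;                // 10000000000000001, held exactly at 80-bit precision
    writefln("real-temporary compensation: %f", (tr - sum) - yr); // 0.0: nothing left to remember
    writefln("sum stored back to double:   %f", cast(double) tr); // 1e16: the 1.0 is dropped after all
}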

May 19, 2016 Re: Always false float comparisons
Posted in reply to Ethan Watson | On 5/18/2016 4:30 AM, Ethan Watson wrote:
> I appreciate that it sounds like I'm starting to stretch to hold to my point,
> but I imagine we'd also be able to control such things with the compiler - or at
> least know what flags it uses so that we can ensure consistent behaviour between
> compilation and runtime.
All compilers I know of use "round to nearest" for constant folding, and that is not adjustable.

May 19, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr | On 5/19/2016 1:26 PM, Timon Gehr wrote:
> Those two lines producing different results is unexpected, because you are
> explicitly saying that y is a double, and the first line also does implicit
> rounding (probably -- on all compilers and targets that will be relevant in the
> near future -- to double).
> [...]
> It's obviously bad language design on multiple levels.

Floating point semantics simply are not that simple, on any compiler, language or hardware. I recommend reading what the C/C++ Standards say about it, and look at the plethora of compiler switches for VC++/g++/clang++.

The people who design these things are not fools, and there are good reasons for the way things are.

Sorry, I don't agree.

> If you say so. I would like to see an example that demonstrates that the first
> roundToDouble is required.

That's beside the point. If there are spots in the program that require rounding, what is wrong with having to specify it? Why compromise speed & accuracy in the rest of the code that does not require it? (And yes, rounding in certain spots can and does compromise both speed and accuracy.)

> And if you are not? (I find the standard assumption that counterexamples to
> wrong statements are one-off special cases tiresome.

You might, but I've implemented a number of FP algorithms, and none of them required rounding to a lower precision. I've also seen many that failed due to premature rounding.

> Creating useful programs in D shouldn't require knowing (or thinking about) an
> excess amount of D-specific implicit arcane details of the language semantics.
> Being easy to pick up and use is a major selling point for D.

As pointed out innumerable times here, the same issues are in C and C++ (with even more variability!). When designing an algorithm that you know requires rounding in a certain place, you are already significantly beyond implementing a naive algorithm, and it would be time to start paying attention to how FP actually works.

I'm curious if you know of any language that meets your requirements. (Java 1.0 did, but Sun was forced to abandon that.)
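As an illustration of the "failed due to premature rounding" point, a textbook-style sketch (not an example from this thread): accumulating at a lower precision after every step versus accumulating at a higher precision and rounding once at the end. Note that an optimizer that keeps the float accumulator in a wider register can mask the drift, which is the very behaviour under discussion.

import std.stdio;

void main()
{
    // Add 0.1 a million times: once with a float accumulator, rounded to float
    // after every single addition, and once accumulated in double with a single
    // rounding to float at the very end.
    float  everyStep = 0.0f;
    double onceAtEnd = 0.0;
    foreach (i; 0 .. 1_000_000)
    {
        everyStep += 0.1f;
        onceAtEnd += 0.1f;
    }
    writefln("rounded at every step:   %f", everyStep);             // typically drifts well away from 100000
    writefln("rounded once at the end: %f", cast(float) onceAtEnd); // 100000.000000 to float accuracy
}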

May 19, 2016 Re: Always false float comparisons
Posted in reply to Max Samukha | On 5/19/2016 12:49 AM, Max Samukha wrote:
> People are trying to get across that, if they wanted to maximize accuracy, they
> would request the most precise type explicitly. D has 'real' for that. This
> thread has shown unequivocally that the semantics you are insisting on is bound
> to cause endless confusion and complaints.
C++ programs do the same thing, and have for decades, and such behavior is allowed by the C++ Standard.
People have been confused by and complained about floating point behavior at least since I started programming 40 years ago.
I have not invented anything new here.

May 20, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr | On Thursday, 19 May 2016 at 18:22:48 UTC, Timon Gehr wrote:
> On 19.05.2016 08:04, Joakim wrote:
>> On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
>>> It's not just slightly worse, it can cut the number of useful bits in
>>> half or more! It is not unusual, I have actually run into those
>>> problems in the past, and it can break an algorithm that is in Phobos
>>> today!
>>
>> I wouldn't call that broken. Looking at the hex output by replacing %f
>> with %A in writefln, it appears the only differences in all those
>> results is the last byte in the significand.
>
> Argh...
>
> // ...
>
> void main(){
>     //double[] data=[1e16,1,-9e15];
>     import std.range;
>     double[] data=1e16~repeat(1.0,100000000).array~(-9e15);
>     import std.stdio;
>     writefln("%f",sum(data));         // baseline
>     writefln("%f",kahan(data));       // kahan
>     writefln("%f",kahanBroken(data)); // broken kahan
> }
>
> dmd -run kahanDemo.d
> 1000000000000000.000000
> 1000000100000000.000000
> 1000000000000000.000000
>
> dmd -m32 -O -run kahanDemo.d
> 1000000000000000.000000
> 1000000000000000.000000
> 1000000000000000.000000
>
> Better?
>
> Obviously there is more structure in the data that I invent manually than in a real test case where it would go wrong. The problems carry over though.

I looked over your code a bit. If I define sum and c as reals in "kahanBroken" at runtime, this problem goes away. Since that's what the CTFE rule is actually doing, i.e. extending all floating-point to reals at compile time, I don't see what you're complaining about. Try it: run even your original naive summation algorithm through CTFE and it will produce the result you want:

enum double[] ctData=[1e16,1,-9e15];
enum ctSum = sum(ctData);
writefln("%f", ctSum);

>> As Don's talk pointed out,
>> all floating-point calculations will see loss of precision starting there.
>> ...
>
> This is implicitly assuming a development model where the programmer first writes down the computation as it would be correct in the real number system and then naively replaces every operation by the rounding equivalent and hopes for the best.

No, it is intrinsic to any floating-point calculation.

> It is a useful rule if that is what you're doing. One might be doing something else. Consider the following paper for an example where the last bit in the significand actually carries useful information for many of the values used in the program.
>
> http://www.jaist.ac.jp/~s1410018/papers/qd.pdf

Did you link to the wrong paper? ;) I skimmed it and that paper explicitly talks about error bounds all over the place. The only mention of "the last bit" is when they say they calculated their constants in arbitrary precision before rounding them for runtime use, which is ironically similar to what Walter suggested doing for D's CTFE also.

>> In this case, not increasing precision gets the more accurate result,
>> but other examples could be constructed that _heavily_ favor increasing
>> precision.
>
> Sure. In such cases, you should use higher precision. What is the problem? This is already supported (the compiler is not allowed to use lower precision than requested).

I'm not the one with the problem, you're the one complaining.

>> In fact, almost any real-world, non-toy calculation would
>> favor it.
>>
>> In any case, nobody should depend on the precision out that far being
>> accurate or "reliable."
>
> IEEE floating point has well-defined behaviour and there is absolutely nothing wrong with code that delivers more accurate results just because it is actually aware of the actual semantics of the operations being carried out.

You just made the case for Walter doing what he did. :)
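A runnable form of the CTFE experiment suggested above, as a sketch. Whether the two lines actually differ depends on the compiler's CTFE evaluating double arithmetic at a higher precision, which is the claim being made here, and the runtime line can itself change with -m32/-O, which is what the whole thread is about.

import std.stdio;

// Naive left-to-right summation, usable both at run time and in CTFE.
double naiveSum(const double[] arr)
{
    double s = 0.0;
    foreach (x; arr)
        s += x;
    return s;
}

void main()
{
    enum double[] ctData = [1e16, 1, -9e15];
    enum ctSum = naiveSum(ctData);             // the enum forces evaluation through CTFE
    writefln("CTFE:    %f", ctSum);            // 1000000000000001.000000 if CTFE widens to real, as claimed above
    writefln("runtime: %f", naiveSum(ctData)); // 1000000000000000.000000 with strict double arithmetic
}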