May 18, 2016 Re: Always false float comparisons

Posted in reply to Walter Bright

On Wed, May 18, 2016 at 04:09:28PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
> Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by definition, not representable as a ratio.

This is somewhat tangential, but in certain applications it is perfectly possible to represent certain irrationals exactly. For example, if you're working with quadratic fractions of the form a + b*sqrt(r), where r is fixed, you can represent all such numbers exactly as a pair of rational coefficients. These numbers are closed under addition, subtraction, multiplication, and division, so you have a nice closed system that can handle numbers containing square roots.

It is possible to have multiple square roots (e.g., a + b*sqrt(r) + c*sqrt(s) + d*sqrt(r*s)), but the tuple grows exponentially with each distinct root, so it quickly becomes unwieldy (not to mention slow).

Similar techniques can be used to represent roots of cubic equations exactly (including cube roots) -- a single cube root can be represented as a 3-tuple -- or higher. Again, these are closed under +, -, *, /, so you can do quite a lot of interesting things with them without sacrificing exact arithmetic. Though again, things quickly become unwieldy. Higher-order roots are also possible, though of questionable value, since the storage and performance costs quickly become too high.

Transcendentals like pi or e are out of reach by this method, though. You'd need a variable-length vector to represent numbers that contain pi or e as a factor, because their powers never become integral, so you would potentially need to represent an arbitrary power of pi in order to obtain closure under + and *.

Of course, with increasing sophistication the computation and storage costs increase accordingly, but the point is that if your application is known to need only a certain subset of the irrationals, it may well be possible to represent them exactly at reasonable cost. (There's also RealLib, which can represent *all* computable reals exactly, but it may not be feasible depending on what your application does.)

T

--
Let's eat some disquits while we format the biskettes.
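To make the quadratic case concrete, here is a minimal sketch in D, assuming integer (rather than rational) coefficients and the fixed root sqrt(2); a full implementation would also carry denominators and provide subtraction and division:

    import std.stdio;

    // A value of the form a + b*sqrt(2) with integer coefficients.
    // Sums and products of such values stay in this form, so the
    // arithmetic below is exact -- no rounding ever happens.
    struct QuadInt
    {
        long a; // rational part
        long b; // coefficient of sqrt(2)

        QuadInt opBinary(string op : "+")(QuadInt rhs) const
        {
            return QuadInt(a + rhs.a, b + rhs.b);
        }

        QuadInt opBinary(string op : "*")(QuadInt rhs) const
        {
            // (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
            return QuadInt(a * rhs.a + 2 * b * rhs.b, a * rhs.b + b * rhs.a);
        }
    }

    void main()
    {
        auto x = QuadInt(1, 1); // 1 + sqrt(2)
        auto y = x * x;         // 3 + 2*sqrt(2), exactly
        writeln(y.a, " + ", y.b, "*sqrt(2)");
    }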

May 18, 2016 Re: Always false float comparisons

Posted in reply to Walter Bright

On Wed, May 18, 2016 at 04:28:13PM -0700, Walter Bright via Digitalmars-d wrote:
> On 5/18/2016 4:17 PM, Joseph Rushton Wakeling wrote:
> > On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
> > > Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by definition, not representable as a ratio.
> >
> > Continued fraction? :-)
>
> Somehow I don't think gcc is using Mathematica for constant folding.

http://apt.cs.manchester.ac.uk/ftp/pub/apt/papers/drl_ieee01.pdf

Side-note: continued fractions of quadratic irrationals (a + b*sqrt(c)) are periodic, so it's possible to do exact arithmetic with them using only finite storage.

T

--
All problems are easy in retrospect.
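The periodicity is easy to see by just generating the terms; here is a small D sketch using the standard (m, d, a) recurrence for the continued fraction of sqrt(s), assuming s is a small positive non-square:

    import std.math : sqrt;
    import std.stdio;

    // Continued-fraction terms of sqrt(s). The terms repeat with a fixed
    // period, which is why quadratic irrationals need only finite storage.
    long[] sqrtContinuedFraction(long s, size_t terms)
    {
        long a0 = cast(long) sqrt(cast(double) s); // floor(sqrt(s)), fine for small s
        long m = 0, d = 1, a = a0;
        long[] result = [a0];
        foreach (_; 1 .. terms)
        {
            m = d * a - m;
            d = (s - m * m) / d;
            a = (a0 + m) / d;
            result ~= a;
        }
        return result;
    }

    void main()
    {
        writeln(sqrtContinuedFraction(2, 8));  // [1, 2, 2, 2, 2, 2, 2, 2]
        writeln(sqrtContinuedFraction(7, 10)); // [2, 1, 1, 1, 4, 1, 1, 1, 4, 1]
    }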

May 19, 2016 Re: Always false float comparisons

Posted in reply to Timon Gehr

On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
> It's not just slightly worse; it can cut the number of useful bits in half or more! It is not unusual; I have actually run into those problems in the past, and it can break an algorithm that is in Phobos today!
I wouldn't call that broken. Looking at the hex output by replacing %f with %A in writefln, it appears the only difference in all those results is in the last byte of the significand. As Don's talk pointed out, all floating-point calculations will see loss of precision starting there.
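(A trivial way to see this for yourself, using the stock 0.1 + 0.2 example rather than the Phobos code in question:)

    import std.stdio;

    void main()
    {
        double d = 0.1 + 0.2;
        writefln("%f", d); // 0.300000 -- looks clean at this precision
        writefln("%A", d); // something like 0X1.3333333333334P-2 -- the final
                           // hex digit is one ulp above the nearest double to 0.3
    }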
In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.
In any case, nobody should depend on the precision out that far being accurate or "reliable."

May 19, 2016 Re: Always false float comparisons

Posted in reply to Timon Gehr

On 5/18/2016 10:10 AM, Timon Gehr wrote:
> double kahan(double[] arr){
> double sum = 0.0;
> double c = 0.0;
> foreach(x;arr){
> double y=x-c;
double y = roundToDouble(x - c);
> double t=sum+y;
double t = roundToDouble(sum + y);
> c = (t-sum)-y;
> sum=t;
> }
> return sum;
> }

Those are the only two points in the program that depend on rounding. If you're implementing Kahan, you'd know that, so there shouldn't be anything difficult about it.

There's no reason to make the rest of the program suffer inaccuracies.
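For anyone who wants to experiment with the above, a self-contained version follows; roundToDouble is hypothetical here (a plain stub standing in for the proposed intrinsic), so it only marks where the forced 64-bit rounding is meant to happen:

    import std.stdio;

    // Placeholder for the proposed roundToDouble intrinsic. The intent is
    // "round this intermediate result to IEEE double right now"; a plain
    // function cannot guarantee that if the compiler keeps intermediates
    // in extended precision.
    double roundToDouble(double x) { return x; }

    double kahan(double[] arr)
    {
        double sum = 0.0;
        double c = 0.0; // compensation for low-order bits lost from sum
        foreach (x; arr)
        {
            double y = roundToDouble(x - c);
            double t = roundToDouble(sum + y);
            c = (t - sum) - y; // recovers what was lost when y was added to sum
            sum = t;
        }
        return sum;
    }

    void main()
    {
        auto xs = new double[](10_000_000);
        xs[] = 0.1;

        double naive = 0.0;
        foreach (x; xs)
            naive += x;

        // The compensated sum stays far closer to the expected 1000000
        // than the naive left-to-right sum does.
        writefln("naive: %.10f", naive);
        writefln("kahan: %.10f", kahan(xs));
    }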

May 19, 2016 Re: Always false float comparisons

Posted in reply to Walter Bright | On Thursday, 19 May 2016 at 07:09:30 UTC, Walter Bright wrote:
> On 5/18/2016 10:10 AM, Timon Gehr wrote:
>> double kahan(double[] arr){
>> double sum = 0.0;
>> double c = 0.0;
>> foreach(x;arr){
>> double y=x-c;
> double y = roundToDouble(x - c);
>> double t=sum+y;
> double t = roundToDouble(sum + y);
>> c = (t-sum)-y;
>> sum=t;
>> }
>> return sum;
>> }
>
> Those are the only two points in the program that depend on rounding. If you're implementing Kahan, you'd know that, so there shouldn't be anything difficult about it.
>
> There's no reason to make the rest of the program suffer inaccuracies.
People are trying to get across that, if they wanted to maximize accuracy, they would request the most precise type explicitly. D has 'real' for that. This thread has shown unequivocally that the semantics you are insisting on is bound to cause endless confusion and complaints.

May 19, 2016 Re: Always false float comparisons

Posted in reply to jmh530

On Wednesday, 18 May 2016 at 22:16:44 UTC, jmh530 wrote:
> On Wednesday, 18 May 2016 at 21:49:34 UTC, Joseph Rushton Wakeling wrote:
>> On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
>>> I do not understand the tolerance for bad results in scientific, engineering, medical, or finance applications.
>>
>> I don't think anyone has suggested tolerance for bad results in any of those applications.
>>
> I don't think it's about tolerance for bad results, so much as the ability to make the trade-off between speed and precision when you need to.

It isn't only about speed. It is about correctness as well. Compilers should not change the outcome (including precision and rounding) unless the programmer has explicitly requested it, and when they do, the effects should be well documented so that they are clear to the programmer. This is a well-known and universally accepted principle in compiler design.

Take, for instance, the documentation page for a professional-level compiler targeting embedded programming:

http://processors.wiki.ti.com/index.php/Floating_Point_Optimization

It gives the programmer explicit control over what kinds of deviations the compiler may introduce. If you look at Intel's compiler, you'll see that they even turn off fused multiply-add in strict mode because it skips a single rounding between the multiply and the add.

https://software.intel.com/sites/default/files/article/326703/fp-control-2012-08.pdf

Some languages allow constant folding of literals in _expressions_ to use infinite precision, but once you have bound it to a variable it should use the variable's precision, and you always have to use the same rounding mode. This means that you should check the rounding mode before using precomputed values. If you cannot emulate the computation, then you can simply run the computations prior to entering main and put the "constant computations" in global variables.

That said, compilers may have some problematic settings enabled by default in order to look good in benchmarks/test suites. In order to be IEEE compliant you cannot even optimize "0.0 - x" to "-x"; you also cannot optimize "x - 0.0" to "x". Such issues are why compiler vendors provide so many different floating-point options as command-line flags.
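A minimal demonstration of why the "0.0 - x" rewrite is not value-preserving (it can flip the sign of zero):

    import std.math : signbit;
    import std.stdio;

    void main()
    {
        double x = 0.0;
        // 0.0 - x is +0.0, but -x is -0.0. The two compare equal with ==,
        // yet they are distinguishable (1.0 / +0.0 is +inf, 1.0 / -0.0 is
        // -inf), so an IEEE-strict compiler may not substitute one for the
        // other.
        writeln(signbit(0.0 - x)); // 0: the result is +0.0
        writeln(signbit(-x));      // non-zero: the result is -0.0
    }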

May 19, 2016 Re: Always false float comparisons

Posted in reply to Joakim

On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
> In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.
Please stop saying this. It is very wrong.
Algorithms that need higher accuracy need error correction mechanisms, not unpredictable precision and rounding. Unpredictable precision and rounding makes adding error correction difficult so it does not improve accuracy, it harms accuracy when you need it.

May 19, 2016 Re: Always false float comparisons

Posted in reply to Ola Fosheim Grøstad

On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
>> In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.
>
> Please stop saying this. It is very wrong.

I will keep saying it because it is _not_ wrong.

> Algorithms that need higher accuracy need error correction mechanisms, not unpredictable precision and rounding. Unpredictable precision and rounding makes adding error correction difficult so it does not improve accuracy, it harms accuracy when you need it.

And that is what _you_ need to stop saying: there's _nothing unpredictable_ about what D does. You may find it unintuitive, but that's your problem.

The notion that "error correction" can fix the inevitable degradation of accuracy with each floating-point calculation is just laughable.

May 19, 2016 Re: Always false float comparisons

Posted in reply to Joakim

On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
> On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad wrote:
>> On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
>>> In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.
>>
>> Please stop saying this. It is very wrong.
>
> I will keep saying it because it is _not_ wrong.

Can you then please read this paper in its entirety before continuing to say it. Because changing precision breaks properties of the semantics of IEEE floating point.

What Every Computer Scientist Should Know About Floating-Point Arithmetic

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#3377

«Conventional wisdom maintains that extended-based systems must produce results that are at least as accurate, if not more accurate than those delivered on single/double systems, since the former always provide at least as much precision and often more than the latter. Trivial examples such as the C program above as well as more subtle programs based on the examples discussed below show that this wisdom is naive at best: some apparently portable programs, which are indeed portable across single/double systems, deliver incorrect results on extended-based systems precisely because the compiler and hardware conspire to occasionally provide more precision than the program expects.»

> And that is what _you_ need to stop saying: there's _nothing unpredictable_ about what D does. You may find it unintuitive, but that's your problem.

No. It is not my problem, as I would never use a system with this kind of semantics for anything numerical. It is a problem for D. Not for me.

> The notion that "error correction" can fix the inevitable degradation of accuracy with each floating-point calculation is just laughable.

Well, it is not laughable to computer scientists that accuracy depends on knowledge about precision and rounding... And I am a computer scientist, in case you have forgotten...

May 19, 2016 Re: Always false float comparisons

Posted in reply to Ola Fosheim Grøstad

On Thursday, 19 May 2016 at 11:00:31 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
>> On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad wrote:
>>> On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
>>>> In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.
>>>
>>> Please stop saying this. It is very wrong.
>>
>> I will keep saying it because it is _not_ wrong.
>
> Can you then please read this paper in its entirety before continuing to say it. Because changing precision breaks properties of the semantics of IEEE floating point.
>
> What Every Computer Scientist Should Know About Floating-Point Arithmetic
>
> https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#3377
>
> «Conventional wisdom maintains that extended-based systems must produce results that are at least as accurate, if not more accurate than those delivered on single/double systems, since the former always provide at least as much precision and often more than the latter. Trivial examples such as the C program above as well as more subtle programs based on the examples discussed below show that this wisdom is naive at best: some apparently portable programs, which are indeed portable across single/double systems, deliver incorrect results on extended-based systems precisely because the compiler and hardware conspire to occasionally provide more precision than the program expects.»

The example he refers to is laughable because it also checks for equality.

>> The notion that "error correction" can fix the inevitable degradation of accuracy with each floating-point calculation is just laughable.
>
> Well, it is not laughable to computer scientists that accuracy depends on knowledge about precision and rounding... And I am a computer scientist, in case you have forgotten...

Computer scientists are no good if they don't know any science.