May 16, 2016 Re: Always false float comparisons
Posted in reply to Timon Gehr

On 16 May 2016 at 18:31, Timon Gehr via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 16.05.2016 17:11, Daniel Murphy wrote:
>>
>> On 16/05/2016 10:37 PM, Walter Bright wrote:
>>>
>>> Some counter points:
>>>
>>> 1. Go uses 256 bit soft float for constant folding.
>>>
>>
>> Then we should use 257 bit soft float!
>>
>
> I don't see how you reach that conclusion, as 258 bit soft float is clearly superior.
You should be more pragmatic, 224 bits is all you need. :-)
May 16, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On 05/16/2016 01:16 PM, Walter Bright wrote:
> On 5/16/2016 6:15 AM, Andrei Alexandrescu wrote:
>> On 5/16/16 8:19 AM, Walter Bright wrote:
>>> We are talking CTFE here, not runtime.
>>
>> I have big plans with floating-point CTFE and all are elastic: the
>> faster CTFE
>> FP is, the more and better things we can do. Things that other
>> languages can't
>> dream to do, like interpolation tables for transcendental functions. So a
>> slowdown of FP CTFE would be essentially a strategic loss. -- Andrei
>
> Based on my experience with soft float on DOS
I seriously think all experience accumulated last century needs to be thoroughly scrutinized. The world of software has changed and is changing fast enough to make all tricks older than a decade virtually useless. Although core structures and algorithms stay the same, even some fundamentals are changing - e.g. chasing pointers is no longer faster than seemingly slow operations on implicit data structures backed by arrays, and so on and so forth.
Yes, there was a time when one setvbuf() call would make I/O an order of magnitude faster. Those days are long gone, and the only value of that knowledge today is as tepid industry anecdotes.
Fast compile-time FP is good, and the faster it is, the better it is. It's a huge differentiating factor for us. A little marginal precision is not. Please don't spend time on making compile-time FP slower.
Thanks,
Andrei
May 16, 2016 Re: Always false float comparisons
Posted in reply to H. S. Teoh

On 5/16/2016 7:37 AM, H. S. Teoh via Digitalmars-d wrote:
> An algorithm that uses 80-bit but isn't written properly to account for
> FP total precision loss will also produce wrong results, just like an
> algorithm that uses 64-bit and isn't written to account for FP total
> precision loss.

That is correct. But in most routine cases, the extra precision is enough that more heroic algorithm changes become unnecessary.

> If I were presented with product A having an archaic feature X that
> works slowly and that I don't even need in the first place, vs. product
> B having exactly the feature set I need without the baggage of archaic
> feature X, I'd go with product B.

I have business experience with what people actually choose, and it's often not what they say they would choose. Sorry :-) Social proof (i.e. what your colleagues use) is a very, very important factor. But having a needed feature that isn't available in the more popular product trumps social proof. It doesn't need to appeal to everyone, but it can be the wedge that drives a product into the mainstream.

The needs of numerical analysts have often been neglected by the C/C++ community. The community has been very slow to recognize IEEE, by about a decade. It wasn't until fairly recently that C/C++ compilers even handled NaN properly. It's no surprise that FORTRAN reigned as the choice of numerical analysts. (On the other hand, the needs of game developers have received strong support.)

The dismissal of concerns about precision as "archaic" is something I have seen for a long time. As my calculator anecdote illustrates, even engineers do not recognize loss of precision when it happens. I'm not talking about a few bits in the last place, I'm talking about not recognizing total loss of precision. I sometimes wonder how many utter garbage FP results are being generated and treated as correct answers by researchers who confuse textbook math with FP math. The only field I can think of where a sprinkling of garbage results doesn't matter as long as it is fast is game programming.

Even if you select doubles for speed, it's nice to be able to temporarily swap in reals as a check to see whether similar results are computed. If not, you surely have a loss-of-precision problem.
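To illustrate the "swap in reals as a check" idea, here is a minimal D sketch; the alias name and the variance computation are made up for the example, not taken from any post:

import std.stdio : writefln;

// Flip this alias to real and re-run; a large disagreement between the two
// runs signals that the algorithm is losing precision.
alias Float = double;   // or: alias Float = real;

Float naiveVariance(const Float[] xs)
{
    // Textbook one-pass formula E[x^2] - E[x]^2: prone to catastrophic
    // cancellation when the mean is large relative to the spread.
    Float sum = 0, sumSq = 0;
    foreach (x; xs)
    {
        sum += x;
        sumSq += x * x;
    }
    const n = cast(Float) xs.length;
    return sumSq / n - (sum / n) * (sum / n);
}

void main()
{
    Float[] data = [1e8 + 1, 1e8 + 2, 1e8 + 3];   // true variance is 2/3
    writefln("variance computed in %s: %.17g", Float.stringof, naiveVariance(data));
}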
May 16, 2016 Re: Always false float comparisons
Posted in reply to Andrei Alexandrescu

On Monday, 16 May 2016 at 14:32:55 UTC, Andrei Alexandrescu wrote:
>
> Let's do what everybody did to x87: continue supporting, but slowly and surely drive it to obsolescence.
>
> That's the right way.
This is a long thread that has covered a lot of ground. There has been a lot of argument, but few concrete take-aways (this was one, which is why I'm quoting it).
As far as I see it, the main topics have been:
1) Change in precision during comparisons (the original)
2) Change in precision to real during intermediate floating point calculations
3) Floating point precision in CTFE
On #1, the initial discussion centered on compiler warnings as a solution. I have not seen any concrete recommendations.
Personally, I am not a fan of implicit conversions. It's one more set of rules I need to remember. I would prefer a compile-time error with a recommendation to explicitly cast to whatever is the best precision. It may even make sense for the error to also recommend approxEqual.
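To make #1 concrete, here is a small D example of the kind of always-false comparison the thread title refers to, with std.math.approxEqual as the suggested alternative (the values are illustrative):

import std.math : approxEqual;
import std.stdio : writeln;

void main()
{
    float f = 1.3;   // 1.3 has no exact binary representation; f holds the nearest float

    // f is promoted to double and compared against the double literal 1.3;
    // the two nearest-representable values differ, so this is always false.
    writeln(f == 1.3);              // false

    // A tolerance-based comparison is what the error message could recommend.
    writeln(approxEqual(f, 1.3));   // true
}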
On #2, I don't see any clear agreement on anything. Walter emphasized the importance of precision, while others emphasized determinism and speed.
As someone who sometimes does matrix inversions and gets eigenvalues, I'm sympathetic to Walter's points about precision. For some applications, additional precision is essential. However, others clearly want the option to disable this behavior.
Walter stressed Java's difficulty in resolving the issue. What about their solution? They have strictfp, which allows exactly the behavior people like Manu seem to want. In D, you could make it an attribute. The benefit of adding the new annotation is that no code would get broken. You could write
@strictfp:
at the top of a file instead of a compiler flag. You could also add a @defaultfp (or !strictfp or @strictfp(false)) as its complement.
I'm not sure whether it should be applied recursively. Perhaps? A recursive approach can be difficult. However, a non-recursive approach would be limited in some ways. You'd have to do some work on std.math to get it working.
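A hypothetical sketch of how such an attribute might look; @strictfp is not an existing D attribute here, just the proposed annotation modeled as a UDA that has no effect today:

// Hypothetical marker UDA; the compiler would give it meaning under the proposal.
struct strictfp {}

@strictfp
double dot(const double[] a, const double[] b)
{
    assert(a.length == b.length);
    // Under the proposal, the accumulation below would be carried out strictly
    // in double - no widening of intermediates to real - so the result is
    // reproducible across targets.
    double sum = 0;
    foreach (i; 0 .. a.length)
        sum += a[i] * b[i];
    return sum;
}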
On #3, resolving #2 might address some of this. Regardless, this topic seems to be the least fully formed. Improved CTFE seems universally regarded as a positive.
May 16, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Sunday, May 15, 2016 15:49:27 Walter Bright via Digitalmars-d wrote:
> My proposal removes the "whim" by requiring 128 bit precision for CTFE.
Based on some of the recent discussions, it sounds like having soft floating point in the compiler would also help with cross-compilation. So, completely aside from the precision chosen, it sounds like having a soft floating point implementation in CTFE would definitely help - though maybe I misunderstood. My understanding of the floating point stuff is pretty bad, unfortunately. And Don's talk and this discussion don't exactly make me feel any better about it - since of course what I'd really like is math math and not magician's math, but that's not what we get, and it will never be what we get. I really should study up on FP in detail in the near future. My typical approach has been to avoid it as much as possible and then use the highest precision possible when I'm stuck using it in an attempt to minimize the error that creeps in.
- Jonathan M Davis
May 16, 2016 Re: Always false float comparisons
Posted in reply to Andrei Alexandrescu

On 5/16/2016 7:32 AM, Andrei Alexandrescu wrote:
> It is rare to need to actually compute the inverse of a matrix. Most of the time
> it's of interest to solve a linear equation of the form Ax = b, for which a
> variety of good methods exist that don't entail computing the actual inverse.

I was solving n equations with n unknowns.

> I emphasize the danger of this kind of thinking: 1-2 anecdotes trump a lot of
> other evidence. This is what happened with input vs. forward C++ iterators as
> the main motivator for a variety of concepts designs.

What I did was implement the algorithm out of my calculus textbook. Sure, it's a naive algorithm - but it is highly unlikely that untrained FP programmers know intuitively how to deal with precision loss. I bring up our very own Phobos sum algorithm, which was re-implemented later with the Kahan method to reduce precision loss.

>> 1. Go uses 256 bit soft float for constant folding.
> Go can afford it because it does no interesting things during compilation. We
> can't.

The "we can't" is conjecture at the moment.

>> 2. Speed is hardly the only criterion. Quickly getting the wrong answer
>> (and not just a few bits off, but total loss of precision) is of no value.
> Of course. But it turns out the precision argument loses to the speed argument.
>
> A. It's been many many years and very few if any people commend D for its
> superior approach to FP precision.
>
> B. In contrast, a bunch of folks complain about anything slow, be it during
> compilation or at runtime.

D's support for reals does not negatively impact the speed of float or double computations.

>> 3. Supporting 80 bit reals does not take away from the speed of
>> floats/doubles at runtime.
> Fast compile-time floats are of strategic importance to us. Give me fast FP
> during compilation, I'll make it go slow (whilst put to do amazing work).

I still have a hard time seeing what you plan to do at compile time that would involve tens of millions of FP calculations.

>> 6. My other experience with feature sets is if you drop things that make
>> your product different, and concentrate on matching feature checklists
>> with Major Brand X, customers go with Major Brand X.
>
> This is true in principle but too vague to be relevant. Again, what evidence do
> you have that D's additional precision is revered? I see none, over like a decade.

Fortran supports Quad (128 bit floats) as standard. https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
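Since the Kahan re-implementation of Phobos' sum comes up repeatedly, here is a minimal sketch of compensated (Kahan) summation in D; this is the textbook algorithm, not the actual Phobos code:

import std.stdio : writefln;

// The compensation term c recovers the low-order bits that are lost each time
// a small term is added to a much larger running sum.
double kahanSum(const double[] xs)
{
    double sum = 0;
    double c = 0;            // running compensation for lost low-order bits
    foreach (x; xs)
    {
        const y = x - c;     // apply the correction carried from the previous step
        const t = sum + y;   // big + small: low-order bits of y may be lost here
        c = (t - sum) - y;   // algebraically zero; in FP it captures what was lost
        sum = t;
    }
    return sum;
}

void main()
{
    // One large value followed by many small ones: a stress case for naive summation.
    auto xs = new double[](1001);
    xs[0] = 1.0e16;
    xs[1 .. $] = 0.1;

    double naive = 0;
    foreach (x; xs)
        naive += x;

    writefln("naive: %.1f", naive);          // the 0.1s are lost entirely
    writefln("kahan: %.1f", kahanSum(xs));   // close to 1e16 + 100
}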
May 16, 2016 Re: Always false float comparisons
Posted in reply to Max Samukha

On 16.05.2016 07:49, Max Samukha wrote:
> On Monday, 16 May 2016 at 04:02:54 UTC, Manu wrote:
>
>> extended x = 1.3;
>> x + y;
>>
>> If that code were to CTFE, I expect the CTFE to use extended precision.
>> My point is, CTFE should surely follow the types and language
>> semantics as if it were runtime generated code... It's not reasonable
>> that CTFE has higher precision applied than the same code at runtime.
>> CTFE must give the exact same result as runtime execution of the
>> function.
>
> You are not even guaranteed to get the same result on two different x86
> implementations.

Without reading the x86 specification, I think it is safe to claim that you actually are guaranteed to get the same result.

> AMD64:
>
> "The processor produces a floating-point result defined by the IEEE
> standard to be infinitely precise.
> This result may not be representable exactly in the destination format,
> because only a subset of the
> continuum of real numbers finds exact representation in any particular
> floating-point format."

This just says that results of computations will need to be rounded to fit into constant-size storage.
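An illustrative D snippet of the mismatch Manu is describing; the function is made up for the example, and whether the two values agree depends on how much precision the compiler applies during constant folding:

import std.stdio : writeln;

float residual(float x)
{
    // At run time this is a float multiply and subtract; during CTFE the
    // compiler may carry the intermediate x * x at higher precision.
    return x * x - 1.69f;
}

void main()
{
    enum  ctfeValue    = residual(1.3f);   // forced through CTFE
    float runtimeValue = residual(1.3f);   // evaluated at run time

    // Manu's position: these must always compare equal. With extended-precision
    // constant folding, that is not guaranteed.
    writeln(ctfeValue == runtimeValue);
}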
May 16, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Monday, 16 May 2016 at 18:57:24 UTC, Walter Bright wrote:
> On 5/16/2016 7:32 AM, Andrei Alexandrescu wrote:
>> It is rare to need to actually compute the inverse of a matrix. Most of the time
>> it's of interest to solve a linear equation of the form Ax = b, for which a
>> variety of good methods exist that don't entail computing the actual inverse.
>
> I was solving n equations with n unknowns.

LU decomposition is the more common approach. Matlab has a backslash operator to solve systems of equations with LU decomposition: http://www.mathworks.com/help/matlab/ref/inv.html#bu6sfy8-1
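For reference, a minimal D sketch of solving Ax = b directly by Gaussian elimination with partial pivoting, i.e. without ever forming the inverse; the row-major layout and the function name are illustrative only:

import std.math : abs;

// Solve A*x = b for x. A is n*n, stored row-major; A and b are overwritten.
double[] solve(double[] A, double[] b)
{
    const n = b.length;
    assert(A.length == n * n);

    foreach (k; 0 .. n)
    {
        // Partial pivoting: bring the largest remaining entry of column k to the top.
        size_t piv = k;
        foreach (i; k + 1 .. n)
            if (abs(A[i * n + k]) > abs(A[piv * n + k]))
                piv = i;
        assert(A[piv * n + k] != 0, "matrix is singular");
        if (piv != k)
        {
            foreach (j; 0 .. n)
            {
                const tmp = A[k * n + j];
                A[k * n + j] = A[piv * n + j];
                A[piv * n + j] = tmp;
            }
            const tb = b[k]; b[k] = b[piv]; b[piv] = tb;
        }

        // Eliminate column k below the pivot row.
        foreach (i; k + 1 .. n)
        {
            const m = A[i * n + k] / A[k * n + k];
            foreach (j; k .. n)
                A[i * n + j] -= m * A[k * n + j];
            b[i] -= m * b[k];
        }
    }

    // Back substitution on the resulting upper-triangular system.
    auto x = new double[](n);
    foreach_reverse (i; 0 .. n)
    {
        double s = b[i];
        foreach (j; i + 1 .. n)
            s -= A[i * n + j] * x[j];
        x[i] = s / A[i * n + i];
    }
    return x;
}

unittest
{
    // 2x + y = 5, x + 3y = 10  =>  x = 1, y = 3
    const x = solve([2.0, 1.0, 1.0, 3.0], [5.0, 10.0]);
    assert(x[0] == 1.0 && x[1] == 3.0);
}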
May 16, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 20:38:54 UTC, Walter Bright wrote:
> On 5/14/2016 11:46 AM, Walter Bright wrote:
>> I used to design and build digital electronics out of TTL chips. Over time, TTL
>> chips got faster and faster. The rule was to design the circuit with a minimum
>> signal propagation delay, but never a maximum. Therefore, putting in faster
>> parts will never break the circuit.
>
> Eh, I got the min and max backwards.
Heh, I read that and thought, "wtf is he talking about, never a max?" :D
Regarding floating-point, I'll go farther than you and say that if an algorithm depends on lower-precision floating-point to be accurate, it's a bad algorithm. Now, people can always make mistakes in their implementation and unwittingly depend on lower precision somehow, but that _should_ fail.
None of this is controversial to me: you shouldn't be comparing floating-point numbers with anything other than approxEqual, increasing precision should never bother your algorithm, and a higher-precision, common soft-float for CTFE will help cross-compiling and you'll never notice the speed hit.
May 16, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On 05/16/2016 02:57 PM, Walter Bright wrote:
> On 5/16/2016 7:32 AM, Andrei Alexandrescu wrote:
>> It is rare to need to actually compute the inverse of a matrix. Most
>> of the time it's of interest to solve a linear equation of the form
>> Ax = b, for which a variety of good methods exist that don't entail
>> computing the actual inverse.
>
> I was solving n equations with n unknowns.

That's the worst way to go about it. I've seen students fail exams over it. Solving a system of linear equations by computing the inverse is inferior to just about any other method. See e.g. http://www.mathworks.com/help/matlab/ref/inv.html

"It is seldom necessary to form the explicit inverse of a matrix. A frequent misuse of inv arises when solving the system of linear equations Ax = b. One way to solve the equation is with x = inv(A)*b. A better way, from the standpoint of both execution time and numerical accuracy, is to use the matrix backslash operator x = A\b. This produces the solution using Gaussian elimination, without explicitly forming the inverse. See mldivide for further information."

You have long been advocating that the onus is on the engineer to exercise good understanding of what's going on when using domain-specific code such as UTF, linear algebra, etc. So if you exercised it now, you need to discount this argument.

>> I emphasize the danger of this kind of thinking: 1-2 anecdotes trump
>> a lot of other evidence. This is what happened with input vs. forward
>> C++ iterators as the main motivator for a variety of concepts designs.
>
> What I did was implement the algorithm out of my calculus textbook.
> Sure, it's a naive algorithm - but it is highly unlikely that untrained
> FP programmers know intuitively how to deal with precision loss.

As someone else said: a few bits of extra precision ain't gonna help them. I thought that argument was closed.

> I bring up our very own Phobos sum algorithm, which was re-implemented
> later with the Kahan method to reduce precision loss.

Kahan is clear, ingenious, and understandable, and a great part of the stdlib. I don't see what the point is here. Naive approaches aren't going to take anyone far, regardless of precision.

>>> 1. Go uses 256 bit soft float for constant folding.
>> Go can afford it because it does no interesting things during
>> compilation. We can't.
>
> The we can't is conjecture at the moment.

We can't, and we shouldn't invest time in investigating whether we can. It's a waste even if the project succeeded 100% and exceeded anyone's expectations.

>>> 2. Speed is hardly the only criterion. Quickly getting the wrong answer
>>> (and not just a few bits off, but total loss of precision) is of no value.
>> Of course. But it turns out the precision argument loses to the speed
>> argument.
>>
>> A. It's been many many years and very few if any people commend D for
>> its superior approach to FP precision.
>>
>> B. In contrast, a bunch of folks complain about anything slow, be it
>> during compilation or at runtime.
>
> D's support for reals does not negatively impact the speed of float or
> double computations.

Then let's not do more of it.

>>> 3. Supporting 80 bit reals does not take away from the speed of
>>> floats/doubles at runtime.
>> Fast compile-time floats are of strategic importance to us. Give me
>> fast FP during compilation, I'll make it go slow (whilst put to do
>> amazing work).
>
> I still have a hard time seeing what you plan to do at compile time that
> would involve tens of millions of FP calculations.

Give those to me and you'll be surprised.


Andrei