May 15, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Saturday, 14 May 2016 at 18:46:50 UTC, Walter Bright wrote:
> On 5/14/2016 3:16 AM, John Colvin wrote:
>> This is all quite discouraging from a scientific programmers point of view.
>> Precision is important, more precision is good, but reproducibility and
>> predictability are critical.
>
> I used to design and build digital electronics out of TTL chips. Over time, TTL chips got faster and faster. The rule was to design the circuit with a minimum signal propagation delay, but never a maximum. Therefore, putting in faster parts will never break the circuit.
>
> Engineering is full of things like this. It's sound engineering practice. I've never ever heard of a circuit requiring a resistor with 20% tolerance that would fail if a 10% tolerance one was put in, for another example.
Should scientific software be written to not break if the floating-point precision is enhanced, and to allow greater precision to be used when the hardware supports it? Sure.
However, that's not the same as saying that the choice of precision should be in the hands of the hardware, rather than the person building + running the program. I for one would not like to have to spend time working out why my program was producing different results, just because I (say) switched from a machine supporting maximum 80-bit float to one supporting 128-bit.

May 15, 2016 Re: Always false float comparisons
Posted in reply to Joseph Rushton Wakeling

On 5/15/2016 6:49 AM, Joseph Rushton Wakeling wrote:
> However, that's not the same as saying that the choice of precision should be in the hands of the hardware, rather than the person building + running the program.
> I for one would not like to have to spend time working out why my program was producing different results, just because I (say) switched from a machine supporting maximum 80-bit float to one supporting 128-bit.

If you wrote it "to not break if the floating-point precision is enhanced, and to allow greater precision to be used when the hardware supports it" then what's the problem?

Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?

May 15, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On 5/15/2016 1:33 AM, Ola Fosheim Grøstad wrote:
> That is _very_ bad.
Oh, rubbish. Did you know that (x+y)+z is not equal to x+(y+z) in floating point math? FP math is simply not the math you learned in high school. You'll have nothing but grief with it if you insist it must be the same.
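For readers following along, here is a small self-contained D sketch of that non-associativity (my own illustration, not code from the thread). The constants are chosen so that the reordering changes the result even if the compiler keeps intermediates at 80-bit precision, though exact output can still vary with compiler and flags.

import std.stdio;

void main()
{
    // 1e30 + 1.0 rounds back to 1e30 in float, double, and 80-bit real alike,
    // so the two groupings give different results whatever the intermediate precision.
    double x = 1e30, y = -1e30, z = 1.0;
    writeln((x + y) + z);                 // 1: x + y cancels exactly, then + 1
    writeln(x + (y + z));                 // 0: y + z rounds back to -1e30, then cancels with x
    writeln((x + y) + z == x + (y + z));  // false
}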

May 15, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Sunday, 15 May 2016 at 18:30:20 UTC, Walter Bright wrote:
> Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?
I've just been observing this argument (over and over again) and I think the two sides are talking about different things.
Walter, you keep saying degraded results.
The scientific folks keep saying *consistent* results.
Think about a key goal in scientific experiments: to measure changes across repeated experiments, to reproduce and confirm or falsify results. They want to keep as much equal as they can.
I suppose people figure if they use the same compiler, same build options, same source code and feed the same data into it, they expect to get the *same* results. It is a deterministic machine, right?
You might argue they should add "same hardware" to that list, but apparently it isn't that easy, or I doubt people would be talking about this.

May 15, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Sunday, 15 May 2016 at 18:35:15 UTC, Walter Bright wrote:
> On 5/15/2016 1:33 AM, Ola Fosheim Grøstad wrote:
>> That is _very_ bad.
>
> Oh, rubbish. Did you know that (x+y)+z is not equal to x+(y+z) in floating point math? FP math is simply not the math you learned in high school. You'll have nothing but grief with it if you insist it must be the same.
Err... these kinds of problems only apply to D.
Please leave your high school experiences out of it. :-)

May 15, 2016 Re: Always false float comparisons
Posted in reply to Adam D. Ruppe

On Sunday, 15 May 2016 at 18:41:57 UTC, Adam D. Ruppe wrote:
> I suppose people figure if they use the same compiler, same build options, same source code and feed the same data into it, they expect to get the *same* results. It is a deterministic machine, right?
In this case it is worse, you get this:
float f = some_pure_function();
const float cf = some_pure_function();
assert(cf==f); // SHUTDOWN!!!
But Walter doesn't think this is an issue, because correctness is just a high school feature.

May 15, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On Sunday, 15 May 2016 at 18:59:51 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 15 May 2016 at 18:41:57 UTC, Adam D. Ruppe wrote:
>> I suppose people figure if they use the same compiler, same build options, same source code and feed the same data into it, they expect to get the *same* results. It is a deterministic machine, right?
>
> In this case it is worse, you get this:
>
> float f = some_pure_function();
> const float cf = some_pure_function();
>
> assert(cf==f); // SHUTDOWN!!!
That was too quick, but you get the idea:
assert((cf==N) == (f==N)); //SHUTDOWN!!

May 15, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On Sunday, 15 May 2016 at 08:33:44 UTC, Ola Fosheim Grøstad wrote:
> Because D does not cancel out "cast(real)cast(float)1.30)" for "f", but does cancel it out for cf.
Hmmm...
I noticed my code example fails to specify float for the immutable; fixing that, only the first line with f == 1.30 fails while all the others succeed (for some reason?), which is good news. Does that mean the error that I had noted a few years ago actually was fixed?
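For readers without the earlier post at hand, the comparisons under discussion look roughly like the sketch below. It is reconstructed from the quoted cast expression and the description above, so the exact form is an assumption; the behaviour noted in the comments is what this post reports, and it depends on how the compiler widens f and folds cf.

void main()
{
    float f = 1.30;               // runtime value, rounded to float
    immutable float cf = 1.30;    // compile-time constant

    assert(f == 1.30);            // f widened for the comparison; reported to fail
    assert(f == cast(float)1.30); // both sides at float precision; reported to pass
    assert(cf == 1.30);           // constant-folded by the compiler; reported to pass
    assert(cf == cast(float)1.30);
}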

May 15, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Sunday, 15 May 2016 at 18:30:20 UTC, Walter Bright wrote:
> If you wrote it "to not break if the floating-point precision is enhanced, and to allow greater precision to be used when the hardware supports it" then what's the problem?
>
> Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?
No, but I think that's not really relevant to the point I was making. Results don't have to be degraded to be inconsistent.

May 15, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

> Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?
The real problem here is the butterfly effect (the chaos theory thing). Imagine programming a multiplayer game. Ideally you only need to synchronize user events, like key presses etc.; other computations can be duplicated on all machines participating in a session. Now imagine that some logic other than display (e.g. player-bullet collision detection) is using floating point. If those computations are not reproducible, higher precision on one player's machine can lead to huge inconsistencies in game state between the machines (e.g. my character is dead on your machine but alive on mine)!
If the game developer cannot achieve reproducibility, or it takes too much work, the workarounds can be very costly. He can, for example, convert the implementation to soft float or increase the amount of synchronization over the network.
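As a toy illustration of how that divergence builds up (my own sketch, not from any real game), here are two "clients" running the same nominal update loop, one keeping its accumulator in float and one accumulating at higher precision. Exact numbers depend on the compiler and flags, but the point is that they end up disagreeing.

import std.stdio;

void main()
{
    // Two "clients" run the same update loop, but one keeps its accumulator in
    // float while the other accumulates at higher (real) precision.
    float posA = 0;
    real  posB = 0;
    float v = 0.1f;               // 0.1 is not exactly representable in binary

    foreach (i; 0 .. 1_000_000)
    {
        posA += v;                // rounded to float on every step
        posB += v;                // rounded to real on every step
    }

    writeln(posA);                     // float accumulation drifts well away from 100000
    writeln(cast(float)posB);          // higher-precision accumulation stays at 100000
    writeln(posA == cast(float)posB);  // false: the two simulations have diverged
}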
Also, I think Adam is making a very good point about general reproducibility here. If a researcher gets slightly different results, he has to investigate why, because he needs to rule out all the serious mistakes that could be the cause of the difference. If he finds out that the source was an innocuous refactoring of some D code, he will be rightly frustrated that D has caused so much unnecessary churn.
I think the same problem can occur in mission-critical software which undergoes strict certification.