May 18, 2016 Re: Always false float comparisons
Posted in reply to deadalnix

On Wednesday, 18 May 2016 at 19:30:12 UTC, deadalnix wrote:
>>
>> I'm confused as to why the compiler would be using soft floats instead of hard floats.
>
> Cross compilation.
Ah, looking back on the discussion, I see the comments about cross compilation and soft floats. Making more sense now...
So if compiling on x86 for x86, you could just use hard floats, but if compiling on x86 for some other system, you'd use soft floats to mimic the result you would get if you had compiled on that system. Correct?
But what if you are compiling for a system whose float behavior matches the system you're compiling on? Suppose, for instance, you are only using 32-bit floats and not allowing anything fancy like 80-bit intermediate calculations, and you're compiling for a system that treats floats the same way. Then you could, in theory, use hard floats in the compiler and the results would be the same.
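The divergence that soft-float emulation guards against comes precisely from such intermediate-precision differences. A minimal D sketch (function names and values are mine, purely illustrative) of two evaluation strategies a host and a target might disagree on:

```d
import std.stdio;

// Per-step 32-bit rounding, as a strict single-precision target would do.
float perStep32(float a, float b, float c)
{
    float t = a * b; // rounded to 32 bits here...
    return t + c;    // ...and rounded again here
}

// One rounding at the end, as an 80-bit x87 pipeline might do.
real extended80(float a, float b, float c)
{
    return cast(real) a * b + c; // intermediate kept at extended precision
}

void main()
{
    float a = 1.0f / 3.0f, b = 3.0f, c = -1.0f;
    writefln("per-step 32-bit: %.20f", perStep32(a, b, c));
    writefln("extended:        %.20f", extended80(a, b, c));
}
```

Here the per-step pipeline returns exactly 0.0 while the extended one returns about 3e-8; a hard-float host that behaves like the latter cannot stand in for a target that behaves like the former, hence the soft-float emulation.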
May 18, 2016 Re: Always false float comparisons
Posted in reply to Manu

On 5/18/2016 4:27 AM, Manu via Digitalmars-d wrote:
> The comparison was a 24-bit FPU doing runtime work, but where some
> constant input data was calculated with a separate 32-bit FPU. The
> particulars were never intended to be relevant to the conversation,
> except the fact that two differently-precisioned float units were
> producing output that then had to be reconciled.
>
> The analogy was to CTFE doing all its work at 80 bits, and then the
> processor doing work with the types explicitly stated by the
> programmer; a runtime calculation compared against the same
> compile-time calculation is likely to be quite radically different. I
> don't care about the precision, I just care that they're really
> different. Ideally, CTFE would produce a result that is as similar to
> the runtime result as reasonably possible, and I expect using the
> stated types to do the calculations would get much, much closer.
> I don't know if a couple of least-significant bits of difference would
> have caused problems for us (I suspect not), but I know that doing math
> at radically different precisions (i.e., 32 bits vs 80 bits) does lead
> to radically different results, not just a couple of bits. That is my
> concern w.r.t. reproduction of my anecdote from the PS2 and Gamecube's
> 24-bit FPUs.
>
"radically different results" means you were getting catastrophic loss of precision in the runtime results, which is just what doing the CTFE at a higher precision is attempting to avoid (and apparently successfully).
I get that the game did not care about the value produced, even if it was total garbage. I suspect that games are the only category of FP apps where there is no concern for the values computed. I do not understand the tolerance for bad results in scientific, engineering, medical, or finance applications.
The only way to *reliably* get the same results at compile time as at runtime, regardless of what technology is put into the language/compiler, is to actually run the code on the target hardware, save the results, and insert those values into the code. It isn't that hard to do, and will save you from endless ghosts popping up that waste your debugging time.
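A hedged D sketch of that save-and-insert approach (the bit patterns below are invented placeholders, not values captured from real target hardware):

```d
// Sketch of baking target-computed results into the source. In practice
// the bit patterns would be captured from a run on the actual target.
immutable uint[3] tableBits = [
    0x3E99_999A, // hypothetical captured float bits
    0x3F4C_CCCD,
    0x3F99_999A,
];

float tableValue(size_t i)
{
    // Reinterpret the stored bits as a float, so no compile-time FP
    // evaluation can alter the value.
    uint bits = tableBits[i];
    return *cast(float*) &bits;
}
```

Because only raw bit patterns appear in the source, neither CTFE nor constant folding has any floating-point arithmetic left to do on them.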
May 18, 2016 Re: Always false float comparisons
Posted in reply to Era Scarecrow

On Wednesday, 18 May 2016 at 19:53:10 UTC, Era Scarecrow wrote:
> On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
>> I agree that intrinsics for this would be nice. I doubt that any current D platform is actually computing the full 128-bit result for every 64-bit multiply though; that would waste both power and performance for most programs.
>
> Except the 128-bit result is _already_ there for 0 cost (at least for the x86 instructions that I'm aware of).

Can you give me a source for this, or at least the name of the relevant op code? (I'm new to x86 assembly.)

> There are bound to be enough use cases (say, pseudo-random number generation, encryption, or numerical processing above 64 bits) that I'd like access to it supported by the language, without having to inject instructions using the asm command.

Of course it would be useful to have in the language; I wasn't disputing that. I'd like to have as much support for 128-bit integers in the language as possible. Among other things, this would greatly simplify getting 128-bit floating-point working.

I'm just surprised that the CPU would really calculate the upper 64 bits of a multiply without being explicitly asked to.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
> I do not understand the tolerance for bad results in scientific, engineering, medical, or finance applications.
I don't think anyone has suggested tolerance for bad results in any of those applications.
What _has_ been argued for is that in order to _prevent_ bad results it's necessary for the programmer to have control and clarity over the choice of precision as much as possible.
If I'm writing a numerical simulation or calculation using insufficient floating-point precision, I don't _want_ to be saved by under-the-hood precision increases -- I would like it to break because then I will be forced to improve either the floating-point precision or the choice of algorithm (or both).
To be clear: the fact that D makes it a priority to offer me the highest possible floating-point precision is awesome. But precision is not the only factor in generating accurate scientific results.
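A concrete instance of that choice in D, as a minimal sketch (values and iteration count are arbitrary): the same naive summation at three explicitly chosen precisions.

```d
import std.stdio;

void main()
{
    float  fSum = 0.0f;
    double dSum = 0.0;
    real   rSum = 0.0L;

    // Accumulate 0.1 ten million times at each precision.
    foreach (_; 0 .. 10_000_000)
    {
        fSum += 0.1f;
        dSum += 0.1;
        rSum += 0.1L;
    }

    writefln("float:  %.10f", fSum); // typically wrong in the integer digits
    writefln("double: %.10f", dSum); // off only far past the decimal point
    writefln("real:   %.10f", rSum);
}
```

The float sum drifts visibly away from 1,000,000 while the double and real sums do not; seeing that breakage directly is exactly what motivates choosing the precision explicitly rather than having it silently promoted.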
May 18, 2016 Re: Always false float comparisons
Posted in reply to tsbockman

On Wednesday, 18 May 2016 at 21:02:03 UTC, tsbockman wrote:
> On Wednesday, 18 May 2016 at 19:53:10 UTC, Era Scarecrow wrote:
>> On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
>>> I agree that intrinsics for this would be nice. I doubt that any current D platform is actually computing the full 128-bit result for every 64-bit multiply though; that would waste both power and performance for most programs.
>>
>> Except the 128-bit result is _already_ there for 0 cost (at least for the x86 instructions that I'm aware of).
>
> Can you give me a source for this, or at least the name of the relevant op code? (I'm new to x86 assembly.)

http://www.mathemainzel.info/files/x86asmref.html#mul

http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html

There's div, idiv, mul, and imul, which all follow this exact pattern. Although the instruction described in the first reference covers 32-bit operands or smaller, the pattern used is no different.

From mathemainzel.info:

    Usage: MUL src
    Modifies flags: CF OF (AF, PF, SF, ZF undefined)

    Unsigned multiply of the accumulator by the source. If "src" is a byte value, then AL is used as the other multiplicand and the result is placed in AX. If "src" is a word value, then AX is multiplied by "src" and DX:AX receives the result. If "src" is a double word value, then EAX is multiplied by "src" and EDX:EAX receives the result. The 386+ uses an early-out algorithm, which makes multiplying any size value in EAX as fast as in the 8 or 16 bit registers.

Downloading the 64-bit Intel manual on opcodes, it says the same thing, only the registers become RDX:RAX with 64-bit operands:

    Quadword: RAX * r/m64 -> RDX:RAX
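To make the RDX:RAX result concrete without dropping into asm, here is a portable D sketch (my own illustration, not thread code) of the high 64 bits that a single MUL leaves in RDX, computed from 32-bit halves:

```d
ulong mulHigh(ulong a, ulong b)
{
    // Split into 32-bit halves; each partial product fits in 64 bits.
    ulong aLo = a & 0xFFFF_FFFF, aHi = a >> 32;
    ulong bLo = b & 0xFFFF_FFFF, bHi = b >> 32;

    ulong lo   = aLo * bLo;
    ulong mid1 = aHi * bLo + (lo >> 32);           // carry from the low word
    ulong mid2 = aLo * bHi + (mid1 & 0xFFFF_FFFF); // cross term plus carry

    return aHi * bHi + (mid1 >> 32) + (mid2 >> 32);
}

unittest
{
    // 2^63 * 2 == 2^64, so the high word must be exactly 1.
    assert(mulHigh(1UL << 63, 2) == 1);
}
```

The hardware gets all of this from the one instruction; the software version needs four multiplies and several adds, which is why language access to the instruction is attractive.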
May 18, 2016 Re: Always false float comparisons
Posted in reply to Joseph Rushton Wakeling

On Wednesday, 18 May 2016 at 21:49:34 UTC, Joseph Rushton Wakeling wrote:
> On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
>> I do not understand the tolerance for bad results in scientific, engineering, medical, or finance applications.
>
> I don't think anyone has suggested tolerance for bad results in any of those applications.
>
I don't think it's about tolerance for bad results so much as the ability to make the trade-off between speed and precision when you need to.
Just thinking of finance: a market maker has to provide quotes on potentially thousands of instruments in real-time. This might involve some heavy calculations for options pricing. When dealing with real-time tick data (the highest frequency of financial data), sometimes you take shortcuts that you wouldn't be willing to do if you were working with lower frequency data. It's not that you don't care about precision. It's just that sometimes it's more important to be fast than accurate.
I'm not a market maker and don't work with high-frequency data. I usually look at low-frequency enough data that I generally do care more about accurate results than speed. Nevertheless, with hefty simulations that take several hours or days to run, I might be willing to take some shortcuts to get a general idea of the results. Then, when I implement the strategy, I might do something different.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Era Scarecrow

On Wednesday, 18 May 2016 at 22:06:43 UTC, Era Scarecrow wrote:
> On Wednesday, 18 May 2016 at 21:02:03 UTC, tsbockman wrote:
>> Can you give me a source for this, or at least the name of the relevant op code? (I'm new to x86 assembly.)
>
> http://www.mathemainzel.info/files/x86asmref.html#mul
>
> http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html

Thanks.

> [...]
> Downloading the 64-bit Intel manual on opcodes, it says the same thing, only the registers become RDX:RAX with 64-bit operands:
>
> Quadword: RAX * r/m64 -> RDX:RAX

I will look into adding intrinsics for full-width multiply and combined division-modulus.
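Until such intrinsics exist, a plain-D sketch of the combined division-modulus (the helper name is hypothetical; optimizing backends commonly fuse the two operations into one DIV anyway, since the instruction produces quotient and remainder together):

```d
// Hypothetical helper: quotient and remainder in one call. A real
// intrinsic would map this directly to the single DIV instruction.
ulong divMod(ulong a, ulong b, out ulong remainder)
{
    ulong q = a / b;
    remainder = a - q * b;
    return q;
}

unittest
{
    ulong r;
    assert(divMod(17, 5, r) == 3 && r == 2);
}
```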
May 18, 2016 Re: Always false float comparisons
Posted in reply to deadalnix

On 5/18/2016 1:22 PM, deadalnix wrote:
> On Wednesday, 18 May 2016 at 20:14:22 UTC, Walter Bright wrote:
>> On 5/18/2016 4:48 AM, deadalnix wrote:
>>> Typo: arbitrary-precision FP. Meaning some soft float that grows as big as necessary to not lose precision, à la BigInt but for floats.
>>
>> 0.10 is not representable in a binary format regardless of precision.
>
> You should ask the gcc guys how they do it, but you can surely represent this as a fraction,

Right.

> so I see no major blocker.

Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by definition, not representable as a ratio.
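The first point is easy to see directly; a one-liner sketch in D showing the nearest double to 0.10 that actually gets stored:

```d
import std.stdio;

void main()
{
    // 1/10 has no finite binary expansion, so the nearest representable
    // double is stored instead; extra digits make the gap visible.
    double d = 0.10;
    writefln("%.20f", d); // prints 0.10000000000000000555
}
```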
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
> Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by definition, not representable as a ratio.
Continued fraction? :-)
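For what it's worth, the convergents are easy to generate with exact integer arithmetic; a small D sketch (my own illustration) for sqrt(2) = [1; 2, 2, 2, ...]:

```d
import std.stdio;

void main()
{
    // Convergents p/q of sqrt(2) via the standard recurrence
    // p_n = 2*p_(n-1) + p_(n-2), and likewise for q.
    long p0 = 1, q0 = 1; // first convergent: 1/1
    long p1 = 3, q1 = 2; // second convergent: 3/2

    foreach (_; 0 .. 8)
    {
        long p2 = 2 * p1 + p0, q2 = 2 * q1 + q0;
        p0 = p1; q0 = q1;
        p1 = p2; q1 = q2;
        writefln("%s/%s = %.15f", p1, q1, cast(double) p1 / q1);
    }
}
```

Each convergent is a better rational approximation than the last, but no finite one is exact, which is precisely the blocker for folding irrational constants as fractions.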
May 18, 2016 Re: Always false float comparisons
Posted in reply to Joseph Rushton Wakeling

On 5/18/2016 4:17 PM, Joseph Rushton Wakeling wrote:
> On Wednesday, 18 May 2016 at 23:09:28 UTC, Walter Bright wrote:
>> Now try the square root of 2. Or pi, e, etc. The irrational numbers are, by
>> definition, not representable as a ratio.
>
> Continued fraction? :-)
Somehow I don't think gcc is using Mathematica for constant folding.