May 15, 2016 Re: Always false float comparisons

Posted in reply to Manu

On 5/15/2016 10:13 PM, Manu via Digitalmars-d wrote:
> 1.3f != 1.3 is not accurate, it's wrong.

I'm sorry, there is no way to make FP behave like mathematics. It's its own beast, with its own rules.

>> The initial Java spec worked as you desired, and they were pretty much
>> forced to back off of it.
> Ok, why's that?

Because forcing the x87 to work at reduced precision caused a 2x slowdown or something like that, making Java uncompetitive (i.e. unusable) for numerics work.

>> They won't match on any code that uses the x87. The standard doesn't require
>> float math to use float instructions, they can (and do) use double
>> instructions for temporaries.
> If it does, then it is careful to make sure the precision expectations
> are maintained.

Have you tested this?

> If you don't '-ffast-math', the FPU code produces an
> IEEE conformant result on reasonable compilers.

Googling 'fp:fast' yields:

"Creates the fastest code in most cases by relaxing the rules for optimizing floating-point operations. This enables the compiler to optimize floating-point code for speed at the expense of accuracy and correctness. When /fp:fast is specified, the compiler may not round correctly at assignment statements, typecasts, or function calls, and may not perform rounding of intermediate expressions. The compiler may reorder operations or perform algebraic transforms—for example, by following associative and distributive rules—without regard to the effect on finite precision results. The compiler may change operations and operands to single-precision instead of following the C++ type promotion rules. Floating-point-specific contraction optimizations are always enabled (fp_contract is ON). Floating-point exceptions and FPU environment access are disabled (/fp:except- is implied and fenv_access is OFF)."

This doesn't line up with what you said it does?

> We depend on this.

I googled 'fp:precise', which is the VC++ default, and found this:

"Using /fp:precise when fenv_access is ON disables optimizations such as compile-time evaluations of floating-point expressions."

How about that? No CTFE! Is that really what you wanted? :-)

> They are certainly selected with the _intent_ that they are less accurate.

This astonishes me. What algorithm requires less accuracy?

> It's not reasonable that a CTFE function may produce a radically
> different result than the same function at runtime.

Yeah, since the less accurate version can suffer from a phenomenon called "total loss of precision" where accumulated roundoff errors make the value utter garbage. When is this desirable?

>> I'm interested to hear how he was "shafted" by this. This apparently also
>> contradicts the claim that other languages do as you ask.
>
> I've explained prior the cases where this has happened are most often
> invoked by the hardware having a reduced runtime precision than the
> compiler. The only cases I know of where this has happened due to the
> compiler internally is CodeWarrior; an old/dead C++ compiler that
> always sucked and caused us headaches of all kinds.
> The point is, the CTFE behaviour is essentially identical to our
> classic case where the hardware runs a different precision than the
> compiler, and that's built into the language! It's not just an anomaly
> expressed by one particular awkward platform we're required to
> support.

You mentioned there was a "shimmer" effect. With the extremely limited ability of C++ compilers to fold constants, I'm left wondering how your code suffered from this, and why you would calculate the same value at both compile and run time.
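
A minimal D sketch of the loss-of-precision effect in play, via cancellation (the exact digits printed are platform- and rounding-dependent, so treat them as an assumption):

    import std.stdio;

    void main()
    {
        // Subtracting two nearly equal floats leaves only the low-order
        // bits, so the roundoff in the inputs dominates the result.
        float a = 1.000001f;      // nearest float is about 1 + 9.5367e-7
        float b = 1.0f;
        float diff = a - b;       // mathematically 1e-6
        writefln("%.10g", diff);  // prints ~9.5367e-07, roughly 5% off
    }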

May 16, 2016 Re: Always false float comparisons

Posted in reply to Ola Fosheim Grøstad

On 16 May 2016 at 08:00, Ola Fosheim Grøstad via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Sunday, 15 May 2016 at 22:49:27 UTC, Walter Bright wrote:
>>
>> On 5/15/2016 2:06 PM, Ola Fosheim Grøstad wrote:
>>>
>>> The net result is that adding const/immutable to a type can change the
>>> semantics
>>> of the program entirely at the whim of the compiler implementor.
>>
>>
>> C++ Standard allows the same increased precision, at the whim of the compiler implementor, as quoted to you earlier.
>>
>> What your particular C++ compiler does is not relevant, as its behavior is not required by the Standard.
>
>
> This is a crazy attitude to take. C++ provides means to detect that IEEE floats are being used in the standard library. C/C++ supports non-standard floating point because some platforms only provide non-standard floating point. They don't do it because it is _desirable_ in general.
>
> You might as well say that you are not required to drive on the right side on the road, because you occasionally have to drive on the left. So therefore it is ok to always drive on left.
>
>> My proposal removes the "whim" by requiring 128 bit precision for CTFE.
>
>
> No, D's take on floating point is FUBAR.
>
> const float value = 1.30;
> float copy = value;
> assert(value*0.5 == copy*0.5); // FAILS! => shutdown
>
This says more about promoting float operations to double than anything else, and has nothing to do with CTFE.
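
To make the promotion point concrete, a hedged sketch (whether the first product really is folded at higher-than-float precision depends on the compiler, which is exactly the behaviour under dispute here):

    import std.stdio;

    void main()
    {
        const float value = 1.30;  // known at compile time
        float copy = value;        // forced through a runtime float

        // Both sides promote to double before multiplying, but value*0.5
        // may be constant-folded at higher-than-float precision, while
        // copy*0.5 starts from the 32-bit rounding of 1.30 at run time.
        writefln("%a", value * 0.5);
        writefln("%a", copy * 0.5);
    }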

May 15, 2016 Re: Always false float comparisons

Posted in reply to Manu

On 5/15/2016 10:24 PM, Manu via Digitalmars-d wrote:
> On 16 May 2016 at 14:31, Walter Bright via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>> On 5/15/2016 9:05 PM, Manu via Digitalmars-d wrote:
>>>
>>> I've never read the C++ standard, but I have more experience with a
>>> wide range of real-world compilers than most, and it is rarely very
>>> violated.
>>
>> It has on every C/C++ compiler for x86 machines that used the x87, which was
>> true until SIMD, and is still true for x86 CPUs that don't target SIMD.
>
> It has what? Reinterpreted your constant-folding to execute in 80bits
> internally for years? Again, if that's true, I expect that's only true
> in the context that the compiler also takes care to maintain the IEEE
> conformant bit pattern, or at very least, it works because the
> opportunity for FP constant folding in C++ is almost non-existent
> compared to CTFE, such that it's never resulted in a problem case in
> my experience.

Check out my other message to you quoting the VC++ manual. It does constant folding at higher precision. So does gcc:

http://stackoverflow.com/questions/7295861/enabling-strict-floating-point-mode-in-gcc

> In D, we will (do) use CTFE for table generation all the time (this
> has never been done in C++). If those tables were generated with
> entirely different precision than the runtime functions, that's just
> begging for trouble.

You can generate the tables at runtime at program startup by using a 'static this()' constructor.

> In the majority of my anecdotes, if they don't match, there are
> cracks/seams in the world. That is a show-stopping bug. We have had
> many late nights, even product launch delays due to these problems.
> They have been a nightmare to solve in the past.
> Obviously the solution in this case is a relatively simple
> work-around; don't use CTFE (ie, lean on the LLVM runtime codegen
> instead to do the right thing with the float precision), but that's a
> tragic solution to a problem that should never happen in the first
> place.

You're always going to have tragic problems if you expect FP to behave like conventional mathematics. Just look at all the switches VC++ has that influence FP behavior. Do you really understand all that? Tell me you understand all the gcc floating point switches?

https://gcc.gnu.org/wiki/FloatingPointMath

How about g++ getting different results based on optimization switches?

http://stackoverflow.com/questions/7517588/different-floating-point-result-with-optimization-enabled-compiler-bug

-------

Or, you could design the code so that there aren't cracks because it is more tolerant of slight differences, i.e. look for so many mantissa bits matching instead of all of them matching. This can be done by subtracting the operands and looking at the magnitude of the difference.

The fact that you have had "many late nights" and were not using D means that the compiler switches you were using were not adequate or were not doing what you thought they were doing.
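
A minimal sketch of that tolerance test in D, using std.math.feqrel, which reports how many mantissa bits two values share (the 20-bit threshold is an arbitrary assumption for illustration, not a recommendation):

    import std.math : feqrel;

    // True if a and b agree in at least minBits of float's 24 mantissa bits.
    bool closeEnough(float a, float b, int minBits = 20)
    {
        return feqrel(a, b) >= minBits;
    }

    void main()
    {
        float x = 1.30f * 0.5f;             // folded at float precision
        float y = cast(float)(1.30 * 0.5);  // computed at double precision
        assert(closeEnough(x, y));          // agree in the leading bits
    }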

May 15, 2016 Re: Always false float comparisons

Posted in reply to Manu

On 5/15/2016 10:37 PM, Manu via Digitalmars-d wrote:
> No, you'll give me 80bit _when I type "real"_. Otherwise, if I type
> 'double', you'll give me that. I don't understand how that can be
> controversial.

Because, as I explained, that results in a 2x or more speed degradation (along with the loss of accuracy).

> I know you love the x87, but I'm pretty sure you're among a small
> minority. Personally, I don't want a single line of code that goes
> anywhere near the x87 to be emit in any binary I produce. It's a
> deprecated old crappy piece of hardware, and transfers between x87 and
> sse regs are slow.

I used to do numerics work professionally. Most of the troubles I had were catastrophic loss of precision. Accumulated roundoff errors when doing numerical integration or matrix inversion are major problems. 80 bits helps dramatically with that.

>> It's also why I'd like to build a 128 soft fp emulator in dmd for all
>> compile time float operations.
>
> And I did also realise the reason for your desire to implement 128bit
> soft-float the same moment I realised this. The situation that
> different DMD builds operate at different precisions internally (based
> on the host arch) is a whole new level of astonishment.

Then you're probably also astonished at the links I posted to you on how g++ and VC++ behave with FP.
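
A quick sketch of the accumulated-roundoff problem being described (how far each total drifts is platform- and compiler-dependent, so the comments are assumptions):

    import std.stdio;

    void main()
    {
        float fsum = 0.0f;
        real  rsum = 0.0L;
        // 0.1 has no exact binary representation, so every addition
        // contributes roundoff; float drifts far more than 80-bit real.
        foreach (i; 0 .. 10_000_000)
        {
            fsum += 0.1f;
            rsum += 0.1L;
        }
        writefln("float: %s (should be 1e6)", fsum);  // visibly off
        writefln("real:  %s (should be 1e6)", rsum);  // much closer
    }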

May 16, 2016 Re: Always false float comparisons

On 16 May 2016 at 06:02, Manu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> I'm not interested in C/C++, I gave some anecdotes where it's gone
> wrong for me too, but regardless; generally, they do match, and I
> can't think of a single modern example where that's not true. If you
> *select* fast-math, then you may generate code that doesn't match,
> but that's a deliberate selection.
>
> If I want 'real' math (in CTFE or otherwise), I will type "real". It
> is completely unreasonable to reinterpret the type that the user
> specified. CTFE should execute code the same way runtime would
> execute the code (without -ffast-math, and conformant ieee hardware).
> This is not a big ask.
>
> Incidentally, I made the mistake of mentioning this thread (due to my
> astonishment that CTFE ignores float types) out loud to my
> colleagues... and they actually started yelling violently out loud.
> One of them has now been on a 10 minute angry rant with elevated tone
> and waving his arms around about how he's been shafted by this sort
> of behaviour so many times before. I wish I recorded it, I'd love to
> have submitted it as evidence.

It isn't all bad. Most of the time you'll never notice. :-)

I can't think of a case off the top of my head where too much precision caused a surprise. It's always when there is too little:

https://issues.dlang.org/show_bug.cgi?id=16026

And I think it's about that time of year when I remind people of gcc bug 323, and this lovely blog post:

http://blog.jwhitham.org/2015/04/gcc-bug-323-journey-to-heart-of.html
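
For anyone who hasn't seen gcc bug 323, the failing pattern looks roughly like this sketch (written here in D; whether the assert actually fires depends on x87 code generation and optimization flags):

    double div(double a, double b) { return a / b; }

    void main()
    {
        double x = div(1.0, 3.0);
        // Under x87 codegen the second call's result can still sit in an
        // 80-bit register while x was spilled to memory and rounded to
        // 64 bits, so this comparison can unexpectedly fail.
        assert(x == div(1.0, 3.0));
    }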

May 16, 2016 Re: Always false float comparisons

Posted in reply to Max Samukha

On 16 May 2016 at 15:49, Max Samukha via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Monday, 16 May 2016 at 04:02:54 UTC, Manu wrote:
>
>> extended x = 1.3;
>> x + y;
>>
>> If that code were to CTFE, I expect the CTFE to use extended precision.
>> My point is, CTFE should surely follow the types and language
>> semantics as if it were runtime generated code... It's not reasonable
>> that CTFE has higher precision applied than the same code at runtime.
>> CTFE must give the exact same result as runtime execution of the function.
>>
>
> You are not even guaranteed to get the same result on two different x86 implementations. AMD64:
>
> "The processor produces a floating-point result defined by the IEEE standard
> to be infinitely precise.
> This result may not be representable exactly in the destination format,
> because only a subset of the
> continuum of real numbers finds exact representation in any particular
> floating-point format."
I think that is to be interpreted to mean the result will be rounded to
the closest representable value, and that's not fuzzy.
I've never heard of an IEEE float unit that gives different results
than another one...?

May 16, 2016 Re: Always false float comparisons

Posted in reply to Iain Buclaw

On Monday, 16 May 2016 at 06:48:19 UTC, Iain Buclaw wrote:
> I can't think of a case off the top of my head where too much precision caused a surprise. It's always when there is too little:
Wrong. You obviously don't do much system-level programming using floats.

May 16, 2016 Re: Always false float comparisons

On 16 May 2016 at 16:09, Iain Buclaw via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 16 May 2016 at 06:06, Manu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> On 16 May 2016 at 14:05, Manu <turkeyman@gmail.com> wrote:
>>> On 16 May 2016 at 13:03, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>> On 5/15/2016 7:01 PM, Manu via Digitalmars-d wrote:
>>>>>
>>>>> Are you saying 'float' in CTFE is not 'float'? I protest this about as strongly as I can muster...
>>>>
>>>>
>>>> I imagine you'll be devastated when you discover that the C++ Standard does not require 32 bit floats to have 32 bit precision either, and never did.
>>>>
>>>> :-)
>>>
>>> I've never read the C++ standard, but I have more experience with a wide range of real-world compilers than most, and it is rarely very violated. The times it is, we've known about it, and it has made us all very, very angry.
>>
>> Holy shit, it's just occurred to me that 'real' is only 64bits on arm
>> (and every non-x86 platform)...
>> That means a compiler running on an arm host will produce a different
>> binary than a compiler running on an x86 host!! O_O
>
> Which is why gcc/g++ (ergo gdc) uses floating point emulation. Getting consistent results at compile time, regardless of the host/target/cross configuration, trumps doing it natively.
Certainly. I understand this, and desire it in the frontend too.

May 16, 2016 Re: Always false float comparisons

Posted in reply to Ola Fosheim Grøstad

On 16 May 2016 at 09:06, Ola Fosheim Grøstad via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Monday, 16 May 2016 at 06:48:19 UTC, Iain Buclaw wrote:
>>
>> I can't think of a case off the top of my head where too much precision caused a surprise. It's always when there is too little:
>
>
> Wrong. You obviously don't do much system level programming using floats.
>
Feel free to give me any bug report or example. None that you've given so far in this thread relates to CTFE.

May 16, 2016 Re: Always false float comparisons

Posted in reply to Iain Buclaw

On Monday, 16 May 2016 at 06:34:04 UTC, Iain Buclaw wrote:
> This says more about promoting float operations to double than anything else, and has nothing to do with CTFE.
No, promoting to double is ok. NOT coercing the value to a 32-bit representation when it is requested is what is FUBAR, and that is the same issue as CTFE. But worse.
This alone is a good reason to avoid D in production. Debugging signal processing/3D code is hard enough as it is. Not having a reliable way to cast to a float32 representation is really, really bad (see the sketch below).
Random precision is not a good thing, in any way:
1. It does not improve accuracy in a predictable manner. If anything, it adds noise.
2. It makes harnessing library code by extensive testing nearly impossible.
3. It makes debugging of complex system-level floating point code very hard.
This ought to be obvious.
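
A sketch of the guarantee being asked for: cast(float) should round to the same 32-bit value whether it is evaluated during CTFE/constant folding or at run time (the runtime variable half is only a hypothetical way to defeat folding):

    // Folded at compile time:
    enum float atCompileTime = cast(float)(1.30 * 0.5);

    void main()
    {
        double half = 0.5;  // runtime value, defeats constant folding
        float atRunTime = cast(float)(1.30 * half);
        // The demanded property: both roundings must agree, every time.
        assert(atCompileTime == atRunTime);
    }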