May 18, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On Wednesday, 18 May 2016 at 09:21:30 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 18 May 2016 at 07:21:30 UTC, Joakim wrote:
>> On Wednesday, 18 May 2016 at 05:49:16 UTC, Ola Fosheim Grøstad wrote:
>>> On Wednesday, 18 May 2016 at 03:01:14 UTC, Joakim wrote:
>>>> There is nothing "random" about increasing precision till the end, it follows a well-defined rule.
>>>
>>> Can you please quote that well-defined rule?
>>
>> It appears to be "the compiler carries everything internally to 80 bit precision, even if they are typed as some other precision."
>> http://forum.dlang.org/post/nh59nt$1097$1@digitalmars.com
>
> "The compiler" means: implementation defined. That is the same as not being well-defined. :-)

Welcome to the wonderful world of C++! :D

More seriously, it is well-defined for that implementation; you did not raise the issue of the spec till now. In fact, you seemed not to care what the specs say.

>> I don't understand why you're using const for one block and not the other, seems like a contrived example. If the precision of such constants matters so much, I'd be careful to use the same const float everywhere.
>
> Now, that is a contrived defense for brittle language semantics! :-)

No, it has nothing to do with language semantics and everything to do with bad numerical programming.

>> If matching such small deltas matters so much, I wouldn't be using floating-point in the first place.
>
> Why not? The hardware gives the same delta. It only goes wrong if the compiler decides to "improve".

Because floating-point is itself fuzzy, in so many different ways. You are depending on exactly repeatable results with a numerical type that wasn't meant for it.

>>> It depends on the unit tests running with the exact same precision as the production code.
>>
>> What makes you think they don't?
>
> Because the language says that I cannot rely on it and the compiler implementation proves that to be correct.

You keep saying this: where did anyone mention unit tests not running with the same precision till you just brought it up out of nowhere? The only prior mention was that compile-time calculation of constants that are then checked for bit-exact equality in the tests might have problems, but that's certainly not all tests, and I've repeatedly pointed out you should never be checking for bit-exact equality.

>>> D is doing it wrong because it is thereby forcing programmers to use algorithms that are 10-100x slower to get reliable results.
>>>
>>> That is _wrong_.
>>
>> If programmers want to run their code 10-100x slower to get reliably inaccurate results, that is their problem.
>
> Huh?

The point is that what you consider reliable will be less accurate, sometimes much less.

>> If you're so convinced it's exact for a few cases, then check exact equality there. For most calculations, you should be using approxEqual.
>
> I am sorry, but this is not a normative rule at all. The rule is that you check for the bounds required. If it is exact, it just means the bounds are the same value (e.g. tight).
>
> It does not help to say that people should use "approxEqual", because it does not improve on correctness. Saying such things just means that non-expert programmers assume that guessing the bounds will be sufficient. Well, it isn't sufficient.

The point is that there are _always_ bounds, so you can never check for the same value. Almost any guessed bounds will be better than incorrectly checking for the bit-exact value.

>> Since the real error bound is always larger than that, almost any error bound you pick will tend to be closer to the real error bound, or at least usually bigger and therefore more realistic, than checking for exact equality.
>
> I disagree. It is much better to get extremely wrong results frequently and therefore detect the error in testing.
>
> What you are saying is that it is better to get extremely wrong results infrequently, which usually leads to errors passing testing and entering production.
>
> In order to test well you also need to understand what input makes the algorithm unstable/fragile.

Nobody is talking about the general principle of how often you get wrong results or unit testing. We were talking about a very specific situation: how should compile-time constants be checked, and how should variables be compared to constants, compile-time or not, to avoid exceptional situations. My point is that both should always be thought about.

In the latter case, ie your f(x) example, it has nothing to do with error bounds, but with the fact that your f(x) is not only invalid at 2, but in a range around 2. Now, both will lead to fewer "wrong results," but those are wrong results you _should_ be trying to avoid as early as possible.

>> The computer doesn't know that, so it will just plug that x in and keep cranking, till you get nonsense data out the end, if you don't tell it to check that x isn't too close to 2 and not just 2.
>
> Huh? I am not getting nonsense data. I am getting what I am asking for, I only want to avoid dividing by zero because it will make the given hardware 100x slower than the test.

Zero is not the only number that screws up that calculation.

>> You have a wrong mental model that the math formulas are the "real world," and that the computer is mucking it up.
>
> Nothing wrong with my mental model. My mental model is the hardware specification + the specifics of the programming platform. That is the _only_ model that matters.
>
> What D prevents me from getting is the specifics of the programming platform by making the specifics hidden.

Your mental model determines what you think is valid input to f(x) and what isn't; that has nothing to do with D. You want D to provide you a way to only check for 0.0, whereas my point is that there are many numbers in the neighborhood of 0.0 which will screw up your calculation, so really you should be using approxEqual.

>> The truth is that the computer, with its finite maximums and bounded precision, better models _the measurements we make to estimate the real world_ than any math ever written.
>
> I am not estimating anything. I am synthesising artificial worlds. My code is the model, the world is my code running at specific hardware.
>
> It is self contained. I don't want the compiler to change my model because that will generate the wrong world. ;-)

It isn't changing your model; you can always use a very small threshold in approxEqual. Yes, a few more values would be disallowed as input and output than if you were to compare exactly to 0.0, but your model is almost certainly undefined there too.

If your point is that you're modeling artificial worlds that have nothing to do with reality, you can always change your threshold around 0.0 to be much smaller, and who cares if it can't go all the way to zero, it's all artificial, right? :) If you're modeling the real world, any function that blows up and gives you bad data blows up over a range, never at a single point, because that's how measurement works.

>>>> Oh, it's real world alright, you should be avoiding more than just 2 in your example above.
>>>
>>> Which number would that be?
>>
>> I told you, any numbers too close to 2.
>
> All numbers close to 2 in the same precision will work out ok.

They will give you large numbers that can be represented in the computer, but that do not describe the real world, because such formulas are really invalid in a neighborhood of 2, not just at 2.

>> On the contrary, it is done because 80-bit is faster and more precise, whereas your notion of reliable depends on an incorrect notion that repeated bit-exact results are better.
>
> 80 bit is much slower. 80 bit mul takes 3 micro ops, 64 bit takes 1. Without SIMD 64 bit is at least twice as fast. With SIMD multiply-add is maybe 10x faster in 64bit.

I have not measured this speed myself so I can't say.

> And it is neither more precise nor more accurate when you don't get consistent precision.
>
> In the real world you can get very good performance for the desired accuracy by using unstable algorithms and adding a stage that compensates for the instability. That does not mean that it is acceptable to have differences in the bias, as that can lead to accumulating an offset that brings the result away from zero (thus a loss of precision).

A lot of hand-waving about how more precision is worse, with no real example, which is what Walter keeps asking for.

>> You noted that you don't care that the C++ spec says similar things, so I don't see why you care so much about the D spec now. As for that scenario, nobody has suggested it.
>
> I care about what the C++ spec says. I care about how the platform interprets the spec. I never rely on _ONLY_ the C++ spec for production code.

Then you must be perfectly comfortable with a D spec that says similar things. ;)

> You have said previously that you know the ARM platform. On Apple CPUs you have 3 different floating point units: 32 bit NEON, 64 bit NEON and 64 bit IEEE.
>
> It supports 1x64bit IEEE, 2x64bit NEON and 4x32 bit NEON.
>
> You have to know the language, the compiler and the hardware to make this work out.

Sure, but nobody has suggested interchanging the three randomly.

>>> And so is "float" behaving differently than "const float".
>>
>> I don't believe it does.
>
> I have proven that it does, and posted it in this thread.

I don't think that example has much to do with what we're talking about. It appears to be some sort of constant folding in the assert that produces the different results, as Joe says, which goes away if you use approxEqual. If you look at the actual const float initially, it is very much a float, contrary to your assertions.
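To make the tolerance argument concrete, here is a minimal D sketch of the comparison style being advocated above. The f(x) stand-in and the specific tolerances are illustrative assumptions, not taken from the thread; real bounds have to be chosen per problem.

```d
import std.math : abs, approxEqual;
import std.stdio : writeln;

// Hypothetical stand-in for the f(x) under discussion: singular at 2,
// and numerically useless in a whole neighbourhood around 2.
double f(double x) { return 1.0 / (x - 2.0); }

void main()
{
    double expected = 0.1 + 0.2;
    double got      = 0.3;

    // Fragile: bit-exact equality on floating point.
    writeln(got == expected);                         // false on most targets

    // Robust: compare within relative/absolute bounds instead.
    writeln(approxEqual(got, expected, 1e-9, 1e-12)); // true

    // Guard a neighbourhood of the singular point, not just the point itself.
    double x = 2.0000001;
    if (abs(x - 2.0) > 1e-6)
        writeln(f(x));
    else
        writeln("x too close to 2 for f(x) to be trustworthy");
}
```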
May 18, 2016 Re: Always false float comparisons
Posted in reply to Ethan Watson

On 5/18/2016 2:54 AM, Ethan Watson wrote:
> On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
>> MSVC doesn't appear to have a switch that does what you ask for
>
> I'm still not entirely sure what the /fp switch does for x64 builds. The
> documentation is not clear in the slightest and I haven't been able to find any
> concrete information. As near as I can tell it has no effect as the original
> behaviour was tied to how it handles the x87 control words. But it might also be
> possible that the SSE instructions emitted can differ depending on what
> operation you're trying to do. I have not dug deep to see exactly how the code
> gen differs. I can take a guess that /fp:precise was responsible for promoting
> my float to a double to call CRT functions, but I have not tested that so that's
> purely theoretical at the moment.
>
> Of course, while this conversation has mostly been for compile time constant
> folding, the example of passing a value from the EE and treating it as a
> constant in the VU is still analogous to calculating a value at compile time in
> D at higher precision than the instruction set the runtime code is compiled to
> work with.
>
> /arch:sse2 is the default with MSVC x64 builds (Xbox One defaults to /arch:avx),
> and it sounds like the DMD has defaulted to sse2 for a long time. The exception
> being the compile time behaviour. That compile time behaviour conforming to
> the runtime behaviour is an option I want, with the default being whatever is
> decided in here. Executing code at compile time at a higher precision than what
> SSE dictates is effectively undesired behaviour for our use cases.
>
> And in cases where we compile code for another architecture on x64 (let's say
> ARM code with NEON instructions, as it's the most common case thanks to iOS
> development) then it would be forced to fall back to the default. Fine for most
> use cases as well. It would be up to the user to compile their ARM code on an
> ARM processor to get the code execution match if they need it.
Again, even if the precision matches, the rounding will NOT match, and you will get different results randomly dependent on the exact operand values.
If those differences matter, then you'll randomly be up all night debugging it. If you're willing to try the approach I mentioned, it'll cost you a bit more time up front, but may save a lot of agony later.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Ola Fosheim Grøstad

On 5/18/2016 3:46 AM, Ola Fosheim Grøstad wrote:
> On Wednesday, 18 May 2016 at 09:13:35 UTC, Iain Buclaw wrote:
>> Can you back that up statistically? Try running this same operation 600
>> million times, then plot a graph of the result from each run so we can get
>> an idea of just how random or arbitrary it really is.
>
> Huh? This isn't about statistics.
It is when you say 'random'. D's fp math is not random, it is completely deterministic, and I've told you so before. You've been pushing the 'random' notion here, and other people have picked it up and inferred they'd get random results every time they ran a D compiled program.
Please use correct meanings of words.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On 18 May 2016 at 18:21, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/18/2016 12:56 AM, Ethan Watson wrote:
>>
>> > In any case, the problem Manu was having was with C++.
>> VU code was all assembly, I don't believe there was a C/C++ compiler for it.
>
>
> The constant folding part was where, then?
The comparison was a 24bit fpu doing runtime work, where some constant input data was calculated with a separate 32bit fpu. The particulars were never intended to be relevant to the conversation, except the fact that two float units of different precision were producing output that then had to be reconciled.
The analogy was to CTFE doing all its work at 80bits, and then the
processor doing work with the types explicitly stated by the
programmer; a runtime calculation compared against the same compile
time calculation is likely to be quite radically different. I don't care
about the precision, I just care that they're really different.
Ideally, CTFE would produce a result that is as similar to the runtime
result as reasonably possible, and I expect using the stated types to
do the calculations would get much much closer.
I don't know if a couple of least-significant bits of difference would
have caused problems for us, I suspect not, but I know that doing math
at radically different precisions (ie, 32bits vs 80bits) does lead to
radically different results, not just a couple of bits. That is my
concern wrt reproduction of my anecdote from the PS2's and Gamecube's 24bit
fpus.
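For readers who want to see the kind of drift being described, here is a small, hypothetical D illustration (not code from the thread); whether and by how much the two results differ depends entirely on the compiler and target:

```d
import std.stdio : writefln;

float accumulate()
{
    float s = 0;
    foreach (i; 0 .. 100)
        s += 0.1f;      // each step may be held at higher precision during CTFE
    return s;
}

void main()
{
    enum  ct = accumulate();   // forced through CTFE at compile time
    float rt = accumulate();   // evaluated with the target's runtime float code
    writefln("compile time: %.9f  run time: %.9f  equal: %s", ct, rt, ct == rt);
}
```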
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
> On 5/18/2016 1:30 AM, Ethan Watson wrote:
>>> You're also asking for a mode where the compiler for one machine is supposed
>>> to behave like hand-coded assembler for another machine with a different
>>> instruction set.
>>
>> Actually, I'm asking for something exactly like the arch option for MSVC/-mfmath
>> option for GCC/etc, and have it respect that for CTFE.
>
>
> MSVC doesn't appear to have a switch that does what you ask for:
>
> https://msdn.microsoft.com/en-us/library/e7s85ffb.aspx
Apologies if this has been addressed in the thread; it's a difficult structure to follow for technical discussion. You seem positive about software implementations of float. What are your thoughts on having the compile time implementation of a given type mirror the behaviour of the runtime version?
Fundamentally, whatever rules are chosen, it would seem better to have fewer rules for people to remember.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Wednesday, 18 May 2016 at 11:17:14 UTC, Walter Bright wrote:
> Again, even if the precision matches, the rounding will NOT match, and you will get different results randomly dependent on the exact operand values.
We've already been burned by middleware/APIs toggling MMX flags on and off and not cleaning up after themselves, and as such we strictly control those flags going into and out of such areas. We even have a little class, with implementations for x87 (thoroughly deprecated) and SSE, that is used in a RAII manner, copying the MMX flag on construction and restoring it on destruction.
I appreciate that it sounds like I'm starting to stretch to hold to my point, but I imagine we'd also be able to control such things with the compiler - or at least know what flags it uses so that we can ensure consistent behaviour between compilation and runtime.
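The class described above is C++ and engine-specific, so it is not reproduced here; as a rough D analogue of the same save/restore discipline (a sketch only, assuming core.stdc.fenv is available and covers the flags in question):

```d
import core.stdc.fenv;

void runUnderControlledFpState()
{
    fenv_t saved;
    fegetenv(&saved);              // capture rounding mode and status flags
    scope(exit) fesetenv(&saved);  // restore them on every exit path

    fesetround(FE_TOWARDZERO);     // e.g. a deliberate, temporary change
    // ... call into code that must see, and may modify, the FP environment ...
}
```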
May 18, 2016 Re: Always false float comparisons
Posted in reply to ixid

On 18 May 2016 at 21:28, ixid via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:
>>
>> On 5/18/2016 1:30 AM, Ethan Watson wrote:
>>>>
>>>> You're also asking for a mode where the compiler for one machine is
>>>> supposed
>>>> to behave like hand-coded assembler for another machine with a different
>>>> instruction set.
>>>
>>>
>>> Actually, I'm asking for something exactly like the arch option for
>>> MSVC/-mfmath
>>> option for GCC/etc, and have it respect that for CTFE.
>>
>>
>>
>> MSVC doesn't appear to have a switch that does what you ask for:
>>
>> https://msdn.microsoft.com/en-us/library/e7s85ffb.aspx
>
>
> Apologies if this has been addressed in the thread, it's a difficult structure to follow for technical discussion. You seem positive about software implementations of float. What are your thoughts on having the compile time implementation of a given type mirror the behaviour of the runtime version?
>
> Fundamentally whatever rules are chosen it would seem better to have fewer rules for people to remember.
That's precisely the suggestion; that compile time execution of a
given type mirror the runtime, that is, matching precisions in this
case.
...within reason; as Walter has pointed out consistently, it's very
difficult to be PERFECT for all the reasons he's been repeating, but
there's still a massive difference between the runtime executing a
bunch of float code, and the compile time executing it all promoted to
80bits. Results will drift apart very quickly.
May 18, 2016 Re: Always false float comparisons
Posted in reply to Joseph Rushton Wakeling

On Wednesday, 18 May 2016 at 11:12:16 UTC, Joseph Rushton Wakeling wrote:
> I'm not sure that the `const float` vs `float` is the difference per se. The difference is that in the examples you've given, the `const float` is being determined (and used) at compile time.

They both have to be determined at compile time... as there is only a cast involved. The real issue is that the const float binding is treated textually.

> But a `const float` won't _always_ be determined or used at compile time, depending on the context and manner in which the value is set.

Which makes the problem worse, not better.
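The original example being argued about is not included in this excerpt; the snippet below is only a guess at its general shape, to show the asymmetry under discussion (the exact outcome of each comparison depends on the compiler and on how much folding it does):

```d
import std.stdio : writeln;

void main()
{
    const float c = 1.30;   // initializer known at compile time
    float       f = 1.30;   // same value through a mutable variable

    // Both are 32-bit floats compared against the double literal 1.30.
    // Depending on how much the compiler folds, the two comparisons may be
    // evaluated at different precisions and give different answers.
    writeln(c == 1.30);
    writeln(f == 1.30);
}
```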
May 18, 2016 Re: Always false float comparisons
Posted in reply to Walter Bright

On Wed, 18 May 2016 04:11:08 -0700, Walter Bright <newshound2@digitalmars.com> wrote:
> On 5/18/2016 3:15 AM, deadalnix wrote:
>> On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:
>>> Trying to make D behave exactly like various C++ compilers do, with all their semi-documented behavior and semi-documented switches that affect constant folding behavior, is a hopeless task.
>>>
>>> I doubt various C++ compilers are this compatible, even if they follow the same ABI.
>>
>> They aren't. For instance, GCC uses arbitrary precision FB, and LLVM uses 128 bits soft floats in their innards.
>
> Looks like LLVM had the same idea as myself.
>
> Anyhow, this pretty much destroys the idea that I have proposed some sort of cowboy FP that's going to wreck FP programs.
>
> (What is arbitrary precision FB?)

A claim from GMP, a library used by GCC:

> GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on.

It's difficult to find reliable information, but I think GCC always uses the target precision for constant folding:
https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gccint/Floating-Point.html#Floating-Point

> Because different representation systems may offer different amounts of range and precision, all floating point constants must be represented in the target machine's format. Therefore, the cross compiler cannot safely use the host machine's floating point arithmetic; it must emulate the target's arithmetic. To ensure consistency, GCC always uses emulation to work with floating point values, even when the host and target floating point formats are identical.
May 18, 2016 Re: Always false float comparisons
Posted in reply to tsbockman

On Wednesday, 18 May 2016 at 10:25:10 UTC, tsbockman wrote:
> On Wednesday, 18 May 2016 at 08:38:07 UTC, Era Scarecrow wrote:
>> try {} // Considers the result of 1 line of basic math to be caught by:
>> carry {} //only activates if carry is set
>> overflow {} //if overflowed during some math
>> modulus(m){} //get the remainder as m after a division operation
>> mult(dx) {} //get upper 32/64/whatever after a multiply and set as dx
>>
>> Of course I'd understand if some hardware doesn't offer such support, so the else could be thrown in to allow a workaround code to detect such an event, or only allow it if it's a compliant architecture. Although workaround detection is always possible, just not as fast as hardware supplied.
>
> https://code.dlang.org/packages/checkedint
> https://dlang.org/phobos/core_checkedint.html
Glancing at checkedint, I really don't see it as being the same as what I'm talking about. Overflow/carry for add, perhaps, but unless it breaks down to a single instruction for the compiler to determine if it needs to do something, I see it as a failure (at best, a workaround).
That's just my thoughts. CheckedInt simply _doesn't_ cover what I was talking about: obtaining the modulus at zero cost/extra instructions after doing a division, which the hardware's opcode gives you as a side effect (unless the compiler recognizes the pattern and offers it as an optimization), or having the full result of a multiply on hand (one that exceeds its built-in size, long.max*long.max = 128bit result, which the hardware hands to you if you check the register it stores the other half of the result in).
Perhaps what I want is more limited to handling certain tasks (making software math libraries) but I'd still like/want to see access to the other effects of these opcodes.
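For contrast, a minimal sketch of what the linked core.checkedint module does expose (overflow/carry-style flags only; it does not hand back the remainder of a division or the upper half of a full-width multiply, which is the gap being pointed out above):

```d
import core.checkedint : adds, mulu;
import std.stdio : writeln;

void main()
{
    bool overflow = false;

    // Result plus an overflow flag, which compilers can map onto the CPU's
    // carry/overflow bits.
    ulong p = mulu(ulong.max, 2UL, overflow);
    writeln(p, "  overflowed: ", overflow);

    overflow = false;
    int s = adds(int.max, 1, overflow);
    writeln(s, "  overflowed: ", overflow);
}
```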