October 30, 2013
On 10/30/2013 6:50 AM, Don wrote:
>> Unpredictable, sure, but it is unpredictable in that the error is less than a
>> guaranteed maximum error. The error falls in a range 0<=error<=epsilon. As an
>> analogy, all engineering parts are designed with a maximum deviation from the
>> ideal size, not a minimum deviation.
>
> I don't think the analogy is strong. There's no reason for there to be any error
> at all.
>
> Besides, in the x87 case, there are exponent errors as well as precision errors. E.g.,
> double.min * double.min can be zero on some systems, but non-zero on others.
> This causes a total loss of precision.
>
> If this is allowed to happen anywhere (and not even consistently) then it's back
> to the pre-IEEE 754 days: underflow and overflow lead to unspecified behaviour.
>
> The idea that extra precision is always a good thing, is simply incorrect.

Not exactly what I meant - I mean the algorithm should be designed so that extra precision does not break it.


> The problem is that, if calculations can carry extra precision, double rounding
> can occur. This is a form of error that doesn't otherwise exist. If all
> calculations are allowed to do it, there is absolutely nothing you can do to fix
> the problem.
>
> Thus we lose the other major improvement from IEEE 754: predictable rounding
> behaviour.
>
> Fundamentally, there is a primitive operation "discard extra precision" which is
> crucial to most mathematical algorithms but which is rarely explicit.
> In theory in C and C++ this is applied at each sequence point, but in practice
> that's not actually done (for x87 anyway) -- for performance, you want to be
> able to keep values in registers sometimes. So C didn't get this exactly right.
> I think we can do better. But the current behaviour is worse.
>
> This issue is becoming more obvious in CTFE because the extra precision is not
> merely theoretical, it actually happens.

I think it's reasonable to add 3 functions (similar to peek and poke) that force rounding to float/double/real precision. Inserting those into the code where the algorithm requires them would make the intent far clearer than the C idea of "sequence points", where you have no clue whether they matter or not.
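Something along these lines, as a sketch (the names and the store/reload trick are only illustrative - a real implementation would be a compiler intrinsic the optimizer may not elide):

// Illustrative only: force a value to exactly float/double/real precision.
// pragma(inline, false) merely approximates the optimizer barrier a real
// intrinsic would guarantee.
pragma(inline, false)
float toFloatPrec(real x)   { float f = x; return f; }

pragma(inline, false)
double toDoublePrec(real x) { double d = x; return d; }

pragma(inline, false)
real toRealPrec(real x)     { return x; }

unittest
{
    real r = 0.1L + 0.2L;        // may carry 80-bit precision
    double d = toDoublePrec(r);  // rounded to 64-bit double here
    float  f = toFloatPrec(r);   // rounded to 32-bit float here
}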
October 30, 2013
On 10/23/2013 06:16 PM, Walter Bright wrote:
>
> A D compiler is allowed to compute floating point results at arbitrarily
> large precision - the storage sizes (float, double, real) only specify
> the minimum precision.

It seems there is some change in C99 to address excess precision.
Namely, assignments and casts are NOT allowed to retain greater precision.
I'm not sure whether "assignments" here means first storing and then loading a value, so it may be useful to review that rule.

http://stackoverflow.com/questions/503436/how-to-deal-with-excess-precision-in-floating-point-computations/503523#503523
http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html
October 30, 2013
On 10/30/2013 07:29 PM, Martin Nowak wrote:
> It seems there is some change in C99 to address excess precision.
> Namely, assignments and casts are NOT allowed to retain greater precision.
> I'm not sure whether "assignments" here means first storing and then loading a
> value, so it may be useful to review that rule.
>
Issue 7455 - Allow a cast to discard precision from a floating point during constant folding
http://d.puremagic.com/issues/show_bug.cgi?id=7455
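For illustration, the requested semantics would be roughly this (it is what the enhancement asks for, not what is guaranteed today):

// Illustration only: the explicit cast should act as a rounding point
// even during constant folding / CTFE.
enum real   r = 1.0L / 3.0L;     // may be folded at full real precision
enum float  f = cast(float) r;   // requested: rounded to float right here
enum double d = cast(double) r;  // requested: rounded to double right here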
October 30, 2013
On Oct 23, 2013 5:21 PM, "Walter Bright" <newshound2@digitalmars.com> wrote:
>
> On 10/23/2013 8:44 AM, Apollo Hogan wrote:
>>
>> That is: without optimization the run-time "normalization" is correct. With
>> optimization it is broken.  That is pretty easy to work around by simply
>> compiling the relevant library without optimization.  (Though it would be nice
>> to have, for example, pragmas to mark some functions as "delicate" or
>> "non-optimizable".)  A bigger issue is that the compile-time normalization call
>> gives the 'wrong' answer consistently with or without optimization.  One would
>> expect that evaluating a pure function at run-time or compile-time would give
>> the same result...
>
>
> A D compiler is allowed to compute floating point results at arbitrarily
> large precision - the storage sizes (float, double, real) only specify the
> minimum precision.
>
> This behavior is fairly deeply embedded into the front end, optimizer,
> and various back ends.
>
> To precisely control maximum precision, I suggest using inline assembler
> to use the exact sequence of instructions needed for double-double operations.

Why do I feel like you recommend writing code in assembler every other post you make. :o)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


November 05, 2013
On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
> On 10/30/2013 6:50 AM, Don wrote:
>>> Unpredictable, sure, but it is unpredictable in that the error is less than a
>>> guaranteed maximum error. The error falls in a range 0<=error<=epsilon. As an
>>> analogy, all engineering parts are designed with a maximum deviation from the
>>> ideal size, not a minimum deviation.
>>
>> I don't think the analogy is strong. There's no reason for there to be any error
>> at all.
>>
>> Besides, in the x87 case, there are exponent errors as well as precision errors. E.g.,
>> double.min * double.min can be zero on some systems, but non-zero on others.
>> This causes a total loss of precision.
>>
>> If this is allowed to happen anywhere (and not even consistently) then it's back
>> to the pre-IEEE 754 days: underflow and overflow lead to unspecified behaviour.
>>
>> The idea that extra precision is always a good thing, is simply incorrect.
>
> Not exactly what I meant - I mean the algorithm should be designed so that extra precision does not break it.

Unfortunately, that's considerably more difficult than writing an algorithm for a known precision.
And it is impossible in any case where you need full machine precision (which applies to practically all library code, and most of my work).


>> The problem is that, if calculations can carry extra precision, double rounding
>> can occur. This is a form of error that doesn't otherwise exist. If all
>> calculations are allowed to do it, there is absolutely nothing you can do to fix
>> the problem.
>>
>> Thus we lose the other major improvement from IEEE 754: predictable rounding
>> behaviour.
>>
>> Fundamentally, there is a primitive operation "discard extra precision" which is
>> crucial to most mathematical algorithms but which is rarely explicit.
>> In theory in C and C++ this is applied at each sequence point, but in practice
>> that's not actually done (for x87 anyway) -- for performance, you want to be
>> able to keep values in registers sometimes. So C didn't get this exactly right.
>> I think we can do better. But the current behaviour is worse.
>>
>> This issue is becoming more obvious in CTFE because the extra precision is not
>> merely theoretical, it actually happens.
>
> I think it's reasonable to add 3 functions (similar to peek and poke) that force rounding to float/double/real precision. Inserting those into the code where the algorithm requires them would make the intent far clearer than the C idea of "sequence points", where you have no clue whether they matter or not.

Yeah, the sequence points thing is a bit of a failure. It introduces such a performance penalty that compilers don't actually respect it, and nobody would want them to.

A compiler intrinsic, which generates no code (simply inserting a barrier for the optimiser) sounds like the correct approach.

Coming up with a name for this operation is difficult.
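For illustration, here is roughly how such a barrier would be used in Dekker's fast two-sum, which silently breaks if intermediates keep extra precision. (fpBarrier is just a placeholder name, and the non-inlined function only approximates what the real no-op intrinsic would guarantee.)

pragma(inline, false)
double fpBarrier(double x) { return x; }   // stand-in for the intrinsic

// Fast two-sum (requires |a| >= |b|): sum + err == a + b exactly, but
// only if every intermediate is rounded to exactly double precision.
void fastTwoSum(double a, double b, out double sum, out double err)
{
    sum = fpBarrier(a + b);
    err = fpBarrier(b - fpBarrier(sum - a));
    // With 80-bit intermediates and no barrier, the computed err need not
    // equal the true rounding error of the double-precision sum.
}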


November 05, 2013
Don:

> A compiler intrinsic, which generates no code (simply inserting a barrier for the optimiser) sounds like the correct approach.
>
> Coming up with a name for this operation is difficult.

Something like this?

noFPOpt
naiveFP
literalFP
asisFP
FPbarrier
barrierFP

Bye,
bearophile
November 05, 2013
On Tuesday, 5 November 2013 at 16:31:23 UTC, bearophile wrote:
> Don:
>
>> A compiler intrinsic, which generates no code (simply inserting a barrier for the optimiser) sounds like the correct approach.
>>
>> Coming up with a name for this operation is difficult.
>
> Something like this?
>
> noFPOpt
> naiveFP
> literalFP
> asisFP
> FPbarrier
> barrierFP


The name should be about the semantics, not the intended behavior (optimization barrier). My ideas:

x + precisely!float(y - z)
x + exactly!float(y - z)
x + exact!float(y - z)
x + strictly!float(y - z)
x + strict!float(y - z)
x + strictfp!float(y - z)  // familiar to Java programmers
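Whatever the name, it could be spelled as a template whose body is conceptually empty - a minimal sketch (the pragma only approximates the real no-op intrinsic):

pragma(inline, false)
T strictly(T)(T value) { return value; }  // argument is converted to T at the
                                          // call site, discarding extra bits

// usage, as above:  x + strictly!float(y - z)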

November 06, 2013
On 11/5/2013 8:19 AM, Don wrote:
> On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
>> Not exactly what I meant - I mean the algorithm should be designed so that
>> extra precision does not break it.
>
> Unfortunately, that's considerably more difficult than writing an algorithm for
> a known precision.
> And it is impossible in any case where you need full machine precision (which
> applies to practically all library code, and most of my work).

I have a hard time buying this. For example, when I wrote matrix inversion code, more precision always gave more accurate results.


> A compiler intrinsic, which generates no code (simply inserting a barrier for
> the optimiser) sounds like the correct approach.
>
> Coming up with a name for this operation is difficult.

float toFloatPrecision(real arg) ?

November 06, 2013
On 10/30/2013 3:36 PM, Iain Buclaw wrote:
> Why do I feel like you recommend writing code in assembler every other post you
> make. :o)

You're exaggerating. I recommend assembler in only 1 out of 4 posts.

November 06, 2013
On Wednesday, 6 November 2013 at 06:28:59 UTC, Walter Bright wrote:
> On 11/5/2013 8:19 AM, Don wrote:
>> On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
>>> Not exactly what I meant - I mean the algorithm should be designed so that
>>> extra precision does not break it.
>>
>> Unfortunately, that's considerably more difficult than writing an algorithm for
>> a known precision.
>> And it is impossible in any case where you need full machine precision (which
>> applies to practically all library code, and most of my work).
>
> I have a hard time buying this. For example, when I wrote matrix inversion code, more precision always gave more accurate results.

I had a chat with a fluid simulation expert (mostly plasma and microfluids) with a broad computing background, and the only algorithms he could think of that are by necessity fussy about maximum precision are elliptic curve algorithms.

>
>> A compiler intrinsic, which generates no code (simply inserting a barrier for
>> the optimiser) sounds like the correct approach.
>>
>> Coming up with a name for this operation is difficult.
>
> float toFloatPrecision(real arg) ?