May 15, 2016
On 5/15/2016 11:41 AM, Adam D. Ruppe wrote:
> Walter, you keep saying degraded results.
>
> The scientific folks keep saying *consistent* results.

Programs will not produce inconsistent results. You run the same program, you get the same results.

Scientists know that all measurements contain error. The idea is to identify what the maximum error is. I've never heard of a scientific experiment that required a minimum error.


> You might argue they should add "same hardware" to that list, but apparently it
> isn't that easy, or I doubt people would be talking about this.

You're arguing for floating point portability, which is something else again. Java tried that (discussed earlier) and it was a near complete failure.
May 15, 2016
On 5/15/2016 11:45 AM, Ola Fosheim Grøstad wrote:
> On Sunday, 15 May 2016 at 18:35:15 UTC, Walter Bright wrote:
>> On 5/15/2016 1:33 AM, Ola Fosheim Grøstad wrote:
>>> That is _very_ bad.
>>
>> Oh, rubbish. Did you know that (x+y)+z is not equal to x+(y+z) in floating
>> point math? FP math is simply not the math you learned in high school. You'll
>> have nothing but grief with it if you insist it must be the same.
>
> Err... these kinds of problems only apply to D.

Nope. They occur with every floating point implementation in every programming language. FP math does not adhere to associative identities.

  http://www.walkingrandomly.com/?p=5380

  http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Ironically, the identity is more likely to hold with D's extended precision for intermediate values than with other languages.
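For instance, a minimal D sketch (classic values, kept in runtime variables so the compiler is less tempted to fold the expressions):

  import std.stdio;

  void main()
  {
    double x = 0.1, y = 0.2, z = 0.3;
    // IEEE doubles: (0.1 + 0.2) + 0.3 rounds to 0.6000000000000001,
    // while 0.1 + (0.2 + 0.3) rounds to 0.6 exactly.
    writeln((x + y) + z == x + (y + z));  // false
  }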
May 15, 2016
On Sunday, 15 May 2016 at 20:34:19 UTC, Era Scarecrow wrote:
>  I noticed my code example fails to specify float for the immutable; fixing that, only the first line with f == 1.30 fails while all others succeed (for some reason?), which is good news.

Well, it isn't actually good news. Both "const float" and "immutable float" should fail, as you have requested coercion to 32-bit floats, which ought to change the value of "1.30"; unfortunately, D does not heed the coercion.

The net result is that adding const/immutable to a type can change the semantics of the program entirely at the whim of the compiler implementor.
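A minimal sketch of the D behavior being described (the results in the comments are as reported upthread; actual output depends on the compiler's constant folding):

  import std.stdio;

  void main()
  {
    float f = 1.30;
    const float c = 1.30;
    immutable float i = 1.30;

    writeln(f == 1.30);  // false: f was rounded to a 24-bit mantissa
    writeln(c == 1.30);  // reported true: the constant is folded at
                         // higher precision instead of being rounded
    writeln(i == 1.30);  // reported true, for the same reason
  }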

In comparison, C++:

  #include <iostream>

  int main()
  {
    float f = 1.30;
    const float c = 1.30;
    constexpr float i = 1.30;

    std::cout << (f==1.30) << std::endl;  // false
    std::cout << (c==1.30) << std::endl;  // false
    std::cout << (i==1.30) << std::endl;  // false
    std::cout << (1.30==(float)1.30) << std::endl;  // false
  }


May 15, 2016
On Sunday, 15 May 2016 at 21:06:22 UTC, Ola Fosheim Grøstad wrote:
>   std::cout << (f==1.30) << std::endl;  // false
>   std::cout << (c==1.30) << std::endl;  // false
>   std::cout << (i==1.30) << std::endl;  // false
>   std::cout << (1.30==(float)1.30) << std::endl;  // false

If we want equality, then we should compare against the representation of a 32-bit float:

 std::cout << (f == 1.2999999523162841796875) << std::endl; // true
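That decimal constant is simply the exact value of the 32-bit float, which printing with enough digits recovers (a quick sketch, here in D):

  import std.stdio;

  void main()
  {
    writefln("%.22f", 1.30f);  // 1.2999999523162841796875
  }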


May 15, 2016
On Sunday, 15 May 2016 at 21:01:14 UTC, Walter Bright wrote:
>> Err... these kinds of problems only apply to D.
>
> Nope. They occur with every floating point implementation in every programming language. FP math does not adhere to associative identities.

No. ONLY D gives different results for the same pure function call, because you bind the result to a "const float" rather than a "float".

Yes. Algorithms can break because of it.

> Ironically, the identity is more likely to hold with D's extended precision for intermediate values than with other languages.

No, it is not more likely to hold with D's hazard game. I don't know of a single language that doesn't heed a request to truncate/round the mantissa if it provides the means to do it.

I care about algorithms working the way I designed them to work and the way I have tested them. If I request rounding to a 24-bit mantissa then I _expect_ the rounding to take place. And yes, it can break algorithms if you don't.
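To make the request concrete, a short sketch of what "requesting rounding" means here (in D):

  double d = 1.30;          // nearest 53-bit-mantissa value to 1.30
  float f = cast(float) d;  // the request: round to a 24-bit mantissa
  assert(f != d);           // the rounded result is a different value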

May 15, 2016
On 5/15/2016 1:49 PM, poliklosio wrote:
>> Can you provide an example of a legitimate algorithm that produces degraded
>> results if the precision is increased?
>
> The real problem here is the butterfly effect (the chaos theory thing). Imagine
> programming a multiplayer game. Ideally you only need to synchronize user
> events, like key presses etc. Other computation can be duplicated on all
> machines participating in a session. Now imagine that some logic other than
> display (e.g. player-bullet collision detection) is using floating point. If
> those computations are not reproducible, a higher precision on one player's
> machine can lead to huge inconsistencies in game states between the machines
> (e.g. my character is dead on your machine but alive on mine)!
> If the game developer cannot achieve reproducibility, or it takes too much work,
> the workarounds can be very costly. He can, for example, convert the implementation
> to soft float or increase the amount of synchronization over the network.

If you use the same program binary, you *will* get the same results.


> Also I think Adam is making a very good point about general reproducibility here.
> If a researcher gets slightly different results, he has to investigate why,
> because he needs to rule out all the serious mistakes that could be the cause of
> the difference. If he finds out that the source was an innocuous refactoring of
> some D code, he will be rightly frustrated that D has caused so much unnecessary
> churn.
>
> I think the same problem can occur in mission-critical software which undergoes
> strict certification.


Frankly, I think you are setting unreasonable expectations. Today, if you take a standard-compliant C program and compile it with different switch settings, or run it on a machine with a different CPU, you can very well get different answers. If you reorganize the code's expressions, you can get different answers again.
May 15, 2016
On 5/15/2016 2:36 PM, Ola Fosheim Grøstad wrote:
> On Sunday, 15 May 2016 at 21:01:14 UTC, Walter Bright wrote:
>>> Err... these kinds of problems only apply to D.
>>
>> Nope. They occur with every floating point implementation in every programming
>> language. FP math does not adhere to associative identities.
>
> No. ONLY D gives different results for the same pure function call, because you
> bind the result to a "const float" rather than a "float".

It's a fact that with floating point arithmetic, (x+y)+z is not equal to x+(y+z), in every programming language, whether or not the result is bound to a const float or a float.


> Yes. Algorithms can break because of it.

So far, nobody has posted a legitimate one (i.e. not contrived).


>> Ironically, the identity is more likely to hold with D's extended precision
>> for intermediate values than with other languages.
> No, it is not more likely to hold with D's hazard game. I don't know of a single
> language that doesn't heed a request to truncate/round the mantissa if it
> provides the means to do it.

Standard C provides no such hooks for constant folding at compile time. Neither does Standard C++. In fact I know of no language that does. Perhaps you can provide a link?


> I care about algorithms working the way I designed them to work and the way I have
> tested them. If I request rounding to a 24-bit mantissa then I _expect_ the
> rounding to take place.

Example, please, of how you 'request' rounding/truncation.


> And yes, it can break algorithms if you don't.

Example, please.

----

Something you should know from the C++ Standard:

"The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby"  -- 5.0.11

D is clearly making FP arithmetic *more* predictable, not less. Since you care about FP results, I seriously suggest studying http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html and what the C++ Standard actually says.
May 15, 2016
On 5/15/2016 2:06 PM, Ola Fosheim Grøstad wrote:
> The net result is that adding const/immutable to a type can change the semantics
> of the program entirely at the whim of the compiler implementor.

C++ Standard allows the same increased precision, at the whim of the compiler implementor, as quoted to you earlier.

What your particular C++ compiler does is not relevant, as its behavior is not required by the Standard.

My proposal removes the "whim" by requiring 128 bit precision for CTFE.
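A sketch of where that matters, assuming the current behavior as described in this thread (whether the two values must match is exactly what is in dispute):

  import std.stdio;

  void main()
  {
    enum float folded = (0.1f + 0.2f) + 0.3f;  // constant-folded via CTFE,
                                               // possibly at higher precision
    float x = 0.1f, y = 0.2f, z = 0.3f;
    float runtime = (x + y) + z;               // each step rounded to float

    writeln(folded == runtime);  // not guaranteed either way today; the
                                 // proposal pins CTFE to 128-bit intermediates
  }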
May 15, 2016
On Sunday, 15 May 2016 at 22:49:27 UTC, Walter Bright wrote:
> On 5/15/2016 2:06 PM, Ola Fosheim Grøstad wrote:
>> The net result is that adding const/immutable to a type can change the semantics
>> of the program entirely at the whim of the compiler implementor.
>
> C++ Standard allows the same increased precision, at the whim of the compiler implementor, as quoted to you earlier.
>
> What your particular C++ compiler does is not relevant, as its behavior is not required by the Standard.
>

I have been watching this conversation for quite a while now, and it's interesting to see how we went from the problem

float f = 1.30;
assert(f == 1.30); // NO
assert(f == cast(float)1.30); //OK

to this (endless) discussion.

> My proposal removes the "whim" by requiring 128 bit precision for CTFE.

@Walter: I think this is a great idea and increased precision is always better!
As you pointed out, the only reason to decrease precision is as a trade-off for speed. Please keep doing/pushing this!
May 16, 2016
On Sunday, 15 May 2016 at 22:49:27 UTC, Walter Bright wrote:
> My proposal removes the "whim" by requiring 128 bit precision for CTFE.

 Is there an option to use a reproducible fraction type that doesn't have the issues floating point has?

 Consider: for most things, perhaps this format would work better?

 struct Ftype { //Ftype a placeholder name
   long whole;
   long fraction;
   long divider;
 }

 Now 1.30 can be represented perfectly as Ftype(1, 3, 10) (i.e. 1 + 3/10, or 13/10) and the comparison would never fail. It can represent incredibly small and incredibly large values, just not nearly to the degree floats/doubles can for scientific calculations, while avoiding the imprecision (and the requirement) of the FPU entirely.
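 A minimal sketch of the idea in D (the struct is repeated with an overflow-naive comparison; names and operations are illustrative only):

  struct Ftype
  {
    long whole;     // integer part
    long fraction;  // numerator of the fractional part
    long divider;   // denominator of the fractional part

    // Exact value as a single rational: (whole*divider + fraction) / divider
    long numerator() const { return whole * divider + fraction; }

    bool opEquals(Ftype rhs) const
    {
      // Cross-multiply to compare without dividing; a real implementation
      // would normalize or guard against overflow.
      return numerator() * rhs.divider == rhs.numerator() * divider;
    }
  }

  unittest
  {
    assert(Ftype(1, 3, 10) == Ftype(0, 13, 10));  // 1.30 == 13/10, exactly
  }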