May 16, 2016
On 5/16/16 3:31 AM, Walter Bright wrote:
>
> Ironically, this Microsoft article argues for greater precision for
> intermediate calculations, although Microsoft ditched 80 bits:
>
> https://msdn.microsoft.com/en-us/library/aa289157(VS.71).aspx

Do you have an explanation on why Microsoft ditched 80-bit floats? -- Andrei
May 16, 2016
On 5/16/16 4:10 AM, Walter Bright wrote:
> FP behavior has complex trade-offs with speed, accuracy, compatibility,
> and size. There are no easy, obvious answers.

That's a fair statement. My understanding is also that 80-bit math is on the wrong side of the tradeoff simply because it's disproportionately slow (again I cite http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/). All modern ALUs I looked at have 32- and 64-bit FP units only. I'm trying to figure out why. -- Andrei

May 16, 2016
On 5/16/2016 3:14 AM, Joseph Rushton Wakeling wrote:
> 1.2999999523162841796875
> 1.3000000000000000444089209850062616169452667236328125

Note the increase in correctness of the result by 10 digits.


> ... which is unintuitive, to say the least;

It isn't any less intuitive than:

   f + f + 1.3f

being calculated in 64 or 80 bit precision, which is commonplace, or for that matter:

   ubyte b = 200;
   ubyte c = 100;
   writeln(b + c);

giving an answer of 300 (instead of 44), which every C/C++/D compiler does.

May 16, 2016
On Monday, 16 May 2016 at 10:57:00 UTC, Walter Bright wrote:
> On 5/16/2016 3:14 AM, Joseph Rushton Wakeling wrote:
>> 1.2999999523162841796875
>> 1.3000000000000000444089209850062616169452667236328125
>
> Note the increase in correctness of the result by 10 digits.

As Adam mentioned, you keep saying "correctness" or "accuracy", when people are consistently talking to you about "consistency" ... :-)

I can always request more precision if I need or want it.  Getting different results for a superficially identical float * double calculation, because one was performed at compile time and another at runtime, is an inconsistency that it might be nicer to avoid.

>> ... which is unintuitive, to say the least;
>
> It isn't any less intuitive than:
>
>    f + f + 1.3f
>
> being calculated in 64 or 80 bit precision

It is less intuitive.  If someFloat + 1.3f is calculated in 64 or 80 bit precision at runtime, it's still constrained by the fact that someFloat only provides 32 bits of floating-point input to the calculation.

If someFloat + 1.3f is calculated instead at compile time, the reasonable assumption is that someFloat _still_ only brings 32 bits of floating-point input.  But as we've seen above, it doesn't.

> or for that matter:
>
>    ubyte b = 200;
>    ubyte c = 100;
>    writeln(b + c);
>
> giving an answer of 300 (instead of 44), which every C/C++/D compiler does.

The latter result, at least (AIUI), is consistent regardless of whether the calculation is done at compile time or at runtime.
May 16, 2016
On Monday, 16 May 2016 at 11:18:45 UTC, Joseph Rushton Wakeling wrote:
> As Adam mentioned, you keep saying "correctness" or "accuracy"

... meant to say '"correctness" or "precision"'.
May 16, 2016
On Monday, 16 May 2016 at 10:25:33 UTC, Andrei Alexandrescu wrote:
> On 5/16/16 12:37 AM, Walter Bright wrote:
>> Me, I think of that as "Who cares that you paid $$$ for an 80 bit CPU,
>> we're going to give you only 64 bits."
>
> I'm not sure about this. My understanding is that all SSE has hardware for 32 and 64 bit floats, and the 80-bit hardware is pretty much cut-and-pasted from the x87 days without anyone really looking into improving it. And that's been the case for more than a decade. Is that correct?
>
> I'm looking for example at http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/ and see that on all Intel and compatible hardware, the speed of 80-bit floating point operations ranges between much slower and disastrously slower.
>
> I think it's time to revisit our attitudes to floating point, which were formed last century in the heyday of x87. My perception is the world has moved to SSE and 32- and 64-bit float; the "real" type is a distraction for D; the whole let's do things in 128-bit during compilation is a time waster; and many of the original things we want to do with floating point are different without a distinction, and a further waste of our resources.
>
> It is a bit ironic that we worry about autodecoding (I'll destroy that later) whilst a landslide loss of speed and predictability in floating point math doesn't raise an eyebrow.
>
>
> Andrei

The AMD64 programmer's manual discourages the use of x87:

"For media and scientific programs that demand floating-point operations, it is often easier and more
powerful to use SSE instructions. Such programs perform better than x87 floating-point programs,
because the YMM/XMM register file is flat rather than stack-oriented, there are twice as many
registers (in 64-bit mode), and SSE instructions can operate on four or eight times the number of
floating-point operands as can x87 instructions. This ability to operate in parallel on multiple pairs of
floating-point elements often makes it possible to remove local temporary variables that would
otherwise be needed in x87 floating-point code."
May 16, 2016
On Monday, 16 May 2016 at 09:54:51 UTC, Iain Buclaw wrote:
> You're still using doubles.  Are you intentionally missing the point?

You'll never know; he's Poe's law in action. In the end, it doesn't matter whether he's a moron or doing it intentionally, the result is the same.

May 16, 2016
On Monday, 16 May 2016 at 10:29:02 UTC, Andrei Alexandrescu wrote:
> On 5/16/16 2:46 AM, Walter Bright wrote:
>> I used to do numerics work professionally. Most of the troubles I had
>> were catastrophic loss of precision. Accumulated roundoff errors when
>> doing numerical integration or matrix inversion are major problems. 80
>> bits helps dramatically with that.
>
> Aren't good algorithms helping dramatically with that?
>
> Also, do you have a theory that reconciles your assessment of the importance of 80-bit math with the fact that the computing world is moving away from it? http://stackoverflow.com/questions/3206101/extended-80-bit-double-floating-point-in-x87-not-sse2-we-dont-miss-it
>
>
> Andrei

Regardless of whether the compiler actually does it or not, the argument that extra precision is a problem is self-defeating. I don't think arguments for speed have been raised so far.
May 16, 2016
On 5/16/16 7:28 AM, deadalnix wrote:
> On Monday, 16 May 2016 at 09:54:51 UTC, Iain Buclaw wrote:
>> You're still using doubles.  Are you intentionally missing the point?
>
> You'll never know; he's Poe's law in action. In the end, it doesn't
> matter whether he's a moron or doing it intentionally, the result is the same.

To all: please do not engage trolls. The best way to keep their incompetent platitudes away is to get them bored. -- Andrei

May 16, 2016
On 5/16/16 7:33 AM, deadalnix wrote:
> Regardless of whether the compiler actually does it or not, the argument
> that extra precision is a problem is self-defeating.

I agree that this whole "we need less precision" argument would be difficult to accept.

> I don't think arguments
> for speed have been raised so far.

This may be the best angle in this discussion. For all I can tell, 80-bit is slow as molasses and on the road to getting slower. Isn't that enough of an argument to move away from it?


Andrei