May 16, 2016
On Monday, 16 May 2016 at 09:54:51 UTC, Iain Buclaw wrote:
> On 16 May 2016 at 10:52, Ola Fosheim Grøstad via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> On Monday, 16 May 2016 at 08:47:03 UTC, Iain Buclaw wrote:
>>>
>>> But you *didn't* request coercion to 32 bit floats.  Otherwise you would have used 1.30f.
>>
>>
>>         const float f = 1.3f;
>>         float c = f;
>>         assert(c*1.0 == f*1.0); // Fails! SHUTDOWN!
>>
>>
>
> You're still using doubles. Are you intentionally missing the point?

What is your point? My point is that no other language I have ever used has overruled my request for coercion to single-precision floats. And yes, binding a value to a single-precision float does qualify as a coercion on every platform I have ever used.

I should not have to implement a separate function with float parameters, when I already have a working function with real parameters, just to get reasonable behaviour:

void assert_equality(real x, real y) { assert(x == y); }

void main() {
    const float f = 1.3f;
    float c = f;
    assert_equality(f, c); // Fails!
}


Stuff like this makes the codebase brittle.

Not being able to control precision in unit tests makes unit tests potentially succeed when they should fail. That makes testing floating point code for correctness virtually impossible in D.

I don't use 32 bit float scalars to save space. I use them to get higher performance, and so that I can turn the code into pure SIMD code at a later stage. So I require SIMD-like semantics for float scalars. Anything less is unacceptable.

If I want high precision at compile time, I use rational numbers, like std::ratio in C++, which give me _exact_ values. If I want something more advanced, I use Maxima and literally copy in the results.

C++17 is getting hex literals for floating point for a reason: accurate bit-level representation.


May 16, 2016
On 5/16/16 8:37 AM, Walter Bright wrote:
> On 5/16/2016 3:27 AM, Andrei Alexandrescu wrote:
>> I'm looking for example at
>> http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/ and see
>> that on all
>> Intel and compatible hardware, the speed of 80-bit floating point
>> operations
>> ranges between much slower and disastrously slower.
>
> It's not a totally fair comparison. A matrix inversion algorithm that
> compensates for cumulative precision loss involves executing a lot more
> FP instructions (don't know the ratio).

It is rare to need to actually compute the inverse of a matrix. Most of the time it's of interest to solve a linear equation of the form Ax = b, for which a variety of good methods exist that don't entail computing the actual inverse.
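
For concreteness, a minimal sketch of that (the function name, the pivoting strategy, and the error handling are all illustrative, not anything from dmd or Phobos): solving Ax = b directly with Gaussian elimination and partial pivoting, no inverse in sight.

    import std.algorithm.mutation : swap;
    import std.math : abs;
    import std.stdio : writeln;

    // Solve A*x = b in place with Gaussian elimination and partial pivoting.
    // A and b are overwritten in the process.
    double[] solve(double[][] a, double[] b)
    {
        immutable n = b.length;
        foreach (k; 0 .. n)
        {
            // Pick the row with the largest magnitude in column k as pivot.
            size_t piv = k;
            foreach (i; k + 1 .. n)
                if (abs(a[i][k]) > abs(a[piv][k]))
                    piv = i;
            if (a[piv][k] == 0)
                throw new Exception("matrix is singular");

            swap(a[k], a[piv]);
            swap(b[k], b[piv]);

            // Eliminate column k below the pivot.
            foreach (i; k + 1 .. n)
            {
                immutable m = a[i][k] / a[k][k];
                foreach (j; k .. n)
                    a[i][j] -= m * a[k][j];
                b[i] -= m * b[k];
            }
        }

        // Back substitution on the resulting upper triangular system.
        auto x = new double[](n);
        foreach_reverse (i; 0 .. n)
        {
            double s = b[i];
            foreach (j; i + 1 .. n)
                s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
        return x;
    }

    void main()
    {
        auto a = [[2.0, 1.0], [1.0, 3.0]];
        auto b = [3.0, 5.0];
        writeln(solve(a, b)); // roughly [0.8, 1.4]
    }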

I emphasize the danger of this kind of thinking: letting 1-2 anecdotes trump a lot of other evidence. This is what happened with input vs. forward C++ iterators as the main motivator for a variety of concepts designs.

> Some counter points:

Glad to see these!

> 1. Go uses 256 bit soft float for constant folding.

Go can afford it because it does no interesting things during compilation. We can't.

> 2. Speed is hardly the only criterion. Quickly getting the wrong answer
> (and not just a few bits off, but total loss of precision) is of no value.

Of course. But it turns out the precision argument loses to the speed argument.

A. It's been many, many years, and very few, if any, people commend D for its superior approach to FP precision.

B. In contrast, a bunch of folks complain about anything slow, be it during compilation or at runtime.

Good algorithms lead to good precision, not 16 additional bits. Precision is overrated. Speed isn't.
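
To illustrate the "good algorithms" point with a standard example (Kahan compensated summation; the test values below are made up, not from any benchmark): plain double with a compensated loop recovers the digits that naive summation loses, with no extra hardware bits required.

    import std.stdio : writefln;

    // Kahan compensated summation: carry the rounding error of each addition
    // in c and feed it back into the next one.
    double kahanSum(const double[] xs)
    {
        double sum = 0.0, c = 0.0;
        foreach (x; xs)
        {
            immutable y = x - c;
            immutable t = sum + y;
            c = (t - sum) - y;   // what the addition rounded away
            sum = t;
        }
        return sum;
    }

    void main()
    {
        auto xs = new double[](10_000_000);
        xs[] = 0.1;

        double naive = 0.0;
        foreach (x; xs)
            naive += x;

        writefln("naive: %.10f", naive);         // drifts noticeably from 1,000,000
        writefln("kahan: %.10f", kahanSum(xs));  // within an ulp or two of the exact sum
    }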

> 3. Supporting 80 bit reals does not take away from the speed of
> floats/doubles at runtime.

Fast compile-time floats are of strategic importance to us. Give me fast FP during compilation and I'll make it go slow again, by putting it to amazing work.

> 4. Removing 80 bit reals will consume resources (adapting the test
> suite, rewriting the math library, ...).

I won't argue with that! Let's just focus on the right things: good, fast, streamlined computing using the appropriate hardware.

> 5. Other languages not supporting it means D has a capability they don't
> have. My experience with selling products is that if you have an
> exclusive feature that a particular customer needs, it's a slam dunk sale.

Again: I'm not seeing people coming out of the woodwork to praise D's precision. What they would indeed enjoy is amazing FP use during compilation, and that can be done only if CTFE FP is __FAST__. That _is_, indeed, the capability others are missing!
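
As a flavor of what that could look like, here is a sketch of a compile-time interpolation table for sin (the table size, the linear interpolation, and the assumption x >= 0 are all mine, and it relies on std.math.sin being evaluable in CTFE):

    import std.math : PI, sin;
    import std.stdio : writefln;

    enum size_t N = 256;

    // Build a sine table covering one full period. Because it initializes an
    // immutable module-level array, this runs entirely at compile time.
    float[N + 1] makeSinTable()
    {
        float[N + 1] t;
        foreach (i; 0 .. N + 1)
            t[i] = cast(float) sin(2.0 * PI * i / N);
        return t;
    }

    immutable float[N + 1] sinTable = makeSinTable();

    // Cheap runtime approximation: linear interpolation into the table.
    // Only valid for x >= 0 in this sketch.
    float fastSin(float x)
    {
        immutable pos = x / (2 * cast(float) PI) * N;
        immutable whole = cast(size_t) pos;
        immutable frac = pos - whole;
        immutable i = whole % N;
        return sinTable[i] + frac * (sinTable[i + 1] - sinTable[i]);
    }

    void main()
    {
        writefln("%.6f vs %.6f", fastSin(1.0f), sin(1.0));
    }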

> 6. My other experience with feature sets is if you drop things that make
> your product different, and concentrate on matching feature checklists
> with Major Brand X, customers go with Major Brand X.

This is true in principle but too vague to be relevant. Again, what evidence do you have that D's additional precision is revered? I see none, over like a decade.

> 7. 80 bit reals are there and they work. The support is mature, and is
> rarely worked on, i.e. it does not consume resources.

Yeah, I just don't want it used in any new code. Please. It's like using lead to build boats.

> 8. Removing it would break an unknown amount of code, and there's no
> reasonable workaround for those that rely on it.

Let's do what everybody did to x87: continue supporting, but slowly and surely drive it to obsolescence.

That's the right way.


Andrei

May 16, 2016
On Mon, May 16, 2016 at 05:37:58AM -0700, Walter Bright via Digitalmars-d wrote:
> On 5/16/2016 3:27 AM, Andrei Alexandrescu wrote:
[...]
> >I think it's time to revisit our attitudes to floating point, which was formed last century in the heydays of x87. My perception is the world has moved to SSE and 32- and 64-bit float; the "real" type is a distraction for D; the whole let's do things in 128-bit during compilation is a time waster; and many of the original things we want to do with floating point are different without a distinction, and a further waste of our resources.
> 
> Some counter points:
> 
> 1. Go uses 256 bit soft float for constant folding.
> 
> 2. Speed is hardly the only criterion. Quickly getting the wrong answer (and not just a few bits off, but total loss of precision) is of no value.

An algorithm that uses 80-bit FP but isn't written to account for precision loss will also produce wrong results, just like an algorithm that uses 64-bit FP and isn't written to account for precision loss.
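
A classic illustration (the coefficients are just picked to make the effect obvious): the textbook quadratic formula loses the small root of x^2 + b*x + c to cancellation, and running the very same formula in 80-bit real changes nothing, while an algorithmic rewrite fixes it in plain double.

    import std.math : sqrt;
    import std.stdio : writefln;

    void main()
    {
        // Roots of x^2 + b*x + c with b = -1e12, c = 1: roughly 1e12 and 1e-12.
        enum double b = -1.0e12, c = 1.0;

        // Textbook formula for the small root: catastrophic cancellation.
        double naive = (-b - sqrt(b * b - 4 * c)) / 2;

        // Same formula in 80-bit real: the extra mantissa bits don't help,
        // because 4 is still far below one ulp of b*b.
        real naive80 = (-cast(real) b - sqrt(cast(real) b * b - 4 * c)) / 2;

        // Rewritten algorithm: compute the large root, then use x1*x2 == c.
        double x1 = (-b + sqrt(b * b - 4 * c)) / 2;
        double stable = c / x1;

        writefln("naive double: %g", naive);    // 0: the small root is lost
        writefln("naive real:   %g", naive80);  // 0: still lost
        writefln("rewritten:    %g", stable);   // ~1e-12
    }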


[...]
> 5. Other languages not supporting it means D has a capability they don't have. My experience with selling products is that if you have an exclusive feature that a particular customer needs, it's a slam dunk sale.

It's also the case that maintaining a product with feature X that nobody uses is an unnecessary drain on development resources.


> 6. My other experience with feature sets is if you drop things that make your product different, and concentrate on matching feature checklists with Major Brand X, customers go with Major Brand X.
[...]

If I were presented with product A having an archaic feature X that works slowly and that I don't even need in the first place, vs. product B having exactly the feature set I need without the baggage of archaic feature X, I'd go with product B.


T

-- 
Real Programmers use "cat > a.out".
May 16, 2016
On Monday, 16 May 2016 at 14:21:34 UTC, Ola Fosheim Grøstad wrote:

> C++17 is getting hex literals for floating point for a reason: accurate bit level representation.

D has had hex FP literals for ages.
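
For reference, a quick example of what that looks like in D (the two literals below are the usual nearest-float and nearest-double values of 1.3, written out bit for bit):

    import std.stdio : writefln;

    void main()
    {
        // Hex FP literals specify the bits directly; no decimal rounding involved.
        float  f = 0x1.4cccccp+0f;        // nearest float to 1.3
        double d = 0x1.4cccccccccccdp+0;  // nearest double to 1.3

        writefln("%a", f);                // prints the bit-exact hex form back
        writefln("%a", d);
        writefln("%.17g vs %.17g", f, d);
    }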


May 17, 2016
On 16/05/2016 10:37 PM, Walter Bright wrote:
> Some counter points:
>
> 1. Go uses 256 bit soft float for constant folding.
>

Then we should use 257 bit soft float!

May 16, 2016
On Tuesday, 10 May 2016 at 07:28:21 UTC, Manu wrote:
> Perhaps float comparison should *always* be done at the lower precision? There's no meaningful way to perform a float/double comparison where the float is promoted, whereas demoting the double for the comparison will almost certainly yield the expected result.

Assuming that's what you want, it's reasonably straightforward to use

    feqrel(someFloat, someDouble) >= float.mant_dig

... to compare to the level of precision that matters to you.  That's probably a better option than adjusting `==` to always prefer a lower level of precision (because it's arguably accurate to say that 1.3f != 1.3).
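
A small usage sketch (the expected result is my reading of the docs, and I instantiate feqrel explicitly with double so the float argument is simply promoted rather than relying on deduction across mixed types):

    import std.math : feqrel;
    import std.stdio : writeln;

    void main()
    {
        float  f = 1.3f;
        double d = 1.3;

        // feqrel returns how many leading mantissa bits of the two values
        // agree; asking for at least float.mant_dig of them means
        // "equal, to float precision".
        writeln(feqrel!double(f, d) >= float.mant_dig); // expected: true
    }
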
May 16, 2016
On 16.05.2016 17:11, Daniel Murphy wrote:
> On 16/05/2016 10:37 PM, Walter Bright wrote:
>> Some counter points:
>>
>> 1. Go uses 256 bit soft float for constant folding.
>>
>
> Then we should use 257 bit soft float!
>

I don't see how you reach that conclusion, as 258 bit soft float is clearly superior.
May 16, 2016
On 5/16/2016 6:15 AM, Andrei Alexandrescu wrote:
> On 5/16/16 8:19 AM, Walter Bright wrote:
>> We are talking CTFE here, not runtime.
>
> I have big plans with floating-point CTFE and all are elastic: the faster CTFE
> FP is, the more and better things we can do. Things that other languages can't
> dream to do, like interpolation tables for transcendental functions. So a
> slowdown of FP CTFE would be essentially a strategic loss. -- Andrei

Based on my experience with soft float on DOS, and the fact that CPUs are, what, 1000 times faster today, I have a hard time thinking of a case where that would be a big problem.

I can't see someone running a meteorological prediction using CTFE :-)
May 16, 2016
On 5/16/2016 5:37 AM, Walter Bright wrote:
> [...]

9. Both clang and gcc offer 80 bit long doubles. It's Microsoft VC++ that is out of step. Not having an 80 bit type in D means diminished interoperability with very popular C and C++ compilers.
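
A trivial sketch of that interop: C's long double maps to D's real, so something like the C library's sinl can be called with full 80-bit values crossing the boundary (sinl is already declared in core.stdc.math).

    import core.stdc.math : sinl;   // C: long double sinl(long double)
    import std.stdio : writefln;

    void main()
    {
        // The real argument and result cross the C boundary without being
        // truncated to double.
        writefln("%.20f", sinl(1.0L));
    }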

May 16, 2016
On 5/16/2016 8:11 AM, Daniel Murphy wrote:
> On 16/05/2016 10:37 PM, Walter Bright wrote:
>> 1. Go uses 256 bit soft float for constant folding.
> Then we should use 257 bit soft float!


I like the cut of your jib!