May 15, 2016
On 5/15/2016 9:02 PM, Manu via Digitalmars-d wrote:
> Yes, but you don't accidentally use 128bit floats, you type:
>
> extended x = 1.3;
> x + y;

The C/C++ standards allow constant folding at 128 bits, despite floats being 32 bits.
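
For illustration, a minimal D sketch of the divergence such folding permits (compiler- and target-dependent; a sketch, not guaranteed behaviour):

enum float folded = float.max * 2.0f / 2.0f; // may fold at real precision, yielding float.max

void main()
{
    float a = float.max;
    float runtime = a * 2.0f;  // strict 32-bit math overflows to infinity...
    runtime = runtime / 2.0f;  // ...and infinity / 2 stays infinity
    assert(folded == runtime); // can fail: float.max != infinity
}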


> If that code were to CTFE, I expect the CTFE to use extended precision.
> My point is, CTFE should surely follow the types and language
> semantics as if it were runtime generated code... It's not reasonable
> that CTFE has higher precision applied than the same code at runtime.
> CTFE must give the exact same result as runtime execution of the function.

It hasn't for decades on x86 machines, and the world hasn't collapsed; in fact, few people ever even notice. Not many people prefer less accurate answers.

The initial Java spec worked as you desired, and they were pretty much forced to back off of it.


>> The belief that compile time and runtime are exactly the same floating point
>> in C/C++ is false.
>
> I'm not interested in C/C++, I gave some anecdotes where it's gone
> wrong for me too, but regardless; generally, they do match, and I
> can't think of a single modern example where that's not true. If you
> *select* fast-math, then you may generate code that doesn't match, but
> that's a deliberate selection.

They won't match on any code that uses the x87. The standard doesn't require float math to use float instructions, they can (and do) use double instructions for temporaries.
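
The classic x87 symptom, sketched in D (storedEqualsTemporary is a hypothetical function; whether this bites depends entirely on the backend, but with x87 code generation the right-hand side below may be held in an 80-bit register):

bool storedEqualsTemporary(float a, float b)
{
    float sum = a + b;   // stored: rounded to 32 bits
    return sum == a + b; // the rhs may be an 80-bit temporary, so this can be false
}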


> If I want 'real' math (in CTFE or otherwise), I will type "real". It
> is completely unreasonable to reinterpret the type that the user
> specified. CTFE should execute code the same way runtime would execute
> the code (without -ffast-math, and conformant ieee hardware). This is
> not a big ask.
>
> Incidentally, I made the mistake of mentioning this thread (due to my
> astonishment that CTFE ignores float types)

Float types are not selected because they are less accurate, they are selected because they are smaller and faster.

> out loud to my
> colleagues... and they actually started yelling violently out loud.
> One of them has now been on a 10 minute angry rant with elevated tone
> and waving his arms around about how he's been shafted by this sort of
> behaviour so many times before. I wish I'd recorded it; I'd love to
> submit it as evidence.

I'm interested to hear how he was "shafted" by this. This apparently also contradicts the claim that other languages do as you ask.

May 15, 2016
On 5/15/2016 9:05 PM, Manu via Digitalmars-d wrote:
> I've never read the C++ standard, but I have more experience with a
> wide range of real-world compilers than most, and it is rarely
> violated.

It has been violated on every C/C++ compiler for x86 machines that used the x87, which was all of them until SIMD, and it is still violated on x86 CPUs that don't target SIMD.

> The times it is, we've known about it, and it has made us
> all very, very angry.

The C/C++ standard is written that way for a reason.

I'd like to hear what terrible problem is caused by having more accurate values.
May 15, 2016
On 5/15/2016 9:06 PM, Manu via Digitalmars-d wrote:
> Holy shit, it's just occurred to me that 'real' is only 64bits on arm
> (and every non-x86 platform)...

There are some that support 128 bits.

> That means a compiler running on an arm host will produce a different
> binary than a compiler running on an x86 host!! O_O

That's probably why VC++ dropped 80 bit long double support entirely.

Me, I think of that as "Who cares that you paid $$$ for an 80 bit CPU, we're going to give you only 64 bits."

It's also why I'd like to build a 128-bit soft fp emulator in dmd for all compile-time float operations.
May 16, 2016
On 16 May 2016 at 14:26, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 9:02 PM, Manu via Digitalmars-d wrote:
>>
>> If that code were to CTFE, I expect the CTFE to use extended precision.
>> My point is, CTFE should surely follow the types and language
>> semantics as if it were runtime generated code... It's not reasonable
>> that CTFE has higher precision applied than the same code at runtime.
>> CTFE must give the exact same result as runtime execution of the function.
>
>
> It hasn't for decades on x86 machines, and the world hasn't collapsed; in fact, few people ever even notice. Not many people prefer less accurate answers.

That doesn't mean it's not wrong.
Don noticed, he gave a lecture on floating-point gotchas.
I'm still firmly engaged in trying to use D professionally... should I
ever successfully pass that barrier, it's just a matter of time until a
piece of code that tends to challenge these things is written. There
aren't enough users of D for us to offer D anecdotes yet, but we can
offer anecdotes from our decades with C++ that we would desperately
like to avoid repeating.

If we don't care about doing what's right, then we just accept that floating point remains a highly expert/special-knowledge-centric field, treat talks like the one Don gave as thought-provoking but of no practical relevance to the language (since it's 'fine for most people, most of the time'), and get on with other things. This thread is evidence that people would like to do the best we can.

1.3f != 1.3 is not accurate, it's wrong.
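
For the record, that inequality is real and checkable; 1.3 has no exact binary representation, so the 32-bit and 64-bit roundings are genuinely different values:

void main()
{
    // 1.3f *means* the 32-bit rounding of 1.3; silently substituting a
    // more accurate value changes the meaning of the program:
    assert(1.3f != 1.3); // passes: the float and double roundings differ
}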


> The initial Java spec worked as you desired, and they were pretty much forced to back off of it.

Ok, why's that?


>>> The belief that compile time and runtime are exactly the same floating
>>> point
>>> in C/C++ is false.
>>
>>
>> I'm not interested in C/C++, I gave some anecdotes where it's gone wrong for me too, but regardless; generally, they do match, and I can't think of a single modern example where that's not true. If you *select* fast-math, then you may generate code that doesn't match, but that's a deliberate selection.
>
>
> They won't match on any code that uses the x87. The standard doesn't require float math to use float instructions, they can (and do) use double instructions for temporaries.

If it does, it is careful to make sure the precision expectations are maintained. Without '-ffast-math', the FPU code produces an IEEE-conformant result on reasonable compilers. We depend on this.


>> If I want 'real' math (in CTFE or otherwise), I will type "real". It is completely unreasonable to reinterpret the type that the user specified. CTFE should execute code the same way runtime would execute the code (without -ffast-math, and conformant ieee hardware). This is not a big ask.
>>
>> Incidentally, I made the mistake of mentioning this thread (due to my astonishment that CTFE ignores float types)
>
>
> Float types are not selected because they are less accurate, they are selected because they are smaller and faster.

They are selected because they are smaller and faster with the
understood trade-off that they are less accurate.
They are certainly selected with the _intent_ that they are less accurate.

It's not reasonable that a CTFE function may produce a radically different result than the same function at runtime.
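
A sketch of the failure mode (lerp here is just an illustrative function; whether the assert actually fires depends on the CTFE engine's internal precision):

float lerp(float a, float b, float t) { return a + (b - a) * t; }

enum float ct = lerp(1.3f, 7.9f, 0.6f); // forced through CTFE

void main()
{
    float rt = lerp(1.3f, 7.9f, 0.6f); // the same call at runtime
    assert(ct == rt); // can fail if CTFE keeps intermediates at higher precision
}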

>> out loud to my
>> colleagues... and they actually started yelling violently out loud.
>> One of them has now been on a 10 minute angry rant with elevated tone
>> and waving his arms around about how he's been shafted by this sort of
>> behaviour so many times before. I wish I'd recorded it; I'd love to
>> submit it as evidence.
>
>
> I'm interested to hear how he was "shafted" by this. This apparently also contradicts the claim that other languages do as you ask.

I've explained before: the cases where this has happened were most
often caused by the hardware having lower runtime precision than the
compiler. The only case I know of where it happened inside the
compiler itself is CodeWarrior; an old/dead C++ compiler that always
sucked and caused us headaches of all kinds.
The point is, the CTFE behaviour is essentially identical to our
classic case where the hardware runs at a different precision than the
compiler, and that's built into the language! It's not just an anomaly
expressed by one particular awkward platform we're required to
support.
May 16, 2016
On 16 May 2016 at 14:31, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 9:05 PM, Manu via Digitalmars-d wrote:
>>
>> I've never read the C++ standard, but I have more experience with a wide range of real-world compilers than most, and it is rarely violated.
>
>
> It has been violated on every C/C++ compiler for x86 machines that used the x87, which was all of them until SIMD, and it is still violated on x86 CPUs that don't target SIMD.

It has what? Reinterpreted your constant folding to execute at 80 bits
internally for years? Again, if that's true, I expect it's only true in
the context that the compiler also takes care to maintain the
IEEE-conformant bit pattern; or at the very least, it works because the
opportunity for FP constant folding in C++ is almost non-existent
compared to CTFE, such that it has never resulted in a problem case in
my experience.
In D, we will (do) use CTFE for table generation all the time (this
has never been done in C++). If those tables were generated with
entirely different precision than the runtime functions, that's just
begging for trouble.
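
The pattern in question, sketched (sineTable and its size are hypothetical; the hazard is that the CTFE-built entries and the same expression evaluated at runtime may disagree in the low bits):

import std.math : sin, PI;

// built entirely at compile time via CTFE:
immutable float[256] sineTable = () {
    float[256] t;
    foreach (i; 0 .. 256)
        t[i] = cast(float) sin(i * 2.0 * PI / 256);
    return t;
}();

A lookup into that table and a direct runtime evaluation of the "same" expression can then disagree in the low bits, which is exactly the cracks/seams scenario I describe below.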

>> The times it is, we've known about it, and it has made us all very, very angry.
>
>
> The C/C++ standard is written that way for a reason.
>
> I'd like to hear what terrible problem is caused by having more accurate values.

In the majority of my anecdotes, if they don't match, there are
cracks/seams in the world. That is a show-stopping bug. We have had
many late nights, and even product launch delays, due to these
problems. They have been a nightmare to solve in the past.
Obviously the solution in this case is a relatively simple
work-around; don't use CTFE (ie, lean on the LLVM runtime codegen
instead to do the right thing with the float precision), but that's a
tragic solution to a problem that should never happen in the first
place.
May 16, 2016
On 16 May 2016 at 14:37, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 9:06 PM, Manu via Digitalmars-d wrote:
>>
>> Holy shit, it's just occurred to me that 'real' is only 64bits on arm
>> (and every non-x86 platform)...
>
>
> There are some that support 128 bits.
>
>> That means a compiler running on an arm host will produce a different binary than a compiler running on an x86 host!! O_O
>
>
> That's probably why VC++ dropped 80 bit long double support entirely.
>
> Me, I think of that as "Who cares that you paid $$$ for an 80 bit CPU, we're going to give you only 64 bits."

No, you'll give me 80 bits _when I type "real"_. Otherwise, if I type
'double', you'll give me that. I don't understand how that can be
controversial.
I know you love the x87, but I'm pretty sure you're in a small
minority. Personally, I don't want a single line of code that goes
anywhere near the x87 to be emitted in any binary I produce. It's a
deprecated old crappy piece of hardware, and transfers between x87 and
SSE regs are slow.

> It's also why I'd like to build a 128-bit soft fp emulator in dmd for all compile-time float operations.

I also realised the reason for your desire to implement 128-bit
soft-float the moment I realised this. The situation that different
DMD builds operate at different precisions internally (based on the
host arch) is a whole new level of astonishment.
I support this soft-float, but please, build a soft-float for all
precisions, and use them everywhere that a hardware float for _the
specified precision_ is not available ;)
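
To sketch the semantics I'm asking for (ctfeMul is a hypothetical helper, illustrative only, not how dmd implements anything): whatever internal precision CTFE uses, round back to the declared type after every operation.

T ctfeMul(T)(T a, T b) if (is(T == float) || is(T == double) || is(T == real))
{
    // compute at whatever internal precision is convenient...
    real wide = cast(real)a * cast(real)b;
    // ...but round the result back to the declared type T after each op
    // (a faithful version must also take care to avoid double rounding)
    return cast(T)wide;
}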
May 16, 2016
On Monday, 16 May 2016 at 04:02:54 UTC, Manu wrote:

> extended x = 1.3;
> x + y;
>
> If that code were to CTFE, I expect the CTFE to use extended precision.
> My point is, CTFE should surely follow the types and language
> semantics as if it were runtime generated code... It's not reasonable
> that CTFE has higher precision applied than the same code at runtime.
> CTFE must give the exact same result as runtime execution of the function.
>

You are not even guaranteed to get the same result on two different x86 implementations. From the AMD64 manual:

"The processor produces a floating-point result defined by the IEEE standard to be infinitely precise.
This result may not be representable exactly in the destination format, because only a subset of the
continuum of real numbers finds exact representation in any particular floating-point format."

May 16, 2016
On Sunday, 15 May 2016 at 22:49:27 UTC, Walter Bright wrote:
> On 5/15/2016 2:06 PM, Ola Fosheim Grøstad wrote:
>> The net result is that adding const/immutable to a type can change the semantics
>> of the program entirely at the whim of the compiler implementor.
>
> C++ Standard allows the same increased precision, at the whim of the compiler implementor, as quoted to you earlier.
>
> What your particular C++ compiler does is not relevant, as its behavior is not required by the Standard.

This is a crazy attitude to take. C++ provides means to detect that IEEE floats are being used in the standard library. C/C++ supports non-standard floating point because some platforms only provide non-standard floating point. They don't do it because it is _desirable_ in general.

You might as well say that you are not required to drive on the right side of the road because you occasionally have to drive on the left, and that therefore it is OK to always drive on the left.

> My proposal removes the "whim" by requiring 128 bit precision for CTFE.

No, D's take on floating point is FUBAR.

const float value = 1.30;
float copy = value;
// if the compiler const-folds `value` at the full precision of the
// 1.30 literal instead of rounding it to float, the two sides differ:
assert(value*0.5 == copy*0.5); // FAILS! => shutdown

May 16, 2016
On 16 May 2016 at 06:06, Manu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 16 May 2016 at 14:05, Manu <turkeyman@gmail.com> wrote:
>> On 16 May 2016 at 13:03, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>> On 5/15/2016 7:01 PM, Manu via Digitalmars-d wrote:
>>>>
>>>> Are you saying 'float' in CTFE is not 'float'? I protest this about as strongly as I can muster...
>>>
>>>
>>> I imagine you'll be devastated when you discover that the C++ Standard does not require 32 bit floats to have 32 bit precision either, and never did.
>>>
>>> :-)
>>
>> I've never read the C++ standard, but I have more experience with a wide range of real-world compilers than most, and it is rarely violated. The times it is, we've known about it, and it has made us all very, very angry.
>
> Holy shit, it's just occurred to me that 'real' is only 64bits on arm
> (and every non-x86 platform)...
> That means a compiler running on an arm host will produce a different
> binary than a compiler running on an x86 host!! O_O

Which is why gcc/g++ (ergo gdc) uses floating-point emulation. Getting consistent results at compile time, regardless of the host/target/cross configuration, trumps doing it natively.
May 16, 2016
On Sunday, 15 May 2016 at 22:34:24 UTC, Walter Bright wrote:
> So far, nobody has posted a legitimate one (i.e. not contrived).

Oh, so comparing the exact same calculation, using the exact same binary executable function, is not a legitimate algorithm. It is the most trivial thing to do and rather common. It should be _very_ convincing.

But hey, here is another one:

const real x = f();
assert(0 <= x && x < 1);
const real y = x + 1;

const float f32 = cast(float)y;
const real residue = y - cast(real)f32; // ZERO!!!!
output(dither(f32, residue)); // DITHERING IS FUBAR!!!
output(dither(f32, residue)); // DITHERING IS FUBAR!!!