May 15, 2016
On 5/15/2016 5:08 PM, Era Scarecrow wrote:
>  Is there an option to use a reproducible fraction type that doesn't have the
> issues floating point has?

D has excellent facilities for creating your own types with the semantics you wish.
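For illustration of what such a type can look like (a hypothetical sketch, written in C++ here since the thread compares against C/C++ semantics; the same shape works as a D struct with operator overloads), an exact fraction type sidesteps rounding entirely, so results are reproducible by construction:

```cpp
#include <numeric>  // std::gcd (C++17)

// Hypothetical exact-fraction type: every operation is integer arithmetic,
// so there is no rounding and results never depend on evaluation order,
// compiler flags, or host CPU.
struct Rational {
    long long num, den;  // invariant: den > 0, fraction fully reduced

    Rational(long long n, long long d) : num(n), den(d) {
        if (den < 0) { num = -num; den = -den; }  // keep the sign in num
        long long g = std::gcd(num < 0 ? -num : num, den);
        if (g > 1) { num /= g; den /= g; }        // normalize
    }

    Rational operator+(Rational r) const {
        return Rational(num * r.den + r.num * den, den * r.den);
    }
    bool operator==(Rational r) const { return num == r.num && den == r.den; }
};
```

Because values are normalized, 1/3 + 1/6 compares exactly equal to 1/2, which binary floating point cannot promise. (Overflow of the integer components is the price; a production version would use a big-integer backend.)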

May 16, 2016
On 16 May 2016 at 08:05, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 1:49 PM, poliklosio wrote:
>
>> Also I think Adam is making a very good point about general reproducibility here. If a researcher gets a little bit different results, he has to investigate why, because he needs to rule out all the serious mistakes that could be the cause of the difference. If he finds out that the source was an innocuous refactoring of some D code, he will be rightly frustrated that D has caused so much unnecessary churn.
>>
>> I think the same problem can occur in mission-critical software which undergoes strict certification.
>
>
>
> Frankly, I think you are setting unreasonable expectations. Today, if you take a standard compliant C program, and compile it with different switch settings, or run it on a machine with a different CPU, you can very well get different answers. If you reorganize the code expressions, you can very well get different answers.

The argument you used against me earlier was that it was unacceptable
for some C code pasted in D to behave differently than in C... but
here you've just destroyed your own argument by noting that C behaves
differently than itself.
The rest of us would never get away with this ;)
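Walter's reorganized-expressions point is easy to demonstrate; a minimal C++ sketch (assuming IEEE-754 binary32 and round-to-nearest; `volatile` keeps the compiler from constant-folding or widening the intermediates):

```cpp
// Reassociating a float sum changes the answer: 1.0f survives when the
// two huge terms cancel first, but is absorbed when it is added to
// -1e20f first (1.0f is far below one ulp at that magnitude).
float sum_left() {
    volatile float a = 1e20f, b = -1e20f, c = 1.0f;
    return (a + b) + c;  // a + b cancels exactly to 0, then + 1.0f -> 1.0f
}

float sum_right() {
    volatile float a = 1e20f, b = -1e20f, c = 1.0f;
    return a + (b + c);  // b + c rounds to -1e20f, then cancels -> 0.0f
}
```

The two sums differ by exactly 1.0f, which is why a compiler that reassociates (or a refactoring that reorders terms) changes results.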


For reference:

> > I think it's the only reasonable solution.

> It may be, but it is unusual and therefore surprising behavior.

> > What is the problem with this behaviour I suggest?

> Code will do one thing in C, and the same code will do something unexpectedly different in D.

So let's reopen the discussion from my first post that you dismissed? If the situation is that C compilers don't produce predictable behaviour (as you claim above, and I agree), what is the harm in applying a behaviour that actually works?
May 16, 2016
On 14 May 2016 at 00:00, Iain Buclaw via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 13 May 2016 at 07:12, Manu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> On 13 May 2016 at 11:03, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>> On 5/12/2016 4:32 PM, Marco Leise wrote:
>>>>
>>>> - Unless CTFE uses soft-float implementation, depending on
>>>>   compiler and flags used to compile a D compiler, resulting
>>>>   executable produces different CTFE floating-point results
>>>
>>>
>>> I've actually been thinking of writing a 128 bit float emulator, and then using that in the compiler internals to do all FP computation with.
>>
>> No. Do not.
>> I've worked on systems where the compiler and the runtime don't share
>> floating point precisions before, and it was a nightmare.
>
> I have some bad news for you about CTFE then. This already happens in DMD even though float is not emulated.  :-o

O_O

Are you saying 'float' in CTFE is not 'float'? I protest this about as strongly as I can muster...
May 16, 2016
On 14 May 2016 at 04:16, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/12/2016 10:12 PM, Manu via Digitalmars-d wrote:
>>
>> No. Do not.
>> I've worked on systems where the compiler and the runtime don't share
>> floating point precisions before, and it was a nightmare.
>> One anecdote, the PS2 had a vector coprocessor; it ran reduced (24bit
>> iirc?) float precision, code compiled for it used 32bits in the
>> compiler... to make it worse, the CPU also ran 32bits. The result was,
>> literals/constants, or float data fed from the CPU didn't match data
>> calculated by the vector unit at runtime (ie, runtime computation of
>> the same calculation that may have occurred at compile time to produce
>> some constant didn't match). The result was severe cracking and
>> visible/shimmering seams between triangles as sub-pixel alignment
>> broke down.
>> We struggled with this for years. It was practically impossible to
>> solve, and mostly involved workarounds.
>
>
> I understand there are some cases where this is needed, I've proposed intrinsics for that.

Intrinsics for... what? Making the compiler use the type specified at
compile time?
Is it true that that's not happening already?

I really don't want to use an intrinsic to have float behave like a float at CTFE... nobody will EVER do that.


>> I really just want D to use double throughout, like all the cpu's that run code today. This 80bit real thing (only on x86 cpu's though!) is a never ending pain.
>
>
> It's 128 bits on other CPUs.

What?


>> This sounds like designing specifically for my problem from above, where the frontend is always different than the backend/runtime. Please have the frontend behave such that it operates on the precise datatype expressed by the type... the backend probably does this too, and runtime certainly does; they all match.
>
>
> Except this never happens anyway.

Huh?


I'm sorry, I didn't follow those points.
May 15, 2016
On 5/15/2016 6:59 PM, Manu via Digitalmars-d wrote:
> The argument you used against me earlier was that it was unacceptable
> for some C code pasted in D to behave differently than in C... but
> here you've just destroyed your own argument by noting that C behaves
> differently than itself.

D nails down some C behavior that is specified as "implementation defined". This is not being incompatible.

May 15, 2016
On 5/15/2016 7:04 PM, Manu via Digitalmars-d wrote:
>> I understand there are some cases where this is needed, I've proposed
>> intrinsics for that.
> Intrinsics for... what?

   float roundToFloat(float f);
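A rough portable analogue of the proposed intrinsic (my sketch, not Walter's actual design) is to launder the value through a `volatile`, which forces any excess-precision intermediate to be rounded down to IEEE binary32:

```cpp
// Hypothetical stand-in for the proposed roundToFloat intrinsic: storing
// through a volatile float obliges the compiler to round a value it may
// be holding at 80-bit (or wider) precision back to IEEE single precision.
inline float roundToFloat(float f) {
    volatile float v = f;  // forced store at float width
    return v;              // forced reload at float width
}
```

Values already representable in binary32 pass through unchanged; only excess-precision intermediates are affected.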


> I really don't want to use an intrinsic to have float behave like a
> float at CTFE... nobody will EVER do that.

Floats aren't required to have float precision by the C or C++ Standards. I quoted it for Ola :-)


>> It's 128 bits on other CPUs.
> What?

Some CPUs have 128 bit floats.


>>> This sounds like designing specifically for my problem from above,
>>> where the frontend is always different than the backend/runtime.
>>> Please have the frontend behave such that it operates on the precise
>>> datatype expressed by the type... the backend probably does this too,
>>> and runtime certainly does; they all match.
>>
>>
>> Except this never happens anyway.
>
> Huh? I'm sorry, I didn't follow those points.

The belief that compile time and runtime are exactly the same floating point in C/C++ is false.
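The C side of this is directly observable via `FLT_EVAL_METHOD` from `<cfloat>`: the standard permits an implementation to evaluate floating expressions at higher precision than their declared type, and this macro reports which choice was made. A small sketch:

```cpp
#include <cfloat>

// FLT_EVAL_METHOD reports how the implementation evaluates floating
// expressions, regardless of the declared types:
//    0 -> at the declared type (typical SSE x86-64 compilers)
//    1 -> float and double evaluated as double
//    2 -> everything evaluated as long double (classic x87 builds)
//   -1 -> indeterminable
int eval_method() { return FLT_EVAL_METHOD; }
```

So on an x87 build a `float` expression may legitimately be computed at 80-bit precision, which is exactly the compile-time-vs-runtime mismatch under discussion.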
May 15, 2016
On 5/15/2016 7:01 PM, Manu via Digitalmars-d wrote:
> Are you saying 'float' in CTFE is not 'float'? I protest this about as
> strongly as I can muster...

I imagine you'll be devastated when you discover that the C++ Standard does not require 32 bit floats to have 32 bit precision either, and never did.

:-)

May 16, 2016
On 16 May 2016 at 12:56, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 7:04 PM, Manu via Digitalmars-d wrote:
>>>
>>> I understand there are some cases where this is needed, I've proposed intrinsics for that.
>>
>> Intrinsics for... what?
>
>
>    float roundToFloat(float f);
>
>
>> I really don't want to use an intrinsic to have float behave like a float at CTFE... nobody will EVER do that.
>
>
> Floats aren't required to have float precision by the C or C++ Standards. I quoted it for Ola :-)
>
>
>>> It's 128 bits on other CPUs.
>>
>> What?
>
>
> Some CPUs have 128 bit floats.

Yes, but you don't accidentally use 128bit floats, you type:

extended x = 1.3;
x + y;

If that code were to CTFE, I expect the CTFE to use extended precision.
My point is, CTFE should surely follow the types and language
semantics as if it were runtime generated code... It's not reasonable
that CTFE has higher precision applied than the same code at runtime.
CTFE must give the exact same result as runtime execution of the function.

>>>> This sounds like designing specifically for my problem from above, where the frontend is always different than the backend/runtime. Please have the frontend behave such that it operates on the precise datatype expressed by the type... the backend probably does this too, and runtime certainly does; they all match.
>>>
>>>
>>>
>>> Except this never happens anyway.
>>
>>
>> Huh? I'm sorry, I didn't follow those points.
>
>
> The belief that compile time and runtime are exactly the same floating point in C/C++ is false.

I'm not interested in C/C++; I gave some anecdotes where it's gone wrong for me too. But regardless, generally they do match, and I can't think of a single modern example where that's not true. If you *select* fast-math, then you may generate code that doesn't match, but that's a deliberate selection.

If I want 'real' math (in CTFE or otherwise), I will type "real". It is completely unreasonable to reinterpret the type that the user specified. CTFE should execute code the same way runtime would execute the code (without -ffast-math, and on conformant IEEE hardware). This is not a big ask.

Incidentally, I made the mistake of mentioning this thread (due to my astonishment that CTFE ignores float types) out loud to my colleagues... and they actually started yelling. One of them has now been on a 10-minute angry rant, with elevated tone and waving his arms around, about how he's been shafted by this sort of behaviour so many times before. I wish I'd recorded it; I'd love to submit it as evidence.
May 16, 2016
On 16 May 2016 at 13:03, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/15/2016 7:01 PM, Manu via Digitalmars-d wrote:
>>
>> Are you saying 'float' in CTFE is not 'float'? I protest this about as strongly as I can muster...
>
>
> I imagine you'll be devastated when you discover that the C++ Standard does not require 32 bit floats to have 32 bit precision either, and never did.
>
> :-)

I've never read the C++ standard, but I have more experience with a wide range of real-world compilers than most, and it is rarely violated. The times it is, we've known about it, and it has made us all very, very angry.
May 16, 2016
On 16 May 2016 at 14:05, Manu <turkeyman@gmail.com> wrote:
> On 16 May 2016 at 13:03, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> On 5/15/2016 7:01 PM, Manu via Digitalmars-d wrote:
>>>
>>> Are you saying 'float' in CTFE is not 'float'? I protest this about as strongly as I can muster...
>>
>>
>> I imagine you'll be devastated when you discover that the C++ Standard does not require 32 bit floats to have 32 bit precision either, and never did.
>>
>> :-)
>
> I've never read the C++ standard, but I have more experience with a wide range of real-world compilers than most, and it is rarely violated. The times it is, we've known about it, and it has made us all very, very angry.

Holy shit, it's just occurred to me that 'real' is only 64 bits on ARM
(and every non-x86 platform)...
That means a compiler running on an ARM host will produce a different
binary than a compiler running on an x86 host!! O_O
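The mismatch is visible directly in C/C++ too, where `long double` (the rough analogue of D's `real`) varies by platform, so a cross-compiler's host arithmetic can differ from the target's. A sketch of probing it via `LDBL_MANT_DIG`:

```cpp
#include <cfloat>

// LDBL_MANT_DIG exposes the significand width of long double:
//    53 -> long double is plain IEEE double (32-bit ARM AAPCS, MSVC)
//    64 -> x87 80-bit extended (x86/x86-64 Linux)
//   113 -> IEEE binary128 (e.g. AArch64 Linux, some POWER/RISC-V targets)
int real_mantissa_bits() { return LDBL_MANT_DIG; }
```

A constant folded at 64-bit significand precision on an x86 host need not equal the same expression evaluated at 53 or 113 bits on the target, which is exactly the cross-compilation hazard above.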