Differences in results when using the same function in CTFE and Runtime
August 08

Hi

I'm playing with CTFE in D. This feature allows for a lot of funny things, e.g. initialisation of immutable data at compile time with the result of some other function (template).

As a result I get immutable result blobs compiled into the binary, but none of the generating code, because it was already executed by CTFE.

This worked nicely for several other use cases as well. So far the results of CTFE and RT were always the same, as expected.

However, yesterday a unit test started to report that the results created by the same code with the same parameters differ when run in CTFE mode or at runtime.

    static immutable ubyte[] burningShipImageCTFE = generateBurningShipImage(twidth, theight, maxIter);
    immutable ubyte[] burningShipImageRT = generateBurningShipImage(twidth, theight, maxIter);
    assert(burningShipImageCTFE == burningShipImageRT, "Same results expected.");
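
For reference, a hypothetical stand-in for the generator (the actual fractal code isn't shown here; the view window and the greyscale palette below are made up): an escape-time iteration of the Burning Ship map z → (|Re z| + i·|Im z|)² + c, written so it runs under CTFE.

    // Hypothetical stand-in for the generator used above; the view
    // window and colouring are assumptions. CTFE-compatible: only
    // array allocation, loops and double arithmetic.
    ubyte[] generateBurningShipImage(size_t w, size_t h, uint maxIter)
    {
        auto img = new ubyte[w * h];
        foreach (py; 0 .. h)
            foreach (px; 0 .. w)
            {
                // Map the pixel into an assumed view of the complex plane.
                immutable cr = -2.0 + 3.0 * px / w;
                immutable ci = -2.0 + 3.0 * py / h;
                double zr = 0, zi = 0;
                uint i;
                for (i = 0; i < maxIter && zr * zr + zi * zi < 4; ++i)
                {
                    immutable r = zr < 0 ? -zr : zr; // |Re z|
                    immutable m = zi < 0 ? -zi : zi; // |Im z|
                    zr = r * r - m * m + cr;
                    zi = 2 * r * m + ci;
                }
                img[py * w + px] = cast(ubyte)(i * 255 / maxIter);
            }
        return img;
    }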

I diffed the pictures and indeed some of the pixels in the more complex areas of the BurningShip fractal were clearly and noticeably different.

Ok, the fractal code uses 'double' floats, which are by their very nature limited in precision. But assuming that the math emulation of CTFE works identically to the CPU at runtime, the outcome should be identical.

Or not, in some cases ;-) E.g. with a fractal equation where the smallest changes can result in big differences.
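
To illustrate that sensitivity with a simpler chaotic map than the Burning Ship (the logistic map, and made-up constants): a difference in the last bits of the input grows to a difference of order 1 after a few dozen iterations.

    import std.math : abs;
    import std.stdio : writefln;

    void main()
    {
        double x = 0.4;
        double y = x + double.epsilon; // differs from x only in the last bits
        foreach (i; 0 .. 60)
        {
            x = 3.9 * x * (1.0 - x); // logistic map, chaotic at r = 3.9
            y = 3.9 * y * (1.0 - y);
        }
        // The tiny initial difference has been amplified to O(1).
        writefln("x = %.17g, y = %.17g, diff = %g", x, y, abs(x - y));
    }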

And it opens up some questions:

  • Can CTFE be used under all circumstances when float numbers of any precision are involved?
  • Or is this some kind of expected behaviour whenever floats are involved?
  • Is the D CTFE documentation completely covering such possible issues?

I can imagine that bugs caused by such subtle differences might be very difficult to fix.

Any experiences or thoughts on this?

Greetz
Carsten

August 08

On Thursday, 8 August 2024 at 10:31:32 UTC, Carsten Schlote wrote:

> […] assuming that the math emulation of CTFE works identically to the CPU at runtime, the outcome should be identical.

Which it probably does not. CTFE is meant to be less platform-dependent than runtime code, so it is interpreted rather than running directly on the CPU. This is why there are strict limitations on where CTFE can be invoked. I believe the exact behaviour of CTFE would be implementation-dependent though.

August 08

On Thursday, 8 August 2024 at 10:31:32 UTC, Carsten Schlote wrote:

> Hi
>
> [...]
>
> I can imagine that bugs caused by such subtle differences might be very difficult to fix.
>
> Any experiences or thoughts on this?
>
> Greetz
> Carsten

During CTFE there have to be many more round trips between storage and the actual operations, whereas with native code several operations can happen without leaving the FPU... and those are done with a different accuracy (80 bits). During the round trips implied by CTFE you get a floating-point truncation after each op.

You see, during CTFE just a + b * c means two truncations, one after the multiply and a second after the addition (even if this example is not very good, as those ops are done using SSE).
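
A sketch of the difference (made-up values; it only shows up on targets where real is the 80-bit x87 format, and depends on the inputs):

    import std.stdio : writefln;

    void main()
    {
        double a = 0.1, b = 0.2, c = 0.3;

        // Rounded to double after every op, as an interpreter that
        // stores each intermediate back to memory would do:
        double t = b * c;        // first truncation
        double stepwise = a + t; // second truncation

        // Evaluated in 80-bit precision throughout and rounded to
        // double only once at the end:
        double once = cast(double)(cast(real) a + cast(real) b * cast(real) c);

        // The two may differ in the last bits.
        writefln("%.20g vs %.20g", stepwise, once);
    }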

That being said, it'd be interesting to verify if that's the problem. Maybe update your code to use real variables and let's see.

August 08
On Thu, Aug 08, 2024 at 10:31:32AM +0000, Carsten Schlote via Digitalmars-d wrote: [...]
> Ok, the fractal code uses 'double' floats, which are by their very nature limited in precision. But assuming that the math emulation of CTFE works identically to the CPU at runtime, the outcome should be identical.

IIRC DMD either uses in CTFE, or emits code that uses, x87 80-bit extended precision in intermediate results. Which, obviously, will produce different results than the same computation performed using only doubles.


[...]
> - Or is this some kind of expected behaviour whenever floats are involved?

https://floating-point-gui.de/


--T
August 09

On Thursday, 8 August 2024 at 10:31:32 UTC, Carsten Schlote wrote:

> • Can CTFE be used under all circumstances when float numbers of any precision are involved?
> • Or is this some kind of expected behaviour whenever floats are involved?
> • Is the D CTFE documentation completely covering such possible issues?
>
> I can imagine that bugs caused by such subtle differences might be very difficult to fix.
>
> Any experiences or thoughts on this?

There are toPrec intrinsics (in core.math) to solve exactly this issue of the lack of truncation of precision.
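
A sketch of how the intrinsic could apply here (the helper name is made up): round every intermediate to IEEE double explicitly, so CTFE and runtime truncate at the same points.

    import core.math : toPrec;

    // One step of an iteration, with each intermediate explicitly
    // rounded to double precision (helper name is hypothetical).
    double step(double z, double c)
    {
        return toPrec!double(toPrec!double(z * z) + c);
    }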

August 09

On Thursday, 8 August 2024 at 13:31:43 UTC, user1234 wrote:

> That being said, it'd be interesting to verify if that's the problem. Maybe update your code to use real variables and let's see.

Replaced the 'double's with 'real' and now the results of the CTFE and RT executions are exactly the same.

So, as you expected and described, CTFE lost some precision by splitting the ops and storing the intermediate results, held in FPU registers at internal precision, back to the limited 'double' type somewhere in memory.

However, the other fractals (Mandelbrot, Julia, Newton, Lyapunov, ...) work perfectly with the 'double's. I suspect that their default/starting views do not involve enough recursion and iteration in the calculation, so the missing precision does not matter yet. I will pimp my unittests to catch such precision issues for the other fractals as well, and then fix them by using 'real' types.

So, what are the lessons learned here:

  1. When using floats, use the 'real' type - it uses the maximum available FPU precision, and no precision is lost when an FPU register is stored to and read back from memory. So CTFE and RT execution have the best chance of producing the same results.

  2. Use 'float' and 'double' only if you need to save some bits of storage and the reduced precision is acceptable.

  3. Unittests proved again to be great, especially when support for unittesting is built into the language. Even the simplest assumptions can be wrong, and having assertions for such assumptions is helpful.

  4. Use assert()/enforce() and in/out/do contracts to check that everything matches your expectations and assumptions.

In the end, assert(burningShipImageCTFE == burningShipImageRT, "Same results expected."); in my unittest did find it. And I got a chance to think about it and work on my assumptions ;-)
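
For the record, a minimal sketch of that test pattern, with a made-up recurrence standing in for the fractal code:

    // iterate() is a hypothetical stand-in for the fractal recurrence.
    real iterate(real x, uint n)
    {
        foreach (i; 0 .. n)
            x = x * x - 0.5;
        return x;
    }

    unittest
    {
        enum n = 100;
        static immutable ctfeResult = iterate(0.25L, n); // forced through CTFE
        immutable rtResult = iterate(0.25L, n);          // computed at runtime
        assert(ctfeResult == rtResult, "Same results expected.");
    }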

August 10

On Friday, 9 August 2024 at 00:46:12 UTC, Nicholas Wilson wrote:

> There are toPrec intrinsics (in core.math) to solve exactly this issue of the lack of truncation of precision.

Can anyone remind me whether there’s a way to force calculations to be performed with a certain degree of precision (e.g. single or double) instead of rounding down from the largest floats available? Would be really useful for cross-platform consistency. :’|

August 12

On Saturday, 10 August 2024 at 11:15:19 UTC, IchorDev wrote:

> Can anyone remind me whether there’s a way to force calculations to be performed with a certain degree of precision (e.g. single or double) instead of rounding down from the largest floats available? Would be really useful for cross-platform consistency. :’|

D specifies that floating-point calculations may be performed with higher precision than the type suggests:

> Execution of floating-point expressions may yield a result of greater precision than dictated by the source.

(Floating-Point Intermediate Values, D Language Specification)

On almost all non-embedded CPUs, doing non-vector calculations in float is more costly than doing them in double or real, because single-precision arguments are first converted to double or real. I consider float to be a type for storing values in arrays that don’t need the precision, saving me half the RAM.
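
A sketch of the kind of surprise that spec wording permits (hypothetical helper; whether it actually fires depends on the target's evaluation width):

    // May return false on targets that evaluate a + b at more than
    // float precision: sum has been rounded to float, the rhs not.
    bool roundTripStable(float a, float b)
    {
        float sum = a + b;
        return sum == a + b;
    }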

August 12

On Monday, 12 August 2024 at 10:22:52 UTC, Quirin Schroll wrote:

> On almost all non-embedded CPUs, doing non-vector calculations in float is more costly than doing them in double or real, because single-precision arguments are first converted to double or real. I consider float to be a type for storing values in arrays that don’t need the precision, saving me half the RAM.

I don’t care. Only one family of CPU architectures supports ‘extended precision’ floats (because it’s a waste of time), so I would like to know a way to always perform calculations with double precision for better cross-platform consistency. Imagine trying to implement the JRE without being able to do native double-precision maths.

August 12
On 8/8/24 12:31, Carsten Schlote wrote:
> 
> And it opens up some questions:
> 
> - Can CTFE be used under all circumstances when float numbers of any precision are involved?
> - Or is this some kind of expected behaviour whenever floats are involved?
> - Is the D CTFE documentation completely covering such possible issues?
> 
> I can imagine that bugs caused by such subtle differences might be very difficult to fix.
> 
> 
> Any experiences or thought on this?

The specification allows behavior like that even at runtime.
https://dlang.org/spec/float.html

I have been vocally against this. Full portability is perhaps one thing, as hardware differences do exist, but completely thwarting expectations by deliberately doing a different operation than the one you requested makes no sense at all, I think. A lot of modern hardware nowadays is compatible, or almost compatible, regarding floats.