August 14
On 8/14/24 00:35, claptrap wrote:
> On Tuesday, 13 August 2024 at 21:03:00 UTC, Timon Gehr wrote:
>> On 8/13/24 13:09, Abdulhaq wrote:
>>
>> At the same time, "better accuracy" is part of the reason why CTFE uses higher floating-point precision, but higher precision does not mean the result will be more accurate. Unless arbitrary precision is used, which is even slower, it leads to double-rounding issues which can cause the result to be less accurate.
> 
> If you have a pristine 24-bit audio sample, it maybe has 120 dB SNR due to DAC limitations. Processing it in 16 bit will automatically lose you 4 bits of accuracy; it'll drop the SNR to 96 dB. If you process it at 32 bits you still have 120 dB SNR: greater precision but the same accuracy as the source.
> 
> The point is the statement "higher precision does not mean the result will be more accurate." is only half true.
> ...

It is fully true. "A means B" holds when in any situation where A holds, B also holds. "A does not mean B" holds when there is a situation where A is true but B is false.

Clearly I am not saying that lower precision implies higher accuracy, just that there are cases where lower precision gives you higher accuracy.
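
A minimal sketch of the double-rounding effect in D (a contrived value chosen to sit near a rounding boundary; it assumes the direct `double` addition really is rounded once to double, which D does not strictly guarantee, and that `real` is the 80-bit x87 format — on targets where `real` is just 64 bits the two results coincide):

    import std.stdio;

    void main()
    {
        double a = 1.0;
        // 2^-53 + 2^-105, exactly representable as a double
        double b = 0x1.0000000000001p-53;

        // Rounded once, directly to double: the exact sum 1 + 2^-53 + 2^-105
        // lies just above a halfway point, so it rounds up to 1 + 2^-52.
        double once = a + b;

        // Rounded twice: first to 80-bit real (giving exactly 1 + 2^-53, a
        // halfway case for double), then to double (ties-to-even gives 1.0).
        // The doubly rounded result is the less accurate of the two.
        real wide = cast(real) a + b;
        double twice = cast(double) wide;

        writefln("once:  %a", once);
        writefln("twice: %a", twice);
    }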

> If the precision you are doing the calculations at is already higher than the accuracy of your data, more precision won't get you much of anything.
> ...

It may even break some algorithms, especially when it is inconsistently and implicitly applied, which is the case in D.

> but if the converse is true, if you are processing the data at lower precision than the accuracy of your source data, then increasing precision will absolutely increase accuracy.
> 

Usually true, but I still want to decide on my own which computation will be rounded to what precision. Even an accuracy issue that is accidentally avoided thanks to non-guaranteed implicit higher precision is a ticking time bomb, and very annoying to debug when it goes off.
August 14
On Wednesday, 14 August 2024 at 02:35:08 UTC, Timon Gehr wrote:
> On 8/14/24 00:35, claptrap wrote:
>>
>> If you have a pristine 24-bit audio sample, it maybe has 120 dB SNR due to DAC limitations. Processing it in 16 bit will automatically lose you 4 bits of accuracy; it'll drop the SNR to 96 dB. If you process it at 32 bits you still have 120 dB SNR: greater precision but the same accuracy as the source.
>> 
>> The point is the statement "higher precision does not mean the result will be more accurate." is only half true.
>> ...
>
> It is fully true. "A means B" holds when in any situation where A holds, B also holds. "A does not mean B" holds when there is a situation where A is true but B is false.

I'd argue "does not mean" is vague, and most people would read what you wrote as "A != B in any situation".

But otherwise I agree based on what you meant.


>> but if the converse is true, if you are processing the data at lower precision than the accuracy of your source data, then increasing precision will absolutely increase accuracy.
>> 
>
> Usually true, but I still want to decide on my own which computation will be rounded to what precision. Even an accuracy issue that is accidentally avoided thanks to non-guaranteed implicit higher precision is a ticking time bomb, and very annoying to debug when it goes off.

I agree the compiler should actually use the float precision you explicitly ask for.




August 15

On Tuesday, 13 August 2024 at 07:56:34 UTC, Abdulhaq wrote:
> One little piece of anecdata, when working with python I saw differing results in the 17th d.p. when running the exact same calculation in the same python session. The reason turned out to be which registers were being used each time, which could vary. This is not considered to be a bug.

I would consider it a bug.

Running the same piece of code multiple times with exactly the same parameters should result in exactly the same results.

I also once expected that CTFE and runtime execution of the same code would always yield the same results. As I learned recently, this assumption is wrong. Using 'real' as the type, however, usually leads to the same result.
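
For what it is worth, this is the kind of minimal experiment I mean (the input value is just an arbitrary example; whether the two printed values actually differ depends on the compiler and target — the point is only that the language permits them to differ):

    import std.stdio;

    float f(float x)
    {
        // Catastrophic cancellation: if the intermediate x * x is kept at
        // higher precision than float, the final result can change.
        float y = x * x - 1.0f;
        return y * 1e7f;
    }

    enum float atCompileTime = f(1.0000001f); // evaluated by CTFE

    void main()
    {
        float atRunTime = f(1.0000001f);      // evaluated at run time
        writefln("CTFE:    %a", atCompileTime);
        writefln("runtime: %a", atRunTime);
    }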

All the other issues with floats of any size are a separate topic.

August 15

On Wednesday, 14 August 2024 at 07:54:15 UTC, claptrap wrote:
> I agree the compiler should actually use the float precision you explicitly ask for.

But if you do cross compiling, that may simply not be possible (because the target has a different hardware implementation than the hardware the compiler runs on).

Also relying on specific inaccuracy of FP calculations is very bad design.

August 15
On 8/15/24 12:25, Dom DiSc wrote:
> On Wednesday, 14 August 2024 at 07:54:15 UTC, claptrap wrote:
>> I agree the compiler should actually use the float precision you explicitly ask for.
> 
> But if you do cross compiling, that may simply not be possible (because the target has a different hardware implementation than the hardware the compiler runs on).
> ...

It's still possible, but it's not even what I asked for. Also, personally I am not targeting any machine whose floating-point unit is sufficiently incompatible to cause a difference on this specific code. There has been a convergence of how floats work. D differs gratuitously.

> Also relying on specific inaccuracy of FP calculations is very bad design.

The point is not "relying on specific inaccuracy", the point is reproducibility. The specifics of the inaccuracy usually do not matter at all, what matters a great deal sometimes is that you get the same result when you run the same computation.
August 15
On 8/15/24 11:13, Carsten Schlote wrote:
> On Tuesday, 13 August 2024 at 07:56:34 UTC, Abdulhaq wrote:
>> One little piece of anecdata, when working with python I saw differing results in the 17th d.p. when running the exact same calculation in the same python session. The reason turned out to be which registers were being used each time, which could vary. This is not considered to be a bug.
> 
> I would consider it a bug.
> 
> Running the same piece of code multiple times with exactly the same parameters should result in exactly the same results.

The D spec allows this to not be true too. I consider it a weakness of the spec.
August 15
On 8/15/24 12:25, Dom DiSc wrote:
> Also relying on specific inaccuracy of FP calculations is very bad design.

Personally I mostly care about reproducibility, but also, as I said earlier, accuracy-improving algorithms like Kahan summation or general double-double computations very much "rely on specific inaccuracy". It's not bad design, it may just be that you are not sufficiently informed about this kind of thing.

Kahan summation is even in Phobos: https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L7558-L7572

This implementation is however technically incorrect because D could choose a bad subset of the computations and run only those at "higher" precision, destroying the overall accuracy.
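
For reference, the idea looks roughly like this (a minimal sketch, not the Phobos code itself):

    // Kahan (compensated) summation: the correction term recovers the
    // low-order bits lost in `sum + y` -- but only if every one of these
    // operations is rounded to the same (double) precision. If the compiler
    // silently evaluates some of them at higher precision, the "error" that
    // c is supposed to capture is no longer the error that actually occurred.
    double kahanSum(const(double)[] xs)
    {
        double sum = 0.0;
        double c   = 0.0;        // running compensation
        foreach (x; xs)
        {
            double y = x - c;
            double t = sum + y;
            c = (t - sum) - y;   // algebraically zero; in floating point, the lost part
            sum = t;
        }
        return sum;
    }

    void main()
    {
        import std.stdio : writeln;
        // With every operation rounded to double, naive left-to-right summation
        // of these values yields 0, while the compensated sum recovers 2.
        double[] xs = [1e16, 1.0, 1.0, -1e16];
        writeln(kahanSum(xs));
    }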

It gets more obvious if you drop the judgmental tone and just call "specific inaccuracy" by the proper name: "correct rounding".
August 15

On Thursday, 15 August 2024 at 09:13:31 UTC, Carsten Schlote wrote:
> On Tuesday, 13 August 2024 at 07:56:34 UTC, Abdulhaq wrote:
>> One little piece of anecdata, when working with python I saw differing results in the 17th d.p. when running the exact same calculation in the same python session. The reason turned out to be which registers were being used each time, which could vary. This is not considered to be a bug.
> 
> I would consider it a bug.

I'd say that whether it's a bug or not comes down to the definition of "bug", and I don't think a discussion of that is useful; it's also subjective, and reasonable people can differ on it.

But I would say that, in my python example, both answers were wrong, in the sense that they were not exactly right (and can't be); they were just each wrong in a different way.

In your example I'd ask you to consider this: all the images you created, except the trivial ones, were wrong, in the sense that some of the calculations were inexact and some of them had sufficient error to change the color of the pixel. It's just that the CTFE image was wrong in a different way to the non-CTFE image. Why do you want to reproduce the errors?

In general floating point calculations are inexact, and we shouldn't (generalising here) expect to get the same error across different platforms.

In this specific case you could argue that it's the "same platform", I get that.

August 15

On Thursday, 15 August 2024 at 16:21:35 UTC, Abdulhaq wrote:
> On Thursday, 15 August 2024 at 09:13:31 UTC, Carsten Schlote wrote:

To clarify a bit more, I'm not just talking about single isolated computations, I'm talking about e.g. matrix multiplication. Different compilers, even LDC vs DMD for example, could optimise the calculation in different ways: loop unrolling, step elimination, etc. Even if the rounding algorithms at the chip level are the same, the way the code is compiled and the calculations are sequenced will change the error in the final answer.
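
As a small illustration, floating-point addition is not even associative, so merely regrouping a sum changes the result (assuming the additions really are performed at float precision, which is of course the other half of this discussion):

    import std.stdio;

    void main()
    {
        float a = 1e8f, b = -1e8f, c = 1.0f;
        writeln((a + b) + c); // 1: a + b is exactly 0
        writeln(a + (b + c)); // 0: b + c rounds back to -1e8f, and the 1 is lost
    }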

Then, variations in pipelining and caching at the processor level could also affect the answer.

And if you move on to different computing paradigms such as quantum computing and other as yet undiscovered techniques, again the way operations and rounding are compounded will cause divergences in computations.

Now, we could insist that we somehow legislate for the way compound calculations are conducted. But that would cripple the speed of calculations for some processor architectures / paradigms, for a goal (reproducibility) which is worthy, but for 99% of usages not sufficiently beneficial to pay the big price in performance.

August 15
On 8/15/24 18:21, Abdulhaq wrote:
> Why do you want to reproduce the errors?

It's you who calls it "errors". Someone else may just call it "results".

Science. Determinism, e.g. blockchain or other deterministic lock-step networking, etc. It's not hard to come up with use cases. Sometimes the exact result you get is not very important, but it is important that you get the same one.

> 
> In general floating point calculations are inexact, and we shouldn't (generalising here) expect to get the same error across different platforms. 

It's just not true that floating point calculations are somehow "inexact". You are confusing digital and analog computing. This is still digital computing. Rounding is a deterministic, exact function, like any other.
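
For instance, the decimal 0.1 has no exact binary representation, but rounding it to double is a well-defined, deterministic operation that produces exactly the same bit pattern every time:

    import std.stdio;

    void main()
    {
        // The nearest double to 0.1 is exactly 0x1.999999999999ap-4; the
        // conversion is a deterministic, exactly specified rounding function.
        writefln("%a", 0.1);
    }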