June 05, 2020
On 6/5/20 4:05 PM, jmh530 wrote:
> However, I have always been sympathetic to Walter's argument in favor of doing intermediates at the highest precision.

I have, too. But times have changed, and that argument has aged poorly.
June 05, 2020
On Friday, 5 June 2020 at 19:48:55 UTC, H. S. Teoh wrote:
> On Fri, Jun 05, 2020 at 03:39:26PM -0400, Andrei Alexandrescu via Digitalmars-d wrote:
>> On 6/4/20 10:39 AM, jmh530 wrote:
> [...]
>> > [...]
> [...]
>> > [...]
>> 
>> This needs to change. It's one thing to offer more precision to the user who consciously uses 80-bit reals, and it's an entirely different thing to force that bias on the user who's okay with double precision.
>
> +100!  std.math has been pessimal for all these years for no good reason, let's get this fixed by making sure this gets pushed through:
>
> 	https://github.com/dlang/phobos/pull/7463
>
> AIUI, it's currently only held up by a single 3rd party package awaiting a new release. Once that's done, we need to rectify this pessimal state of affairs.
>
What I don't understand is why the overloads are not implemented with templates. Isn't this exactly the use case they were invented for?
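For instance, a minimal sketch of the template version (just the shape of it, not actual Phobos code):

    import std.traits : isFloatingPoint;

    // One template instead of three hand-written overloads: each
    // instantiation does its arithmetic at the caller's precision.
    T cosh(T)(T x) @safe pure nothrow @nogc
        if (isFloatingPoint!T)
    {
        import std.math : exp;
        //  cosh(x) = (exp(x) + exp(-x)) / 2
        const T y = exp(x);
        return (y + T(1) / y) * T(0.5);
    }

Overload resolution picks the instantiation matching the argument type, so float and double callers would no longer pay for 80-bit intermediates.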
June 06, 2020
On 05.06.20 21:40, Andrei Alexandrescu wrote:
> On 6/4/20 10:40 AM, Timon Gehr wrote:
>> On 04.06.20 16:14, Andrei Alexandrescu wrote:
>>> D should just use the C functions when they offer better speed.
>>>
>>> https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/fsr4w5o/ 
>>>
>>
>> In one of my projects I had to manually re-implement all functions that currently forward to libc, because libc is not portable.
> 
> It's totally fine to version() things appropriately. In fact it's the best way - you get to version() in specific libc implementations known to be adequate.

I think the issue was that different implementations were inadequate in different ways.
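For concreteness, the version() approach being discussed would look something like this (a hedged sketch using the predefined CRuntime_* version identifiers; `myExp` is a made-up name, and which libcs count as "adequate" is exactly the judgment call in question):

    // Hypothetical wrapper: forward to libc only where it is known-good.
    double myExp(double x) nothrow @nogc
    {
        version (CRuntime_Glibc)
        {
            import core.stdc.math : exp;
            return exp(x); // glibc's exp, assumed adequate here
        }
        else version (CRuntime_Microsoft)
        {
            import core.stdc.math : exp;
            return exp(x); // MS C runtime's exp, assumed adequate here
        }
        else
        {
            import std.math : exp;
            return exp(x); // unknown libc: portable D implementation
        }
    }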
June 06, 2020
On 05.06.20 21:39, Andrei Alexandrescu wrote:
> On 6/4/20 10:39 AM, jmh530 wrote:
>> On Thursday, 4 June 2020 at 14:14:22 UTC, Andrei Alexandrescu wrote:
>>> D should just use the C functions when they offer better speed.
>>>
>>> https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/fsr4w5o/ 
>>>
>>
>> Below is a typical example of a std.math implementation for a trig function. It casts everything to real to improve accuracy. This doesn't explain everything, but it's a general strategy in std.math to prefer accuracy over speed.
>>
>> real cosh(real x) @safe pure nothrow @nogc
>> {
>>      //  cosh = (exp(x)+exp(-x))/2.
>>      // The naive implementation works correctly.
>>      const real y = exp(x);
>>      return (y + 1.0/y) * 0.5;
>> }
>>
>> /// ditto
>> double cosh(double x) @safe pure nothrow @nogc { return cosh(cast(real) x); }
>>
>> /// ditto
>> float cosh(float x) @safe pure nothrow @nogc  { return cosh(cast(real) x); }
> 
> This needs to change. It's one thing to offer more precision to the user who consciously uses 80-bit reals, and it's an entirely different thing to force that bias on the user who's okay with double precision.

Those implementations don't give you more than double precision. The issue is that they are slow, not that they are too precise.

Anyway, what you are saying is of course true, but it is a critique of language semantics, not Phobos:

https://dlang.org/spec/float.html
"It's possible that, due to greater use of temporaries and common subexpressions, optimized code may produce a more accurate answer than unoptimized code."

I'll decide which one (if any) of those results is "accurate", thank you very much.
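Concretely, here is the kind of divergence that sentence permits (a minimal example; which result you get depends on how the compiler evaluates the intermediate):

    import std.stdio : writeln;

    void main()
    {
        double a = 1e308;

        // Evaluated strictly in double, a * 10 overflows to infinity
        // and the division cannot undo that.  Evaluated with an 80-bit
        // x87 temporary, a * 10 still fits and dividing by 10 gives
        // 1e308 back.
        double b = a * 10 / 10;

        writeln(b); // "inf" or "1e+308", depending on the compiler
    }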
June 06, 2020
On Friday, 5 June 2020 at 20:05:56 UTC, jmh530 wrote:
> On Friday, 5 June 2020 at 19:39:26 UTC, Andrei Alexandrescu wrote:
>> [snip]
>>
>> This needs to change. It's one thing to offer more precision to the user who consciously uses 80-bit reals, and it's an entirely different thing to force that bias on the user who's okay with double precision.
>
> I agree with you that more precision should be opt-in.
>
> However, I have always been sympathetic to Walter's argument in favor of doing intermediates at the highest precision. There are many good reasons why critical calculations need to be done at the highest precision possible.

I believe that decision was based on a time when floating point math on common computers was done in higher precision anyway, so explicit `real` didn't cost anything and avoided needless rounding.

https://en.wikipedia.org/wiki/X87#Description

>By default, the x87 processors all use 80-bit double-extended precision
>internally (to allow sustained precision over many calculations, see IEEE
>754 design rationale).
June 06, 2020
On Thursday, 4 June 2020 at 17:25:51 UTC, jmh530 wrote:
>
> LDC has the intrinsics I mentioned above, which mir.math.common uses. However, I'm not sure how they are implemented...
>

Things like llvm_pow and llvm_exp actually defer to libc, IIRC.
So the results vary a bit from libc to libc, and are also slightly different from Phobos. ^^

Complicated stuff if one wants to be perfectly backwards compatible.

Example:

    // Needs LDC: llvm_exp comes from ldc.intrinsics.
    import ldc.intrinsics : llvm_exp;

    // Gives considerable speed improvement over `std.math.exp`.
    // Exhaustive testing for 32-bit `float` shows relative accuracy
    // is within < 0.0002% of std.math.exp for every possible input.
    // So a -120 dB inaccuracy max, and -140 dB the vast majority
    // of the time.
    alias fast_exp = llvm_exp;

This is with the Microsoft C runtime; I haven't tested others.
So this one looks relatively safe to use: -140 dB is about as good as you can go with float. But for other transcendentals it might be a bit more shaky.

The ideal transcendental function would be:
   - faster than libc
   - same precision on all platforms
   - specialized by float/double size
   - within a given margin of the truth (in ulp)
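The exhaustive testing mentioned above is cheap enough to reproduce. A rough sketch of such a harness, assuming LDC (llvm_exp comes from ldc.intrinsics, so it won't build elsewhere), using std.math.exp at double precision as the reference:

    import ldc.intrinsics : llvm_exp;
    import std.math : exp, fabs;
    import std.stdio : writefln;

    void main()
    {
        double maxRelError = 0;

        // Walk every 32-bit pattern except uint.max itself (a NaN).
        foreach (uint bits; 0 .. uint.max)
        {
            float x = *cast(float*) &bits;
            if (x != x)
                continue; // skip NaNs

            const double slow = exp(cast(double) x); // reference value
            if (slow < float.min_normal || slow > float.max)
                continue; // skip results outside float's normal range

            const float fast = llvm_exp(x);
            const double rel = fabs((fast - slow) / slow);
            if (rel > maxRelError)
                maxRelError = rel;
        }

        // 20 * log10(maxRelError) gives the dB figure quoted above.
        writefln("max relative error: %.3g", maxRelError);
    }

The same harness works for any unary float function, which is what would make a "within a given margin in ulp" contract testable per platform.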
June 08, 2020
On Thursday, 4 June 2020 at 14:14:22 UTC, Andrei Alexandrescu wrote:
> D should just use the C functions when they offer better speed.
>
> https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/fsr4w5o/

If it's of any help, I explicitly benchmarked a selection of mathematical functions from `std.math`, `core.stdc.math`, and `LDC_intrinsic` here: https://github.com/dataPulverizer/DMathBench/blob/master/report.md
June 08, 2020
On 08.06.20 05:05, data pulverizer wrote:
> On Thursday, 4 June 2020 at 14:14:22 UTC, Andrei Alexandrescu wrote:
>> D should just use the C functions when they offer better speed.
>>
>> https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/fsr4w5o/ 
>>
> 
> If it's of any help, I explicitly benchmarked a selection of mathematical functions from `std.math`, `core.stdc.math`, and `LDC_intrinsic` here: https://github.com/dataPulverizer/DMathBench/blob/master/report.md

> many important processes rely on their performance and accuracy.

This is true, but you only measured performance, not accuracy.
June 08, 2020
On Mon, Jun 08, 2020 at 08:13:20PM +0200, Timon Gehr via Digitalmars-d wrote:
> On 08.06.20 05:05, data pulverizer wrote:
> > On Thursday, 4 June 2020 at 14:14:22 UTC, Andrei Alexandrescu wrote:
> > > D should just use the C functions when they offer better speed.
> > > 
> > > https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/fsr4w5o/
> > > 
> > 
> > If it's of any help, I explicitly benchmarked a selection of mathematical functions from `std.math`, `core.stdc.math`, and `LDC_intrinsic` here: https://github.com/dataPulverizer/DMathBench/blob/master/report.md
> 
> > many important processes rely on their performance and accuracy.
> 
> This is true, but you only measured performance, not accuracy.

Case in point: the `sin` intrinsic in dmd is broken, because it uses the x87 fsin instruction, which is buggy:

	https://issues.dlang.org/show_bug.cgi?id=15854

Certainly, using fsin is faster than a software implementation of sin(x), but at the cost of a pretty badly-off result.
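A quick way to see it (hedged: whether sin actually lowers to fsin depends on the compiler and target) is to evaluate sin at the double closest to pi, where fsin's 66-bit internal approximation of pi costs most of the significant digits:

    import std.math : sin;
    import std.stdio : writefln;

    void main()
    {
        // The double closest to pi.  The correctly rounded result is
        // approximately 1.2246467991473532e-16.
        const double x = 3.141592653589793;

        // Via x87 fsin, only the first handful of digits tend to be
        // right, because fsin range-reduces against a 66-bit pi.
        writefln("%.17g", sin(x));
    }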


T

-- 
Do not reason with the unreasonable; you lose by definition.
June 09, 2020
On 2020-06-06 05:58:37 +0000, Nathan S. said:

> On Friday, 5 June 2020 at 20:05:56 UTC, jmh530 wrote:
>> On Friday, 5 June 2020 at 19:39:26 UTC, Andrei Alexandrescu wrote:
>>> [snip]
>>> 
>>> This needs to change. It's one thing to offer more precision to the user who consciously uses 80-bit reals, and it's an entirely different thing to force that bias on the user who's okay with double precision.
>> 
>> I agree with you that more precision should be opt-in.
>> 
>> However, I have always been sympathetic to Walter's argument in favor of doing intermediates at the highest precision. There are many good reasons why critical calculations need to be done at the highest precision possible.
> 
> I believe that decision was based on a time when floating point math on common computers was done in higher precision anyway

It's still this way today; your statement reads as if x87 were gone. It's not.

>  so explicit `real` didn't cost anything and avoided needless rounding.

It's still that way today.

And one feature I like a lot about D is that I have simple access to 80-bit FP precision. I would even like to have 128-bit FP, but Intel won't do it.

All the GPU and AI/ML hype is focused on 64-bit FP or less. There is no glory in giving up additional precision.

As already stated in this thread: why not implement the code as templates and provide some pre-instantiated wrappers?
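A hedged sketch of what that could look like (hypothetical names; the generic core computes at the caller's precision, and thin pre-instantiated wrappers preserve today's per-type API):

    import std.traits : isFloatingPoint;

    // Generic core: arithmetic happens at T's precision.
    T coshImpl(T)(T x) @safe pure nothrow @nogc
        if (isFloatingPoint!T)
    {
        import std.math : exp;
        const T y = exp(x);
        return (y + T(1) / y) * T(0.5);
    }

    // Pre-instantiated wrappers keep the existing non-template
    // signatures available.
    real   cosh(real x)   @safe pure nothrow @nogc { return coshImpl(x); }
    double cosh(double x) @safe pure nothrow @nogc { return coshImpl(x); }
    float  cosh(float x)  @safe pure nothrow @nogc { return coshImpl(x); }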

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
