Thread overview
complex arithmetic in D: multiple questions
- J-S Caux (Mar 09, 2018)
- Nicholas Wilson (Mar 09, 2018)
- J-S Caux (Mar 09, 2018)
- Nicholas Wilson (Mar 09, 2018)
- bachmeier (Mar 09, 2018)
March 09, 2018
Please bear with me, this is a long question!
To explain, I'm a scientist considering switching from C++ to D, but before I do, I need to ensure that I can:
- achieve execution speeds comparable to C++ (for the same accuracy; I can accept a slight slowdown, call it 30%, to get a few more digits (which I typically don't need))
- easily perform all standard mathematical operations on complex numbers
- (many other things not relevant here: memory use, parallelization, etc).

In the two files linked below, I compare execution speed/accuracy between D and C++ when using log on complex variables:

https://www.dropbox.com/s/hfw7nkwg25mk37u/test_cx.d?dl=0
https://www.dropbox.com/s/hfw7nkwg25mk37u/test_cx.d?dl=0

The idea is simple: let a complex variable be uniformly distributed around the unit circle. Summing the logs should give zero.

In the D code, I've defined an "own" version of the log, log_cx, since std.complex (tragically!) does not provide this function (and many others, see recent threads https://forum.dlang.org/post/dewzhtnpqkaqkzxwpkrs@forum.dlang.org and https://forum.dlang.org/thread/lsnuevdefktulxltoqpj@forum.dlang.org, and issue https://issues.dlang.org/show_bug.cgi?id=18571).
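
For concreteness, the D side of the benchmark boils down to something like the following minimal sketch (not the exact contents of the linked file, but the same idea):

```d
import std.complex;
import std.math;
import std.stdio;
import std.datetime.stopwatch;

// log is one of the functions missing from std.complex, so roll our own
// from the pieces that are there (abs and arg):
Complex!T log_cx(T)(Complex!T z)
{
    return Complex!T(std.math.log(abs(z)), arg(z));
}

void main()
{
    enum N = 1_000_000;
    auto sum = Complex!real(0, 0);
    auto sw = StopWatch(AutoStart.yes);
    foreach (n; 0 .. N)
    {
        // points uniformly distributed around the unit circle
        immutable theta = 2.0L * PI * n / N;
        sum += log_cx(complex(cos(theta), sin(theta)));
    }
    sw.stop();
    writeln("re, im (should be 0): ", sum.re, "\t", sum.im);
    writeln("time for ", N, " pts: ", sw.peek);
}
```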

First, speed/accuracy (times for 1M points in all cases):
D:
dmd, no flags:
Complex!real: re, im (should be 0): -9.24759400999786151942e-15	6.26324079407839123978e-14
time for 1000000 pts: 190 ms, 508 μs, and 9 hnsecs
Complex!double: re, im (should be 0): -1.96986871259241524967e-12	5.46260919029144254022e-09
time for 1000000 pts: 455 ms, 888 μs, and 7 hnsecs

dmd -release -inline -O:
Complex!real: re, im (should be 0): -9.24759400999786151942e-15	6.26324079407839123978e-14
time for 1000000 pts: 175 ms, 352 μs, and 3 hnsecs
Complex!double: re, im (should be 0): -4.23880765557105362133e-14	5.46260919029144254022e-09
time for 1000000 pts: 402 ms, 965 μs, and 7 hnsecs

ldc2, no flags:
Complex!real: re, im (should be 0): -9.24759400999786151942e-15	6.26324079407839123978e-14
time for 1000000 pts: 184 ms, 353 μs, and 9 hnsecs
Complex!double: re, im (should be 0): -1.96986871259241524967e-12	5.46260919029144254022e-09
time for 1000000 pts: 436 ms, 526 μs, and 8 hnsecs

ldc2 -release -O:
Complex!real: re, im (should be 0): -9.24759400999786151942e-15	6.26324079407839123978e-14
time for 1000000 pts: 108 ms and 966 μs
Complex!double: re, im (should be 0): -1.96986871259241524967e-12	5.46260919029144254022e-09
time for 1000000 pts: 330 ms, 421 μs, and 8 hnsecs

As expected, accuracy with Complex!real is about 4 digits better, and the best combination is ldc2 with flags.

Now C++:
GCC 7.1.0, -O3:
complex<double>: re, im (should be 0): (8.788326118779445e-13,1.433519814600731e-11)
time for 1000000 pts: 0.042751 seconds.

Apple LLVM version 9.0.0 (clang-900.0.39.2), -O3:
complex<double>: re, im (should be 0): (-3.0160318686967e-12,1.433519814600731e-11)
time for 1000000 pts: 0.038715 seconds.

So plain C++ is about three times as fast as the best D result I managed.

Now for my questions:
- I would expect the D `Complex!double` case to work faster than the `real` one. Why is it the other way around? [I can accept (and use) D with Complex!real running 1/3 the speed of C++ (but with increased accuracy), but I'd also love to be able to run D with `Complex!double` at C++ speeds, since the tradeoff might be worth it for some calculations]
- what is the best way to correct the unfortunate (to be polite) omission of many standard mathematical functions from std.complex? [if I may be frank, this is a real pain for us scientists] There exists https://gist.github.com/Biotronic/17af645c2c9b7913de1f04980cd22b37 but can this be integrated (after improvements) in the language, or should I (re)build my own?
- for my education, why was the decision made to go from the built-in types `creal` etc to the `Complex` type?

[related questions:
March 09, 2018
On Friday, 9 March 2018 at 12:34:40 UTC, J-S Caux wrote:
> Please bear with me, this is a long question!
> To explain, I'm a scientist considering switching from C++ to D, but before I do, I need to ensure that I can:
> - achieve execution speeds comparable to C++ (for the same accuracy; I can accept a slight slowdown, call it 30%, to get a few more digits (which I typically don't need))
> - easily perform all standard mathematical operations on complex numbers
> - (many other things not relevant here: memory use, parallelization, etc).
>
> In the two files linked below, I compare execution speed/accuracy between D and C++ when using log on complex variables:
>
> https://www.dropbox.com/s/hfw7nkwg25mk37u/test_cx.d?dl=0
> https://www.dropbox.com/s/hfw7nkwg25mk37u/test_cx.d?dl=0
>
> The idea is simple: let a complex variable be uniformly distributed around the unit circle. Summing the logs should give zero.
>
> In the D code, I've defined an "own" version of the log, log_cx, since std.complex (tragically!) does not provide this function (and many others, see recent threads https://forum.dlang.org/post/dewzhtnpqkaqkzxwpkrs@forum.dlang.org and https://forum.dlang.org/thread/lsnuevdefktulxltoqpj@forum.dlang.org, and issue https://issues.dlang.org/show_bug.cgi?id=18571).
>
> [snip]
>
> So plain C++ is about three times as fast as the best D result I managed.
>
> Now for my questions:
> - I would expect the D `Complex!double` case to work faster than the `real` one. Why is it the other way around? [I can accept (and use) D with Complex!real running 1/3 the speed of C++ (but with increased accuracy), but I'd also love to be able to run D with `Complex!double` at C++ speeds, since the tradeoff might be worth it for some calculations]

Because the double version is doing the exact same work as the real version, except that it is also converting between real and double for atan2 (called from arg): https://github.com/dlang/phobos/blob/master/std/math.d#L1352

I'm really not sure why Phobos does that; it shouldn't.

> - what is the best way to correct the unfortunate (to be polite) omission of many standard mathematical functions from std.complex? [if I may be frank, this is a real pain for us scientists] There exists https://gist.github.com/Biotronic/17af645c2c9b7913de1f04980cd22b37 but can this be integrated (after improvements) in the language, or should I (re)build my own?

It will be much faster to build your own that just forwards to the C functions (or LLVM intrinsics); see https://github.com/libmir/mir-algorithm/blob/master/source/mir/math/common.d#L126 for a starting point (we'd greatly appreciate pull requests for anything missing).
Even if the proposal to make std.math operate at the appropriate precision is accepted, it will still take an entire release cycle to reach users (unless you use dmd master), and I'm not so sure it will be accepted.
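
For instance, a double-only complex log that skips std.math entirely and forwards straight to the C library routines could look like this (a quick, untested sketch):

```d
import std.complex : Complex;
import core.stdc.math : atan2, hypot, log; // C double-precision routines

// Stays in double all the way through, so no real <-> double conversions.
Complex!double log_cx(Complex!double z)
{
    return Complex!double(log(hypot(z.re, z.im)), atan2(z.im, z.re));
}
```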

> - for my education, why was the decision made to go from the built-in types `creal` etc to the `Complex` type?

I think it was because they can be implemented in the library.
I personally don't think they should have been removed; they don't complicate the language that much.

> [related questions:

Did you press send too soon?
March 09, 2018
On Friday, 9 March 2018 at 13:56:33 UTC, Nicholas Wilson wrote:
>> - I would expect the D `Complex!double` case to work faster than the `real` one. Why is it the other way around? [I can accept (and use) D with Complex!real running 1/3 the speed of C++ (but with increased accuracy), but I'd also love to be able to run D with `Complex!double` at C++ speeds, since the tradeoff might be worth it for some calculations]
>
> Because the double version is doing the exact same work as the real version, except that it is also converting between real and double for atan2 (called from arg): https://github.com/dlang/phobos/blob/master/std/math.d#L1352
>
> I'm really not sure why Phobos does that; it shouldn't.

Is this a case for a bug report? Seems pretty bizarre to do that, like an oversight/neglect.

>> - what is the best way to correct the unfortunate (to be polite) omission of many standard mathematical functions from std.complex? [if I may be frank, this is a real pain for us scientists] There exists https://gist.github.com/Biotronic/17af645c2c9b7913de1f04980cd22b37 but can this be integrated (after improvements) in the language, or should I (re)build my own?
>
> It will be much faster to build your own that just forwards to the C functions (or LLVM intrinsics); see https://github.com/libmir/mir-algorithm/blob/master/source/mir/math/common.d#L126 for a starting point (we'd greatly appreciate pull requests for anything missing).
> Even if the proposal to make std.math operate at the appropriate precision is accepted, it will still take an entire release cycle to reach users (unless you use dmd master), and I'm not so sure it will be accepted.
>

OK, thanks. I looked at libmir and saw many good things there. I was wondering: is it still actively developed/maintained? How will it fit with the "core" D in the future? [I don't want to build dependencies on libraries that aren't there to stay in the long run; I want code that can survive for decades.] It would seem to me that some of the things included there should be part of D core/std anyway.

Going further, I'm really wondering what the plan is as far as Complex is concerned. Right now it just feels neglected (half-done/aborted transition from creal etc to Complex, lots of missing basic functions etc), and is one major blocking point as far as adoption (among scientists) is concerned. Julia is really taking off with many of my colleagues, mostly because due respect was given to maths. I'd certainly choose Julia if it wasn't for the fact that I can't get my exploratory/testing codes to run faster than about 1/10th of my C++ stuff. It seems D could have such an appeal in the realm of science, but these little things are really blocking adoption (including for myself).

>> [related questions:
>
> Did you press send too soon?

No, the related questions were linked in my previous post (just copied & pasted it further above, but didn't delete these last couple of words properly).

Thanks a lot Nicholas!

March 09, 2018
On Friday, 9 March 2018 at 14:41:47 UTC, J-S Caux wrote:
> Is this a case for a bug report? Seems pretty bizarre to do that, like an oversight/neglect.

Yes, if there isn't one for it already.

> OK thanks. I looked at libmir, and saw many good things there. I was wondering: is it still actively developed/maintained? How will it fit with the "core" D in the future? [I don't want to build dependencies to libraries which aren't there to stay in the long run, I want code which can survive for decades]. It would seem to me that some of the things included in there should be part of D core/std anyway.

Yes, it is sponsored by https://github.com/kaleidicassociates, so it will be around for a long time.
It is developed separately because its dev/release cycles don't easily align with those of the core/stdlib developers.

The Slice type at https://github.com/libmir/mir-algorithm/blob/master/source/mir/ndslice/slice.d#L594 is the de facto matrix structure for D.
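
For a taste of it, basic usage looks roughly like this (a minimal sketch, assuming the mir-algorithm dub package):

```d
import mir.ndslice;            // dub dependency: "mir-algorithm"
import std.stdio : writeln;

void main()
{
    // a 3x4 matrix of doubles, allocated on the GC heap
    auto m = slice!double(3, 4);
    m[] = 0.0;                 // fill the whole matrix
    m[1, 2] = 42.0;            // plain 2D indexing
    writeln(m[1]);             // row 1 as a 1D view: [0, 0, 42, 0]
    writeln(m.shape);          // [3, 4]
}
```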

> Going further, I'm really wondering what the plan is as far as Complex is concerned. Right now it just feels neglected (half-done/aborted transition from creal etc to Complex, lots of missing basic functions etc), and is one major blocking point as far as adoption (among scientists) is concerned. Julia is really taking off with many of my colleagues, mostly because due respect was given to maths. I'd certainly choose Julia if it wasn't for the fact that I can't get my exploratory/testing codes to run faster than about 1/10th of my C++ stuff. It seems D could have such an appeal in the realm of science, but these little things are really blocking adoption (including for myself).

Indeed, I'll see what I can do about it.

>>> [related questions:
>>
>> Did you press send too soon?
>
> No, the related questions were linked in my previous post (just copied & pasted it further above, but didn't delete these last couple of words properly).
>
> Thanks a lot Nicholas!

March 09, 2018
On Friday, 9 March 2018 at 14:41:47 UTC, J-S Caux wrote:

> Going further, I'm really wondering what the plan is as far as Complex is concerned. Right now it just feels neglected (half-done/aborted transition from creal etc to Complex, lots of missing basic functions etc), and is one major blocking point as far as adoption (among scientists) is concerned. Julia is really taking off with many of my colleagues, mostly because due respect was given to maths. I'd certainly choose Julia if it wasn't for the fact that I can't get my exploratory/testing codes to run faster than about 1/10th of my C++ stuff. It seems D could have such an appeal in the realm of science, but these little things are really blocking adoption (including for myself).

I don't do the things you're doing (I do econometrics), but I don't think that at this point D is ready to be used as a complete solution for everything you need. It can be done, but someone has to do the work, and that hasn't happened. D is designed to be fully interoperable with C and mostly interoperable with C++. You'll get the same performance as C, but that doesn't help if the libraries you need haven't been written yet.

From a practical perspective (i.e., you want to just get work done without writing a bunch of low-level stuff yourself), it's best to be prepared to call from D into C/C++ or from C/C++ into D. This hybrid approach has worked well for me, and to be honest, I'd rather rely on well-tested, well-maintained C libraries than worry about pure D libraries that haven't been tested extensively and may or may not be maintained in the future. It really doesn't matter to your own code whether the function you're calling was written in C or in D.
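
To give a flavour of how little ceremony that takes, calling a C function from D is just a matter of declaring its prototype; here is a trivial sketch using hypot from the C math library (which is linked in by default on most setups):

```d
// Declare the C prototype once, then call it like any D function.
extern(C) double hypot(double x, double y) nothrow @nogc;

void main()
{
    import std.stdio : writeln;
    writeln(hypot(3.0, 4.0)); // prints 5
}
```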

As for Julia, it was created as a Matlab replacement, and it has full-time devs working on it. If I were starting over, I would consider Julia for my own work. I'd probably still choose D, but Julia does offer advantages.