June 09, 2015
> I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)

D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will come last in the corresponding math benchmarks.
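For reference, a minimal sketch of what a C-level binding could look like in D (hypothetical declarations following the netlib CBLAS convention; assumes the program links against an implementation such as OpenBLAS):

```d
// Hypothetical sketch of a CBLAS binding in D; the enum values are
// the standard netlib CBLAS constants.
extern (C) nothrow @nogc
{
    enum CBLAS_ORDER : int     { RowMajor = 101, ColMajor = 102 }
    enum CBLAS_TRANSPOSE : int { NoTrans = 111, Trans = 112, ConjTrans = 113 }

    // C := alpha * op(A) * op(B) + beta * C
    void cblas_dgemm(CBLAS_ORDER order,
                     CBLAS_TRANSPOSE transA, CBLAS_TRANSPOSE transB,
                     int m, int n, int k,
                     double alpha, const(double)* a, int lda,
                     const(double)* b, int ldb,
                     double beta, double* c, int ldc);
}
```

With a declaration like this, a high-level D wrapper can check shapes and strides and then forward the actual multiplication to the tuned library.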
June 09, 2015
On 6/9/15 1:50 AM, John Colvin wrote:
> On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:
>> (a) Provide standard data layouts in std.array for the typical shapes
>> supported by linear algebra libs: row major, column major, alongside
>> with striding primitives.
>
> I don't think this is quite the right approach. Multidimensional arrays
> and matrices are about accessing and iteration over data, not data
> structures themselves. The standard layouts are common special cases.

I see. So what would be the primitives necessary? Strides (in the form of e.g. special ranges)? What are the things that would make a library vendor or user go, "OK, now I know what steps to take to use my code with D"?

>> (b) Provide signatures for C and Fortran libraries so people who have
>> them can use them easily with D.
>>
>> (c) Provide high-level wrappers on top of those functions.
>>
>>
>> Andrei
>
> That is how e.g. numpy works and it's OK, but D can do better.
>
> Ilya, I'm very interested in discussing this further with you. I have a
> reasonable idea and implementation of how I would want the generic
> n-dimensional types in D to work, but you seem to have more experience
> with BLAS and LAPACK than me* and of course interfacing with them is
> critical.
>
> *I rarely interact with them directly.

Color me interested. This is another of those domains that hold great promise for D, but sadly a strong champion has been missing. Or two :o).


Andrei

June 09, 2015
On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:
> D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will come last in the corresponding math benchmarks.

Yes, D programmers are clearly lagging behind the Wolfram Mathematica programmers on those problems :)
https://projecteuler.net/language=D
https://projecteuler.net/language=Mathematica

To solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.
June 09, 2015
On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
> To solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.

Actually, this is what needs to be implemented in D:
http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
June 09, 2015
On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
>> I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)
>
>
> D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will come last in the corresponding math benchmarks.

A complication for linear algebra (or other mathsy things in general)
is the inability to detect and implement compound operations.
We don't declare mathematical operators to be algebraic operations,
which I think is a lost opportunity.
If we defined the properties along with their properties
(commutativity, transitivity, invertibility, etc), then the compiler
could potentially do an algebraic simplification on expressions before
performing codegen and optimisation.
There are a lot of situations where the optimiser can't simplify
expressions because it runs into an arbitrary function call, and I've
never seen an optimiser that understands exp/log/roots, etc, to the
point where it can reduce those expressions properly. To compete with
maths benchmarks, we need some means to simplify expressions properly.
June 09, 2015
On 10 June 2015 at 02:17, Manu <turkeyman@gmail.com> wrote:
> ... If we defined the properties along with their properties ...

*operators* along with their properties
June 09, 2015
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>
>>> I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)
>>
>>
>> D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will come last in the corresponding math benchmarks.
>
> A complication for linear algebra (or other mathsy things in general)
> is the inability to detect and implement compound operations.
> We don't declare mathematical operators to be algebraic operations,
> which I think is a lost opportunity.
> If we defined the properties along with their properties
> (commutativity, transitivity, invertibility, etc), then the compiler
> could potentially do an algebraic simplification on expressions before
> performing codegen and optimisation.
> There are a lot of situations where the optimiser can't simplify
> expressions because it runs into an arbitrary function call, and I've
> never seen an optimiser that understands exp/log/roots, etc, to the
> point where it can reduce those expressions properly. To compete with
> maths benchmarks, we need some means to simplify expressions properly.

Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.

Of the things that can be done, lazy operations should make it easier/possible for the optimiser to spot.
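As a rough illustration of the lazy approach (a toy sketch, not a proposed design): element-wise expressions can be built up as types and only evaluated on assignment, so the whole expression collapses into a single loop that the optimiser can see at once.

```d
// Toy sketch: `a + b` returns a lazy expression object rather than
// allocating a temporary. Evaluation happens in one fused loop on
// assignment, where the optimiser sees the entire expression.
struct Sum(L, R)
{
    L lhs; R rhs;
    double opIndex(size_t i) const { return lhs[i] + rhs[i]; }
    size_t length() const { return lhs.length; }
    auto opBinary(string op : "+", T)(T r) const
    {
        return Sum!(typeof(this), T)(this, r);
    }
}

struct Vec
{
    double[] data;
    double opIndex(size_t i) const { return data[i]; }
    size_t length() const { return data.length; }
    auto opBinary(string op : "+", T)(T r) const { return Sum!(Vec, T)(this, r); }

    // Force evaluation: one loop over the whole expression tree.
    void opAssign(E)(E expr)
    {
        foreach (i; 0 .. data.length)
            data[i] = expr[i];
    }
}
```

Here `r = a + b + c` builds a `Sum!(Sum!(Vec, Vec), Vec)` and the assignment writes each element once, with no intermediate vectors.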
June 09, 2015
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>>
>>> I believe that Phobos must support some common methods of linear algebra
>>> and general mathematics. I have no desire to join D with Fortran libraries
>>> :)
>>
>>
>> D definitely needs BLAS API support for matrix multiplication. The best
>> BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D
>> will come last in the corresponding math benchmarks.
>
> A complication for linear algebra (or other mathsy things in general)
> is the inability to detect and implement compound operations.
> We don't declare mathematical operators to be algebraic operations,
> which I think is a lost opportunity.
> If we defined the properties along with their properties
> (commutativity, transitivity, invertibility, etc), then the compiler
> could potentially do an algebraic simplification on expressions before
> performing codegen and optimisation.
> There are a lot of situations where the optimiser can't simplify
> expressions because it runs into an arbitrary function call, and I've
> never seen an optimiser that understands exp/log/roots, etc, to the
> point where it can reduce those expressions properly. To compete with
> maths benchmarks, we need some means to simplify expressions properly.

Simplified expressions would help because:
1. At the matrix (high) level, optimisation can be done very well by the programmer (algorithms with matrices, measured in the number of matrix multiplications, are small).
2. Low-level optimisation requires CPU/cache-specific tuning. Modern implementations are optimised for all cache levels. See the work by Kazushige Goto: http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
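For illustration, the core idea of that paper is to keep blocks of the operands resident in cache. A naive sketch of the blocking (real implementations also pack panels and use hand-written SIMD micro-kernels):

```d
import std.algorithm : min;

// Naive cache-blocked multiply of n x n row-major matrices, c += a*b.
// Illustrative only; nowhere near OpenBLAS speed.
void blockedGemm(const(double)[] a, const(double)[] b, double[] c, size_t n)
{
    enum blk = 64; // chosen so the working blocks fit in cache
    for (size_t ii = 0; ii < n; ii += blk)
        for (size_t kk = 0; kk < n; kk += blk)
            for (size_t jj = 0; jj < n; jj += blk)
                foreach (i; ii .. min(ii + blk, n))
                    foreach (k; kk .. min(kk + blk, n))
                    {
                        immutable aik = a[i * n + k];
                        foreach (j; jj .. min(jj + blk, n))
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```

The loop nest is the same O(n^3) algorithm; only the traversal order changes, so each block of `a`, `b` and `c` is reused many times while it is still in cache.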
June 09, 2015
On Tuesday, 9 June 2015 at 16:16:39 UTC, Dennis Ritchie wrote:
> On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
>> To solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.
>
> Actually, this is what needs to be implemented in D:
> http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html

This is very good stuff. However, I want to create something simpler:

[1]. n-dimensional slices (without matrix multiplication, "RowMajor/..." and other math features)
[2]. A netlib-like standard CBLAS API at `etc.blas.cblas`
[3]. High-level bindings to connect [1] with a 1-2D subset of [2].
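A rough sketch of what [1] could look like (hypothetical names; just a shape and strides over a flat buffer, so row-major, column-major and strided sub-slices are all the same type):

```d
// Hypothetical sketch of an n-dimensional slice: a pointer plus shape
// and strides. Transposes and sub-slices only change the stride
// vector; the underlying memory is never copied.
struct Slice(T, size_t N)
{
    T* ptr;
    size_t[N] shape;
    sizediff_t[N] strides;

    ref T opIndex(Indices...)(Indices idx)
        if (Indices.length == N)
    {
        sizediff_t offset = 0;
        foreach (d, i; idx)
            offset += strides[d] * i;
        return ptr[offset];
    }
}

// View a flat buffer as a rows x cols row-major matrix.
auto matrix(double[] data, size_t rows, size_t cols)
{
    return Slice!(double, 2)(data.ptr, [rows, cols],
                             [cast(sizediff_t) cols, 1]);
}
```

Because the layout is fully described by the strides, [3] reduces to checking that a slice is BLAS-compatible and passing `ptr` plus the leading dimension straight to [2].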
June 09, 2015
On 10 June 2015 at 02:32, John Colvin via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
>>
>> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>
>>>
>>>> I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)
>>>
>>>
>>>
>>> D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will come last in the corresponding math benchmarks.
>>
>>
>> A complication for linear algebra (or other mathsy things in general)
>> is the inability to detect and implement compound operations.
>> We don't declare mathematical operators to be algebraic operations,
>> which I think is a lost opportunity.
>> If we defined the properties along with their properties
>> (commutativity, transitivity, invertibility, etc), then the compiler
>> could potentially do an algebraic simplification on expressions before
>> performing codegen and optimisation.
>> There are a lot of situations where the optimiser can't simplify
>> expressions because it runs into an arbitrary function call, and I've
>> never seen an optimiser that understands exp/log/roots, etc, to the
>> point where it can reduce those expressions properly. To compete with
>> maths benchmarks, we need some means to simplify expressions properly.
>
>
> Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.

We have flags to control this sort of thing (fast-math, strict ieee, etc).
I will worry about my precision, I just want the optimiser to do its
job and do the very best it possibly can. In the case of linear
algebra, the optimiser generally fails and I must manually simplify
expressions as much as possible.
In the event the expressions emerge as a result of a series of
inlines, or generic code (the sort that appears frequently as a result
of stream/range based programming), then there's nothing you can do
except to flatten and unroll your work loops yourself.

> Of the things that can be done, lazy operations should make it easier/possible for the optimiser to spot.

My experience is that they possibly make it harder, although I don't know why. I find the compiler becomes very unpredictable when optimising deep lazy expressions. Perhaps the backend inlining heuristics aren't tuned for typical D expressions of this type?

I often wish I could address common compound operations myself, by implementing something like a compound operator which I can special case with an optimised path for particular expressions. But I can't think of any reasonable ways to approach that.
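For what it's worth, something in this direction can be approximated in library code today with a lazy product type (a toy sketch; `fusedMultiplyAdd` is a hypothetical fused kernel, not an existing function):

```d
struct Matrix
{
    double[] data;
    size_t rows, cols;

    // `*` does no work yet: it returns a lazy node describing a product.
    Product opBinary(string op : "*")(Matrix rhs)
    {
        return Product(this, rhs);
    }
}

struct Product
{
    Matrix a, b;

    // The compound expression a*b + c is intercepted here, before any
    // temporary is materialised, and routed to a single fused routine.
    Matrix opBinary(string op : "+")(Matrix c)
    {
        return fusedMultiplyAdd(a, b, c); // hypothetical gemm-style kernel
    }
}
```

A real design would also need `Product` to evaluate itself when used as a plain `Matrix`, but the point is that the compound pattern becomes visible at the type level, which is as close to a "compound operator" as current D allows.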