Thread overview
Library for Linear Algebra?
Mar 20, 2009
Trass3r
Mar 20, 2009
BCS
Mar 20, 2009
Trass3r
Mar 21, 2009
Don
Mar 21, 2009
Trass3r
Mar 22, 2009
Don
Mar 22, 2009
Fawzi Mohamed
Mar 23, 2009
Don
Mar 20, 2009
Bill Baxter
Mar 20, 2009
Fawzi Mohamed
March 20, 2009
Is there any working library for Linear Algebra?
I only found BLADE, which seems to be abandoned and quite unfinished, and Helix which only provides 3x3 and 4x4 matrices used in games :(
March 20, 2009
Reply to Trass3r,

> Is there any working library for Linear Algebra?
> I only found BLADE, which seems to be abandoned and quite unfinished,
> and Helix which only provides 3x3 and 4x4 matrices used in games :(


I haven't used them, but:

http://www.dsource.org/projects/mathextra
http://www.dsource.org/projects/lyla

If either works well for you, please comment on it.


March 20, 2009
BCS schrieb:
> http://www.dsource.org/projects/mathextra

This project contains BLADE. I didn't test it myself because the author stated on Wed Oct 17, 2007 that "there are still some fairly large issues to work out", and there haven't been any updates to it since May 2008.

> http://www.dsource.org/projects/lyla

This seems to be something I should look into further. Though it is abandoned as well :(


That's a real pity; having a good scientific computation library is crucial if D is to be used at universities.
You know, Matlab is fine for writing clean, intuitive code, but when it comes to real performance requirements it totally sucks (damn Java ;) )
March 20, 2009
On Sat, Mar 21, 2009 at 3:38 AM, Trass3r <mrmocool@gmx.de> wrote:
> Is there any working library for Linear Algebra?
> I only found BLADE, which seems to be abandoned and quite unfinished, and
> Helix which only provides 3x3 and 4x4 matrices used in games :(
>

For D1 try Gobo and Dflat.  Gobo has a bunch of wrappers for Fortran libraries.  Dflat is a higher-level matrix/sparse-matrix interface.

They work with Tango using Tangobos.

See:
http://www.dsource.org/projects/multiarray

--bb
March 20, 2009
On 2009-03-20 20:46:21 +0100, Bill Baxter <wbaxter@gmail.com> said:

> On Sat, Mar 21, 2009 at 3:38 AM, Trass3r <mrmocool@gmx.de> wrote:
>> Is there any working library for Linear Algebra?
>> I only found BLADE, which seems to be abandoned and quite unfinished, and
>> Helix which only provides 3x3 and 4x4 matrices used in games :(
>> 
> 
> For D1 try Gobo and Dflat.  Gobo has a bunch of wrappers for Fortran
> libraries.  Dflat is a higher level matrix/sparse matrix interface.
> 
> Works with Tango using Tangobos.
> 
> See:
> http://www.dsource.org/projects/multiarray
> 
> --bb

Dflat also gives you sparse matrix formats. If all you are interested in is dense matrices, then blip
	http://dsource.org/projects/blip
(using the gobo wrappers) has NArray, which gives you a nice interface to N-dimensional arrays and (compiling with -version=blas -version=lapack) also gives you access to most of the LAPACK functions.
With it you can do things like this:

import blip.narray.NArray;

auto m=zeros!(float)([10,10]);      // 10x10 matrix of zeros
auto d=diag(m);                     // view of the main diagonal
d[]=arange(0.0,1.0,0.1);            // fill the diagonal in place
m[1,2]=0.4f;
auto v=dot(m,arange(0.1,1.1,0.1));  // matrix-vector product
auto r=solve(m,v.asType!(float));   // solve m*r==v via LAPACK
auto ev=eig(m);                     // eigenvalues via LAPACK

Fawzi

March 21, 2009
Trass3r wrote:
> BCS schrieb:
>> http://www.dsource.org/projects/mathextra
> 
> This project contains BLADE, I didn't test it myself cause the author stated on Wed Oct 17, 2007 that "there are still some fairly large issues to work out" and there haven't been any updates to it since May 2008.

I abandoned it largely because array operations got into the language; since then I've been working on getting the low-level math language stuff working.
Don't worry, I haven't gone away!

> 
>> http://www.dsource.org/projects/lyla
> 
> This seems to be something I should look further into. Though it is abandoned as well :(
> 
> 
> That's a real pity, having a good scientific computation library is crucial for D being used at universities.
> You know, Matlab is fine for writing clean, intuitive code, but when it comes to real performance requirements it totally sucks (damn Java ;) )
March 21, 2009
Don schrieb:
> I abandoned it largely because array operations got into the language; since then I've been working on getting the low-level math language stuff working.
> Don't worry, I haven't gone away!

I see.

>>
>>> http://www.dsource.org/projects/lyla

Though array operations still only give us SIMD and no multithreading (?!).
I think the best approach is lyla's: take an existing, optimized C BLAS library and write some kind of wrapper that uses operator overloading etc. to make programming easier and more intuitive.
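A minimal sketch of what such a wrapper could look like (illustrative only: the `Matrix` struct, the row-major layout, and the constants are my own assumptions, not lyla's actual API; it assumes you link against a CBLAS implementation):

```d
// Illustrative sketch: hide the C BLAS matrix multiply behind
// D operator overloading.  Assumes linkage against a CBLAS library.
extern (C) void cblas_dgemm(int order, int transA, int transB,
                            int m, int n, int k, double alpha,
                            const(double)* a, int lda,
                            const(double)* b, int ldb,
                            double beta, double* c, int ldc);

enum { CblasRowMajor = 101, CblasNoTrans = 111 }

struct Matrix
{
    double[] data;   // row-major storage
    int rows, cols;

    static Matrix create(int r, int c)
    {
        return Matrix(new double[r * c], r, c);
    }

    // a * b maps onto a single dgemm call
    Matrix opBinary(string op : "*")(Matrix rhs)
    {
        assert(cols == rhs.rows, "dimension mismatch");
        auto result = create(rows, rhs.cols);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    rows, rhs.cols, cols, 1.0,
                    data.ptr, cols, rhs.data.ptr, rhs.cols,
                    0.0, result.data.ptr, rhs.cols);
        return result;
    }
}
```

With something like that in place, `auto c = a * b;` reads almost like Matlab but runs the optimized BLAS kernel underneath.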
March 22, 2009
Trass3r wrote:
> Don schrieb:
>> I abandoned it largely because array operations got into the language; since then I've been working on getting the low-level math language stuff working.
>> Don't worry, I haven't gone away!
> 
> I see.
> 
>>>
>>>> http://www.dsource.org/projects/lyla
> 
> Though array operations still only give us SIMD and no multithreading (?!).

There's absolutely no way you'd want multithreading on a BLAS1 operation. It's not until BLAS3 that you become computation-limited.

> I think the best approach is lyla's, taking an existing, optimized C BLAS library and writing some kind of wrapper using operator overloading etc. to make programming easier and more intuitive.

In my opinion, we actually need matrices in the standard library, with a very small number of primitive operations built-in (much like Fortran does). Outside those, I agree, wrappers to an existing library should be used.
March 22, 2009
On 2009-03-22 09:45:32 +0100, Don <nospam@nospam.com> said:

> Trass3r wrote:
>> Don schrieb:
>>> I abandoned it largely because array operations got into the language; since then I've been working on getting the low-level math language stuff working.
>>> Don't worry, I haven't gone away!
>> 
>> I see.
>> 
>>>> 
>>>>> http://www.dsource.org/projects/lyla
>> 
>> Though array operations still only give us SIMD and no multithreading (?!).
> 
> There's absolutely no way you'd want multithreading on a BLAS1 operation. It's not until BLAS3 that you become computation-limited.

Not true: if your vector is large you could still use several threads.
But you are right that using multiple threads at a low level is a dangerous thing, because it might be better to use just one thread and parallelize another operation at a higher level.
Thus you need to know, more or less, how many threads are really available for that operation.
I am trying to tackle that problem in blip by having a global scheduler, which I am rewriting.

>> I think the best approach is lyla's, taking an existing, optimized C BLAS library and writing some kind of wrapper using operator overloading etc. to make programming easier and more intuitive.

blip.narray.NArray does that if compiled with -version=blas, but I think that for large vectors/matrices you can do better (precisely by using multithreading).

> In my opinion, we actually need matrices in the standard library, with a very small number of primitive operations built-in (much like Fortran does). Outside those, I agree, wrappers to an existing library should be used.


March 23, 2009
Fawzi Mohamed wrote:
> On 2009-03-22 09:45:32 +0100, Don <nospam@nospam.com> said:
> 
>> Trass3r wrote:
>>> Don schrieb:
>>>> I abandoned it largely because array operations got into the language; since then I've been working on getting the low-level math language stuff working.
>>>> Don't worry, I haven't gone away!
>>>
>>> I see.
>>>
>>>>>
>>>>>> http://www.dsource.org/projects/lyla
>>>
>>> Though array operations still only give us SIMD and no multithreading (?!).
>>
>> There's absolutely no way you'd want multithreading on a BLAS1 operation. It's not until BLAS3 that you become computation-limited.
> 
> not true, if your vector is large you could still use several threads.

That's surprising. I confess to never having benchmarked it, though.
If the vector is large, all threads are competing for the same L2 and L3 cache bandwidth, right?
(Assuming a typical x86 situation where every core has its own L1 cache and the L2 and L3 caches are shared.)
So multiple cores should never be beneficial whenever the RAM->L3 or L3->L2 bandwidth is the bottleneck, which will be the case for most BLAS1-style operations at large sizes.
And at small sizes, the thread overhead is significant, wiping out any potential benefit.
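For concreteness (back-of-the-envelope, not a benchmark): a daxpy y[] += a*x[] over n doubles performs 2n flops but moves about 24n bytes (load x, load y, store y), i.e. roughly 1 flop per 12 bytes, so 10 GB/s of memory bandwidth caps it near 0.8 GFLOPS no matter how many cores run it. A blocked dgemm on n-by-n matrices performs 2n^3 flops over only about 24n^2 bytes of matrix data, so its flops-per-byte ratio grows with n and extra cores can actually be kept fed.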
What have I missed?

> but you are right that using multiple thread at low level is a dangerous thing, because it might be better to use just one thread, and parallelize another operation at a higher level.
> Thus you need sort of know how many threads are really available for that operation.

Yes, if you have a bit more context, it can be a clear win.

> I am trying to tackle that problem in blip, by having a global scheduler, that I am rewriting.

I look forward to seeing it!

> 
>>> I think the best approach is lyla's, taking an existing, optimized C BLAS library and writing some kind of wrapper using operator overloading etc. to make programming easier and more intuitive.
> 
> blyp.narray.NArray does that if compiled with -version=blas, but I think that for large vector/matrixes you can do better (exactly using multithreading).

I suspect that with 'shared' and 'immutable' arrays, D can do better than C, in theory. I hope it works out in practice.

> 
>> In my opinion, we actually need matrices in the standard library, with a very small number of primitive operations built-in (much like Fortran does). Outside those, I agree, wrappers to an existing library should be used.