June 12, 2015
On 12 June 2015 at 15:22, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
>>
>> On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>
>>> On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
>>>>
>>>>
>>>> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>>> I believe that Phobos must support some common methods of linear
>>>>>> algebra and general mathematics. I have no desire to join D with
>>>>>> Fortran libraries :)
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> D definitely needs BLAS API support for matrix multiplication. The
>>>>> best BLAS libraries, like OpenBLAS, are written in assembler.
>>>>> Otherwise D will place last in the corresponding math benchmarks.
>>>>
>>>>
>>>>
>>>> A complication for linear algebra (or other mathsy things in general)
>>>> is the inability to detect and implement compound operations.
>>>> We don't declare mathematical operators to be algebraic operations,
>>>> which I think is a lost opportunity.
>>>> If we defined the operators along with their algebraic properties
>>>> (commutativity, transitivity, invertibility, etc.), then the compiler
>>>> could potentially do an algebraic simplification on expressions before
>>>> performing codegen and optimisation.
>>>> There are a lot of situations where the optimiser can't simplify
>>>> expressions because it runs into an arbitrary function call, and I've
>>>> never seen an optimiser that understands exp/log/roots, etc, to the
>>>> point where it can reduce those expressions properly. To compete in
>>>> maths benchmarks, we need some means to simplify expressions properly.
>>>
>>>
>>>
>>> Simplified expressions would not help, because:
>>> 1. At the matrix (high) level, optimisation can be done very well by the
>>> programmer (algorithms are small in terms of the number of matrix
>>> multiplications involved).
>>
>>
>> Perhaps you've never worked with incompetent programmers (in my
>> experience, >50% of the professional workforce).
>> Programmers, on average, don't know maths. They literally have no idea
>> how to simplify an algebraic expression.
>> I think there are about 3-4 people (being generous!) in my office of
>> 30-40 who could do it properly, and without spending heaps of time
>> on it.
>>
>>> 2. Low-level optimisation requires CPU/cache-specific tuning. Modern
>>> implementations are optimised for all cache levels. See the work by
>>> Kazushige Goto:
>>> http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
>>
>>
>> Low-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; e.g., a square followed by a sqrt) can't be applied without first simplifying expressions.
>
>
> OK, generally you are talking about something we could name MathD. I understand the reasons. However, I am strictly against algebraic operations (or the eliding of redundant floating-point operations) for the basic routines in a systems programming language.

That's nice... I'm all for it :)

Perhaps if there were some distinction between a base type and an
algebraic type?
I wonder if it would be possible to express an algebraic expression
like a lazy range, and then capture the expression at the end and
simplify it with some fancy template...
I'd call that an abomination, but it might be possible. Hopefully
nobody in their right mind would ever use that ;)
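
Something like this minimal sketch is roughly what I mean; all the names (Term, Expr, evaluate) are hypothetical, not a real proposal. Operators build an expression tree instead of evaluating eagerly, and a template can capture and inspect that tree at the end:
----
// Lazy algebraic expressions via expression templates.
struct Term(T)
{
    T value;
    auto opBinary(string op, R)(R rhs)
    {
        return Expr!(op, Term, R)(this, rhs);
    }
}

struct Expr(string op, L, R)
{
    L lhs;
    R rhs;
    auto opBinary(string op2, R2)(R2 rhs2)
    {
        return Expr!(op2, Expr, R2)(this, rhs2);
    }
}

// The "fancy template" would pattern-match the Expr tree here and
// simplify it (e.g. rewrite sqrt(x)*sqrt(x) to x) before evaluating.
T evaluate(T)(Term!T t) { return t.value; }

auto evaluate(string op, L, R)(Expr!(op, L, R) e)
{
    return mixin("evaluate(e.lhs) " ~ op ~ " evaluate(e.rhs)");
}

unittest
{
    auto x = Term!double(3.0);
    auto y = Term!double(4.0);
    auto expr = x * y + x; // an Expr tree, nothing computed yet
    assert(evaluate(expr) == 15.0);
}
----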

> Even float/double internal conversion to real
> in math expressions is a huge headache when math algorithms are implemented
> (see the first two comments at
> https://github.com/D-Programming-Language/phobos/pull/2991 ). In a systems PL,
> sqrt(x)^2 should compile as is.

Yeah... unless you -fast-math, in which case I want the compiler to do
whatever it can.
Incidentally, I don't think I've ever run into a case in practice
where precision was lost by doing _fewer_ operations.

> Such optimisations can be implemented on top of the basic routines (pow, sqrt, gemv, gemm, etc.). We can use an approach similar to D's compile-time regex.

Not really. The main trouble is that many of these patterns only
emerge when inlining is performed.
It would be particularly awkward to express such expressions in some
DSL that spanned across conventional API boundaries.
June 12, 2015
On Friday, 12 June 2015 at 11:00:20 UTC, Manu wrote:
>>>
>>> Low-level optimisation is a sliding scale, not a binary position.
>>> Reaching 'optimal' state definitely requires careful consideration of
>>> all the details you refer to, but there are a lot of improvements that
>>> can be gained from quickly written code without full low-level
>>> optimisation. A lot of basic low-level optimisations (like just using
>>> appropriate opcodes, or eliding redundant operations; e.g., a square
>>> followed by a sqrt) can't be applied without first simplifying
>>> expressions.
>>
>>
>> OK, generally you are talking about something we could name MathD. I
>> understand the reasons. However, I am strictly against algebraic operations
>> (or the eliding of redundant floating-point operations) for the basic
>> routines in a systems programming language.
>
> That's nice... I'm all for it :)
>
> Perhaps if there were some distinction between a base type and an
> algebraic type?
> I wonder if it would be possible to express an algebraic expression
> like a lazy range, and then capture the expression at the end and
> simplify it with some fancy template...
> I'd call that an abomination, but it might be possible. Hopefully
> nobody in their right mind would ever use that ;)

... for example, we can optimise matrix chain multiplication: https://en.wikipedia.org/wiki/Matrix_chain_multiplication
----
// calls `this(MatrixExp!double chain)`
Matrix!double m = m1*m2*m3*m4;
----
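
For reference, choosing the cheapest parenthesisation is the textbook dynamic programme from that Wikipedia page. A self-contained sketch of the cost computation (illustrative only, not a proposed API):
----
// Minimal matrix-chain-order DP: matrix i has dimensions d[i] x d[i+1];
// returns the minimal number of scalar multiplications for the chain.
long matrixChainCost(const int[] d)
{
    immutable n = cast(int) d.length - 1; // number of matrices
    if (n <= 0) return 0;
    auto cost = new long[][](n, n);       // cost[i][j]: best cost for i..j
    foreach (len; 1 .. n)                 // chain length minus one
        foreach (i; 0 .. n - len)
        {
            immutable j = i + len;
            cost[i][j] = long.max;
            foreach (k; i .. j)           // split point: (i..k)(k+1..j)
            {
                immutable c = cost[i][k] + cost[k + 1][j]
                            + cast(long) d[i] * d[k + 1] * d[j + 1];
                if (c < cost[i][j]) cost[i][j] = c;
            }
        }
    return cost[0][n - 1];
}

unittest
{
    // (10x30)(30x5)(5x60): ((AB)C) costs 4500, (A(BC)) costs 27000.
    assert(matrixChainCost([10, 30, 5, 60]) == 4500);
}
----
A `MatrixExp` constructor could run exactly this kind of computation over the captured chain before emitting any gemm calls.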

>> Even float/double internal conversion to real
>> in math expressions is a huge headache when math algorithms are implemented
>> (see the first two comments at
>> https://github.com/D-Programming-Language/phobos/pull/2991 ). In a systems PL,
>> sqrt(x)^2 should compile as is.
>
> Yeah... unless you -fast-math, in which case I want the compiler to do
> whatever it can.
> Incidentally, I don't think I've ever run into a case in practice
> where precision was lost by doing _fewer_ operations.

Mathematical functions require a concrete order of operations; see
http://www.netlib.org/cephes/ (std.mathspecial and parts of std.math/std.numeric are based on Cephes).

>> Such optimisations can be implemented on top of the basic routines (pow, sqrt,
>> gemv, gemm, etc.). We can use an approach similar to D's compile-time regex.
>
> Not really. The main trouble is that many of these patterns only
> emerge when inlining is performed.
> It would be particularly awkward to express such expressions in some
> DSL that spanned across conventional API boundaries.

If I am not mistaken, in both LLVM and GCC a `fast-math` attribute can be applied per function. This feature could be implemented in D.
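
As a sketch of what that might look like, modelled on the @fastmath UDA that LDC provides in ldc.attributes (compiler-specific; treat the attribute's name and availability as an assumption):
----
// Per-function fast-math: relaxed semantics only where requested.
import ldc.attributes; // LDC-specific; provides @fastmath
import std.math;

@fastmath
double relaxed(double x)
{
    // Inside this function the optimiser may treat floating point
    // algebraically, e.g. reduce sqrt(x)^^2 to x.
    return sqrt(x) ^^ 2;
}

double strict(double x)
{
    // Everywhere else, strict IEEE semantics hold and sqrt(x)^^2
    // compiles as is, as wanted for the basic routines.
    return sqrt(x) ^^ 2;
}
----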
June 12, 2015
On Friday, 12 June 2015 at 03:18:31 UTC, Tofu Ninja wrote:
>
> What would the new order of operations be for these new operators?

Hadn't honestly thought that far.  Like I said, it was more of a nascent idea than a coherent proposal (probably with a DIP and many more words).  It's an interesting question, though.

I think the approach taken by F# and OCaml may hit the right notes, though: precedence and fixity are determined by the base operator. In my head, extra operators would be represented in code by some annotation or affix on a built-in operator... say, braces around it or something (e.g. [*] or {+}, though this is just an example that sets a baseline for visibility).

-Wyatt
June 12, 2015
On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:

> Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.

I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more.

I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
June 12, 2015
On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:
> On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:
>
>> Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.
>
> I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more.
>
> I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).

Matrix math is matrix math; it being for OpenGL makes no real difference.

Also, if you are waiting for Vulkan but have not done any other graphics, don't wait: learn OpenGL now. Vulkan will be harder.
June 12, 2015
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:

> Matrix math is matrix math; it being for OpenGL makes no real difference.

I think it’s a little more complicated than that. BLAS and LAPACK (or variants on them) are low-level matrix math libraries that many higher-level libraries call. Few people actually use BLAS directly. So, clearly, not every matrix math library is the same. What differentiates BLAS from Armadillo is that you can be far more productive in Armadillo because the syntax is friendly (and quite similar to Matlab and others).

There’s a reason why people use glm in C++. It’s probably the most productive way to do matrix math with OpenGL. However, it may not be the most productive way to do more general matrix math. That’s why I hear about people using Armadillo, Eigen, and Blaze, but I’ve never heard anyone recommend using glm. Syntax matters.
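
To make the contrast concrete, here is the same product C = A*B both ways; the high-level Matrix type is hypothetical, while cblas_dgemm is the standard CBLAS entry point (linking would require OpenBLAS or an equivalent):
----
// High level (Armadillo/glm style, hypothetical Matrix type):
//
//     auto C = A * B;
//
// versus the raw CBLAS call a wrapper would make underneath:
extern (C) void cblas_dgemm(
    int order, int transA, int transB,  // layout and transpose flags
    int m, int n, int k,                // C is m x n, A is m x k, B is k x n
    double alpha, const(double)* a, int lda,
    const(double)* b, int ldb,
    double beta, double* c, int ldc);

enum CblasRowMajor = 101;
enum CblasNoTrans = 111;

// C = A*B for densely packed, row-major matrices.
void multiply(const double[] a, const double[] b, double[] c,
              int m, int n, int k)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, a.ptr, k, b.ptr, n, 0.0, c.ptr, n);
}
----
Both compute the same thing; only one of them lets you think about the maths instead of leading dimensions and transpose flags.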
June 13, 2015
On 13/06/2015 7:45 a.m., jmh530 wrote:
> On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:
>
>> Matrix math is matrix math; it being for OpenGL makes no real difference.
>
> I think it’s a little more complicated than that. BLAS and LAPACK (or
> variants on them) are low-level matrix math libraries that many
> higher-level libraries call. Few people actually use BLAS directly. So,
> clearly, not every matrix math library is the same. What differentiates
> BLAS from Armadillo is that you can be far more productive in Armadillo
> because the syntax is friendly (and quite similar to Matlab and others).
>
> There’s a reason why people use glm in C++. It’s probably the most
> productive way to do matrix math with OpenGL. However, it may not be the
> most productive way to do more general matrix math. That’s why I hear
> about people using Armadillo, Eigen, and Blaze, but I’ve never heard
> anyone recommend using glm. Syntax matters.

The reason I am considering gl3n is that it is old, solid code. It has proven itself, and it'll make the review process relatively easy.
But hey, if we want to do it right, we'll never get any implementation in.
June 13, 2015
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:
> On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:
>> On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:
>>
>>> Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.
>>
>> I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more.
>>
>> I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
>
> Matrix math is matrix math; it being for OpenGL makes no real difference.

The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations in 2, 3, or 4 dimensions) is not at all representative of the whole. The algorithms are different, and the APIs are often necessarily different.

Even just considering scale: no one sane calls into BLAS to multiply a 3x3 matrix by a 3-element vector, and simultaneously no one sane *doesn't* call into BLAS or an equivalent to multiply two 500x500 matrices.
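
To illustrate the small end of that scale, a fixed-size multiply is just written out directly (a sketch; not gl3n's actual API):
----
// 3x3 matrix times 3-vector: at this size, the overhead of a BLAS
// call dwarfs the nine multiplies, so graphics code inlines it.
double[3] mulMatVec3(const ref double[3][3] m, const ref double[3] v)
{
    double[3] r;
    foreach (i; 0 .. 3)
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return r;
}

unittest
{
    double[3][3] identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
    double[3] v = [4, 5, 6];
    assert(mulMatVec3(identity, v) == v);
    // Two 500x500 matrices would instead go through dgemm from
    // OpenBLAS or an equivalent, as in the cblas_dgemm sketch above.
}
----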
June 13, 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
> Phobos is awesome; the libs of Go, Python, and Rust only have better marketing.
> As discussed at DConf, Phobos needs to become big and blow the rest out of the sky.
>
> http://wiki.dlang.org/DIP80
>
> let's get OT, please discuss

std.container.concurrent.*
June 13, 2015
On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
> The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations in 2, 3, or 4 dimensions) is not at all representative of the whole. The algorithms are different, and the APIs are often necessarily different.
>
> Even just considering scale: no one sane calls into BLAS to multiply a 3x3 matrix by a 3-element vector, and simultaneously no one sane *doesn't* call into BLAS or an equivalent to multiply two 500x500 matrices.

I think there is a conflict of interest in what people want. There seem to be people like me who only want or need simple matrices, glm-style, for basic geometric/graphics-related work. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe the two should be kept separate? In that case we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?