December 07, 2020
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
> On Sunday, 6 December 2020 at 17:30:13 UTC, data pulverizer wrote:
>> On Saturday, 5 December 2020 at 07:44:33 UTC, 9il wrote:
>>>
>>> sweep_ndslice uses (2*N - 1) arrays to index U; this allows LDC to unroll the loop.
>>>
>
> I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have quite a universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead.

Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is like there being no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.

December 07, 2020
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
> I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have quite a universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead.

I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design.

For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance; it wasn't obvious to me.

I think literature that discusses various design choices and approaches would be useful and informative. There is plenty of literature on creating tree structures, linked lists, stacks, queues, hash tables, and so forth, but virtually nothing on tensor data structures. It isn't as if implementing a linked list is any more complex than a tensor. I just think it's a bit strange that there is so little on the topic, given the widespread use of tensors in computational science.

December 07, 2020
On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer wrote:
> On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
>> I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have quite a universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead.
>
> I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design.
>
> For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance; it wasn't obvious to me.

The ndslice tensor type uses exactly one iterator. However, the iterator is generic, and lazy iterators may contain any number of other iterators and pointers.
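
A minimal sketch, assuming the mir-algorithm package (iota, map, and slice below are the standard mir.ndslice building blocks):

    import mir.ndslice;

    void main()
    {
        auto a = iota(3, 4);          // lazy 3x4 slice; its iterator is an index counter
        auto b = a.map!(x => x * 2);  // still one iterator, but it wraps a's iterator
        auto c = b.slice;             // allocated copy; its iterator is a plain pointer
    }

a, b, and c are all the same Slice shell, each wrapping a different iterator type.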
December 07, 2020
On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote:
> [snip]
>
> Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is like there being no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.

"no need to calculate inverse matrix" What? Since when?
December 07, 2020
On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
> On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote:
>> [snip]
>>
>> Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is like there being no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.
>
> "no need to calculate inverse matrix" What? Since when?

I don't know what he meant in this context, but a common technique in computer graphics is to build up the inverse as you apply transformations.
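
Something like the following hypothetical sketch in D (Transform2D, mul, and translate are names made up for illustration). Each elementary transform has a trivially known inverse, so the inverse of the whole composite is accumulated in reverse order and no general matrix inversion is ever performed:

    alias Mat3 = double[3][3];

    enum Mat3 identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];

    Mat3 mul(const Mat3 a, const Mat3 b)
    {
        Mat3 r;
        foreach (i; 0 .. 3)
            foreach (j; 0 .. 3)
            {
                r[i][j] = 0;
                foreach (k; 0 .. 3)
                    r[i][j] += a[i][k] * b[k][j];
            }
        return r;
    }

    struct Transform2D
    {
        Mat3 fwd = identity; // maps A -> B
        Mat3 inv = identity; // maps B -> A, maintained for free

        void translate(double dx, double dy)
        {
            immutable Mat3 t  = [[1, 0, dx], [0, 1, dy], [0, 0, 1]];
            immutable Mat3 ti = [[1, 0, -dx], [0, 1, -dy], [0, 0, 1]]; // known inverse
            fwd = mul(t, fwd);
            inv = mul(inv, ti); // reversed order: (T * F)^-1 = F^-1 * T^-1
        }
    }

Rotations and scalings work the same way, since their inverses are also known in closed form.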
December 07, 2020
On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote:
> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>> [snip]
>>
>> "no need to calculate inverse matrix" What? Since when?
>
> I don't know what he meant in this context, but a common technique in computer graphics is to build up the inverse as you apply transformations.

Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway.
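
For a 2x2 matrix, for example, the closed form (valid whenever the determinant a*d - b*c is nonzero) is just

    | a  b |^-1        1       |  d  -b |
    | c  d |      = ---------  | -c   a |
                    a*d - b*c

so inverting it directly costs almost nothing.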

December 07, 2020
On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote:
> On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote:
>> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>>> [snip]
>>>
>>> "no need to calculate inverse matrix" What? Since when?
>>
>> I don't know what he meant in this context, but a common technique in computer graphics is to build up the inverse as you apply transformations.
>
> Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway.

It is an optimization, maybe also for accuracy, dunno.
So, instead of ending up with just the transform from coordinate system A to B, you also get the transform from B to A for cheap. This may matter when the next step is to go from B to C, and so on.
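
For instance, with forward maps M_AB (A to B) and M_BC (B to C), the composite is M_AC = M_BC * M_AB, and since (X * Y)^-1 = Y^-1 * X^-1, the inverse M_AC^-1 = M_AB^-1 * M_BC^-1 comes from factors you already have on hand.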
December 10, 2020
On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
> On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote:
>> [snip]
>>
>> Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is like there being no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.
>
> "no need to calculate inverse matrix" What? Since when?

Since highly optimized algorithms became a requirement. That does not mean you should not know the algorithms for calculating an inverse matrix.
December 10, 2020
On Monday, 7 December 2020 at 13:54:26 UTC, Ola Fosheim Grostad wrote:
> On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote:
>> On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote:
>>> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>>>> [snip]
>>>>
>>>> "no need to calculate inverse matrix" What? Since when?
>>>
>>> I don't know what he meant in this context, but a common technique in computer graphics is to build up the inverse as you apply transformations.
>>
>> Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway.
>
> It is an optimization, maybe also for accuracy, dunno.

Exactly. Optimization plus accuracy.

December 10, 2020
On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote:
> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>> On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote:
>>> [snip]
>>>
>>> Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is like there being no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.
>>
>> "no need to calculate inverse matrix" What? Since when?
>
> I don't know what he meant in this context, but a common technique in computer graphics is to build up the inverse as you apply transformations.

It makes sense in the simplest cases, when the matrices are small (2x2, 3x3, or even 4x4) and you have to multiply them at least thousands of times.