December 10, 2020
On Monday, 7 December 2020 at 13:54:26 UTC, Ola Fosheim Grostad wrote:
> On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote:
>> On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote:
>>> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>>>> [snip]
>>>>
>>>> "no need to calculate inverse matrix" What? Since when?
>>>
>>> I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply the computations.
>>
>> Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway.
>
> It is an optimization, maybe also for accuracy, dunno.
> So, instead of ending up with a transform from coordinate system A to B, you also get the transform from B to A for cheap. This may matter when the next step is to go from B to C... And so on...

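To make the quoted graphics idea concrete, here is a minimal hypothetical sketch (column-vector convention; `Mat3` and `Xform` are names invented for this illustration, not from any library):

```d
// Hypothetical 3x3 homogeneous 2D transform, just for illustration.
struct Mat3
{
    double[3][3] m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]; // identity by default

    Mat3 opBinary(string op : "*")(const Mat3 b) const
    {
        Mat3 r;
        foreach (i; 0 .. 3)
            foreach (j; 0 .. 3)
            {
                r.m[i][j] = 0;
                foreach (k; 0 .. 3)
                    r.m[i][j] += m[i][k] * b.m[k][j];
            }
        return r;
    }
}

// Every primitive step (rotation, translation, scale) has a trivial
// analytic inverse, so both directions are composed as we go; the
// B->A matrix is never obtained by actually inverting anything.
struct Xform
{
    Mat3 fwd; // A -> B
    Mat3 inv; // B -> A

    void apply(Mat3 step, Mat3 stepInv)
    {
        fwd = step * fwd;
        inv = inv * stepInv; // note the reversed composition order
    }
}
```
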
A good example is the Simplex method for linear programming. It can be implemented so that you calculate the inverse of an [m x m] basis matrix at every step. It is better to update one inverse matrix into the next, which brings the cost of a step down from O(m^3) to O(m^2), and sometimes more. You don't even need to calculate the first inverse matrix if the algorithm starts from a basis whose inverse is trivial, such as the identity. It is just one example.
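For the curious, here is a minimal sketch of that rank-one (eta) update in D. The names and memory layout are hypothetical, made up for illustration rather than taken from any particular solver:

```d
// Revised-simplex style pivot: basis column r is replaced by column `a`,
// and binv (the inverse of the basis matrix) is updated in place via the
// eta (rank-one) update, O(m^2) instead of O(m^3) for a fresh inversion.
// If the initial basis is the slack basis, binv starts as the identity,
// so the "first inverse" is free.
void pivotInverse(double[][] binv, const(double)[] a, size_t r)
{
    immutable m = binv.length;

    // u = binv * a: the entering column expressed in the current basis
    auto u = new double[](m);
    foreach (i; 0 .. m)
    {
        u[i] = 0;
        foreach (k; 0 .. m)
            u[i] += binv[i][k] * a[k];
    }

    // Scale row r by the pivot; subtract u[i] times the new row r from
    // every other row i: exactly one Gauss-Jordan elimination step.
    immutable pivot = u[r]; // must be nonzero for a valid simplex pivot
    foreach (j; 0 .. m)
        binv[r][j] /= pivot;
    foreach (i; 0 .. m)
    {
        if (i == r) continue;
        foreach (j; 0 .. m)
            binv[i][j] -= u[i] * binv[r][j];
    }
}
```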
December 10, 2020
On Monday, 7 December 2020 at 13:07:23 UTC, 9il wrote:
> On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer wrote:
>> On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
>>> I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and still have quite a universal API, all of which is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead.
>>
>> I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design.
>>
>> For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance - it wasn't obvious to me.
>
> The ndslice tensor type uses exactly one iterator. However, the iterator is generic, and lazy iterators may contain any number of other iterators and pointers.
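
For what it's worth, here is a minimal sketch of what that composition looks like through the public mir.ndslice API (the example itself is my own illustration, not code from Mir's internals):

```d
import mir.ndslice;

void main()
{
    // iota is a lazy 3x4 tensor; its "iterator" is little more than an index.
    auto a = iota(3, 4);

    // map wraps that iterator in a new lazy iterator that also carries the
    // user-provided kernel (lambda); nothing is computed yet, and the
    // compiler can inline the lambda into the iteration.
    auto b = a.map!(x => x * x);

    // Only `slice` allocates memory and evaluates the composed kernel.
    auto dense = b.slice;
}
```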

How does Mir's iterator differ from the usual concept of an iterator (range) in D? And compared with designing your own tensors and the operations to be performed on them, what does Mir's approach gain in execution speed, assuming we know how to achieve it either way?


December 10, 2020
On Thursday, 10 December 2020 at 11:07:06 UTC, Igor Shirkalin wrote:
> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>> "no need to calculate inverse matrix" What? Since when?
>
> Since whenever highly optimized algorithms are required. This does not mean that you should not know the algorithms for calculating the inverse matrix.

I still find myself inverting large matrices from time to time. Maybe there are ways to reduce the number of times I do it, but it still needs to get done for some types of problems.
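
For the record, the usual trick when the inverse is only needed to apply it to a handful of vectors is to factorize once and solve per right-hand side. A rough self-contained sketch in plain D (a hypothetical helper, not Mir or Lubeck code):

```d
import std.algorithm.mutation : swap;
import std.math : fabs;

// Solve A x = b by Gaussian elimination with partial pivoting.
// A and b are modified in place; the explicit inverse is never formed,
// and each additional right-hand side would only cost O(n^2).
double[] solve(double[][] a, double[] b)
{
    immutable n = a.length;
    foreach (k; 0 .. n)
    {
        // partial pivoting: move the largest remaining entry into row k
        size_t p = k;
        foreach (i; k + 1 .. n)
            if (fabs(a[i][k]) > fabs(a[p][k]))
                p = i;
        swap(a[k], a[p]);
        swap(b[k], b[p]);

        // eliminate column k below the pivot
        foreach (i; k + 1 .. n)
        {
            immutable f = a[i][k] / a[k][k];
            foreach (j; k .. n)
                a[i][j] -= f * a[k][j];
            b[i] -= f * b[k];
        }
    }

    // back substitution
    auto x = new double[](n);
    foreach_reverse (k; 0 .. n)
    {
        double s = b[k];
        foreach (j; k + 1 .. n)
            s -= a[k][j] * x[j];
        x[k] = s / a[k][k];
    }
    return x;
}
```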
December 10, 2020
On Thursday, 10 December 2020 at 14:49:08 UTC, jmh530 wrote:
> On Thursday, 10 December 2020 at 11:07:06 UTC, Igor Shirkalin wrote:
>> On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
>>> "no need to calculate inverse matrix" What? Since when?
>>
>> Since whenever highly optimized algorithms are required. This does not mean that you should not know the algorithms for calculating the inverse matrix.
>
> I still find myself inverting large matrices from time to time. Maybe there are ways to reduce the number of times I do it, but it still needs to get done for some types of problems.

That is what I do for scientific purposes too, from time to time.