January 13, 2020
On Monday, 13 January 2020 at 13:34:40 UTC, jmh530 wrote:
> On Monday, 13 January 2020 at 11:51:20 UTC, Joseph Rushton Wakeling wrote:
> On 3, I didn't have time to look into the Julia results, but someone above made the comment that Julia was optimizing away the calculation itself. Dennis also had some interesting points above.

All you have to do is look at the timings. Julia calls into lapack to do these operations, just like everybody else. No amount of optimization will result in the timings reported for Julia - it would be a revolution unlike any ever seen in computing if they were accurate.
January 13, 2020
On Monday, 13 January 2020 at 14:26:55 UTC, bachmeier wrote:
> [snip]
>
> All you have to do is look at the timings. Julia calls into lapack to do these operations, just like everybody else. No amount of optimization will result in the timings reported for Julia - it would be a revolution unlike any ever seen in computing if they were accurate.

Yes, that was my initial reaction as well.
January 13, 2020
On Monday, 13 January 2020 at 14:26:55 UTC, bachmeier wrote:
> On Monday, 13 January 2020 at 13:34:40 UTC, jmh530 wrote:
>> On Monday, 13 January 2020 at 11:51:20 UTC, Joseph Rushton Wakeling wrote:
>> On 3, I didn't have time to look into the Julia results, but someone above made the comment that Julia was optimizing away the calculation itself. Dennis also had some interesting points above.
>
> All you have to do is look at the timings. Julia calls into lapack to do these operations, just like everybody else. No amount of optimization will result in the timings reported for Julia - it would be a revolution unlike any ever seen in computing if they were accurate.

Indeed, my experience with Julia is zero and I don't know what @btime is actually testing; I just copied it from a "how to benchmark Julia code" page. Honestly, I would measure the time the actual script takes to run, e.g. "$ time julia demo.jl", because Julia takes some time to precompile the code, so you never actually feel those reported 0.1 seconds. And if you do data processing and scripting, console responsiveness and processing speed are all that matter, to me at least.
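The gap described above between a micro-benchmark and whole-script wall-clock time can be sketched on the NumPy side (an illustrative example, not the thread's actual benchmark; the workload and sizes are made up):

```python
import time
import numpy as np

whole_start = time.perf_counter()  # started before any real work

a = np.random.rand(200, 200)

# Micro-benchmark of just the kernel, roughly what @btime reports in
# Julia: the minimum over repeated runs, after everything is warmed up.
times = []
for _ in range(10):
    t0 = time.perf_counter()
    np.linalg.svd(a)
    times.append(time.perf_counter() - t0)

kernel = min(times)
total = time.perf_counter() - whole_start

# The kernel time excludes interpreter startup, imports, allocation,
# and first-run costs that dominate "$ time python demo.py" for short
# scripts -- the analogue of Julia's precompilation overhead.
print(f"kernel: {kernel:.4f}s, total so far: {total:.4f}s")
```

The kernel number can look impressive while the command-line experience is dominated by everything around it.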
January 13, 2020
On Monday, 13 January 2020 at 14:52:59 UTC, jmh530 wrote:
> On Monday, 13 January 2020 at 14:26:55 UTC, bachmeier wrote:
>> [snip]
>>
>> All you have to do is look at the timings. Julia calls into lapack to do these operations, just like everybody else. No amount of optimization will result in the timings reported for Julia - it would be a revolution unlike any ever seen in computing if they were accurate.
>
> Yes, that was my initial reaction as well.

Well, see the link I posted for some details on how they achieve that -- for example, when doing QR or LU decomposition, instead of doing in-place calculations where they have to replace every element of a m*n matrix, they define custom types that store the matrix factorizations in packed representations that only include the non-zero elements.
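For a concrete picture of what a packed factorization looks like (a sketch using SciPy's real `lu_factor` API for comparison, not the Julia code under discussion): L and U share a single array, with the unit diagonal of L implicit, plus a pivot vector, and that packed form is already enough to solve systems:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

a = np.random.rand(4, 4)

# lu_factor packs L (unit lower triangular, diagonal implicit) and
# U (upper triangular) into one 4x4 array, plus pivot indices.
lu, piv = lu_factor(a)

# The packed representation solves linear systems without ever
# materializing L and U as separate matrices.
b = np.random.rand(4)
x = lu_solve((lu, piv), b)
```

Julia's factorization objects follow the same idea: the decomposition is stored compactly and used directly, rather than expanded into full matrices.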
January 13, 2020
On Monday, 13 January 2020 at 17:09:29 UTC, Joseph Rushton Wakeling wrote:
> [snip]
>
> Well, see the link I posted for some details on how they achieve that -- for example, when doing QR or LU decomposition, instead of doing in-place calculations where they have to replace every element of a m*n matrix, they define custom types that store the matrix factorizations in packed representations that only include the non-zero elements.

...take a look at the Julia benchmark in the first post. It's about 350x faster than the Numpy and D versions, which are basically just calling C code. Do you really think that the people who write linear algebra code are missing 350x improvements? Maybe their algorithm is faster than what lapack does, but I'm skeptical that - properly benchmarked - it could be that much faster.
January 13, 2020
On Monday, 13 January 2020 at 17:33:16 UTC, jmh530 wrote:
> Do you really think that the people who write linear algebra code are missing 350x improvements? Maybe their algorithm is faster than what lapack does, but I'm skeptical that - properly benchmarked - it could be that much faster.

Fair.  Is it possible that Julia is evaluating these calculations lazily, so you don't pay for the factorization or SVD until you genuinely start using the numbers?
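To illustrate the lazy-evaluation hypothesis raised above (a hypothetical `LazySVD` wrapper sketched in Python; this is not Julia's actual mechanism, just what such laziness would look like): if the expensive call is deferred until a result is first accessed, naively timing only the constructor looks nearly free:

```python
import numpy as np

class LazySVD:
    """Defers the SVD until a factor is first requested (illustrative only)."""

    def __init__(self, a):
        self._a = a          # cheap: just stores a reference
        self._result = None  # no factorization performed yet

    def _compute(self):
        if self._result is None:
            self._result = np.linalg.svd(self._a)
        return self._result

    @property
    def singular_values(self):
        # The real O(n^3) cost is paid here, on first access.
        return self._compute()[1]

a = np.random.rand(100, 100)
lazy = LazySVD(a)          # timing only this line would look "too fast"
s = lazy.singular_values   # the actual factorization happens here
```

A benchmark that constructs the object but never touches the factors would report essentially zero work.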
January 14, 2020
On Saturday, 11 January 2020 at 21:54:13 UTC, p.shkadzko wrote:
> Today I decided to write a couple of benchmarks to compare D mir with lubeck against Python numpy, then I also added Julia snippets. The results appeared to be quite interesting.
>
> [...]

Lubeck doesn't use its own SVD implementation; instead it uses the native library, so the speed depends on the kind of BLAS library used. For the best performance it's also recommended to use mir-lapack paired with Intel MKL instead of Lubeck.
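Since the timings in this thread depend mainly on which BLAS/LAPACK backend each wrapper links against, it's worth confirming the backend before comparing numbers. A minimal check on the NumPy side (the exact output of `show_config` varies between NumPy versions):

```python
import time
import numpy as np

# Prints the BLAS/LAPACK build configuration NumPy was linked against
# (e.g. OpenBLAS vs MKL); the output format differs across versions.
np.show_config()

# A coarse sanity check: a large matmul should hit the optimized
# BLAS path, so its timing reflects the backend, not Python.
a = np.random.rand(1000, 1000)
t0 = time.perf_counter()
c = a @ a
elapsed = time.perf_counter() - t0
print(f"1000x1000 matmul: {elapsed:.3f}s")
```

The same applies on the D side: mir-lapack with MKL and Lubeck with a reference BLAS will give very different numbers for identical code.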