June 11, 2020
On Tuesday, 9 June 2020 at 21:30:24 UTC, tastyminerals wrote:
> FYI, I have a couple of Julia benchmarks timed against NumPy here:
> https://github.com/tastyminerals/mir_benchmarks#general-purpose-multi-thread

Interesting. There is a recent Julia package called LoopVectorization, which by all accounts performs much better than base Julia: https://discourse.julialang.org/t/ann-loopvectorization/32843
June 11, 2020
On Thursday, 11 June 2020 at 22:11:41 UTC, data pulverizer wrote:
> On Tuesday, 9 June 2020 at 21:30:24 UTC, tastyminerals wrote:
>> FYI, I have a couple of Julia benchmarks timed against NumPy here:
>> https://github.com/tastyminerals/mir_benchmarks#general-purpose-multi-thread
>
> Interesting. There is a recent Julia package called LoopVectorization, which by all accounts performs much better than base Julia: https://discourse.julialang.org/t/ann-loopvectorization/32843

True, a very solid improvement indeed.
Sigh, I wish D received as much attention as Julia continues to get.
June 12, 2020
On Thursday, 11 June 2020 at 23:08:45 UTC, tastyminerals wrote:
> On Thursday, 11 June 2020 at 22:11:41 UTC, data pulverizer wrote:
>> On Tuesday, 9 June 2020 at 21:30:24 UTC, tastyminerals wrote:
>>> FYI, I have a couple of Julia benchmarks timed against NumPy here:
>>> https://github.com/tastyminerals/mir_benchmarks#general-purpose-multi-thread
>>
>> Interesting. There is a recent Julia package called LoopVectorization, which by all accounts performs much better than base Julia: https://discourse.julialang.org/t/ann-loopvectorization/32843
>
> True, a very solid improvement indeed.
> Sigh, I wish D received as much attention as Julia continues to get.

It sounds like @avx in Julia is a bit like LDC's @fastmath [1]. I was re-reading this [2] recently; you may find it interesting.

[1] https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.fastmath.29
[2] http://johanengelen.github.io/ldc/2016/10/11/Math-performance-LDC.html
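
For reference, a minimal sketch of how the LDC attribute from [1] is applied (the function itself is just an illustration):

import ldc.attributes : fastmath;

// @fastmath attaches LLVM's fast-math flags to every floating-point
// operation in the function, permitting reassociation, FMA contraction, etc.
@fastmath
double dot(const double[] a, const double[] b)
{
    double s = 0.0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];
    return s;
}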
June 13, 2020
On Friday, 12 June 2020 at 00:24:39 UTC, jmh530 wrote:
> It sounds like @avx in Julia is a bit like LDC's @fastmath [1]. I was re-reading this [2] recently; you may find it interesting.
>
> [1] https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.fastmath.29
> [2] http://johanengelen.github.io/ldc/2016/10/11/Math-performance-LDC.html

Interesting. I didn't know that fast math automatically vectorized calculations using SIMD; that effect isn't mentioned in the LLVM fast-math documentation (https://llvm.org/docs/LangRef.html#fast-math-flags). Julia's approach to SIMD and fast math seems effective: being able to annotate individual statements lets you direct the compiler to optimize exactly those statements.
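
To make that concrete, a small D sketch (function names are mine) of why fast math matters for vectorization: a floating-point sum must be reassociated into independent partial sums before the compiler can use SIMD registers, and strict IEEE semantics forbid that, so LDC will typically only vectorize the annotated version.

import ldc.attributes : fastmath;

// Strict IEEE semantics: the additions must happen in source order,
// so the loop stays scalar.
double sumStrict(const double[] xs)
{
    double s = 0.0;
    foreach (x; xs)
        s += x;
    return s;
}

// Fast-math flags permit reassociating the additions into several
// independent partial sums, which is what lets the compiler emit SIMD code.
@fastmath
double sumFast(const double[] xs)
{
    double s = 0.0;
    foreach (x; xs)
        s += x;
    return s;
}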
June 13, 2020
On Saturday, 13 June 2020 at 05:29:34 UTC, data pulverizer wrote:
> Interesting. I didn't know that fast math automatically vectorized calculations using SIMD; that effect isn't mentioned in the LLVM fast-math documentation (https://llvm.org/docs/LangRef.html#fast-math-flags). Julia's approach to SIMD and fast math seems effective: being able to annotate individual statements lets you direct the compiler to optimize exactly those statements.

P.S. @simd in Julia was written by Intel's Arch Robison, the architect of Intel's Threading Building Blocks. That kind of support is very helpful indeed: https://software.intel.com/content/www/us/en/develop/articles/vectorization-in-julia.html