On the D Blog: A Look at Chapel, D, and Julia Using Kernel Matrix Calculations
June 03
Some of you may have seen a draft of this post from user "data pulverizer" elsewhere on the forums. The final draft is now on the D Blog under his real name and ready for your perusal.

The blog:
https://dlang.org/blog/2020/06/03/a-look-at-chapel-d-and-julia-using-kernel-matrix-calculations/

Reddit:
https://www.reddit.com/r/programming/comments/gvuy59/a_look_at_chapel_d_and_julia_using_kernel_matrix/

I'll be posting on HN, too, but please don't share a direct link. I did some digging around and it really does affect the ranking -- your upvotes won't count.
June 03
On Wednesday, 3 June 2020 at 14:34:02 UTC, Mike Parker wrote:
> [snip]

Very excited and proud to have my first D article. I'm on Reddit now, but people can ask me anything here as well.

Cheers
June 03
On Wednesday, 3 June 2020 at 15:55:53 UTC, data pulverizer wrote:
> [snip]
>
> Very excited and proud to have my first D article. I'm on reddit now but people can ask me anything here also.
>
> Cheers

Very nice. Overall, I think the article is very fair to the other languages.

Also, I'm curious if you know how the Julia functions (like pow/log) are implemented, i.e. are they also calling C/Fortran functions or are they natively implemented in Julia?

Typo (other than Mike's headline):
"In our exercsie"
"Chapel’s arrays are more difficult to get started with than Julia’s but are designed to be run on single-core, multicore, and computer clusters using the same or very similar code, which is a good unique selling point." (should have comma between Julia's and but)

This is unclear:
The chart below shows matrix implementation times minus ndslice times; negative means that ndslice is slower, indicating that the implementation used here does not negatively represent D’s performance.
June 03
On Wednesday, 3 June 2020 at 14:34:02 UTC, Mike Parker wrote:
> [snip]

Please fix the link
http://docs.algorithm.dlang.io/latest/mir_ndslice.html

to
http://mir-algorithm.libmir.org/

docs.algorithm.dlang.io is outdated.
June 03
On Wednesday, 3 June 2020 at 16:15:41 UTC, jmh530 wrote:
> This is unclear:
> The chart below shows matrix implementation times minus ndslice times; negative means that ndslice is slower, indicating that the implementation used here does not negatively represent D’s performance.

This means that ndslice (in the way it was used here) is slower than the custom matrix type used.
June 03
On Wednesday, 3 June 2020 at 16:39:38 UTC, 9il wrote:
> [snip]
> This means that ndslice (in the way it was used here) is slower than the custom matrix type used.

I understood "negative means that ndslice is slower, indicating that the implementation used here does not negatively represent D’s performance."

The problem is that "The chart below shows matrix implementation times minus ndslice times;" is not clear. The use of "times" in particular makes the whole thing sound like a mathematical equation. I think it would have been clearer if he had said, "The chart below shows the elapsed time of running the matrix implementation times subtracted by the elapsed time of an ndslice implementation."

Regardless, a relative comparison, such as (matrix time / ndslice time) - 1, would be better for controlling for the size of the input.
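To make the two comparisons concrete, here is a small D sketch contrasting the absolute difference the chart plots with the relative ratio suggested above. The timing values are made up for illustration, not taken from the article's benchmarks:

```d
import std.stdio : writefln;

void main()
{
    // Hypothetical elapsed times in seconds for one kernel benchmark.
    double matrixTime  = 1.32; // custom matrix implementation
    double ndsliceTime = 1.20; // ndslice implementation

    // What the article's chart plots: an absolute difference,
    // which grows with the size of the input.
    double diff = matrixTime - ndsliceTime;

    // A relative comparison instead: +0.10 means the matrix version
    // took 10% longer, independent of the problem size.
    double relative = matrixTime / ndsliceTime - 1.0;

    writefln("absolute: %.2f s, relative: %.0f%%", diff, relative * 100);
}
```

The relative form lets you compare slowdowns across benchmarks whose absolute runtimes differ by orders of magnitude.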

June 03
On Wednesday, 3 June 2020 at 17:07:34 UTC, jmh530 wrote:
> [snip]
> "The chart below shows the elapsed time of running the matrix implementation times subtracted by the elapsed time of an ndslice implementation."

Should be
"The chart below shows the elapsed time of running the matrix implementation subtracted by the elapsed time of an ndslice implementation."
June 03
On Wednesday, 3 June 2020 at 16:15:41 UTC, jmh530 wrote:
> Also, I'm curious if you know how the Julia functions (like pow/log) are implemented, i.e. are they also calling C/Fortran functions or are they natively implemented in Julia?

It's not 100% clear, but Julia does appear to natively implement a fair few mathematical functions, apart from things like abs, sqrt, and pow, which all come from C/LLVM:

* https://github.com/JuliaLang/julia/blob/c3f6542aa3f90485f4b5fbac0486c390df7284d5/src/runtime_intrinsics.c#L902
* https://github.com/JuliaLang/julia/blob/be9ab4873d42f52bc776aa29d6e301d55b314033/src/julia_internal.h#L1006

However, I can't see definitions for basic functions like log and exp there (not just special math functions):

* https://github.com/JuliaLang/julia/tree/v1.4.2/base/special

But in the math.jl file:

* https://github.com/JuliaLang/julia/blob/v1.4.2/base/math.jl

all the basic math functions are imported from .Base.

This might mean that some basic definition/declaration is imported from somewhere and then overridden by functions declared in special, so it is entirely possible that things like sin, cos, and tan defined in the special/trig.jl file are being used as the de facto Julia trig functions, but I'm not 100% sure on that one. The long and short of it is that at least *some* basic math functions come from C/LLVM.

In addition, Julia has fast math options (an LLVM fast math implementation), which I only just remembered when looking at their code:

* https://github.com/JuliaLang/julia/blob/479097cf8c5a7675689cb069568d6b1077df8ba7/base/fastmath.jl

These are obviously Clang/LLVM based.

I think it's a good idea to get the std.math implementations more competitive performance-wise, because people will naturally gravitate towards the standard library for basic math functions.
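For what it's worth, on the D side LDC exposes the same LLVM fast-math machinery via `@fastmath` in `ldc.attributes`. A minimal sketch, guarded with `version (LDC)` so it still compiles with DMD/GDC, where the attribute is unavailable:

```d
import std.math : exp;
import std.stdio : writeln;

// LDC (the LLVM-based D compiler) exposes LLVM's fast-math flags,
// loosely comparable to Julia's @fastmath macro.
version (LDC)
{
    import ldc.attributes : fastmath;

    @fastmath double expSum(const double[] xs)
    {
        double s = 0.0;
        foreach (x; xs)
            s += exp(x); // LLVM may reassociate/vectorize this loop
        return s;
    }
}
else
{
    double expSum(const double[] xs)
    {
        double s = 0.0;
        foreach (x; xs)
            s += exp(x);
        return s;
    }
}

void main()
{
    writeln(expSum([0.0, 0.0, 0.0])); // exp(0) summed three times
}
```

As with Julia's @fastmath, this trades strict IEEE semantics for speed, so it's opt-in per function rather than a global switch.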

> Typo (other than Mike's headline):
> "In our exercsie"
> "Chapel’s arrays are more difficult to get started with than Julia’s but are designed to be run on single-core, multicore, and computer clusters using the same or very similar code, which is a good unique selling point." (should have comma between Julia's and but)
>
> This is unclear:
> The chart below shows matrix implementation times minus ndslice times; negative means that ndslice is slower, indicating that the implementation used here does not negatively represent D’s performance.

Fair point on the typos and grammar.

Thanks

June 03
On Wednesday, 3 June 2020 at 16:15:41 UTC, jmh530 wrote:
> This is unclear:
> The chart below shows matrix implementation times minus ndslice times; negative means that ndslice is slower, indicating that the implementation used here does not negatively represent D’s performance.

hmm, ndslice isn't slower on my machine.

https://github.com/dataPulverizer/KernelMatrixBenchmark/pull/2
https://github.com/dataPulverizer/KernelMatrixBenchmark/issues/3
June 03
On Wednesday, 3 June 2020 at 17:37:59 UTC, data pulverizer wrote:
> [snip]

Thanks. Very thorough!
