January 19, 2022

On Wednesday, 19 January 2022 at 03:21:38 UTC, Tejas wrote:
> On Tuesday, 18 January 2022 at 22:21:40 UTC, Ola Fosheim Grøstad wrote:
>> It is not uncommon to interact with plots that are too big for matplotlib to handle well. The Python visualization solutions are very primitive. Having something better than numpy+matplotlib is obviously an advantage, a selling point for other offerings.
>
> Wow, this is the first time I've read that matplotlib is inadequate. Can you please give an example of a visualisation library (any language) which you consider good?

There are commercial products for visualizing large datasets. I don't use them, as I either create my own or use a sound editor.

But yes, matplotlib feels more like a homegrown solution than a solid product. It also has layout issues with labeling. You can make it work, but it is clunky.

January 19, 2022
On Wednesday, 19 January 2022 at 06:58:55 UTC, Paulo Pinto wrote:
> On Wednesday, 19 January 2022 at 04:45:20 UTC, forkit wrote:
>> On Tuesday, 18 January 2022 at 22:21:40 UTC, Ola Fosheim Grøstad wrote:
>>> ...D's potential strength here is not so much in being able to bind to C++ in a limited fashion (like Python), but being able to port C++ to D and improve on it. To get there you need feature parity, which is what this thread is about.
>>
>> Not just 'feature' parity, but 'performance' parity too:
>>
>> "Broad adoption of high-level languages by the scientific community is unlikely without compiler optimizations to mitigate the performance penalties these languages abstractions impose." - https://www.cs.rice.edu/~vs3/PDF/Joyner-MainThesis.pdf
>
> That paper is from 2008, meanwhile in 2021,
>
> https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club//
>
> This is what D has to compete against, not only C++ with the existing SYCL/CUDA tooling and their ongoing integration into ISO C++.

I am not sure what the article tells us: that Julia is now popular and people use it? Or that D (and other languages) need to compete against self-written PR articles?

(Many system-programming languages can achieve the same performance as the article describes, when several research institutes combine forces on just that.)

But yes, Julia's focus on a small niche, and its popularity in that niche, makes it attractive for contributors.
January 19, 2022
On Wednesday, 19 January 2022 at 07:24:09 UTC, M.M. wrote:
> On Wednesday, 19 January 2022 at 06:58:55 UTC, Paulo Pinto wrote:
>> On Wednesday, 19 January 2022 at 04:45:20 UTC, forkit wrote:
>>> On Tuesday, 18 January 2022 at 22:21:40 UTC, Ola Fosheim Grøstad wrote:
>>>> ...D's potential strength here is not so much in being able to bind to C++ in a limited fashion (like Python), but being able to port C++ to D and improve on it. To get there you need feature parity, which is what this thread is about.
>>>
>>> Not just 'feature' parity, but 'performance' parity too:
>>>
>>> "Broad adoption of high-level languages by the scientific community is unlikely without compiler optimizations to mitigate the performance penalties these languages abstractions impose." - https://www.cs.rice.edu/~vs3/PDF/Joyner-MainThesis.pdf
>>
>> That paper is from 2008, meanwhile in 2021,
>>
>> https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club//
>>
>> This is what D has to compete against, not only C++ with the existing SYCL/CUDA tooling and their ongoing integration into ISO C++.
>
> I am not sure what the article tells us: that Julia is now popular and people use it? Or that D (and other languages) need to compete against self-written PR articles?
>
> (Many system-programming languages can achieve the same performance as the article describes, when several research institutes combine forces on just that.)
>
> But yes, Julia's focus on a small niche, and its popularity in that niche, makes it attractive for contributors.

You might call them self-written PR articles, or you could educate yourself on who is using it:

https://juliacomputing.com/case-studies versus https://dlang.org/orgs-using-d.html

Also, I did mention C++, which you glossed over in your eagerness to devalue Julia's market domain versus D among HPC communities.

As someone who spent two years at ATLAS TDAQ HLT, I know which languages those folks would be adopting, but hey, it is a piece of self-written PR.
January 19, 2022
On Wednesday, 19 January 2022 at 07:29:23 UTC, Paulo Pinto wrote:
> On Wednesday, 19 January 2022 at 07:24:09 UTC, M.M. wrote:
>> On Wednesday, 19 January 2022 at 06:58:55 UTC, Paulo Pinto wrote:
>>> On Wednesday, 19 January 2022 at 04:45:20 UTC, forkit wrote:
>>>> On Tuesday, 18 January 2022 at 22:21:40 UTC, Ola Fosheim Grøstad wrote:
>>>>> ...D's potential strength here is not so much in being able to bind to C++ in a limited fashion (like Python), but being able to port C++ to D and improve on it. To get there you need feature parity, which is what this thread is about.
>>>>
>>>> Not just 'feature' parity, but 'performance' parity too:
>>>>
>>>> "Broad adoption of high-level languages by the scientific community is unlikely without compiler optimizations to mitigate the performance penalties these languages abstractions impose." - https://www.cs.rice.edu/~vs3/PDF/Joyner-MainThesis.pdf
>>>
>>> That paper is from 2008, meanwhile in 2021,
>>>
>>> https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club//
>>>
>>> This is what D has to compete against, not only C++ with the existing SYCL/CUDA tooling and their ongoing integration into ISO C++.
>>
>> I am not sure what the article tells us: that Julia is now popular and people use it? Or that D (and other languages) need to compete against self-written PR articles?
>>
>> (Many system-programming languages can achieve the same performance as the article describes, when several research institutes combine forces on just that.)
>>
>> But yes, Julia's focus on a small niche, and its popularity in that niche, makes it attractive for contributors.
>
> You might call them self-written PR articles, or you could educate yourself on who is using it:
>
> https://juliacomputing.com/case-studies versus https://dlang.org/orgs-using-d.html
>
> Also, I did mention C++, which you glossed over in your eagerness to devalue Julia's market domain versus D among HPC communities.
>
> As someone who spent two years at ATLAS TDAQ HLT, I know which languages those folks would be adopting, but hey, it is a piece of self-written PR.

I am sorry that you took my post as an attack:
- The article itself is written by Julia people (the bottom of the article says "Source: Julia Computing"). Using this fact to tell me to "educate myself" on who uses Julia seems quite irrelevant to my note on who wrote the text. (Being sarcastic now: I am sure that whatever education I pursue from now until the end of my life will not change who wrote the article.)
- I also acknowledged that Julia is popular in scientific computing.

I do not understand where in my text I devalue Julia as a language/tool.
(Again, I do not like that self-written articles are used in arguments. But I did not say anything about Julia "being not good".)

What I did not write, but do think, is that Julia is a very nice project, and I am a fan of its development.
January 19, 2022

On Wednesday, 19 January 2022 at 04:45:20 UTC, forkit wrote:
> On Tuesday, 18 January 2022 at 22:21:40 UTC, Ola Fosheim Grøstad wrote:
>> ...D's potential strength here is not so much in being able to bind to C++ in a limited fashion (like Python), but being able to port C++ to D and improve on it. To get there you need feature parity, which is what this thread is about.
>
> Not just 'feature' parity, but 'performance' parity too:

Yes, that is the issue I wanted to discuss in the OP.

If hardware vendors create closed-source C++ compilers that use internal knowledge of how their GPUs work, then it might be difficult for other languages to compete. You'd have to compile to Metal/Vulkan and fine-tune the result for each GPU.

Or just compile to C++…

I don't know. I guess we will find out in the years to come.

January 19, 2022

On Wednesday, 19 January 2022 at 09:34:38 UTC, Ola Fosheim Grøstad wrote:
> If hardware vendors create closed-source C++ compilers that use internal knowledge of how their GPUs work, then it might be difficult for other languages to compete. You'd have to compile to Metal/Vulkan and fine-tune the result for each GPU.

Arguably that already describes Nvidia. Luckily for us, it has an intermediate layer in PTX that LLVM can target, and that's exactly what dcompute does. Unlike C++, D can much more easily statically condition on aspects of the hardware, which makes it faster to navigate the parameter configuration space when tuning.
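To make that concrete, here is a minimal sketch of the idea; the architectures and numbers are invented for illustration, and this is not dcompute's actual API:

```d
import std.stdio;

enum GpuArch { nvidia, amd, generic }

// Tuning parameters selected at compile time via static conditions.
template blockSize(GpuArch arch)
{
    static if (arch == GpuArch.nvidia)
        enum blockSize = 256;
    else static if (arch == GpuArch.amd)
        enum blockSize = 128;
    else
        enum blockSize = 64;
}

// The kernel is specialised per architecture; trying a new
// configuration is just another template instantiation, with no
// preprocessor juggling as in C++.
void saxpy(GpuArch arch)(float[] y, const(float)[] x, float a)
{
    enum bs = blockSize!arch; // known at compile time, free to the optimiser
    foreach (i; 0 .. y.length)
        y[i] += a * x[i]; // a real kernel would tile its work by `bs`
}

void main()
{
    auto y = new float[](4);
    auto x = new float[](4);
    y[] = 1.0f;
    x[] = 2.0f;
    saxpy!(GpuArch.nvidia)(y, x, 3.0f);
    writeln(y); // [7, 7, 7, 7]
}
```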

January 19, 2022

On Wednesday, 19 January 2022 at 09:49:59 UTC, Nicholas Wilson wrote:
> Arguably that already describes Nvidia. Luckily for us, it has an intermediate layer in PTX that LLVM can target, and that's exactly what dcompute does.

For desktop applications one has to support Intel, AMD, Nvidia, and Apple. So, does that mean one has to support Metal, Vulkan, PTX, and ROCm? Sounds like too much…

> Unlike C++, D can much more easily statically condition on aspects of the hardware, which makes it faster to navigate the parameter configuration space when tuning.

Not sure what you meant here?

January 19, 2022
On Wednesday, 19 January 2022 at 06:58:55 UTC, Paulo Pinto wrote:
>
> That paper is from 2008, meanwhile in 2021,
>
> https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club//
>
> This is what D has to compete against, not only C++ with the existing SYCL/CUDA tooling and their ongoing integration into ISO C++.

Oh, so dismissive of it because it's from 2008?

Its focus is on methods for compiler optimisation for one of the most important data structures in scientific computing: arrays.

As such, the more D can do to generate even more efficient parallel array computations, the more chance it has of attracting 'some' from the scientific community.
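To give a flavour of what already exists, here is a minimal sketch using std.parallelism from D's standard library (the sizes and values are arbitrary):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto a = new double[](1_000_000);
    auto b = new double[](1_000_000);
    auto c = new double[](1_000_000);
    a[] = 1.5;
    b[] = 2.5;

    // Element-wise array computation spread across the default
    // task pool, one chunk of indices per worker thread.
    foreach (i; taskPool.parallel(iota(c.length)))
        c[i] = a[i] * b[i] + a[i];

    writeln(c[0]); // 5.25
}
```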

January 19, 2022
On Wednesday, 19 January 2022 at 11:43:25 UTC, forkit wrote:
> On Wednesday, 19 January 2022 at 06:58:55 UTC, Paulo Pinto wrote:
>>
>> That paper is from 2008, meanwhile in 2021,
>>
>> https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club//
>>
>> This is what D has to compete against, not only C++ with the existing SYCL/CUDA tooling and their ongoing integration into ISO C++.
>
> Oh, so dismissive of it because it's from 2008?
>
> Its focus is on methods for compiler optimisation for one of the most important data structures in scientific computing: arrays.
>
> As such, the more D can do to generate even more efficient parallel array computations, the more chance it has of attracting 'some' from the scientific community.

Yes, because in 2008 CUDA and SYCL were of little importance in the HPC universe: almost everyone was focused on OpenMP, still thought OpenCL, with its C-only API, would cater to their needs, and OpenACC was yet to show up.

Unless D ships with something comparable in the package and those vendors adopt it, just being a better language isn't enough:

https://developer.nvidia.com/hpc-sdk

https://www.intel.com/content/www/us/en/developer/tools/oneapi/overview.html#gs.mbnkph

https://www.amd.com/en/technologies/open-compute

It also needs to plug into the libraries, IDEs and GPGPU debuggers available to the community.
January 19, 2022

On Wednesday, 19 January 2022 at 12:49:11 UTC, Paulo Pinto wrote:
> It also needs to plug into the libraries, IDEs and GPGPU debuggers available to the community.

But the presentation is not only about HPC; it is also about making parallel GPU computing as easy as writing regular C++ code and being able to debug that code on the CPU.

I actually think it is sufficient to support Metal and Vulkan for this to be of value. The question is how much more performance Nvidia manages to get out of their nvc++ compiler for regular GPUs in comparison to a Vulkan solution.