January 13, 2022

On Thursday, 13 January 2022 at 03:56:00 UTC, bachmeier wrote:

> platform isn't going to matter. There are lots of pieces here and there in our community, but it's going to take some effort to (a) make it easy to use the different parts together, (b) document everything, and (c) write the missing pieces.

What C++ seems to do for (a) is to add a library construct for fully configurable, multidimensional, non-owning slices (mdspan).

January 13, 2022

On Thursday, 13 January 2022 at 09:10:48 UTC, Ola Fosheim Grøstad wrote:

> Is dcompute being actively developed, or is it in a "frozen" state? Longevity is important for adoption, I think.

Not actively per se, but I have been adding features recently...

> Maybe it would be possible to do something with a more limited scope, but more low-level? Like something targeting Metal and Vulkan directly? Something like this might be possible to do well if D would change its focus and build a high-level IR.

... one of which was compiler support for Vulkan compute shaders (no runtime yet: Ethan didn't need one, the graphics APIs are large, and I'm not sure if there are any good bindings).
Metal is annoyingly different in its kernel signatures, which could be handled fairly easily, but:

  • LDC lacks Objective-C support, so even if the compiler side of Metal support worked, the runtime side would not. (N.B. adding Objective-C support shouldn't be too difficult, but I don't have a particular need for it.)
  • kernels written for Metal would not be compatible with the OpenCL and CUDA ones (not that I suppose that would be a particular problem if all you care about is Metal).
> I think one of Bryce's main points is that there is more long-term stability in C++ than in the other APIs for parallel computing, so for long-term development it would be better to express parallel code in terms of a C++ standard library construct than in other compute APIs.
>
> That argument makes sense to me: I don't want to deal with CUDA or OpenCL as dependencies. I'd rather have something sit directly on top of the lower-level APIs.

Dcompute essentially sits as a thin layer over both, but, importantly, automates the crap out of the really tedious and error-prone usage of those APIs. It would be entirely possible to create a thicker, API-agnostic layer on top of both of them.

> > By contrast, I just don't see the C++ crowd getting to sanity/simplicity any time soon... not unless ideas from the Circle compiler or similar make their way to the mainstream.

> It does look a bit complex, but what I find promising for C++ is that Nvidia is pushing their hardware by creating backends for C++ parallel libraries that target multiple GPUs. That in turn might push Apple to do the same for Metal, and so on.
>
> If C++20 had what Bryce presented, then I would've considered using it for signal processing. Right now it would make more sense to target Metal/Vulkan directly, but that is time-consuming, so I probably won't.

If there is sufficient interest for it, I might have a go at adding Metal compute support to ldc.

January 13, 2022

On Thursday, 13 January 2022 at 07:46:32 UTC, Paulo Pinto wrote:

> On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:

> > ...
> > What do you think?

> ...

> D can have a go at it, but only by plugging into the LLVM ecosystem, where C++ is the name of the game; and given that it is approaching Linux levels of industry contributors, it isn't going anywhere.

Yes. The language-independent work in LLVM in the accelerator area is hugely important for dcompute; essential, even. Gotta surf that wave, as we don't have the manpower to go independent. I don't think anybody has that amount of manpower, hence the collaboration/consolidation around LLVM as a back-end for accelerators.

> There was a time to try to overthrow C++; that was 10 years ago, when LLVM was hardly relevant and GPGPU computing still wasn't mainstream.

Yes. The "overthrow" of C++ should be a non-goal, IMO, starting yesterday.

January 13, 2022

On Thursday, 13 January 2022 at 07:23:40 UTC, Bruce Carneal wrote:

> On Thursday, 13 January 2022 at 03:56:00 UTC, bachmeier wrote:

> > On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:

> > > My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
> > >
> > > What do you think?

> > It doesn't matter all that much for D TBH. Without the basic infrastructure for scientific computing like you get out of the box with those three languages, the ability to target another platform isn't going to matter. There are lots of pieces here and there in our community, but it's going to take some effort to (a) make it easy to use the different parts together, (b) document everything, and (c) write the missing pieces.

> I disagree. D/dcompute can be used as a better general-purpose GPU kernel language now (superior metaprogramming, sane nested functions, ...). If you are concerned about "infrastructure", you embed in C++.

I was referring to libraries like numpy for Python or the numerical capabilities built into Julia. D just isn't in a state where a researcher is going to say "let's write a D program for that simulation". You can call some things in Mir and cobble together an interface to some C libraries or whatever. That's not the same as Julia, where you write the code you need for the task at hand. That's the starting point to make it into scientific computing.

On the embedding, yes, that is the strength of D. If you write code in Python, it's realistically only for the Python world. Probably the same for Julia.

January 13, 2022

On Thursday, 13 January 2022 at 14:50:59 UTC, bachmeier wrote:

> If you write code in Python, it's realistically only for the Python world. Probably the same for Julia.

Does SciPy provide the functionality you would need? Could it in some sense be considered a baseline for scientific computing APIs?

January 13, 2022

On Thursday, 13 January 2022 at 09:42:04 UTC, Nicholas Wilson wrote:

> If there is sufficient interest for it, I might have a go at adding Metal compute support to ldc.

I don't know if there is enough interest for it today. Right now, maybe easy visualization is more important. But when GUI/visualization is in place then I think a compute solution that supports lower level GPU APIs would be valuable for desktop application development.

I am not sure a compute-only runtime is a good idea, though, as I would think that the application developer would want to balance the resources used for compute and visualization in some way?

January 13, 2022

On Thursday, 13 January 2022 at 07:46:32 UTC, Paulo Pinto wrote:

> I think the ship has already sailed, given the industry standards SYCL and C++ for OpenCL, and their integration into clang (check the CppCon talks on the same) and FPGA generation.

The SYCL/FPGA presentation was interesting, but he said it should be considered a research project at this point?

I am a bit wary of all the solutions that are coming from Khronos. It is difficult to say what will become prevalent across many platforms. Both Microsoft and Apple have undermined open standards such as OpenGL in their desire to lock developers into their own "monopolistic ecosystem"…

So, a focused language solution of limited scope might actually be better for developers than (big) open standards.

> There was a time to try to overthrow C++; that was 10 years ago, when LLVM was hardly relevant and GPGPU computing still wasn't mainstream.

Overthrowing C++ isn't possible, but D could focus more on desktop application development and provide a framework for it. You would then need a set of features/modules/libraries in place that fit well together; GPU compute would be one of those, I think.

January 13, 2022

On Thursday, 13 January 2022 at 15:09:13 UTC, Ola Fosheim Grøstad wrote:

> On Thursday, 13 January 2022 at 14:50:59 UTC, bachmeier wrote:

> > If you write code in Python, it's realistically only for the Python world. Probably the same for Julia.

> Does SciPy provide the functionality you would need? Could it in some sense be considered a baseline for scientific computing APIs?

SciPy is fairly useful but it is only one amongst a constellation of Python scientific computing libraries. It emulates a fair amount of what is provided by MATLAB, and it sits on top of numpy. Using SciPy, numpy, and matplotlib in tandem gives a user access to roughly the same functionality as a vanilla installation of MATLAB.

SciPy and numpy are built on top of a substrate of old and stable packages written in Fortran and C (BLAS, LAPACK, FFTW, etc.).

Python, MATLAB, and Julia are basically targeted at scientists and engineers writing "application code". These languages aren't appropriate for "low-level" scientific computing along the lines of the libraries mentioned above. Julia does make a claim to the contrary: it is feasible to write fast low-level kernels in it, but (last time I checked) it is not so straightforward to export them to other languages, since Julia likes to do things at runtime.

Fortran and C remain good choices for low-level kernel development because they are easily consumed by Python et al. And as far as parallelism goes, OpenMP is the most common since it is straightforward conceptually. C++ is also fairly popular but since consuming something like a highly templatized header-only C++ library using e.g. Python's FFI is a pain, it is a less natural choice. (It's easier using pybind11, but the compile times will make you weep.)

Fortran, C, and C++ are also all standardized. This is valuable. The people developing these libraries are, more often than not, academics who aren't able to devote much of their time to software development. Having some confidence that their programming language isn't going to change underneath them gives them assurance that they won't be forced to spend an inordinate amount of time keeping their code in compliance just for it to remain usable. Either that, or they write a library in Python and abandon it later.

As an aside, people lament the use of MATLAB, but one of its stated goals is backwards compatibility. Consequently, there's rather a lot of old MATLAB code floating around still in use.

"High-level" D is currently not that interesting for high-level scientific application code. There is a long list of "everyday" scientific computing tasks I can think of that I'd like to be able to execute in a small number of lines, but this is currently impossible using any flavor of D. See https://www.numerical-tours.com for some ideas.

"BetterC" D could be useful for developing numerical kernels. An interesting idea would be to use D's introspection capabilities to automatically generate wrappers and documentation for each commonly used scientific programming language (Python, MATLAB, Julia). But D not being standardized makes it less attractive than C or Fortran. It is also unclear how stable D is as an open-source project: the community surrounding it is rather small and doesn't seem to have much momentum, and there do not appear to be any scientific computing success stories with D.

My personal view is that people in science are generally more interested in actually doing science than in playing around with programming trivia. Having to spend time to understand something like C++'s argument dependent lookup is generally viewed as undesirable and a waste of time.

January 13, 2022

On Thursday, 13 January 2022 at 14:50:59 UTC, bachmeier wrote:

> On Thursday, 13 January 2022 at 07:23:40 UTC, Bruce Carneal wrote:

> > On Thursday, 13 January 2022 at 03:56:00 UTC, bachmeier wrote:

> > > On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:

> > > > My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
> > > >
> > > > What do you think?

> > > It doesn't matter all that much for D TBH. Without the basic infrastructure for scientific computing like you get out of the box with those three languages, the ability to target another platform isn't going to matter. There are lots of pieces here and there in our community, but it's going to take some effort to (a) make it easy to use the different parts together, (b) document everything, and (c) write the missing pieces.

> > I disagree. D/dcompute can be used as a better general-purpose GPU kernel language now (superior metaprogramming, sane nested functions, ...). If you are concerned about "infrastructure", you embed in C++.

> I was referring to libraries like numpy for Python or the numerical capabilities built into Julia. D just isn't in a state where a researcher is going to say "let's write a D program for that simulation". You can call some things in Mir and cobble together an interface to some C libraries or whatever. That's not the same as Julia, where you write the code you need for the task at hand. That's the starting point to make it into scientific computing.

I agree. If the heavy lifting for a new project is done by libraries that you can't easily co-opt, then it's better to employ D as the GPU language or not at all.

More broadly, I don't think we should set ourselves the task of displacing language X in community Y. Better to focus on making accelerator programming "no big deal" in general, so that people opt in more often (first as the accelerator-language sub-component, then maybe more).

While my present day use of dcompute is in real time video, where it works a treat, I'm most excited about the possibilities dcompute would afford on SoCs. World class perf/watt from dead simple code deployable to billions of units? Yes, please.

> ...

January 13, 2022

On Thursday, 13 January 2022 at 14:24:59 UTC, Bruce Carneal wrote:

> Yes. The language-independent work in LLVM in the accelerator area is hugely important for dcompute; essential.

Sorry if this sounds ignorant, but does SPIR-V count for nothing?

> Gotta surf that wave, as we don't have the manpower to go independent. I don't think anybody has that amount of manpower, hence the collaboration/consolidation around LLVM as a back-end for accelerators.

> > There was a time to try to overthrow C++; that was 10 years ago, when LLVM was hardly relevant and GPGPU computing still wasn't mainstream.

> Yes. The "overthrow" of C++ should be a non-goal, IMO, starting yesterday.

Overthrowing C++ may be hopeless, but I feel we should at least be really competitive with it. Whether or not we think of ourselves as competing with C++, people will compare us with it, since it's the other obvious choice when they want to write extremely performant GPU code. And if they care about ease of setup and productivity rather than performance at any cost, Julia and Python have already beaten us to it. :-(