Scientific computing and parallel computing C++23/C++26

Ola Fosheim Grøstad · 5 days ago

I found the CppCon 2021 presentation "C++ Standard Parallelism" by Bryce Adelstein Lelbach very interesting: unusually clear and filled with content. I like this man. No nonsense.

It provides a view into what is coming for relatively high-level, hardware-agnostic parallel programming in C++23 or C++26. Basically a portable "high level" high-performance solution.

He also mentions the Nvidia C++ compiler nvc++, which will make it possible to compile C++ to Nvidia GPUs in a somewhat transparent manner. (Maybe it already does; I have never tried to use it.)

My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.

What do you think?

forkit · 5 days ago
On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:
>
> What do you think?

For the general programmer/developer, parallelism needs to be deeply integrated into the language and its standard library, so that it can be 'inferred' (by the compiler/optimiser).

Perhaps a language like D could adopt @parallelNO to instruct the compiler/optimiser to never infer parallelism in the code that follows.

The O/S also has a very important role to play in inferring parallelism.

Parallelism has been promoted as the new thing... for a very, very long time now.

I've had 8 cores available on my pc for well over 10 years now. I don't think anything running on my pc has the slightest clue that they even exist ;-) (except the o/s).

I expect 'explicitly' coding parallelism will continue to be relegated to a niche subset of programmers/developers, due to the very considerable knowledge/skill set needed to design/develop/test/debug parallel code.

IGotD- · 5 days ago
On Thursday, 13 January 2022 at 00:41:25 UTC, forkit wrote:
>
> parallelism has been promoted as the new thing..for a very..very...long time now.
>
> I've had 8 cores available on my pc for well over 10 years now. I don't think anything running on my pc has the slightest clue that they even exist ;-) (except the o/s).
>
> I expect 'explicitly' coding parallelism will continue to be relegated to a niche subset of programmers/developers, due to the very considerable knowledge/skillset needed, to design/develop/test/debug parallel code.

Yes, parallelism is a dead end for many applications, because you need a workload that can actually take advantage of it. Forcing parallel execution can often reduce performance instead.

In order to exploit parallelism you need to understand your program and how it can take advantage of it. Languages that try to parallelize things under the hood, without the programmer's knowledge, have been a fantasy for decades, and still are.

I'm not saying that the additions in C++ aren't useful; people will probably find good use for them. The presentation just reminds me how C++ gets uglier with every iteration, and I'm happy I jumped off that horror train.

H. S. Teoh · 5 days ago
On Thu, Jan 13, 2022 at 12:41:25AM +0000, forkit via Digitalmars-d wrote: [...]
> I've had 8 cores available on my pc for well over 10 years now. I don't think anything running on my pc has the slightest clue that they even exist ;-) (except the o/s).

Recently, I wanted to use POVRay to render frames for a short video clip. It was taking far too long because it was running on a single core at a time, so I wrote this:

	import std.parallelism, std.process;
	foreach (frame; frames.parallel) {
		execute([ "povray" ] ~ povrayOpts ~ [
			"+I", frame.infile,
			"+O", frame.outfile ]);
	}

Instant 8x render speedup. (Well, almost 8x... there's of course a little bit of overhead. But you get the point.)


> I expect 'explicitly' coding parallelism will continue to be relegated to a niche subset of programmers/developers, due to the very considerable knowledge/skillset needed, to design/develop/test/debug parallel code.

For simple cases, the above example serves as a counterexample. ;-)

Of course, for more complex situations things may not be quite so simple.  But still, it doesn't have to be as complex as languages like C++ make it seem.  In the above example I literally just added ".parallel" to the code and it Just Worked(tm).


T

-- 
The best way to destroy a cause is to defend it poorly.
Bruce Carneal · 5 days ago

On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:
> I found the CppCon 2021 presentation "C++ Standard Parallelism" by Bryce Adelstein Lelbach very interesting, unusually clear and filled with content. I like this man. No nonsense.
>
> It provides a view into what is coming for relatively high level and hardware agnostic parallel programming in C++23 or C++26. Basically a portable "high level" high performance solution.
>
> He also mentions the Nvidia C++ compiler nvc++ which will make it possible to compile C++ to Nvidia GPUs in a somewhat transparent manner. (Maybe it already does, I have never tried to use it.)
>
> My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
>
> What do you think?

Given the emergence of ML in the commercial space and the prevalence of accelerator HW on SoCs and elsewhere, this is a timely topic, Ola.

We have at least two options: 1) try to mimic, or sit atop, the often byzantine interfaces that creak out of the C++ community, or 2) go direct to the evolving metal, with D metaprogramming shouldering most of the load. I favor the second, of course.

For reference, CUDA/C++ was my primary programming language for 5+ years prior to taking up D, and even in its admittedly less-than-newbie-friendly state, I prefer dcompute to CUDA.

With some additional work, dcompute could become a broadly accessible path to world-beating performance/watt libraries and apps. Code that you can actually understand at a glance when you pick it up down the road.

Kudos to the dcompute contributors, especially Nicholas.

bachmeier · 4 days ago

On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:
> My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
>
> What do you think?

It doesn't matter all that much for D TBH. Without the basic infrastructure for scientific computing like you get out of the box with those three languages, the ability to target another platform isn't going to matter. There are lots of pieces here and there in our community, but it's going to take some effort to (a) make it easy to use the different parts together, (b) document everything, and (c) write the missing pieces.

forkit · 4 days ago
On Thursday, 13 January 2022 at 01:19:07 UTC, H. S. Teoh wrote:
>
> Recently, I wanted to use POVRay to render frames for a short video clip. It was taking far too long because it was running on a single core at a time, so I wrote this:
>
> 	import std.parallelism, std.process;
> 	foreach (frame; frames.parallel) {
> 		execute([ "povray" ] ~ povrayOpts ~ [
> 			"+I", frame.infile,
> 			"+O", frame.outfile ]);
> 	}
>
> Instant 8x render speedup. (Well, almost 8x... there's of course a little bit of overhead. But you get the point.)
>

I'd like to see D simplify this even further:

@parallel foreach (frame; frames) { .. }


That's it. Just annotate it. That's all I have to do. Let the language tools do the rest.
Bruce Carneal · 4 days ago

On Thursday, 13 January 2022 at 03:56:00 UTC, bachmeier wrote:
> On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:
>> My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
>>
>> What do you think?
>
> It doesn't matter all that much for D TBH. Without the basic infrastructure for scientific computing like you get out of the box with those three languages, the ability to target another platform isn't going to matter. There are lots of pieces here and there in our community, but it's going to take some effort to (a) make it easy to use the different parts together, (b) document everything, and (c) write the missing pieces.

I disagree. D/dcompute can be used as a better general-purpose GPU kernel language now (superior metaprogramming, sane nested functions, ...). If you are concerned about "infrastructure", you embed in C++.

There are improvements to be made but, by my lights, dcompute is already better than CUDA in many ways. If we improve usability, make dcompute accessible to "mere mortals", make it a "no big deal" choice instead of a "here be dragons" choice, we'd really have something.

By contrast, I just don't see the C++ crowd getting to sanity/simplicity any time soon... not unless ideas from the circle compiler or similar make their way to mainstream.

Paulo Pinto · 4 days ago

On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim Grøstad wrote:
> I found the CppCon 2021 presentation "C++ Standard Parallelism" by Bryce Adelstein Lelbach very interesting, unusually clear and filled with content. I like this man. No nonsense.
>
> It provides a view into what is coming for relatively high level and hardware agnostic parallel programming in C++23 or C++26. Basically a portable "high level" high performance solution.
>
> He also mentions the Nvidia C++ compiler nvc++ which will make it possible to compile C++ to Nvidia GPUs in a somewhat transparent manner. (Maybe it already does, I have never tried to use it.)
>
> My gut feeling is that it will be very difficult for other languages to stand up to C++, Python and Julia in parallel computing. I get a feeling that the distance will only increase as time goes on.
>
> What do you think?

I think the ship has already sailed, given the industry standards SYCL and C++ for OpenCL, their integration into clang (check the CppCon talks on them), and FPGA code generation.

D can have a go at it, but only by plugging into the LLVM ecosystem, where C++ is the name of the game; given that LLVM is approaching Linux levels of industry contributors, it isn't going anywhere.

There was a time to try to overthrow C++, but that was 10 years ago, when LLVM was hardly relevant and GPGPU computing still wasn't mainstream.

Ola Fosheim Grøstad · 4 days ago

On Thursday, 13 January 2022 at 07:23:40 UTC, Bruce Carneal wrote:
> I disagree. D/dcompute can be used as a better general purpose GPU kernel language now (superior meta programming, sane nested functions, ...).

Is dcompute being actively developed, or is it in a "frozen" state? Longevity is important for adoption, I think.

> There are improvements to be made but, by my lights, dcompute is already better than CUDA in many ways. If we improve usability, make dcompute accessible to "mere mortals", make it a "no big deal" choice instead of a "here be dragons" choice, we'd really have something.

Maybe it would be possible to do something with a more limited scope, but more low-level? Like something targeting Metal and Vulkan directly? Something like this might be possible to do well if D changed its focus and built a high-level IR.

I think one of Bryce's main points is that there is more long-term stability in C++ than in the other APIs for parallel computing, so for long-term development it would be better to express parallel code in terms of a C++ standard library construct than in other compute APIs.

That argument makes sense to me: I don't want to deal with CUDA or OpenCL as dependencies. I'd rather have something sit directly on top of the lower-level APIs.

> By contrast, I just don't see the C++ crowd getting to sanity/simplicity any time soon... not unless ideas from the circle compiler or similar make their way to mainstream.

It does look a bit complex, but what I find promising for C++ is that Nvidia is pushing their hardware by creating backends for C++ parallel libraries that target multiple GPUs. That in turn might push Apple to do the same for Metal, and so on.

If C++20 had what Bryce presented, then I would have considered using it for signal processing. Right now it would make more sense to target Metal/Vulkan directly, but that is time-consuming, so I probably won't.
