Thread overview
DConf '22 Talk: Structured Concurrency
- Markk (Oct 12, 2022)
- max haughton (Oct 12, 2022)
- Sebastiaan Koppe (Oct 13, 2022)
- Andrej Mitrovic (Oct 13, 2022)
- Markk (Oct 13, 2022)
- Sebastiaan Koppe (Oct 14, 2022)
October 12, 2022 - Markk

Hi,

having watched the Structured Concurrency talk, and very likely missing something, I wondered if @Sebastiaan Koppe, or anybody else, has ever compared this to OpenMP. If I'm not mistaken (and after just having watched the very nice Intel tutorial to refresh my limited knowledge), OpenMP seems to use the same kind of scoping/structuring, and to allow composition.

So, again unless I'm missing something, I guess the terms "Structured Concurrency" and "Composability" would equally apply to the much older (1997) OpenMP solution, right? If true, the concept was not missed for 30 years, as Walter Bright wondered during the Q&A 😉

Note: Most OpenMP examples just obsess over for loops and plain parallelism, but make no mistake, there is much more to it. One must also understand the sections and task directives to grasp the full capabilities.

IMHO this is all extremely elegant.
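
For illustration, here is the classic recursive tasking sketch found in most OpenMP tutorials (simplified, untested here): each recursive call becomes a task, and taskwait joins the children, so the fork/join structure follows the block structure of the code itself.

```c
#include <stdio.h>

/* Classic OpenMP tasking example: each recursive call becomes a task;
 * "taskwait" joins the children before their results are combined. */
int fib(int n)
{
    int x, y;
    if (n < 2)
        return n;

    #pragma omp task shared(x)
    x = fib(n - 1);

    #pragma omp task shared(y)
    y = fib(n - 2);

    #pragma omp taskwait  /* join: both children must finish first */
    return x + y;
}

int main(void)
{
    int result;
    #pragma omp parallel
    #pragma omp single    /* one thread spawns the task tree */
    result = fib(20);
    printf("fib(20) = %d\n", result);
    return 0;
}
```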

It seems to me that such a language-integrated and mostly declarative solution would be very "D"-ish. Declarative is IMHO the "right" way to go, given the increasing NUMA characteristics of systems (also think of the problem of E- and P-cores). One must let the compiler/runtime decide the best threading/scheduling strategy on a given platform, otherwise the code will likely age badly, and quickly.

I really wonder if this is one of those famous cases where the same old concept is "reinvented" using different buzzwords? Just because a tech is old does not mean it is bad; I would say au contraire! I sometimes wonder: are the supercomputing grandpas just too unfashionable for the cloud/web-kiddies to talk with?

-Mark

October 12, 2022 - max haughton

On Wednesday, 12 October 2022 at 09:28:07 UTC, Markk wrote:

> [...] I guess the terms "Structured Concurrency" and "Composability" would equally apply to the much older (1997) OpenMP solution, right? [...]

It is true that OpenMP has usually already done whatever is being touted as "new" - this library is more focused on strict concurrency, though: parallelism can come as a consequence of a rigorous concurrency model, but the library is at the moment aimed more at concurrent processing of data (i.e. get data -> process data -> output -> wait for more data, rather than: all the data arrives, we process it all in parallel, then stop).

I had a play with finding a natural way to express OpenMP constructs in D. The issue, I think, is that the spec is huge and seems happy with the assumption that it can be inserted into the syntax of the language (which is a little too brutalist for my tastes). That being said, we could probably just add a slot for a pragma on loops and so on.

October 13, 2022 - Sebastiaan Koppe

On Wednesday, 12 October 2022 at 09:28:07 UTC, Markk wrote:

> Hi,
>
> having watched the Structured Concurrency talk, and very likely missing something, I wondered if @Sebastiaan Koppe, or anybody else, has ever compared this to OpenMP.

Thanks for watching!

I haven't looked into OpenMP at all, beyond the short example from one of your links, so take everything I say with a grain of salt. Or two...

Personally I don't find the #pragma approach that elegant to be honest. It also seems to be limited to just one machine.

The sender/receiver model is literally just the abstraction of an asynchronous computation. This allows you to use it as a building block to manage work across multiple compute resources.
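
In rough C terms - illustrative only, this is not the actual API of the library - a receiver is just the three ways an asynchronous computation can complete, and a sender is anything you can start against one:

```c
/* Illustrative sketch only, not the library's real API. A "receiver"
 * bundles the three completion channels of an asynchronous computation;
 * a "sender" is anything that can be started against such a receiver. */
typedef struct receiver {
    void (*set_value)(void *env, int value);    /* completed with a result */
    void (*set_error)(void *env, int errcode);  /* completed with an error */
    void (*set_stopped)(void *env);             /* completed by cancellation */
    void *env;                                  /* the receiver's own state */
} receiver;

typedef struct sender {
    void (*start)(struct sender *self, receiver r);
} sender;

/* A trivial sender that completes immediately on the value channel. */
typedef struct just_sender {
    sender base;
    int value;
} just_sender;

static void just_start(sender *self, receiver r)
{
    just_sender *s = (just_sender *)self;
    r.set_value(r.env, s->value);
}
```

Every asynchronous algorithm is then just a sender that wraps another sender and forwards or transforms these completions.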

October 13, 2022 - Andrej Mitrovic

On Thursday, 13 October 2022 at 08:04:46 UTC, Sebastiaan Koppe wrote:

> The sender/receiver model is literally just the abstraction of an asynchronous computation. This allows you to use it as a building block to manage work across multiple compute resources.

Really cool presentation!

Btw I've noticed some outdated comments:

https://github.com/symmetryinvestments/concurrency/blob/7e870ffecb651a3859fac0c05296a0656d9ee9bf/source/concurrency/utils.d#L58
https://github.com/symmetryinvestments/concurrency/blob/c648b1af23efb7930077451a372a15d72d78f056/source/concurrency/stoptoken.d#L132

pause() is in fact supported now in DMD. Druntime uses it for its backoff spinlock implemented here: https://github.com/dlang/dmd/blob/09d04945bdbc0cba36f7bb1e19d5bd009d4b0ff2/druntime/src/core/internal/spinlock.d

It was fixed in: https://issues.dlang.org/show_bug.cgi?id=14120

Someone should probably remove this outdated code and replace it with 'pause' (the net effect is the same, since pause is encoded as rep; nop;): https://github.com/dlang/dmd/blob/09d04945bdbc0cba36f7bb1e19d5bd009d4b0ff2/druntime/src/core/internal/atomic.d#L668-L669
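
For anyone curious what that looks like in practice, a minimal C sketch of a spin lock using the intrinsic (the druntime one linked above additionally does backoff):

```c
#include <immintrin.h>   /* _mm_pause (x86) */
#include <stdatomic.h>

/* "pause" hints to the CPU that we are in a spin-wait loop, which saves
 * power and helps the sibling hyperthread. Its encoding is literally
 * "rep; nop" (F3 90), so on CPUs that predate it, it is a plain nop. */
void spin_lock(atomic_flag *lock)
{
    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
        _mm_pause();
}

void spin_unlock(atomic_flag *lock)
{
    atomic_flag_clear_explicit(lock, memory_order_release);
}
```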

October 13, 2022 - Markk

On Thursday, 13 October 2022 at 08:04:46 UTC, Sebastiaan Koppe wrote:

> I haven't looked into OpenMP at all, ...

Perhaps I should clarify that my question was not so much about the actual "manifestation" of OpenMP, but rather about the underlying concepts. A large part of your talk presents the benefits of structured programming over the "goto mess" as an analogy for the benefits of "Structured Concurrency" over anything else. It is on this conceptual level that I do not see any innovation over the OpenMP of 1997.

I'm not so much talking about whether the syntax is fashionable, or whether the approach of a built-in compiler feature is the right design choice.

> Personally I don't find the #pragma approach that elegant to be honest. It also seems to be limited to just one machine.

Clearly, this could be "modernized" and translated to attributes on variables, loops etc.

> The sender/receiver model is literally just the abstraction of an asynchronous computation. This allows you to use it as a building block to manage work across multiple compute resources.

Firstly, I do think OpenMP covers multiple compute resources, as long as there is a networked memory abstraction, or compiler support (LLVM).
https://stackoverflow.com/questions/13475838/openmp-program-on-different-hosts
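
There is also the target construct (since OpenMP 4.0) for offloading to accelerator devices, with explicit data mapping. A rough sketch (whether anything actually runs off-host depends on compiler and runtime support):

```c
#include <stdio.h>

int main(void)
{
    double a[1000];
    for (int i = 0; i < 1000; ++i)
        a[i] = i;

    /* Offload the loop to a device (e.g. a GPU) if one is available;
     * "map" describes the data movement to and from device memory. */
    #pragma omp target map(tofrom: a)
    #pragma omp teams distribute parallel for
    for (int i = 0; i < 1000; ++i)
        a[i] *= 2.0;

    printf("%f\n", a[999]);
    return 0;
}
```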

Secondly, I don't see why an OpenMP task couldn't equally interact with multiple compute resources in a similar way. It is not as if, with your solution, a structured code block is sent to other compute resources and magically executed there, right?

It all boils down to presenting the fork-join model nicely and safely; the rest is your code doing whatever it likes.

Or maybe I missed something.

-Mark

October 14, 2022 - Sebastiaan Koppe

On Thursday, 13 October 2022 at 19:54:31 UTC, Markk wrote:

> On Thursday, 13 October 2022 at 08:04:46 UTC, Sebastiaan Koppe wrote:
>
>> I haven't looked into OpenMP at all, ...
>
> Perhaps I should clarify that my question was not so much about the actual "manifestation" of OpenMP, but rather about the underlying concepts. A large part of your talk presents the benefits of structured programming over the "goto mess" as an analogy for the benefits of "Structured Concurrency" over anything else. It is on this conceptual level that I do not see any innovation over the OpenMP of 1997.
>
> I'm not so much talking about whether the syntax is fashionable, or whether the approach of a built-in compiler feature is the right design choice.

I see.

I don't know. But if I wanted to find out, I would look into how OpenMP supports cancellation, error handling, and/or composition of (custom) asynchronous algorithms.

retry is a good example that hits all three. It needs to be cancellable - so that it stops retrying and cancels any running task - and it needs to invoke (custom) retry logic whenever the underlying task errors, restarting it until it hits the retry limit, however that is defined.
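
Spelled out in plain C terms - hand-waving the asynchrony, which is precisely the hard part - the semantics are roughly:

```c
#include <stdatomic.h>

typedef int (*task_fn)(void *ctx);  /* runs the underlying task; 0 = success */

/* Toy synchronous version: restart the task whenever it errors, until it
 * succeeds, is cancelled, or hits the retry limit. All three concerns show
 * up even here: cancellation (the stop flag), error handling (the non-zero
 * return), and composition (the task itself is a parameter). */
int retry(task_fn task, void *ctx, int max_attempts, atomic_bool *stop)
{
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        if (atomic_load(stop))
            return -1;              /* cancelled: stop retrying */
        if (task(ctx) == 0)
            return 0;               /* success */
        /* the task errored: fall through into the retry logic */
    }
    return -2;                      /* retry limit hit */
}
```

The real thing additionally has to cancel a task that is already in flight, and do all of this without blocking a thread.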

How would one write such a thing using OpenMP, supposing it doesn't exist?

Obviously that doesn't invalidate your claim that these ideas are nothing new. However, I am approaching this much more from a practical standpoint and not as a CS historian.

To me it seems - from my brief googling - that the sender/receiver model is a lower-level abstraction of asynchronous computation and allows more fine-grained control.

I don't know whether that counts as a new idea or not.

When researching structured concurrency I found a book from the late '90s that mentioned unstructured concurrency. It nailed the definition.

So yes, these concepts were definitely around before. Funnily enough, though, the book mentioned nothing of structured concurrency.