Hi,
having watched the Structured Concurrency talk, and very likely missing something, I wondered if @Sebastiaan Koppe, or anybody else, has ever compared this to OpenMP. If I'm not mistaken (and after just having watched the very nice Intel tutorial to refresh my limited knowledge), OpenMP seems to use the same kind of scoping/structuring, and it allows composition.
So, again unless I'm missing something, I guess the terms "Structured Concurrency" and "Composability" would equally apply to the much older (1997) OpenMP solution, right? If true, then it was not missed for 30 years, as Walter Bright wondered during the Q&A 😉
Note: Most OpenMP examples just obsess about `for` loops and plain parallelism, but make no mistake, there is much more to it. One must also understand the `section` and `task` directives to grasp the full capabilities.
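To make that a bit more concrete, here is a minimal sketch (my own, not taken from the talk or the Intel tutorial) of the first two directive families in plain C; the function names and data are purely illustrative, and `task` is shown further below. Compile with e.g. `gcc -fopenmp`:

```c
#include <stdio.h>
#include <omp.h>

/* Data parallelism: the loop iterations are divided among the threads. */
void scale(double *a, int n, double s) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        a[i] *= s;
}

/* Functional decomposition: each section may run on a different thread. */
void independent_parts(void) {
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("part A on thread %d\n", omp_get_thread_num());

        #pragma omp section
        printf("part B on thread %d\n", omp_get_thread_num());
    }
}

int main(void) {
    double a[4] = {1, 2, 3, 4};
    scale(a, 4, 2.0);
    independent_parts();
    printf("a[0] = %g\n", a[0]);
    return 0;
}
```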
IMHO this is all extremely elegant:
- Parallelize a program without breaking the serial version.
- Going from this problem (and after a very inelegant manual detour), you get this solution using `task` (amazing!); a sketch of the idea follows below.
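The details behind those links may differ from what I sketch here, but the classic illustration of `task` is pointer chasing, which a plain parallel `for` cannot express. The list type and the `walk`/`process` names below are my own hypothetical ones; the point is that compiled without `-fopenmp` the pragmas are simply ignored and you keep the exact serial program, while with `-fopenmp` each node becomes a task the runtime schedules:

```c
#include <stdio.h>

typedef struct node {
    int value;
    struct node *next;
} node;

void process(node *p) {
    /* stand-in for real per-node work */
    printf("node %d\n", p->value);
}

/* Serial traversal when OpenMP is disabled; task-parallel traversal
   when it is enabled. The 'single' thread walks the list and spawns
   one task per node for the team to execute. */
void walk(node *head) {
    #pragma omp parallel
    #pragma omp single
    for (node *p = head; p != NULL; p = p->next) {
        #pragma omp task firstprivate(p)
        process(p);
    }
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    walk(&a);
    return 0;
}
```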
It seems to me that such a language-integrated and mostly declarative solution would be very "D"-ish. Declarative IMHO is the "right" way to go with the increasing NUMA characteristics of systems (also think of the problem of E- and P-cores). One must let the compiler/runtime decide the best threading/scheduling strategy on a given platform, otherwise the code will likely age badly, and quickly.
I really wonder if this is one of those famous cases where the same old concept is "reinvented" using different buzzwords? Just because a tech is old does not mean it is bad; au contraire, I would say! I sometimes wonder: are the supercomputing grandpas just too unfashionable for the cloud/web-kiddies to talk with?
-Mark