Thread overview
"Almost there" version of TDPL updated on Safari Rough Cuts
Dec 10, 2009: bearophile
Dec 10, 2009: Sean Kelly
Dec 10, 2009: dsimcha
Dec 10, 2009: Sean Kelly
Dec 11, 2009: dsimcha
Dec 11, 2009: Sean Kelly
Dec 11, 2009: Sean Kelly
Dec 11, 2009: dsimcha
Dec 11, 2009: bearophile
Dec 11, 2009: Sean Kelly
December 10, 2009
http://my.safaribooksonline.com/roughcuts

The current version includes virtually the entire book except (a) overloaded operators, (b) qualifiers, (c) threads. In the meantime I have finished the new design and written the chapter on overloaded operators. The design got Walter's seal of approval but I'm still waiting for Don's.

I plan to write the chapter on qualifiers in the next few days, mostly on the plane to and from Quebec. Hopefully Walter and I will zero in on a solution to code duplication due to qualifiers, probably starting from Steven Schveighoffer's proposal.

I'll then have one month to design a small but compelling core concurrency framework together with Walter, Sean, and whoever wants to participate. The initial framework will emphasize de facto isolation between threads and message passing. It will build on an Erlang-inspired message passing design defined and implemented by Sean.


Andrei
December 10, 2009
Andrei Alexandrescu:

> The initial framework will emphasize de facto isolation between threads and message passing. It will build on an Erlang-inspired message passing design defined and implemented by Sean.

Sounds good. When you have 30+ CPU cores you don't want shared memory; in those situations message passing (actors, agents, etc.) seems better :-)

Bye,
bearophile
December 10, 2009
Andrei Alexandrescu Wrote:

> http://my.safaribooksonline.com/roughcuts
> 
> The current version includes virtually the entire book except (a) overloaded operators, (b) qualifiers, (c) threads. In the meantime I have finished the new design and wrote the chapter on overloaded operators. The design got Walter's seal of approval but I'm still waiting for Don's.
> 
> I plan to write the chapter on qualifiers in the next few days, mostly on the plane to and from Quebec. Hopefully Walter and I will zero in on a solution to code duplication due to qualifiers, probably starting from Steven Schveighoffer's proposal.
> 
> I'll then have one month to design a small but compelling core concurrency framework together with Walter, Sean, and whomever would want to participate. The initial framework will emphasize de facto isolation between threads and message passing. It will build on an Erlang-inspired message passing design defined and implemented by Sean.
> 
> 
> Andrei


Are these threads going to be green, stackless threads? (I think they are actually recursive functions) Or is it mostly the share-nothing approach that you're bringing from Erlang, using system threads? More info please! :)

From my point of view, I also think this is the best approach to scalable concurrency.
December 10, 2009
Álvaro Castro-Castilla Wrote:

> Andrei Alexandrescu Wrote:
> 
> > http://my.safaribooksonline.com/roughcuts
> > 
> > The current version includes virtually the entire book except (a) overloaded operators, (b) qualifiers, (c) threads. In the meantime I have finished the new design and wrote the chapter on overloaded operators. The design got Walter's seal of approval but I'm still waiting for Don's.
> > 
> > I plan to write the chapter on qualifiers in the next few days, mostly on the plane to and from Quebec. Hopefully Walter and I will zero in on a solution to code duplication due to qualifiers, probably starting from Steven Schveighoffer's proposal.
> > 
> > I'll then have one month to design a small but compelling core concurrency framework together with Walter, Sean, and whomever would want to participate. The initial framework will emphasize de facto isolation between threads and message passing. It will build on an Erlang-inspired message passing design defined and implemented by Sean.
> 
> 
> Are these threads going to be green, stackless threads? (I think they are actually recursive functions)

Not initially, though that may happen later.  The default static storage class is thread-local, which would be confusing if the "thread" you're using shares static data with some other thread.  I'm pretty sure this could be fixed with some library work, but it isn't done right now.  In short, for now you're more likely to want to use a small number of threads than zillions like you would in Erlang.
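The thread-local default Sean mentions can be seen directly in D2 code. A minimal sketch (module-level variables are TLS unless marked `shared`; this is the language rule, shown here with assumed variable names):

```d
import core.thread;

int counter;              // module-level variables are thread-local by default in D2
shared int processWide;   // `shared` opts a variable back into one-per-process storage

void main()
{
    counter = 1;
    auto t = new Thread({
        // a new thread sees a fresh, default-initialized copy of `counter`
        assert(counter == 0);
        counter = 42;     // writes only this thread's copy
    });
    t.start();
    t.join();
    assert(counter == 1); // the main thread's copy is untouched
}
```

This is why green threads multiplexed onto one kernel thread would be confusing here: they would silently share what the type system presents as thread-local state.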

> Or is it mostly the share-nothing approach that you're bringing from Erlang, using system threads? More info please! :)

The share-nothing approach is the initial goal.  If green threads are used later it wouldn't change the programming model anyway, just the number of threads an app could use with reasonable performance.

> From my point of view, I also think this is the best approach to scalable concurrency.

Glad you agree :-)
December 10, 2009
== Quote from Sean Kelly (sean@invisibleduck.org)'s article
> Álvaro Castro-Castilla Wrote:
> > Andrei Alexandrescu Wrote:
> >
> > > http://my.safaribooksonline.com/roughcuts
> > >
> > > The current version includes virtually the entire book except (a)
> > > overloaded operators, (b) qualifiers, (c) threads. In the meantime I
> > > have finished the new design and wrote the chapter on overloaded
> > > operators. The design got Walter's seal of approval but I'm still
> > > waiting for Don's.
> > >
> > > I plan to write the chapter on qualifiers in the next few days, mostly on the plane to and from Quebec. Hopefully Walter and I will zero in on a solution to code duplication due to qualifiers, probably starting from Steven Schveighoffer's proposal.
> > >
> > > I'll then have one month to design a small but compelling core concurrency framework together with Walter, Sean, and whomever would want to participate. The initial framework will emphasize de facto isolation between threads and message passing. It will build on an Erlang-inspired message passing design defined and implemented by Sean.
> >
> >
> > Are these threads going to be green, stackless threads? (I think they are actually recursive functions)
> Not initially, though that may happen later.  The default static storage class is thread-local, which would be confusing if the "thread" you're using shares static data with some other thread.  I'm pretty sure this could be fixed with some library work, but it isn't done right now.  In short, for now you're more likely to want to use a small number of threads than zillions like you would in Erlang.
> > Or is it mostly the share-nothing approach that you're bringing from Erlang, using system threads? More info please! :)
> The share-nothing approach is the initial goal.  If green threads are used later it wouldn't change the programming model anyway, just the number of threads an app could use with reasonable performance.
> > From my point of view, I also think this is the best approach to scalable concurrency.
> Glad you agree :-)

This is great for super-scalable concurrency, the kind you need for things like servers, but what about the case where you need concurrency mostly to exploit data parallelism in a multicore environment?  Are we considering things like parallel foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do they fit together somehow?
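For reference, the kind of construct dsimcha describes here later shipped in Phobos as std.parallelism (it did not exist at the time of this thread). A minimal sketch of the usage:

```d
import std.parallelism;

void main()
{
    auto data = new double[](1000);
    // each loop body runs on a worker from a shared thread pool; the workers
    // implicitly share `data`, which is safe here only because every
    // iteration writes a distinct element -- the "cowboy" sharing discussed below
    foreach (i, ref x; parallel(data))
        x = i * 2.0;
    assert(data[10] == 20.0);
}
```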
December 10, 2009
dsimcha Wrote:
> 
> This is great for super-scalable concurrency, the kind you need for things like servers, but what about the case where you need concurrency mostly to exploit data parallelism in a multicore environment?  Are we considering things like parallel foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do they fit together somehow?

I think it probably depends on the relative efficiency of a message-passing approach compared to one using a thread pool for the small-N case (particularly for very large datasets).  If message passing can come close to the thread pool in performance then it's clearly preferable.  It may come down to whether pass by reference is allowed in some instances.  It's always possible to use casts to bypass checking and pass by reference anyway, but it would be nice if this weren't necessary.
December 11, 2009
== Quote from Sean Kelly (sean@invisibleduck.org)'s article
> dsimcha Wrote:
> >
> > This is great for super-scalable concurrency, the kind you need for things like servers, but what about the case where you need concurrency mostly to exploit data parallelism in a multicore environment?  Are we considering things like parallel foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do they fit together somehow?
> I think it probably depends on the relative efficiency of a message-passing approach compared to one using a thread pool for the small-N case (particularly for very large datasets).  If message passing can come close to the thread pool in performance then it's clearly preferable.  It may come down to whether pass by reference is allowed in some instances.  It's always possible to use casts to bypass checking and pass by reference anyway, but it would be nice if this weren't necessary.

What about simplicity?  Message passing is definitely safer.  Parallel foreach (the kind that allows implicit sharing of basically everything, including stack variables) is basically a cowboy approach that leaves all safety concerns to the programmer.  OTOH, parallel foreach is a very easy construct to understand and use in situations where you have data parallelism and you're doing things that are obviously (from the programmer's perspective) safe, even though they can't be statically proven safe (from the compiler's perspective).

Don't get me wrong, I definitely think message passing-style concurrency has its place.  It's just the wrong tool for the job if your goal is simply to exploit data parallelism to use as many cores as you can.
December 11, 2009
dsimcha Wrote:

> == Quote from Sean Kelly (sean@invisibleduck.org)'s article
> > dsimcha Wrote:
> > >
> > > This is great for super-scalable concurrency, the kind you need for things like servers, but what about the case where you need concurrency mostly to exploit data parallelism in a multicore environment?  Are we considering things like parallel foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do they fit together somehow?
> > I think it probably depends on the relative efficiency of a message-passing approach compared to one using a thread pool for the small-N case (particularly for very large datasets).  If message passing can come close to the thread pool in performance then it's clearly preferable.  It may come down to whether pass by reference is allowed in some instances.  It's always possible to use casts to bypass checking and pass by reference anyway, but it would be nice if this weren't necessary.
> 
> What about simplicity?  Message passing is definitely safer.  Parallel foreach (the kind that allows implicit sharing of basically everything, including stack variables) is basically a cowboy approach that leaves all safety concerns to the programmer.  OTOH, parallel foreach is a very easy construct to understand and use in situations where you have data parallelism and you're doing things that are obviously (from the programmer's perspective) safe, even though they can't be statically proven safe (from the compiler's perspective).

I'd like to think it isn't necessary to expose the internals of the algorithm to the user.  Parallel foreach (or map, since they're the same thing) could as easily divide up the dataset and send slices to worker threads via messages as with a more visible threading model.  The only issue I can think of is that for very large datasets you really have to pass references of one kind or another.  Scatter/gather with copying just isn't feasible when you're at the limits of virtual memory.
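The slicing idea Sean describes can be sketched against the message-passing API that his design eventually became (std.concurrency; hypothetical at the time of this post). Immutable slices cross threads as references, so the dataset itself is never copied:

```d
import std.concurrency;

// each worker sums its slice and mails the partial result back to the owner
void worker(Tid owner, immutable(double)[] slice)
{
    double sum = 0;
    foreach (x; slice) sum += x;
    send(owner, sum);
}

void main()
{
    immutable(double)[] data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
    auto mid = data.length / 2;
    // sending immutable slices transfers a reference, not a copy of the data
    spawn(&worker, thisTid, data[0 .. mid]);
    spawn(&worker, thisTid, data[mid .. $]);
    auto total = receiveOnly!double + receiveOnly!double;
    assert(total == 36.0);
}
```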
December 11, 2009
Sean Kelly Wrote:

> dsimcha Wrote:
> 
> > == Quote from Sean Kelly (sean@invisibleduck.org)'s article
> > > dsimcha Wrote:
> > > >
> > > > This is great for super-scalable concurrency, the kind you need for things like servers, but what about the case where you need concurrency mostly to exploit data parallelism in a multicore environment?  Are we considering things like parallel foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do they fit together somehow?
> > > I think it probably depends on the relative efficiency of a message-passing approach compared to one using a thread pool for the small-N case (particularly for very large datasets).  If message passing can come close to the thread pool in performance then it's clearly preferable.  It may come down to whether pass by reference is allowed in some instances.  It's always possible to use casts to bypass checking and pass by reference anyway, but it would be nice if this weren't necessary.
> > 
> > What about simplicity?  Message passing is definitely safer.  Parallel foreach (the kind that allows implicit sharing of basically everything, including stack variables) is basically a cowboy approach that leaves all safety concerns to the programmer.  OTOH, parallel foreach is a very easy construct to understand and use in situations where you have data parallelism and you're doing things that are obviously (from the programmer's perspective) safe, even though they can't be statically proven safe (from the compiler's perspective).
> 
> I'd like to think it isn't necessary to expose the internals of the algorithm to the user.  Parallel foreach (or map, since they're the same thing) could as easily divide up the dataset and send slices to worker threads via messages as with a more visible threading model.  The only issue I can think of is that for very large datasets you really have to pass references of one kind or another.  Scatter/gather with copying just isn't feasible when you're at the limits of virtual memory.

Language extensions for message passing, such as Kilim for Java, send messages by giving away ownership of the data rather than copying it. That's one reason compiler/runtime support is needed.
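The Kilim-style ownership handoff can be approximated in library code in D via immutability. A hedged sketch using `std.exception.assumeUnique` and the std.concurrency API that later shipped in Phobos (neither available when this was posted):

```d
import std.concurrency;
import std.exception : assumeUnique;

void worker(Tid owner, immutable(double)[] data)
{
    double s = 0;
    foreach (x; data) s += x;
    send(owner, s);
}

void main()
{
    auto buf = new double[](4);     // mutable buffer, filled in by this thread only
    foreach (i, ref x; buf) x = i + 1.0;

    // assumeUnique "gives away" the buffer: it nulls the mutable reference
    // and returns the same memory typed immutable, so no copy is made
    immutable(double)[] msg = assumeUnique(buf);
    assert(buf is null);

    spawn(&worker, thisTid, msg);   // the reference crosses threads copy-free
    assert(receiveOnly!double == 10.0);
}
```

The handoff is checked only by convention (`assumeUnique` trusts the caller), which is exactly the gap that compiler support would close.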

Also, parallel map/foreach is more feasible as a library-only solution, whereas message passing requires some support from the runtime environment.

Please, correct me if I'm wrong.

December 11, 2009
Álvaro Castro-Castilla Wrote:
> 
> Language extensions for message passing, such as Kilim for Java send messages giving away the ownership of data, not copying it. That's a reason for the need of compiler/runtime support.

Knowledge of unique ownership can obviate the need for copying, but copying is a reasonable fall-back in most cases.

> Also, parallel map/foreach is more feasible as a library-only solution, whether the message passing requires some support from the runtime environment.

It really depends on the language and what your goals are.  There are message passing libraries for C, for example, but they don't provide much in the way of safety.