Thread overview
std.concurrency and efficient returns
Aug 01, 2010: Jonathan M Davis
Aug 01, 2010: Robert Jacques
Aug 01, 2010: Jonathan M Davis
Aug 02, 2010: Jonathan M Davis
Aug 02, 2010: Robert Jacques
Aug 01, 2010: dsimcha
Aug 01, 2010: awishformore
Aug 01, 2010: awishformore
Aug 01, 2010: Pelle
Aug 01, 2010: Robert Jacques
Aug 02, 2010: dsimcha
August 01, 2010
Okay. From what I can tell, a recurring pattern with threads is to spawn a thread, have it do some work, and then have it return the result and terminate. The appropriate way to do that seems to be to spawn the thread with the data that needs to be passed and then use send to send what would normally be the return value before the function (and therefore the spawned thread) terminates. I see two problems with this, both stemming from immutability.
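To make the pattern concrete, here is a minimal std.concurrency sketch of it, using spawn, send, and receiveOnly (the worker function and its payload are made up purely for illustration):

import std.concurrency;

// Child thread: does its work, then "returns" the result as a message just
// before the function (and therefore the thread) terminates.
void worker(Tid owner, immutable(int)[] data)
{
    int sum = 0;
    foreach (x; data)
        sum += x;
    send(owner, sum);
}

void main()
{
    // Everything passed to spawn has to be safe to share across threads
    // (e.g. immutable or a value type).
    immutable(int)[] data = [1, 2, 3, 4];
    spawn(&worker, thisTid, data);

    // The parent picks up what would normally be the return value.
    auto result = receiveOnly!int();
}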

1. _All_ of the arguments passed to spawn must be immutable. It's not that hard to be in a situation where you need to pass it arguments that the parent thread will never use, and it's highly probable that that data will have to be copied to make it immutable so that it can be passed. The result is that you're forced to make pointless copies. If you're passing a lot of data, that could be expensive.

2. _All_ of the arguments returned via send must be immutable. In the scenario that I'm describing here, the thread is going away after sending the message, so there's no way that it's going to do anything with the data, and having to copy it to make it immutable (as will likely have to be done) can be highly inefficient.

Is there a better way to do this? Or if not, can one be created? It seems to me that it would be highly desirable to be able to pass mutable reference types between threads where the receiving thread takes ownership of the object/array being passed. Due to D's threading model, a copy may still have to be done behind the scenes, but if you could pass mutable data across while transferring ownership, you could have at most one copy rather than the 2-3 copies that would otherwise take place when you're trying to send a mutable object across threads (one copy to make it immutable, possibly a copy of the immutable data from one thread's local storage to another's (though I'd hope that that wouldn't require a copy), and one copy on the other end to get mutable data back out of the immutable data). As it stands, it seems painfully inefficient to me when you're passing anything other than small amounts of data across.
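For what it's worth, the closest thing to such a hand-off that can be written today is the unchecked idiom of casting to immutable once you can guarantee that no mutable reference survives - which is exactly the guarantee a terminating thread can make. A sketch (names made up; std.exception.assumeUnique wraps essentially the same cast):

import std.concurrency;

// The worker builds its result mutably, then hands it off without a copy by
// casting to immutable right before sending. This is only sound because the
// thread terminates immediately afterwards, so no mutable alias survives.
void worker(Tid owner)
{
    int[] buf = new int[](1000);
    foreach (i, ref x; buf)
        x = cast(int) i * 2;             // stand-in for the real work

    immutable(int)[] result = cast(immutable) buf;
    buf = null;                          // give up the mutable reference
    send(owner, result);                 // no copy of the payload is made
}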

Also, this recurring pattern that I'm seeing makes me wonder whether it would be advantageous to have an addition to std.concurrency where you spawned a thread which returned a value when it was done (rather than having to use a send at the end of a void function), and the parent thread used a receive call of some kind to get the return value. Ideally, you could spawn a series of threads paired with the variables that their return values would be assigned to, and you could do it all as one function call.
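Roughly, the kind of helper being proposed could be sketched on top of the existing primitives like this (all names are hypothetical, error handling is ignored, and the function is assumed to return a non-void value that is itself safe to send):

import std.concurrency;

// Hypothetical helper: run fn(args) in a new thread and hand back a handle
// whose get() blocks until the child has sent its return value.
auto spawnWithResult(alias fn, Args...)(Args args)
{
    alias R = typeof(fn(args));

    static void run(Tid owner, Args args)
    {
        send(owner, fn(args));          // "return" the value, then terminate
    }

    spawn(&run, thisTid, args);

    static struct Result
    {
        R get() { return receiveOnly!R(); }
    }
    return Result();
}

// Hypothetical usage:
//     auto r = spawnWithResult!someExpensiveFunction(someImmutableData);
//     // ... do other work in the parent ...
//     auto value = r.get();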

Overall, I really like D's threading model, but it seems to me that it could be streamlined a bit.

- Jonathan M Davis
August 01, 2010
On Sun, 01 Aug 2010 06:24:18 -0400, Jonathan M Davis <jmdavisprog@gmail.com> wrote:
> Okay. From what I can tell, it seems to be a recurring pattern with threads that
> it's useful to spawn a thread, have it do some work, and then have it return the
> result and terminate. The appropriate way to do that seems to spawn the thread
> with the data that needs to be passed and then using send to send what would
> normally be the return value before the function (and therefore the spawned
> thread) terminates. I see 2 problems with this, both stemming from immutability.
>
> 1. _All_ of the arguments passed to spawn must be immutable. It's not that hard
> to be in a situation where you need to pass it arguments that the parent thread
> will never use, and it's highly probable that that data will have to be copied
> to make it immutable so that it can be passed. The result is that you're forced
> to make pointless copies. If you're passing a lot of data, that could be
> expensive.
>
> 2. _All_ of the arguments returned via send must be immutable. In the scenario
> that I'm describing here, the thread is going away after sending the message, so
> there's no way that it's going to do anything with the data, and having to copy
> it to make it immutable (as will likely have to be done) can be highly
> inefficient.
>
> Is there a better way to do this? Or if not, can one be created? It seems to me
> that it would be highly desirable to be able to pass mutable reference types
> between threads where the thread doing the receiving takes control of the
> object/array being passed. Due to D's threading model, a copy may still have to
> be done behind the scenes, but if you could pass mutable data across while
> passing ownership, you could have at most 1 copy rather than the 2 - 3 copies
> that would have to be taking place when you have a mutable object that you're
> trying to send across threads (so, one copy to make it immutable, possibly a
> copy from one thread local storage to another of the immutable data (though I'd
> hope that that wouldn't require a copy), and one copy on the other end to get
> mutable data from the immutable data). As it stands, it seems painfully
> inefficient to me when you're passing anything other than small amounts of data
> across.
>
> Also, this recurring pattern that I'm seeing makes me wonder if it would be
> advantageous to have an addition to std.concurrency where you spawned a thread
> which returned a value when it was done (rather than having to use a send with a
> void function), and the parent thread used a receive call of some kind to get
> the return value. Ideally, you could spawn a series of threads which were paired
> with the variables that their return values would be assigned to, and you could
> do it all as one function call.
>
> Overall, I really like D's threading model, but it seems to me that it could be
> streamlined a bit.
>
> - Jonathan M Davis

Hi Jonathan,
It sounds like what you really want is a task-based parallel programming library, as opposed to concurrent threads. I'd recommend Dave Simcha's parallelFuture library if you want to play around with this in D (http://www.dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d). However, parallelFuture is currently unsafe - you need to make sure that, logically speaking, the data the task is being passed is immutable. Shared/const/immutable delegates have been brought up before as a way to formalize the implicit assumptions of libraries like parallelFuture, but nothing has come of it yet.
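A minimal sketch of the task idiom in question (shown here with std.parallelism, the Phobos module that parallelFuture eventually evolved into; the computation is just a stand-in):

import std.parallelism;

int expensiveComputation(int n)
{
    int sum = 0;
    foreach (i; 0 .. n)
        sum += i;
    return sum;
}

void main()
{
    // Create a task, hand it to a worker thread, keep going, then block on
    // the result when it's actually needed.
    auto t = task!expensiveComputation(1_000_000);
    taskPool.put(t);                 // or t.executeInNewThread();
    // ... other work in the meantime ...
    int result = t.yieldForce;       // waits for and returns the task's result
}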
As for std.concurrency, immutability is definitely the correct way to go, even if it means extra copying: for most jobs the processing should greatly outweigh the cost of copying and thread initialization (though under the hood thread pools should help with the latter). A large amount of experience dictates that shared mutable data, let alone unprotected mutable data, is a bug waiting to happen.
On a more practical note, relaxing either 1) or 2) can cause major problems with certain modern GCs, so at a minimum casts should be involved.
August 01, 2010
== Quote from Jonathan M Davis (jmdavisprog@gmail.com)'s article
> Okay. From what I can tell, it seems to be a recurring pattern with threads that it's useful to spawn a thread, have it do some work, and then have it return the result and terminate. The appropriate way to do that seems to spawn the thread with the data that needs to be passed and then using send to send what would normally be the return value before the function (and therefore the spawned thread) terminates. I see 2 problems with this, both stemming from immutability.

I think the bottom line is that D's threading model is designed to put safety and simplicity over performance and flexibility.  Given the number of bugs that are apparently generated when using threading for concurrency in large-scale software written by hordes of programmers, this may be a reasonable tradeoff.

Within the message-passing model, one thing that would help a lot is a Unique type that can be implicitly and destructively converted to immutable or shared.  In D as it stands right now, immutable is basically useless in all but the simplest cases because it's just too hard to build complex immutable data structures, especially if you want to avoid unnecessary copying or having to rely on casts and manually checked assumptions in at least small areas of the program.  In theory, immutable solves tons of problems, but in practice it solves very few.  While I don't understand shared that well, I guess a Unique type would help in creating shared data, too.
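A bare-bones sketch of what such a Unique type might look like for reference types such as arrays and class references (purely illustrative, not an existing API):

struct Unique(T)
{
    private T payload;

    @disable this(this);            // no copies, so the reference stays unique

    // Assumes the caller hands in the only reference to the payload;
    // a real Unique would need some way to enforce that.
    this(T p) { payload = p; }

    // Hand the payload out as immutable and forget it, so no mutable
    // reference survives inside the wrapper.
    immutable(T) release()
    {
        auto p = payload;
        payload = null;
        return cast(immutable) p;
    }
}

// Hypothetical usage:
//     auto u = Unique!(int[])(new int[](100));
//     send(tid, u.release());     // no copy, and u no longer refers to the data

The point being made above is that the conversion should be implicit and compiler-checked; a sketch like this still leans on an unchecked cast under the hood, which is exactly the gap.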

There are two reasons for using multithreading:  Parallelism (using multiple cores to increase throughput) and concurrency (making things appear to be happening simultaneously to decrease latency; this makes sense even on a single-core machine).  One may come as a side effect of the other, but usually only one is the goal.  It sounds like you're looking for parallelism.  When using threading for parallelism as opposed to concurrency, this tradeoff of flexibility and performance in exchange for simplicity and safety doesn't work so well because:

1.  When using threading for parallelism instead of concurrency, it's reasonable to do some unsafe stuff to get better performance, since performance is the whole point anyhow.

2.  Unlike the concurrency case, the parallelism case usually occurs only in small hotspots of a program, or in small scientific computing programs.  In these cases it's not that hard for the programmer to manually track what's shared, etc.

3.  In my experience at least, parallelism often requires finer grained communication between threads than concurrency.  For example, an OS timeslice is about 15 milliseconds, meaning that on single core machines threads being used for concurrency simply can't communicate more often than that.  I've written useful parallel code that scaled to at least 4 cores and required communication between threads several times per millisecond.  It could have been written more efficiently w.r.t. communication between threads, but it would have required a lot more memory allocations and been less efficient in other respects.

While I completely agree that message passing should be D's **flagship** threading model because it's been proven to work well in a lot of cases, I'm not sure if it should be the **only** one well-supported out of the box because it's just too inflexible when you want pull-out-all-stops parallelism.  As Robert Jacques mentioned, I've been working on a parallelism library.  The code is at:

http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d

The docs are at:

http://cis.jhu.edu/~dsimcha/parallelFuture.html

I've been thinking lately about how to integrate this into the new threading
model, as it's currently completely unsafe, doesn't use shared at all, and was
written before the new threading model was implemented.  (core.thread still takes
an unshared delegate).  I think before we can solve the problems you've brought
up, we need to clarify how non-message passing based multithreading (i.e. using
shared) is going to work in D, as right now it is completely unclear at least to me.
August 01, 2010
On 01/08/2010 19:17, dsimcha wrote:
> == Quote from Jonathan M Davis (jmdavisprog@gmail.com)'s article
>> Okay. From what I can tell, it seems to be a recurring pattern with threads that
>> it's useful to spawn a thread, have it do some work, and then have it return the
>> result and terminate. The appropriate way to do that seems to spawn the thread
>> with the data that needs to be passed and then using send to send what would
>> normally be the return value before the function (and therefore the spawned
>> thread) terminates. I see 2 problems with this, both stemming from immutability.
>
> I think the bottom line is that D's threading model is designed to put safety and
> simplicity over performance and flexibility.  Given the amount of bugs that are
> apparently generated when using threading for concurrency in large-scale software
> written by hordes of programmers, this may be a reasonable tradeoff.
>
> Within the message-passing model, one thing that would help a lot is a Unique type
> that can be implicitly and destructively converted to immutable or shared.  In D
> as it stands right now, immutable is basically useless in all but the simplest
> cases because it's just too hard to build complex immutable data structures,
> especially if you want to avoid unnecessary copying or having to rely on casts and
> manually checked assumptions in at least small areas of the program.  In theory,
> immutable solves tons of problems, but in practice it solves very few.  While I
> don't understand shared that well, I guess a Unique type would help in creating
> shared data, too.
>
> There are two reasons for using multithreading:  Parallelism (using multiple cores
> to increase throughput) and concurrency (making things appear to be happening
> simultaneously to decrease latency; this makes sense even on a single-core
> machine).  One may come as a side effect of the other, but usually only one is the
> goal.  It sounds like you're looking for parallelism.  When using threading for
> parallelism as opposed to concurrency, this tradeoff of simplicity and safety in
> exchange for flexibility and performance doesn't work so well because:
>
> 1.  When using threading for parallelism instead of concurrency, it's reasonable
> to do some unsafe stuff to get better performance, since performance is the whole
> point anyhow.
>
> 2.  Unlike the concurrency case, the parallelism case usually occurs only in small
> hotspots of a program, or in small scientific computing programs.  In these cases
> it's not that hard for the programmer to manually track what's shared, etc.
>
> 3.  In my experience at least, parallelism often requires finer grained
> communication between threads than concurrency.  For example, an OS timeslice is
> about 15 milliseconds, meaning that on single core machines threads being used for
> concurrency simply can't communicate more often than that.  I've written useful
> parallel code that scaled to at least 4 cores and required communication between
> threads several times per millisecond.  It could have been written more
> efficiently w.r.t. communication between threads, but it would have required a lot
> more memory allocations and been less efficient in other respects.
>
> While I completely agree that message passing should be D's **flagship** threading
> model because it's been proven to work well in a lot of cases, I'm not sure if it
> should be the **only** one well-supported out of the box because it's just too
> inflexible when you want pull-out-all-stops parallelism.  As Robert Jacques
> mentioned, I've been working on a parallelism library.  The code is at:
>
> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
>
> The docs are at:
>
> http://cis.jhu.edu/~dsimcha/parallelFuture.html
>
> I've been thinking lately about how to integrate this into the new threading
> model, as it's currently completely unsafe, doesn't use shared at all, and was
> written before the new threading model was implemented.  (core.thread still takes
> an unshared delegate).  I think before we can solve the problems you've brought
> up, we need to clarify how non-message passing based multithreading (i.e. using
> shared) is going to work in D, as right now it is completely unclear at least to me.

I completely agree with everything you said, and I really dislike how D2 currently seems to virtually impose an application architecture based on the message passing model unless you're willing to circumvent, and thus break, the entire type system. While I do agree that message passing makes a lot of sense as the default choice, there also has to be well-thought-out and extensive support for the shared memory model if D2 is really focusing on the concurrency issue as much as it claims.

Personally, I've found hybrid architectures where both models are combined as needed to be the most flexible and best-performing approach, and there is no way a language touted as a systems language should impose one model over the other and stop the programmer from doing things the way he wants.

/Max
August 01, 2010
On 01/08/2010 21:25, awishformore wrote:
> On 01/08/2010 19:17, dsimcha wrote:
>> [...]
>
> I completely agree with everything you said and I really dislike how D2
> currently seems to virtually impose an application architecture based on
> the message passing model if you don't want to circumvent and thus break
> the entire type system. While I do agree that message passing makes a
> lot of sense as the default choice, there also has to be well
> thought-out and extensive support for the shared memory model if D2 is
> really focusing on the concurrency issue as much as it claims.
>
> Personally, I've found hybrid architectures where both models are
> combined as needed to be the most flexible and best performing approach
> and there is no way a language touted to be a systems language should
> impose one model over the other and stop the programmer from doing
> things the way he wants.
>
> /Max

P.S.: I find this to be especially true when taking into account the pragmatic approach under which D is supposed to be designed. D2 sounds a lot more idealistic than pragmatic, especially when it comes to concurrency, and I find that to be a very worrisome development.

/Max
August 01, 2010
On 08/01/2010 09:28 PM, awishformore wrote:
> On 01/08/2010 21:25, awishformore wrote:
>> [...]
>
> P.S.: I find this to be especially true when taking into account the
> pragmatic approach under which D is supposed to be designed. D2 sounds a
> lot more idealistic than pragmatic, especially when it comes to
> concurrency, and I find that to be a very worrisome development.
>
> /Max

import core.thread;

You don't have to use the message passing interface if you don't want to.
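A minimal sketch of going that route (plain core.thread plus atomics, no message passing; the names are illustrative):

import core.thread;
import core.atomic;

shared int hits;                     // explicitly shared mutable state

void worker()
{
    foreach (i; 0 .. 1_000)
        atomicOp!"+="(hits, 1);      // guard the shared counter with atomics
}

void main()
{
    auto t = new Thread(&worker);
    t.start();
    // ... the main thread can keep doing its own work here ...
    t.join();                        // wait for the worker to finish
}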
August 01, 2010
On Sun, 01 Aug 2010 16:02:43 -0400, Pelle <pelle.mansson@gmail.com> wrote:
> On 08/01/2010 09:28 PM, awishformore wrote:
>
> import core.thread;
>
> You don't have to use the message passing interface if you don't want to.

Or use shared classes; you can pass those around too.
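For instance (a sketch with a made-up class), a shared class instance can be handed straight to a spawned thread:

import core.atomic;
import std.concurrency;

// A class meant to be used as shared: its methods are marked shared and guard
// the mutable state with atomics.
class Counter
{
    private shared int n;

    void inc() shared { atomicOp!"+="(n, 1); }
    int  get() shared { return atomicLoad(n); }
}

void worker(shared Counter c)
{
    foreach (i; 0 .. 1_000)
        c.inc();
}

void main()
{
    auto c = new shared(Counter);
    spawn(&worker, c);               // shared references are allowed through spawn
    // ... c.get() can be read from here while the worker runs ...
}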
August 01, 2010
On Sunday 01 August 2010 08:55:54 Robert Jacques wrote:
> Hi Jonathan,
> It sounds like what you really want is a task-based parallel programming
> library, as opposed to concurrent thread. I'd recommend Dave Simcha's
> parallelFuture library if you want to play around with this in D
> (http://www.dsource.org/projects/scrapple/browser/trunk/parallelFuture/para
> llelFuture.d). However, parallelFuture is currently unsafe - you need to
> make sure that logically speaking that data the task is being passed is
> immutable. Shared/const/immutable delegates have been brought up before as
> a way to formalize the implicit assumptions of libraries like
> parallelFuture, but nothing has come of it yet.
> As for std.concurrency, immutability is definitely the correct way to go,
> even if it means extra copying: for most jobs the processing should
> greatly outweigh the cost of copying and thread initialization (though
> under the hood thread pools should help with the latter). A large amount
> of experience dictates that shared mutable data, let alone unprotected
> mutable data, is a bug waiting to happen.
> On a more practical note, relaxing either 1) or 2) can cause major
> problems with certain modern GCs, so at a minimum casts should be involved.

I totally agree that, for the most part, message passing as-is is a very good idea. It's just that there are cases where it would be desirable to actually hand over data, so that the thread receiving the data owns it and it no longer exists in the sending thread. I'm not sure that that's possible in the general case, as nice as it would be. However, in my specific case - effectively returning data upon thread termination - I should think that it would be entirely possible, since the sending thread is terminating. That would require some extra functions in std.concurrency, however, rather than using receive() and send() as we have them.

In any case, I'll have to look at Dave Simcha's parallelism library. Thanks for the info.

- Jonathan M Davis
August 02, 2010
FWIW, I posted an enhancement request on the subject:

http://d.puremagic.com/issues/show_bug.cgi?id=4566

- Jonathan M Davis

August 02, 2010
On Sun, 01 Aug 2010 19:22:10 -0400, Jonathan M Davis <jmdavisprog@gmail.com> wrote:

> On Sunday 01 August 2010 08:55:54 Robert Jacques wrote:
>> [...]
>
> I totally agree that for the most part, the message passing as-is is a very good
> idea. It's just that there are cases where it would be desirable to actually
> hand over data, so that the thread receiving the data owns it, and it doesn't
> exist anymore in the sending thread. I'm not sure that that's possible in the
> general case as nice as it would be.

Oh, sorry, I missed that point. That has been seriously discussed before under the moniker of a 'unique'/'mobile' type. You might want to look up the dmd-concurrency mailing list or Bartosz's old blog posts at bartoszmilewski.wordpress.com. If I recall correctly, there were some plans to support a library unique struct in std.concurrency. However, Walter found that trying to fit it into the type system as a whole was too complex, so the concept is being left to the simpler library solution.