October 23, 2009 Re: parallelFuture
Posted in reply to Tim Matthews

== Quote from Tim Matthews (tim.matthews7@gmail.com)'s article
> dsimcha wrote:
> > For now, parallelFuture was designed with a single producer, multiple worker model. Absolutely no attempt was made to allow for tasks running in the task pool to themselves submit jobs to the same task pool, because it would have made things more complicated and I couldn't think of any use cases.
> Like recursion, where a function needs to be able to call another function, a thread needs to be able to spawn threads and a task should be able to create new tasks. Making newly spawned tasks stay on the same thread is a good optimization. This shouldn't need a specific use case.
> > I designed the lib with
> > the types of use cases I encounter in my work in mind. (mathy, pure throughput
> > oriented computing on large, embarrassingly parallel problems.) If someone comes
> > up with a compelling use case, though, I'd certainly consider adding such
> > abilities provided they don't interfere with performance or API simplicity for the
> > more common cases.
> >
> > To make this discussion simple, let's define F1 as a future/task submitted by the main producer thread, and F2 as a task/future submitted by F1. The queue is (for now) strictly FIFO, except that if you have a pointer to the Task/Future object you can steal a job. When F1 submits F2 to the queue, F2 goes to the back of the queue like anything else. This means when F1 waits on F2, it is possible to have a cyclical dependency (F1 waiting on F2, F2 waiting for a worker thread populated by F1). This is mitigated by work stealing (F1 may just steal F2 and do it in its own thread).
> I don't like that ^ way of simplifying the discussion, with the many F1s and F2s all over the place. I hope this video can help visualize some ideas: http://channel9.msdn.com/pdc2008/TL26/
> >
> > In parallel map and foreach, I should probably document this, but for now it's undefined behavior for the mapping function or parallel foreach loop body to submit jobs to the task pool and wait on them, and in practice will likely result in deadlocks.
> You want to document that as undefined behavior? It can be made to work.
OK, I thought of a use case and an easy fix. It will probably be fixed soon.
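
A minimal sketch of the F1/F2 scenario described above, written against std.parallelism, the library parallelFuture later evolved into. The names used here (task, taskPool, yieldForce) are assumed stand-ins for parallelFuture's Task/Future API rather than its exact interface; the behavior the sketch relies on is the work stealing just described.

import std.parallelism;
import std.stdio;

int f2()
{
    // The nested job that F1 submits to the same pool.
    return 42;
}

int f1()
{
    // F1 submits F2 to the shared queue...
    auto t = task!f2();
    taskPool.put(t);

    // ...then waits on it. If no worker has picked F2 up yet, the waiting
    // thread executes it inline (steals it), which avoids the cycle of F1
    // waiting on F2 while F2 waits for a worker occupied by F1.
    return t.yieldForce;
}

void main()
{
    auto outer = task!f1();
    taskPool.put(outer);
    writeln(outer.yieldForce); // 42
}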

October 24, 2009 Re: parallelFuture
Posted in reply to Tim Matthews

== Quote from Tim Matthews (tim.matthews7@gmail.com)'s article
> dsimcha wrote:
> > For now, parallelFuture was designed with a single producer, multiple worker model. Absolutely no attempt was made to allow for tasks running in the task pool to themselves submit jobs to the same task pool, because it would have made things more complicated and I couldn't think of any use cases.
> Like recursion, where a function needs to be able to call another function, a thread needs to be able to spawn threads and a task should be able to create new tasks. Making newly spawned tasks stay on the same thread is a good optimization. This shouldn't need a specific use case.
> > I designed the lib with the types of use cases I encounter in my work in mind (mathy, pure throughput-oriented computing on large, embarrassingly parallel problems). If someone comes up with a compelling use case, though, I'd certainly consider adding such abilities provided they don't interfere with performance or API simplicity for the more common cases.
> >
> > To make this discussion simple, let's define F1 as a future/task submitted by the main producer thread, and F2 as a task/future submitted by F1. The queue is (for now) strictly FIFO, except that if you have a pointer to the Task/Future object you can steal a job. When F1 submits F2 to the queue, F2 goes to the back of the queue like anything else. This means when F1 waits on F2, it is possible to have a cyclical dependency (F1 waiting on F2, F2 waiting for a worker thread populated by F1). This is mitigated by work stealing (F1 may just steal F2 and do it in its own thread).
> I don't like that ^ way of simplifying the discussion, with the many F1s and F2s all over the place. I hope this video can help visualize some ideas: http://channel9.msdn.com/pdc2008/TL26/
> >
> > In parallel map and foreach, I should probably document this, but for now it's undefined behavior for the mapping function or parallel foreach loop body to submit jobs to the task pool and wait on them, and in practice will likely result in deadlocks.
> You want to document that as undefined behavior? It can be made to work.

OK, I added the ability for tasks to submit more tasks to the pool by implementing what I'd call a "selfish thread" model (not sure if there's a more formal name for it). Basically, a thread will steal any job it's waiting on if that job hasn't been started yet, no matter where the job was in the queue.

This was made somewhat difficult to implement because I was using lazy submission for parallel foreach and map, and recycling objects. This enables parallel foreach over random access ranges without generating any heap activity. It also enables parallel foreach over lazy forward ranges that might not even fit in memory if they were converted to arrays up front, without having to synchronize on every call to front(). Before each submission, a small portion of the range is converted to an array, and the memory used for this is recycled when the object is recycled.

However, the effort was worth it, as I thought of a killer use case: nested parallel foreach over matrices.

Again, code:

http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d

Docs:

http://cis.jhu.edu/~dsimcha/parallelFuture.html
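
A minimal sketch of that nested-matrix use case, again written against std.parallelism as a stand-in for the API linked above; the work unit sizes and the jagged double[][] layout are illustrative assumptions, not anything from the library itself.

import std.parallelism;

void scaleMatrix(double[][] m, double k)
{
    // Outer parallel foreach over rows. Because of the "selfish thread"
    // stealing described above, each loop body may itself submit work to
    // the same pool without deadlocking.
    foreach (row; taskPool.parallel(m, 1))
    {
        // Inner parallel foreach over the elements of one row.
        foreach (ref x; taskPool.parallel(row, 64))
        {
            x *= k;
        }
    }
}

void main()
{
    auto m = new double[][](100, 1_000);
    foreach (row; m)
        row[] = 1.0;

    scaleMatrix(m, 2.0);
    assert(m[0][0] == 2.0);
}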

October 24, 2009 Re: parallelFuture
Posted in reply to dsimcha

dsimcha wrote:
> Again, code:
>
> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
>
> Docs:
>
> http://cis.jhu.edu/~dsimcha/parallelFuture.html
What license is the library under?
Andrei

October 24, 2009 Re: parallelFuture
Posted in reply to Andrei Alexandrescu

== Quote from Andrei Alexandrescu (SeeWebsiteForEmail@erdani.org)'s article
> dsimcha wrote:
> > Again, code:
> >
> > http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
> >
> > Docs:
> >
> > http://cis.jhu.edu/~dsimcha/parallelFuture.html
> What license is the library under?
> Andrei

Boost. I borrowed a few lines of assembler for atomic ops from Tango, though. These snippets are "standard" atomic ops code and are probably not copyrightable. Nevertheless, this section of code is clearly marked, and D2/Phobos should eventually get a full-fledged atomics lib anyhow. At that point, these ad-hoc functions can be replaced with more generic functions and the issue will be completely resolved.
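
D2 did eventually grow such a library (core.atomic, in druntime). A minimal sketch of the kind of generic operations that could replace the borrowed assembler snippets; the calls below are core.atomic's, not parallelFuture's ad-hoc functions.

import core.atomic;
import std.stdio;

void main()
{
    shared int counter = 0;

    // Atomic read-modify-write, the kind of operation the borrowed
    // assembler currently implements by hand.
    atomicOp!"+="(counter, 1);

    // Compare-and-swap: writes 42 only if counter still holds 1.
    bool swapped = cas(&counter, 1, 42);

    writeln(atomicLoad(counter), " ", swapped); // 42 true
}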

October 24, 2009 Re: parallelFuture
Posted in reply to Andrei Alexandrescu

Andrei Alexandrescu wrote:
> dsimcha wrote:
>> Again, code:
>>
>> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
>>
>>
>> Docs:
>>
>> http://cis.jhu.edu/~dsimcha/parallelFuture.html
>
> What license is the library under?
>
> Andrei
Boost. I suppose you didn't want to look at the source in case the license wasn't sufficiently liberal, but it's easy enough to scan through the module's doc comments for a license block, without reading the source code.

October 24, 2009 Re: parallelFuture
Posted in reply to Christopher Wright

Christopher Wright wrote:
> Andrei Alexandrescu wrote:
>> dsimcha wrote:
>>> Again, code:
>>>
>>> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
>>>
>>>
>>> Docs:
>>>
>>> http://cis.jhu.edu/~dsimcha/parallelFuture.html
>>
>> What license is the library under?
>>
>> Andrei
>
> Boost. I suppose you didn't want to look at the source in case the license wasn't sufficiently liberal, but it's easy enough to scan through the module's doc comments for a license block, without reading the source code.
I actually did take a look but couldn't visually find the license block at the beginning or end of the code, sorry. Now I see it's a bit below the beginning.
Great - it would be cool if we could adapt this for Phobos. I'm hoping Sean (who is fleshing out his messaging API) and David could find ways to integrate their work.
Andrei

October 24, 2009 Re: parallelFuture
Posted in reply to Andrei Alexandrescu

== Quote from Andrei Alexandrescu (SeeWebsiteForEmail@erdani.org)'s article
> Christopher Wright wrote:
> > Andrei Alexandrescu wrote:
> >> dsimcha wrote:
> >>> Again, code:
> >>>
> >>> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
> >>>
> >>> Docs:
> >>>
> >>> http://cis.jhu.edu/~dsimcha/parallelFuture.html
> >> What license is the library under?
> >> Andrei
> > Boost. I suppose you didn't want to look at the source in case the license wasn't sufficiently liberal, but it's easy enough to scan through the module's doc comments for a license block, without reading the source code.
> I actually did take a look but couldn't visually find the license block at the beginning or end of the code, sorry. Now I see it's a bit below the beginning.
> Great - it would be cool if we could adapt this for Phobos. I'm hoping Sean (who is fleshing out his messaging API) and David could find ways to integrate their work.
> Andrei

If that's the case, someone please point me to some use cases for message passing so I can understand it better. parallelFuture was designed mostly with pure throughput-oriented multithreading of embarrassingly parallel tasks in mind. I hadn't given any thought to message passing because I don't encounter many use cases for it in the type of work I do.
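
For concreteness, one common use case is a producer/consumer pipeline in which threads hand values to each other instead of sharing state. A minimal sketch using the API Sean's work eventually became (std.concurrency); the names below are that library's and are shown only as an assumed illustration of the message-passing style, not anything in parallelFuture.

import std.concurrency;
import std.stdio;

// Consumer: receives work items until it gets the "stop" message.
void consumer()
{
    bool done = false;
    while (!done)
    {
        receive(
            (int n)    { writeln("processing ", n); },
            (string s) { if (s == "stop") done = true; }
        );
    }
}

void main()
{
    auto tid = spawn(&consumer);

    // Producer side: hand off work by message instead of sharing data.
    foreach (i; 0 .. 5)
        tid.send(i);
    tid.send("stop");
}

The appeal of the style is that the consumer owns its data and the only synchronization point is the mailbox; it suits coordination-heavy code rather than the throughput-oriented data parallelism parallelFuture targets.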

October 24, 2009 Re: parallelFuture
Posted in reply to Andrei Alexandrescu

Andrei Alexandrescu wrote:
> Christopher Wright wrote:
>> Andrei Alexandrescu wrote:
>>> dsimcha wrote:
>>>> Again, code:
>>>>
>>>> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
>>>>
>>>>
>>>> Docs:
>>>>
>>>> http://cis.jhu.edu/~dsimcha/parallelFuture.html
>>>
>>> What license is the library under?
>>>
>>> Andrei
>>
>> Boost. I suppose you didn't want to look at the source in case the license wasn't sufficiently liberal, but it's easy enough to scan through the module's doc comments for a license block, without reading the source code.
>
> I actually did take a look but couldn't visually find the license block at the beginning or end of the code, sorry. Now I see it's a bit below the beginning.
>
> Great - it would be cool if we could adapt this for Phobos. I'm hoping Sean (who is fleshing out his messaging API) and David could find ways to integrate their work.
I was actually going to suggest just that. :) I definitely think this deserves a place in Phobos. The parallel foreach alone covers a lot of use cases for multithreaded code in the simplest way.
-Lars