February 04, 2008
Sean Kelly Wrote:
> This is basically how futures work.  It's a pretty useful approach.

Agreed.  Steve Dekorte has been working with them for a long time and integrated them into his Io language.  He found he could regularly get performance comparable to Apache even in a pure OO framework (where even Number is an object!) just because his parallelization was better.

Personally, though, I believe the best way is to take advantage of atomic (lock-prefixed) instructions for *allocation* of memory.  Once memory is allocated, it's "yours" to do with as you please.  I haven't looked at this for a few months, but I remember seeing an allocator whose malloc and free used first-one-through compare-and-swap retry loops and had practically no overhead.

Regards,
Dan
February 04, 2008
Craig Black wrote:
> 
> "Daniel Lewis" <murpsoft@hotmail.com> wrote in message news:fo5vdf$2q2e$1@digitalmars.com...
>> Craig Black Wrote:
>>> Walter also has said recently that he wants to implement automatic
>>> parallelization, and is working on features that will support this (const,
>>> invariant, pure).  I think Andrei is pushing this.  I have my doubts that
>>> this will be useful for most programs.  I think that to leverage this
>>> automatic parallelization, you will have to code in a functional style, or
>>> build your application using pure functions.  Granularity will also probably
>>> be an issue.  Because of these drawbacks, automatic parallelization may not
>>> be so automatic, but may require careful programming, just like manual
>>> parallelization.  But maybe I'm wrong and it will be the greatest thing
>>> ever.
>>>
>>> -Craig
>>>
>>
>> Craig, I'm not sure if you noticed that AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers.  Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.
>>
> 
> Yes, everything is going multi-threaded and multi-core.  Any feature that aids programmers in writing multi-threaded software is a plus.  However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading and parallelize it.
> 
>> D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed.  That doesn't require any more work than deciding whether something should be constant, and then making it compile.
> 
> Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit.  Thus, careful consideration will be required to leverage automatic parallelization.

I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates.

I was wondering because I work with a highly event-driven application in C# that might benefit from automatic parallelization, though some event subscribers probably modify data that they don't own.
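
For what it's worth, here's the shape I have in mind -- purely a sketch, since 'pure' on delegate types is speculative at this point:

void demo() {
    int delegate(int) pure p = delegate (int x) pure { return x * x; };
    int delegate(int) q = p;         // fine: pure makes strictly stronger promises
    // int delegate(int) pure r = q; // should be an error: q promises nothing
}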
February 04, 2008
Sean Kelly Wrote:

> Bedros Hanounik wrote:
> > I think the best way to tackle concurrency is to have two types of functions
> > 
> > blocking functions (like in the old sequential code execution)
> > 
> > and non-blocking functions (the new parallel code execution)
> > 
> > for non-blocking functions, the function returns an additional value which is true when function execution is completed
> 
> This is basically how futures work.  It's a pretty useful approach.
> 
> 
> Sean


I've never heard of that.  Does anyone have a good link for extra detail on futures?

February 04, 2008
Jason House wrote:
> Sean Kelly Wrote:
> 
>> Bedros Hanounik wrote:
>>> I think the best way to tackle concurrency is to have two types of functions
>>>
>>> blocking functions (like in the old sequential code execution)
>>>
>>> and non-blocking functions (the new parallel code execution)
>>>
>>> for non-blocking functions, the function returns an additional value which is true when function execution is completed
>> This is basically how futures work.  It's a pretty useful approach.
>>
>>
>> Sean
> 
> 
> I've never heard of that.  Does anyone have a good link for extra detail on futures?
> 
Basically, it comes down to a function that takes a delegate dg and runs it on a thread pool, returning a wrapper object.
The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value.  This value is then returned by the wrapper, and cached for later calls.
The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)

 scrapple.tools' ThreadPool class has a futures implementation.
 Here's an example:

auto t = new ThreadPool(2);                      // pool with two worker threads
auto f = t.future(&do_complicated_calculation);  // both calculations start
auto g = t.future(&do_complicated_calculation2); // running in the background
return f() + g();                                // each call blocks until its result is ready

 --downs
February 04, 2008
Daniel Lewis wrote:
> Sean Kelly Wrote:
>> This is basically how futures work.  It's a pretty useful approach.
> 
> Agreed.  Steve Dekorte has been working with them for a long time and integrated them into his Io language.  He found he could regularly get performance comparable to Apache even in a pure OO framework (where even Number is an object!) just because his parallelization was better.
> 
> Personally, though, I believe the best way is to take advantage of atomic (lock-prefixed) instructions for *allocation* of memory.  Once memory is allocated, it's "yours" to do with as you please.  I haven't looked at this for a few months, but I remember seeing an allocator whose malloc and free used first-one-through compare-and-swap retry loops and had practically no overhead.

Actually, it's entirely possible to do lock-free allocation and deletion.  HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap.  A GC could do basically the same thing, but collections would be a bit more complex.  I've considered writing such a GC, but it's an involved project and I simply don't have the time.
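
For concreteness, here's a sketch of the lock-free free() idea -- the names are mine, not HOARD's, and I'm using druntime's core.atomic cas; a production allocator would also need ABA protection (e.g. a version counter packed into the head):

import core.atomic;

struct Block { Block* next; }

shared Block* freeList;  // head of the per-heap free list

void lockFreeFree(Block* b) {
    for (;;) {
        auto old = cast(Block*) atomicLoad(freeList);
        b.next = old;                       // link block to the current head
        if (cas(&freeList, cast(shared Block*) old, cast(shared Block*) b))
            return;                         // swap succeeded, no lock taken
        // another thread moved the head first -- retry
    }
}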


Sean
February 04, 2008
Jason House wrote:
> Sean Kelly Wrote:
> 
>> Bedros Hanounik wrote:
>>> I think the best way to tackle concurrency is to have two types of functions
>>>
>>> blocking functions (like in the old sequential code execution)
>>>
>>> and non-blocking functions (the new parallel code execution)
>>>
>>> for non-blocking functions, the function returns an additional value which is true when function execution is completed
>> This is basically how futures work.  It's a pretty useful approach.
> 
> 
> I've never heard of that.  Does anyone have a good link for extra detail on futures?

Futures are basically Herb Sutter's rehashing of Hoare's CSP model. Here's a presentation of his where he talks about it:

http://irbseminars.intel-research.net/HerbSutter.pdf


Sean
February 04, 2008
downs wrote:
> Jason House wrote:
>> I've never heard of that.  Does anyone have a good link for extra detail on futures?
>>
> Basically, it comes down to a function that takes a delegate dg, and runs it on a threadpool, returning a wrapper object.
> The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached.
> The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)

… while Sean Kelly wrote:
> Futures are basically Herb Sutter's rehashing of Hoare's CSP model.

More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted.  (Generally, channels are comparable to UNIX pipes and can transmit many data.)

Russ Cox has a nice introduction to channel/thread programming at <http://swtch.com/~rsc/talks/threads07> and an overview of the field at <http://swtch.com/~rsc/thread>.
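
To make the pipe analogy concrete, here's a toy sketch in the style of std.concurrency mailboxes (illustrative names, not a real CSP library):

import std.concurrency;

void producer(Tid sink) {
    foreach (i; 0 .. 5)
        send(sink, i);  // a channel can carry a whole stream of values...
    send(sink, -1);     // ...whereas a future delivers exactly one
}

void main() {
    spawn(&producer, thisTid);
    for (int v; (v = receiveOnly!int()) != -1; ) {
        // consume v
    }
}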

--Joel
February 04, 2008
Joel C. Salomon wrote:
> downs wrote:
>> Jason House wrote:
>>> I've never heard of that.  Does anyone have a good link for extra detail on futures?
>>>
>> Basically, it comes down to a function that takes a delegate dg, and runs it on a threadpool, returning a wrapper object.
>> The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached.
>> The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)
> 
> … while Sean Kelly wrote:
>> Futures are basically Herb Sutter's rehashing of Hoare's CSP model.
> 
> More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted.  (Generally, channels are comparable to UNIX pipes and can transmit many data.)
> 

Heh.
Funny coincidence.
Let's take a look at the implementation of Future(T):

  class Future(T) {
    T res; bool done;
    MessageChannel!(T) channel;   // carries exactly one value
    this() { New(channel); }
    // First call blocks on the channel; later calls return the cached result.
    T eval() { if (!done) { res = channel.get(); done = true; } return res; }
    alias eval opCall;            // lets you write f() instead of f.eval()
    // True once a result is available (or has already been consumed).
    bool finished() { return done || channel.canGet; }
  }


:)

 --downs
February 04, 2008
"Christopher Wright" <dhasenan@gmail.com> wrote in message news:fo74ij$2asd$1@digitalmars.com...
> Craig Black wrote:
>>
>> "Daniel Lewis" <murpsoft@hotmail.com> wrote in message news:fo5vdf$2q2e$1@digitalmars.com...
>>> Craig Black Wrote:
>>>> Walter also has said recently that he wants to implement automatic
>>>> parallelization, and is working on features that will support this
>>>> (const,
>>>> invariant, pure).  I think Andrei is pushing this.  I have my doubts
>>>> that
>>>> this will be useful for most programs.  I think that to leverage this
>>>> automatic parallelization, you will have to code in a functional style,
>>>> or
>>>> build your application using pure functions.  Granularity will also
>>>> probably
>>>> be an issue.  Because of these drawbacks, automatic parallelization may
>>>> not
>>>> be so automatic, but may require careful programming, just like manual
>>>> parallelization.  But maybe I'm wrong and it will be the greatest thing
>>>> ever.
>>>>
>>>> -Craig
>>>>
>>>
>>> Craig, I'm not sure if you noticed that AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers.  Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.
>>>
>>
>> Yes, everything is going multi-threaded and multi-core.  Any feature that aids programmers in writing multi-threaded software is a plus.  However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading and parallelize it.
>>
>>> D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed.  That doesn't require any more work than deciding whether something should be constant, and then making it compile.
>>
>> Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit.  Thus, careful consideration will be required to leverage automatic parallelization.
>
> I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates.

Good question.  Yes, it would seem necessary for delegates to be marked pure or non-pure.  And I agree: pure should convert implicitly to non-pure, but not vice versa.

> I was wondering because I work with a highly event-driven application in C# that might benefit from automatic parallelization, though some event subscribers probably modify data that they don't own.

In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues.  However, it would require that each event know whether it is "pure" or not, so that it is placed on the appropriate queue.
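
Roughly what I'm picturing -- a sketch only, with stand-in names and std.parallelism doing the fan-out:

import std.parallelism;

alias Event = void delegate();

Event[] pureQueue;    // handlers that touch no shared state: any order, any core
Event[] impureQueue;  // everything else: one thread, original order

void drain() {
    foreach (ev; parallel(pureQueue))  // safe to run concurrently
        ev();
    foreach (ev; impureQueue)          // preserve sequential semantics
        ev();
    pureQueue.length = 0;
    impureQueue.length = 0;
}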

-Craig


February 05, 2008
Craig Black wrote:
> In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues.  However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue.

A static if or two in the event broker would solve it. There would be a method (using std.traits to test the delegate's purity):

import std.traits;

void subscribe (T)(EventTopic topic, T dg) {
   static assert (is (T == delegate));
   static if (functionAttributes!T & FunctionAttribute.pure_) {
      // add to the pure event subscribers for auto parallelization
   } else {
      // add to the impure ones
   }
}

> -Craig