February 05, 2008
Guys,

take a look at the transactional memory concept; it's a very interesting way of locking (or should I say sharing) memory allocations.

http://en.wikipedia.org/wiki/Software_transactional_memory



-Bedros


Sean Kelly Wrote:

> Daniel Lewis wrote:
> > Sean Kelly Wrote:
> >> This is basically how futures work.  It's a pretty useful approach.
> > 
> > Agreed.  Steve Dekorte has been working with them for a long time and integrated them into his iolanguage.  He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better.
> > 
> > I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory.  Once memory is allocated, it's "yours" to do with as you please.  I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.
> 
> Actually, it's entirely possible to do lock-free allocation and deletion.  HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap.  A GC could do basically the same thing, but collections would be a bit more complex.  I've considered writing such a GC, but it's an involved project and I simply don't have the time.
> 
> 
> Sean

February 05, 2008
"Christopher Wright" <dhasenan@gmail.com> wrote in message news:fo8o62$2m1t$1@digitalmars.com...
> Craig Black wrote:
>> In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues.  However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue.
>
> A static if or two in the event broker would solve it. There would be a
> method:
> void subscribe (T)(EventTopic topic, T dg) {
>    static assert (is (T == delegate));
>    static if (is (T == pure)) {
>       // add to the pure event subscribers for auto parallelization
>    } else {
>       // add to the impure ones
>    }
> }
>
>> -Craig

It's not as fancy as using static if, but it might be simpler to use overloading (if the syntax will support it).

void subscribe(EventTopic topic, void delegate() del)  { ... }
void subscribe(EventTopic topic, pure void delegate() del) { ... }


February 05, 2008
There's also a presentation about how it might apply to D here:

http://s3.amazonaws.com/dconf2007/DSTM.ppt
http://www.relisoft.com/D/STM_pptx_files/v3_document.htm

Bedros Hanounik wrote:
> Guys,
> 
> take a look at transactional memory concept;  very interesting type of locking (or should I say sharing) of memory allocations.
> 
> http://en.wikipedia.org/wiki/Software_transactional_memory
> 
> 
> 
> -Bedros
> 
> 
> Sean Kelly Wrote:
> 
>> Daniel Lewis wrote:
>>> Sean Kelly Wrote:
>>>> This is basically how futures work.  It's a pretty useful approach.
>>> Agreed.  Steve Dekorte has been working with them for a long time and integrated them into his iolanguage.  He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better.
>>>
>>> I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory.  Once memory is allocated, it's "yours" to do with as you please.  I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.
>> Actually, it's entirely possible to do lock-free allocation and deletion.  HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap.  A GC could do basically the same thing, but collections would be a bit more complex.  I've considered writing such a GC, but it's an involved project and I simply don't have the time.
>>
>>
>> Sean
> 
February 09, 2008
How does garbage collection currently work in a multi-processor environment?

My plan is to only have one thread per processor in addition to the main thread. When GC runs, does it pause all threads on all processors or does it only pause threads on a per-processor basis?


Denton Cockburn Wrote:

> Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency.
> 
> Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?

February 10, 2008
Mike Koehmstedt wrote:
> How does garbage collection currently work in a multi-processor environment?
> 
> My plan is to only have one thread per processor in addition to the main thread. When GC runs, does it pause all threads on all processors or does it only pause threads on a per-processor basis?
> 
> 
> Denton Cockburn Wrote:
> 
>> Ok, Walter's said previously (I think) that he's going to wait to see what
>> C++ does in regards to multicore concurrency.
>>
>> Ignoring this for now, for fun, what ideas do you guys have regarding
>> multicore concurrency?
> 

It pauses all threads on all processors.