May 30, 2007
Re: The future of concurrent programming
BCS Wrote:
> Regan Heath wrote:
> > That just leaves the deadlock you get when you say:
> > 
> > synchronize(a) { synchronize(b) { .. } }
> > 
> > and in another thread:
> > 
> > synchronize(b) { synchronize(a) { .. } }
> > 
> 
> what D needs is a:
> 
> synchronize(a, b) // gets lock on a and b but not until it can get both
> 
> Now what about where the locks are in different functions.... :b

Exactly.  In my reply to Sean I mentioned a possible solution which is perhaps more robust and flexible:

<quote me>
In the case I mention above you can at least solve it by giving each mutex an id, or priority.  Upon acquisition you ensure that no other mutex of lower priority is currently held; if one is, you release both and re-acquire them in the correct order (high to low or low to high, whichever you decide; all that matters is that an order is defined and adhered to in all cases).
</quote>

In other words you solve it by defining an order of acquisition in the implementation itself, so the programmer cannot make that mistake.
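Something like the following sketch, for example (assuming a Mutex class with lock/unlock as in core.sync.mutex; RankedMutex and lockPair are made-up names, and this shows the simpler always-acquire-in-order variant rather than the release-and-retry one):

import core.sync.mutex;

// Each mutex carries a fixed, unique rank; pairs are always locked from
// the lowest rank to the highest, so the a/b vs. b/a deadlock above
// cannot occur no matter which order the callers write.
class RankedMutex
{
    Mutex m;
    int   rank;
    this(int rank) { this.m = new Mutex; this.rank = rank; }
}

void lockPair(RankedMutex x, RankedMutex y)
{
    // Take the lower-ranked mutex first, regardless of argument order.
    auto first  = x.rank < y.rank ? x : y;
    auto second = x.rank < y.rank ? y : x;
    first.m.lock();
    second.m.lock();
}

void unlockPair(RankedMutex x, RankedMutex y)
{
    // Release in the reverse order of acquisition.
    auto first  = x.rank < y.rank ? x : y;
    auto second = x.rank < y.rank ? y : x;
    second.m.unlock();
    first.m.unlock();
}

Both threads in the example above would then end up taking a and b in the same order, however they write the call.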

Regan Heath
May 30, 2007
Re: The future of concurrent programming
Regan Heath wrote:
> Daniel Keep Wrote:
>> freeagle wrote:
>>> Why do people think there is a need for another language/paradigm to
>>> solve concurrency problems? OSes have dealt with parallelism for decades,
>>> without special-purpose languages. Just plain C and C++. Just check Task
>>> Manager in Windows and you'll notice there are about 100+ threads running.
>>> If Microsoft can manage it with current languages, why can't we?
>>>
>>> freeagle
>> We can; it's just hard as hell and thoroughly unenjoyable.  Like I said
>> before: I can and have written multithreaded code, but it's so utterly
>> painful that I avoid it wherever possible.
> 
> I must be strange then because after 5+ years of multithreaded
> programming it's the sort I prefer to do.  Each to their own I
> guess.
> 
> I think perhaps it's something that can be learnt, but it
> takes a bit of time, similar in fact to learning to program
> in the first place.  I enjoy the challenge of it and I think
> once you understand the fundamental problems/rules/practices
> with multithreaded development it becomes almost easy, almost.

It seems that most people, on gaining a deep understanding of
multi-threaded programming and concurrent design, find that in
most situations it is hugely more complicated to do well than
designs which do not need concurrency.  They also find that
making effective use of a large number of processors is a
very difficult problem (except in the case of so-called
"embarrassingly parallel" tasks).  Many problems split into
a number of naturally parallelizable parts, and exploiting
that isn't very hard, but efficiency and true scalability are
a lot more work than just creating some threads and using
message passing and/or synchronization for shared state.

I've seen a lot of code written by a lot of professionals,
and the multi-threaded code generally has close to an order
of magnitude more defects than the single-threaded code.
You may actually be proficient, but sadly most of them
also think they are.  The better ones tend
to be very wary of concurrency -- not that they avoid it,
but they take great care when working with parallelism.

-- James
May 30, 2007
Re: The future of concurrent programming
Mike Capp wrote:
> == Quote from Sean Kelly (sean@f4.ca)'s article
> 
>> Transactions are another idea, though the common
>> implementation of software transactional memory
>> (cloning objects and such) isn't really ideal.
> 
> Would genuine compiler guarantees regarding const (or invariant, or final, or
> whatever it's called today) reduce the need for cloning?

Word-based STM doesn't require cloning except when necessary to preserve 
logical consistency, and then it doesn't require whole-object cloning. 
On the other hand, it may not always be as efficient because it only 
knows about words and not objects.  It's all a trade-off.
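To illustrate what "word-based" means, a toy sketch (my own illustration, not any real STM library; WordTxn and the single global commit lock are assumptions): the transaction logs the individual words it reads, buffers the words it writes, and validates and publishes at commit.

import core.sync.mutex;

class WordTxn
{
    private size_t[size_t*] writeLog;   // address -> buffered new value
    private size_t[size_t*] readLog;    // address -> value observed
    private __gshared Mutex commitLock;

    shared static this() { commitLock = new Mutex; }

    size_t read(size_t* addr)
    {
        if (auto p = addr in writeLog) return *p;   // see our own writes
        auto v = *addr;
        readLog[addr] = v;                          // remember what we saw
        return v;
    }

    void write(size_t* addr, size_t v) { writeLog[addr] = v; }

    bool commit()
    {
        commitLock.lock();
        scope(exit) commitLock.unlock();
        foreach (addr, seen; readLog)               // validate the read set
            if (*addr != seen) return false;        // conflict: caller retries
        foreach (addr, val; writeLog)               // publish buffered writes
            *addr = val;
        return true;
    }
}

Only the individual words actually touched get tracked, never whole objects.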

Dave
May 30, 2007
Re: The future of concurrent programming
Sean Kelly wrote:
> [...]
> Sorry, I misunderstood.  For some reason I thought you were saying Apache 
> could scale to thousands of threads.  In any case, D has something 
> roughly akin to Erlang's threads with Mikola Lysenko's StackThreads and 
> Tango's Fibers.

Erlang's threads are better than fibers because they are pre-emptive. 
However, this is only possible because Erlang runs on a VM. 
Context-switching in the VM is much cheaper than in the CPU (ironically 
enough), which means that D isn't going to get near Erlang's threads 
except on a VM that supports it (somehow I doubt the JVM or CLR come close).

Fibers are nice when you don't need pre-emption, but having to think 
about pre-emption makes the parallelism intrude on your problem-solving, 
which is what we would like to avoid.
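A rough sketch of that, against the kind of Fiber interface Tango provides (the exact names -- Fiber.yield, call, State.TERM -- are assumptions here):

import core.thread;
import std.stdio;

// Two cooperatively scheduled fibers: nothing pre-empts them, so each
// must call Fiber.yield() by hand at every point where it is willing
// to give up the CPU.
void worker(string name)
{
    foreach (i; 0 .. 3)
    {
        writefln("%s step %s", name, i);
        Fiber.yield();                    // explicit yield point
    }
}

void main()
{
    auto a = new Fiber({ worker("A"); });
    auto b = new Fiber({ worker("B"); });

    // A trivial round-robin scheduler: call() resumes a fiber until it
    // yields or terminates.
    while (a.state != Fiber.State.TERM || b.state != Fiber.State.TERM)
    {
        if (a.state != Fiber.State.TERM) a.call();
        if (b.state != Fiber.State.TERM) b.call();
    }
}

Forget one yield() in a long loop and the other fiber starves, which is exactly the kind of detail that intrudes on the actual problem.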

Dave
May 30, 2007
Re: The future of concurrent programming
David B. Held wrote:
> Mike Capp wrote:
>> == Quote from Sean Kelly (sean@f4.ca)'s article
>>
>>> Transactions are another idea, though the common
>>> implementation of software transactional memory
>>> (cloning objects and such) isn't really ideal.
>>
>> Would genuine compiler guarantees regarding const (or invariant, or 
>> final, or
>> whatever it's called today) reduce the need for cloning?
> 
> Word-based STM doesn't require cloning except when necessary to preserve 
> logical consistency, and then it doesn't require whole-object cloning. 
> On the other hand, it may not always be as efficient because it only 
> knows about words and not objects.  It's all a trade-off.
> 
> Dave

Objects (or memory locations) that aren't changing don't get cloned. 
Constants are a stronger case of something not changing (because they 
can't, by language rules).  So, const (or invariant, or final) really 
doesn't assist in STM in any way.
May 30, 2007
Re: The future of concurrent programming
> Fibers are nice when you don't need pre-emption, but having to think
> about pre-emption makes the parallelism intrude on your problem-solving,
> which is what we would like to avoid.
Do you know of any good guides or "design patterns" for when you're using
explicit pre-emption?

- Paul
May 30, 2007
Re: The future of concurrent programming
David B. Held wrote:
> Sean Kelly wrote:
>> [...]
>> Sorry, I misunderstood.  For some reason I thought you were saying 
>> Apache could scale to thousands of threads.  In any case, D has 
>> something roughly akin to Erlang's threads with Mikola Lysenko's 
>> StackThreads and Tango's Fibers.
> 
> Erlang's threads are better than fibers because they are pre-emptive. 
> However, this is only possible because Erlang runs on a VM. 
> Context-switching in the VM is much cheaper than in the CPU (ironically 
> enough), which means that D isn't going to get near Erlang's threads 
> except on a VM that supports it (somehow I doubt the JVM or CLR come 
> close).
> 
> Fibers are nice when you don't need pre-emption, but having to think 
> about pre-emption makes the parallelism intrude on your problem-solving, 
> which is what we would like to avoid.

If I understand you correctly, I don't think either is a clear win. 
Preemptive multithreading, be it in a single kernel thread or in 
multiple kernel threads, requires mutexes to protect shared data. 
Cooperative multithreading does not, but requires explicit yielding 
instead.  So it's mostly a choice between deadlocks and starvation.

However, if the task is "fire and forget" then preemption is a clear 
win, since that eliminates the need for mutexes, while cooperation still 
requires yielding.
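To make the shared-data point concrete, a minimal sketch (assuming a core.sync.mutex-style Mutex and core.thread's Thread; the counter is purely illustrative):

import core.thread;
import core.sync.mutex;

// Pre-emptive case: two kernel threads bump one counter, so the
// read-modify-write has to be protected -- either thread can be
// interrupted in the middle of it.
__gshared int   counter;
__gshared Mutex counterLock;

void bump()
{
    foreach (i; 0 .. 100_000)
    {
        counterLock.lock();
        scope(exit) counterLock.unlock();
        ++counter;                        // protected increment
    }
}

void main()
{
    counterLock = new Mutex;
    auto t1 = new Thread(&bump);
    auto t2 = new Thread(&bump);
    t1.start(); t2.start();
    t1.join();  t2.join();
    assert(counter == 200_000);           // holds only because of the lock
}

Drop the lock and the final count usually comes up short; run the same two workers as fibers on one kernel thread and the lock becomes unnecessary, but each loop then has to yield explicitly or it starves the other.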

I like that Sun's pthread implementation in Solaris will spawn both user 
and kernel threads based on the number of CPUs available.  It saves the 
programmer from having to think too much about it, and guarantees a 
decent distribution of load across available resources.  I'm not aware 
of any other OS that does this though.


Sean
May 30, 2007
Re: The future of concurrent programming
On Wed, 30 May 2007 18:01:26 +0400, Sean Kelly <sean@f4.ca> wrote:

>>  Erlang's threads are better than fibers because they are pre-emptive.  
>> However, this is only possible because Erlang runs on a VM.  
>> Context-switching in the VM is much cheaper than in the CPU (ironically  
>> enough), which means that D isn't going to get near Erlang's threads  
>> except on a VM that supports it (somehow I doubt the JVM or CLR come  
>> close).
>>  Fibers are nice when you don't need pre-emption, but having to think  
>> about pre-emption makes the parallelism intrude on your  
>> problem-solving, which is what we would like to avoid.
>
> If I understand you correctly, I don't think either is a clear win.  
> Preemptive multithreading, be it in a single kernel thread or in  
> multiple kernel threads, requires mutexes to protect shared data.  
> Cooperative multithreading does not, but requires explicit yielding  
> instead.  So it's mostly a choice between deadlocks and starvation.

AFAIK, there isn't shared data in Erlang -- processes in the Erlang VM  
(threads in D) communicate with one another by sending and receiving  
messages. And the message-passing mechanism is very efficient in the Erlang VM.
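A rough sketch of that style in D terms (std.concurrency is assumed here, not something this thread had available; the point is only the shape): isolated threads exchange messages instead of sharing data.

import std.concurrency;
import std.stdio;

// An isolated worker: it owns its state outright and only reacts to the
// messages it receives, roughly in the Erlang style.
void counter()
{
    int  total = 0;
    bool done  = false;
    while (!done)
    {
        receive(
            (int n)    { total += n; },
            (string s) { if (s == "stop") { done = true; ownerTid.send(total); } }
        );
    }
}

void main()
{
    auto tid = spawn(&counter);
    foreach (i; 1 .. 4)
        tid.send(i);                       // plain messages, no mutexes
    tid.send("stop");
    writeln("sum = ", receiveOnly!int());  // prints: sum = 6
}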

-- 
Regards,
Yauheni Akhotnikau
May 30, 2007
Re: Intel TBB ?
Daniel919 wrote:
> Hi, what do you think about approaches like
> Intel Threading Building Blocks ?
> http://www.intel.com/cd/software/products/asmo-na/eng/threading/294797.htm
> 
> "It uses common C++ templates and coding style to eliminate tedious 
> threading implementation work."
> 
> Has anyone had any experience with it?

It looks like a good library, but I've never actually used it.  I 
imagine we'll get a lot of similar things in D before long.
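As a guess at what such a thing might look like in D (std.parallelism here is an assumption about the direction, not a description of TBB itself):

import std.parallelism;
import std.stdio;

void main()
{
    auto data = new double[](1_000_000);

    // A parallel foreach splits the range across a task pool, much like
    // TBB's parallel_for splits an iteration space across worker threads.
    foreach (i, ref x; parallel(data))
        x = i * 0.5;

    writeln(data[10]);  // 5
}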


Sean
May 30, 2007
Re: The future of concurrent programming
eao197 wrote:
> On Wed, 30 May 2007 18:01:26 +0400, Sean Kelly <sean@f4.ca> wrote:
> 
>>>  Erlang's threads are better than fibers because they are
>>> pre-emptive. However, this is only possible because Erlang runs on a
>>> VM. Context-switching in the VM is much cheaper than in the CPU
>>> (ironically enough), which means that D isn't going to get near
>>> Erlang's threads except on a VM that supports it (somehow I doubt the
>>> JVM or CLR come close).
>>>  Fibers are nice when you don't need pre-emption, but having to think
>>> about pre-emption makes the parallelism intrude on your
>>> problem-solving, which is what we would like to avoid.
>>
>> If I understand you correctly, I don't think either is a clear win.
>> Preemptive multithreading, be it in a single kernel thread or in
>> multiple kernel threads, requires mutexes to protect shared data.
>> Cooperative multithreading does not, but requires explicit yielding
>> instead.  So it's mostly a choice between deadlocks and starvation.
> 
> AFAIK, there isn't shared data in Erlang -- processes in the Erlang VM
> (threads in D) communicate with one another by sending and receiving
> messages. And the message-passing mechanism is very efficient in the Erlang VM.

This is true, for the most part.  However, you can have shared data among
Erlang processes.  Mnesia, the distributed database system (in-memory,
on-disk, or both) that ships with Erlang, is an example of an app that allows
shared data.  It's fairly battle-tested in regards to locking, dirty
reads/writes, etc.

BA