[dmd-concurrency] CSP: Communicating sequential processes
January 20, 2010
Communicating sequential processes (CSP) and the pi calculus form another major school of message passing. There's been some interest in it, so I'm digging out and summarizing some old notes of mine. I view implementing CSP as a good use case for D's message passing architecture; i.e. one should be able to write a CSP library and have it compose gracefully with other concurrency libraries. Most of the active research in this area is being done at the University of Kent (http://www.cs.kent.ac.uk/projects/ofa/kroc/).

CSP is built around the concept of a channel: essentially typed, half-duplex, synchronous, point-to-point, single-item message passing. This is great for a fiber/co-routine based implementation, but bad for clusters/networks. Asynchronous channels would naturally have the opposite properties. One other design pattern that works well with single-item channels is the generator (or really any fast-producer/slow-consumer problem).
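
Here's a rough, single-threaded sketch of what such a channel and a generator on top of it might look like, using druntime's core.thread.Fiber; the Channel type and its send/receive methods are hypothetical, not an existing API:

import core.thread;
import std.stdio;

// Hypothetical single-item, synchronous channel between fibers on one thread.
class Channel(T)
{
    private T item;
    private bool full;

    void send(T value)
    {
        while (full) Fiber.yield();   // wait for the previous item to be taken
        item = value;
        full = true;
        while (full) Fiber.yield();   // rendezvous: block until it is received
    }

    T receive()
    {
        while (!full) Fiber.yield();  // block until a producer has sent something
        T value = item;
        full = false;
        return value;
    }
}

void main()
{
    auto chan = new Channel!int;

    // Generator: a fast producer that blocks on every send.
    auto producer = new Fiber({ foreach (i; 0 .. 5) chan.send(i); });

    // Slow consumer.
    auto consumer = new Fiber({ foreach (i; 0 .. 5) writeln("got ", chan.receive()); });

    // Trivial round-robin scheduler.
    while (producer.state != Fiber.State.TERM || consumer.state != Fiber.State.TERM)
    {
        if (producer.state != Fiber.State.TERM) producer.call();
        if (consumer.state != Fiber.State.TERM) consumer.call();
    }
}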

Channel: Several CSP implementations have found it useful to add some additional state to a channel and/or to build higher-level primitives on top of it.
-close/poison: closes / shuts down a channel. Very useful for producer/consumer processing loops. It may be useful to separate poison from close, i.e. chan.poison(exception). (A rough sketch follows this list.)
-valid: true if the channel isn't closed/poisoned.
-blackhole: chan.send is a no-op, and chan.receive throws an exception.
-broadcast/delta: 1-to-N messaging.
-N-way: N-to-N messaging.
-Paraplex: converts N sends of T into a single T[N] send.
-Deparaplex: converts a T[N] send into N sends of T.
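
As a rough illustration of the close/poison idea, here's a sketch layered on the hypothetical Channel above; the ChannelPoisoned exception and the poison/close/valid names are likewise made up, and a real version would also need to wake parties already blocked in send/receive:

// Hypothetical exception thrown out of a poisoned channel.
class ChannelPoisoned : Exception
{
    this(string msg) { super(msg); }
}

// Sketch of poison support on top of the Channel!T above. Simplified:
// poison is only checked on entry to send/receive.
class PoisonableChannel(T) : Channel!T
{
    private Exception reason;

    void poison(Exception e) { reason = e; }
    void close() { poison(new ChannelPoisoned("channel closed")); }

    @property bool valid() const { return reason is null; }

    override void send(T value)
    {
        if (reason !is null) throw reason;
        super.send(value);
    }

    override T receive()
    {
        if (reason !is null) throw reason;
        return super.receive();
    }
}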

Select: One of the key flow-control primitives in CSP is the select/alternative. This really simplifies certain usages and is particularly useful in GUIs, e.g. Plan 9 (http://plan9.bell-labs.com/sys/doc/acme.pdf, http://plan9.bell-labs.com/sys/doc/81/2.pdf). A select statement waits on several guard statements. A guard statement takes either a lazy bool or a channel, plus a function/delegate. A guard is ready if the boolean is true or if there is a value waiting in the channel. If multiple guards are ready, the select executes only the first one, as determined by the scheduler (either round-robin or priority).

while (!a_done && !b_done) {
    select(
        guard(chan1, (int v) { writefln("from chan1: %s", v); }),
        guard(chan2, &foo),                             // -> foo(chan2.recv)
        guard(a_done && b_done, { writefln("We're done"); })
    );
}

Mobile: Essentially a unique type. It is called mobile based on its usage: it's used for data that moves, i.e. is mobile, between processes.

Barrier: A barrier synchronization primitive (a rough sketch follows this list). I believe something similar is already in druntime for threads, though as far as I know there isn't support for fibers (CSP-style processes) or network processes.
-enroll: enrolls (adds) this process to the barrier
-resign: resigns (removes) this process from the barrier
-sync: blocks until all enrolled processes have called sync
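
For illustration, here is a minimal cooperative (fiber-based) sketch of that interface, reusing core.thread.Fiber as in the channel example above; the Barrier type and its method names are taken from the description above, not from an existing API:

import core.thread;

// Hypothetical CSP-style barrier. Simplified: resigning while others are
// blocked in sync() is not handled.
class Barrier
{
    private size_t enrolled;
    private size_t waiting;
    private size_t generation;

    void enroll() { ++enrolled; }   // add this process to the barrier
    void resign() { --enrolled; }   // remove this process from the barrier

    void sync()
    {
        immutable gen = generation;
        ++waiting;
        if (waiting == enrolled)
        {
            waiting = 0;            // last one in releases everyone
            ++generation;
        }
        else
        {
            while (gen == generation)
                Fiber.yield();      // cooperative wait, as in the channel sketch
        }
    }
}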

Bucket: A bucket synchronization primitive; again, a thread-only version is in druntime. (A rough sketch follows this list.)
-fall_into: causes this process to block until the bucket is flushed
-flush: unblocks all processes waiting on the bucket.
-flush(int N): unblock N processes from the bucket
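
And similarly for the bucket, a purely illustrative cooperative sketch (hypothetical type; simplified in that a process falling into the bucket right after a flush could slip straight through):

import core.thread;

// Hypothetical CSP-style bucket.
class Bucket
{
    private size_t waiting;   // processes currently in the bucket
    private size_t released;  // processes currently allowed to leave

    // Block until this process is flushed out of the bucket.
    void fallInto()
    {
        ++waiting;
        while (released == 0)
            Fiber.yield();    // cooperative wait
        --released;
        --waiting;
    }

    // Unblock every process currently in the bucket.
    void flush() { released += waiting; }

    // Unblock at most n processes.
    void flush(size_t n) { released += n < waiting ? n : waiting; }
}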

parallel: Run multiple delegates/functions concurrently, with an implicit join at the end, e.g.
Channel!(int) c;
parallel( { writefln("Sending a hi-five?"); c = 5;   },
          { writefln("Received a %s", c());          } );

Sequence/serial: Take a bunch of processes and run them sequentially.

Skip: A process that terminates immediately, or a guard that's always ready.

Stop: A process that does nothing but doesn't terminate.

More useful links:
http://www.ibm.com/developerworks/java/library/j-csp3.html
http://www.ibm.com/developerworks/java/library/j-jtp11234/
http://www.ibm.com/developerworks/java/library/j-jtp10264/
http://www.cs.kent.ac.uk/projects/ofa/c++csp/

An aside: I ran across CREW (Concurrent read, exclusive write) locks again while making this. These have always seemed perfect for D as it has both const and shared/synchronized method modifiers. I don't think there's any blockers for implementing this, so should I file a bugzilla enhancement request, so it gets onto the big long todo list?

January 19, 2010
Robert Jacques wrote:
> An aside: I ran across CREW (Concurrent read, exclusive write) locks again while making this. These have always seemed perfect for D as it has both const and shared/synchronized method modifiers. I don't think there's any blockers for implementing this, so should I file a bugzilla enhancement request, so it gets onto the big long todo list?

Shouldn't hurt, but unless God puts 72 hours in a day or you are willing to do most of the legwork, I don't see how it'll get done. Heck, I created this list hoping for less work :o).

Andrei
January 19, 2010
On Jan 19, 2010, at 10:00 PM, Robert Jacques wrote:

> Communicating sequential processes and pi calculus are one of the other major schools of message passing. There's been some interest in it, so I'm digging out and summarizing some old notes of mine. I view implementing CSP as a good use case for D's message passing architecture; i.e. one should be able to write a CSP library and have it compose gracefully with other concurrent libraries. Most of the active research in this area is being done by the University of Kent (http://www.cs.kent.ac.uk/projects/ofa/kroc/).

Thanks for all the info!

> An aside: I ran across CREW (Concurrent read, exclusive write) locks again while making this. These have always seemed perfect for D as it has both const and shared/synchronized method modifiers. I don't think there's any blockers for implementing this, so should I file a bugzilla enhancement request, so it gets onto the big long todo list?

Is what you want in core.sync.rwmutex?  You can use it as:

synchronized( mut.reader ) {}
synchronized( mut.writer ) {}

It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More important, I suppose, is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.
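
For a fuller (purely illustrative) example along those lines, where the Cache class is just made up to show the pattern:

import core.sync.rwmutex;
import std.stdio;

// Illustrative only: a tiny cache guarded by a ReadWriteMutex.
class Cache
{
    private int[string] data;
    private ReadWriteMutex mut;

    this() { mut = new ReadWriteMutex; }

    bool lookup(string key, out int value)
    {
        synchronized (mut.reader)        // many readers may hold this at once
        {
            if (auto p = key in data)
            {
                value = *p;
                return true;
            }
            return false;
        }
    }

    void store(string key, int value)
    {
        synchronized (mut.writer)        // writers are exclusive
        {
            data[key] = value;
        }
    }
}

void main()
{
    auto c = new Cache;
    c.store("answer", 42);
    int v;
    if (c.lookup("answer", v))
        writeln(v);                      // prints 42
}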
January 19, 2010
Sean Kelly wrote:
> It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More importantly I suppose is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.

I think pthreads don't allow the upgrade.

Andrei
January 19, 2010
On Jan 19, 2010, at 11:08 PM, Andrei Alexandrescu wrote:

> Sean Kelly wrote:
>> It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More importantly I suppose is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.
> 
> I think pthreads don't allow the upgrade.

Then there's justification not to support it, though the impl. in core.sync doesn't use pthread_rwlock--it uses a mutex and condition variable under the covers.
January 20, 2010
On Wed, 20 Jan 2010 02:07:53 -0500, Sean Kelly <sean at invisibleduck.org> wrote:
> On Jan 19, 2010, at 10:00 PM, Robert Jacques wrote:
>
>> Communicating sequential processes and pi calculus are one of the other major schools of message passing. There's been some interest in it, so I'm digging out and summarizing some old notes of mine. I view implementing CSP as a good use case for D's message passing architecture; i.e. one should be able to write a CSP library and have it compose gracefully with other concurrent libraries. Most of the active research in this area is being done by the University of Kent (http://www.cs.kent.ac.uk/projects/ofa/kroc/).
>
> Thanks for all the info!

You're welcome. I hope it helps.

>> An aside: I ran across CREW (Concurrent read, exclusive write) locks again while making this. These have always seemed perfect for D as it has both const and shared/synchronized method modifiers. I don't think there's any blockers for implementing this, so should I file a bugzilla enhancement request, so it gets onto the big long todo list?
>
> Is what you want in core.sync.rwmutex?  You can use it as:
>
> synchronized( mut.reader ) {}
> synchronized( mut.writer ) {}
>
> It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More importantly I suppose is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.

Sort of; I was thinking about implicit use and not explicit use; i.e. const functions would take a reader lock and non-const functions would take a writer lock. Generally, I've heard reports that CREW locks give big performance boosts in certain domains (game programming, for instance), but I don't know exactly what the real performance trade-off is compared to normal locks under low contention. Still, it would be nice to be able to replace an object's mutex with a rwmutex and have it work right.
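
To make that concrete, here's roughly what the hand-written equivalent looks like today (illustrative only; the cast is needed because the mutex isn't usable from a const method, which is part of what compiler support would smooth over):

import core.sync.rwmutex;

// Hand-written approximation of "const methods take the reader lock,
// non-const methods take the writer lock". Illustrative only; nothing
// here is generated by the compiler.
class Counter
{
    private int count;
    private ReadWriteMutex mut;

    this() { mut = new ReadWriteMutex; }

    int get() const
    {
        // const => reader lock (the cast strips const so the mutex is usable)
        synchronized ((cast(ReadWriteMutex) mut).reader)
            return count;
    }

    void increment()
    {
        // non-const => writer lock
        synchronized (mut.writer)
            ++count;
    }
}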
January 20, 2010
On 2010-01-20, at 2:53, Robert Jacques wrote:

> Sort of; I was thinking about implicit use and not explicit use; i.e. const functions would take a reader lock and non-const function would take a writer lock. Generally, I've heard reports that CREW locks give big performance boosts in certain domains (game programming, for instance), but I don't know exactly the real performance trade-off compare to normal locks under low contention. Still, it would be nice to be able to replace an object's mutex with a rwmutex and have it work right.

I think read/write locks should only be used explicitly.

That's because you can never upgrade "safely" from a read lock to a write lock, or to be precise there is always a risk that you must release your read lock before upgrading. Two threads could acquire a read lock and then both might want to upgrade to a write lock, which can't be satisfied without one of them dropping its read lock first.

If the read lock drops because you call another function that wants a write lock, this could be trouble. Upgrades work well when there is only one thread requesting upgrades. Otherwise the upgrade must be seen as a drop-read-lock then acquire-write-lock non-atomic operation.

-- 
Michel Fortin
michel.fortin at michelf.com
http://michelf.com/



January 20, 2010
On 2010-01-20, at 2:07, Sean Kelly wrote:

> Is what you want in core.sync.rwmutex?  You can use it as:
> 
> synchronized( mut.reader ) {}
> synchronized( mut.writer ) {}
> 
> It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More importantly I suppose is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.

What happens if you call test() here?

	void test() {
		synchronized (mut.reader) {
			// read A
			testWrite();
			// read B
		}
	}

	void testWrite() {
		synchronized (mut.writer) {
			// write C
		}
	}

I guess that if you haven't implemented upgrades this will deadlock.

I'd like to note that even upgrading the lock to a write lock might be a problem: as I said in my other message, making this an automatic upgrade might force the reader lock to be released until the writer lock is acquired, which would be unexpected from test()'s point of view.

This could work well as a transaction, where a failure to upgrade the lock would throw an exception, "read A" would be rolled back, and the synchronized part would be attempted again. But without proper logic to deal with the unfortunate case of a failed upgrade you have either a race or a deadlock here.

-- 
Michel Fortin
michel.fortin at michelf.com
http://michelf.com/



January 20, 2010
You aren't supposed to write when you have a reader lock; I think it should deadlock.

On Wed, Jan 20, 2010 at 1:28 PM, Michel Fortin <michel.fortin at michelf.com> wrote:
> On 2010-01-20, at 2:07, Sean Kelly wrote:
>
> What happens if you call test() here?
>
> 	void test() {
> 		synchronized (mut.reader) {
> 			// read A
> 			testWrite();
> 			// read B
> 		}
> 	}
>
> 	void testWrite() {
> 		synchronized (mut.writer) {
> 			// write C
> 		}
> 	}
>
> I guess that if you haven't implemented upgrades this will deadlock.
>
> I'd like to note that even upgrading the lock to a write lock might be a problem: as I said in my other message, making this an automatic upgrade might force the release the reader lock until the writer lock is acquired, which would be unexpected from test()'s point of view.
>
> This could work well as a transaction, where a failure to upgrade the lock would throw an exception, "read A" would be rollbacked, and the synchronized part would be attempted again. But without proper logic to deal with the unfortunate case of a failed upgrade you have either a race or a deadlock here.
>
> --
> Michel Fortin
> michel.fortin at michelf.com
> http://michelf.com/
>
January 20, 2010
There is a type of locking pattern that solves this issue.  It involves locks that can be in "read", "write" or "shared" mode.  The concept is that you get a shared lock when you might want to upgrade to a write lock.

Shared mode can coexist with other threads that are in "read" mode, but is not compatible with other threads that are in write or shared mode.  You can upgrade a shared mode lock to a write lock without deadlocking -- it waits for other threads to finish reading.

The normal usage pattern is that you get a shared mode when you might want to modify something.  Let's say that a user wants to write a line to a logfile.  You get a shared mode lock on the directory, then check if the file exists.  If it already exists, you downgrade the shared lock to a read lock, then get a write mode lock on the file.  If the file does not exist, you instead upgrade the shared lock to a write lock, then create the file (which is why you needed write permissions), then get a write lock on the new file and downgrade or release the directory lock.

This model of locking cannot deadlock on itself because there is only one "shared" process at a time -- when getting a shared lock, you need to wait for all existing writers to exit, and for any existing shared lock holders to exit or downgrade.  It works best for cases where a process often thinks it *might* need to do a write operation but usually doesn't, in which case it reduces lock contention for reader threads.  The most common situation is where you don't know whether you need to do the write until you've done some reads to assess the current situation.
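
A rough sketch of what such a lock might look like, using core.sync's Mutex and Condition. The UpgradableLock name and its methods are made up, and it's simplified: for instance, new readers aren't held back while an upgrade is pending, so an upgrader can starve.

import core.sync.mutex;
import core.sync.condition;

// Hypothetical read/shared/write lock. "Shared" coexists with readers but
// excludes writers and other shared holders, and can be upgraded to a
// write lock by waiting only for the readers to drain.
class UpgradableLock
{
    private Mutex m;
    private Condition cond;
    private int readers;      // threads holding a read lock
    private bool sharedHeld;  // at most one shared holder at a time
    private bool writerHeld;

    this()
    {
        m = new Mutex;
        cond = new Condition(m);
    }

    void lockRead()
    {
        synchronized (m)
        {
            while (writerHeld) cond.wait();
            ++readers;
        }
    }

    void unlockRead()
    {
        synchronized (m) { --readers; cond.notifyAll(); }
    }

    void lockShared()
    {
        synchronized (m)
        {
            while (writerHeld || sharedHeld) cond.wait();
            sharedHeld = true;
        }
    }

    // Downgrade the held shared lock to a plain read lock.
    void downgradeToRead()
    {
        synchronized (m) { sharedHeld = false; ++readers; cond.notifyAll(); }
    }

    // Upgrade the held shared lock to a write lock: wait for readers to drain.
    void upgradeToWrite()
    {
        synchronized (m)
        {
            while (readers > 0) cond.wait();
            sharedHeld = false;
            writerHeld = true;
        }
    }

    void unlockWrite()
    {
        synchronized (m) { writerHeld = false; cond.notifyAll(); }
    }
}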

I know we used locks like this when I was at IBM.  I wish I could find out who invented this so I know whether it is patented.  (IBM loves patents -- I suspect it isn't patented but I haven't found a good writeup of it online yet.)

Kevin

On Wed, Jan 20, 2010 at 7:28 AM, Michel Fortin <michel.fortin at michelf.com> wrote:

> On 2010-01-20, at 2:07, Sean Kelly wrote:
>
> > Is what you want in core.sync.rwmutex?  You can use it as:
> >
> > synchronized( mut.reader ) {}
> > synchronized( mut.writer ) {}
> >
> > It doesn't currently allow upgrading read locks to write locks, though I think this wouldn't be difficult to add.  More importantly I suppose is that acquiring a concurrent read lock may be more expensive than you'd expect, so you really only want to use a ReadWriteMutex if the read operations take a reasonable amount of time to complete.
>
> What happens if you call test() here?
>
>        void test() {
>                synchronized (mut.reader) {
>                        // read A
>                        testWrite();
>                        // read B
>                }
>        }
>
>        void testWrite() {
>                synchronized (mut.writer) {
>                        // write C
>                }
>        }
>
> I guess that if you haven't implemented upgrades this will deadlock.
>
> I'd like to note that even upgrading the lock to a write lock might be a problem: as I said in my other message, making this an automatic upgrade might force the release the reader lock until the writer lock is acquired, which would be unexpected from test()'s point of view.
>
> This could work well as a transaction, where a failure to upgrade the lock would throw an exception, "read A" would be rollbacked, and the synchronized part would be attempted again. But without proper logic to deal with the unfortunate case of a failed upgrade you have either a race or a deadlock here.
>
> --
> Michel Fortin
> michel.fortin at michelf.com
> http://michelf.com/
>