January 12, 2010
To be found at the usual location:

http://erdani.com/d/fragment.preview.pdf

I didn't add a lot of text this time around but I do have a full example of communicating threads. Skip to the last section for explanation. I paste the code below. I think it looks pretty darn cool. Sean, please let me know if it floats your boat.

import std.concurrency, std.stdio;

void main() {
    auto low = 0, high = 1000;
    auto tid = spawn(&fun);
    foreach (i; low .. high) {
       writeln("Main thread: ", message, i);
       tid.send(thisTid, i);
       enforce(receiveOnly!Tid() == tid);
    }
    // Signal the other thread
    tid.send(Tid(), 0);
}

void fun() {
    for (;;) {
       auto msg = receiveOnly!(Tid, int)();
       if (!msg[0]) return;
       writeln("Secondary thread: ", msg[1]);
       msg[0].send(thisTid);
    }
}


Andrei
January 12, 2010
On Jan 12, 2010, at 12:45 AM, Andrei Alexandrescu wrote:

> To be found at the usual location:
> 
> http://erdani.com/d/fragment.preview.pdf
> 
> I didn't add a lot of text this time around but I do have a full example of communicating threads. Skip to the last section for explanation. I paste the code below. I think it looks pretty darn cool. Sean, please let me know if it floats your boat.
> 
> import std.concurrency, std.stdio;
> 
> void main() {
>   auto low = 0, high = 1000;
>   auto tid = spawn(&fun);
>   foreach (i; low .. high) {
>      writeln("Main thread: ", message, i);
>      tid.send(thisTid, i);

I haven't been able to come up with any difference in implementation for using an interface vs. a free function that accepts an opaque type for sending messages, so the choice seems largely a matter of how you want the call to look.  I'll admit that I like "send(tid, thisTid, i)" since it matches "receive(...)" but this is a small thing really.
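
To make the comparison concrete, here is a minimal sketch of the two spellings side by side. It reuses the spawn/send/thisTid/receiveOnly names from the example above and assumes both call forms end up supported; only one of them may survive in the final API.

import std.concurrency;

void worker() {
    auto msg = receiveOnly!(Tid, int)();   // (sender, payload)
    msg[0].send(msg[1] + 1);               // reply to the sender
}

void main() {
    auto tid = spawn(&worker);
    send(tid, thisTid, 41);       // free-function spelling, mirrors receive(...)
    // tid.send(thisTid, 41);     // member-style spelling, as in the draft's example
    assert(receiveOnly!int() == 42);
}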

>      enforce(receiveOnly!Tid() == tid);

This could be a wrapper for the usual recvmsg call, so no big deal.  What I'd like to do, but haven't been able to because of a compiler bug, is allow a timeout and catchall receiver to be supplied to the full recvmsg call:

recvmsg( after(5, { writefln( "no message received!" ); }),
         any( (Variant v) { writefln( "got something: %s", v ); } ) );

I guess the "any" wrapper could be dropped for the catchall routine and some special casing could be done inside recvmsg:

recvmsg( (Variant v) {...} );

though I do kind of like that "any" (or whatever it would be called) is obvious at a glance and greppable.
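
For illustration, here is a hypothetical helper (not the draft's actual recvmsg internals; the name tryHandler is made up) showing how that special casing could recognize a single-parameter Variant delegate as the catchall:

import std.traits, std.variant;

// Returns true if the handler consumed the message, false if it doesn't apply.
bool tryHandler(Handler, Msg)(Handler h, Msg msg) {
    alias Params = ParameterTypeTuple!Handler;
    static if (Params.length == 1 && is(Params[0] == Variant)) {
        h(Variant(msg));             // catchall: wrap whatever arrived
        return true;
    } else static if (Params.length == 1 && is(Msg : Params[0])) {
        h(msg);                      // ordinary typed handler
        return true;
    } else {
        return false;                // handler doesn't match this message type
    }
}

unittest {
    bool hit;
    assert(tryHandler((int x) { hit = (x == 3); }, 3) && hit);
    assert(tryHandler((Variant v) { hit = true; }, "anything"));
    assert(!tryHandler((int x) { }, "not an int"));
}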

>   }
>   // Signal the other thread
>   tid.send(Tid(), 0);

I've considered having the sender's Tid automatically embedded in every message.  Seems like it's pretty much always wanted, though this would mean not being able to use just any old delegate for receiving messages.  ie. if you have a function already that accepts a Foo then would you want to use that directly or would it be a bother to wrap it in something to throw away the Tid?
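
To make the trade-off concrete, a tiny hypothetical illustration (Foo and the handler names are invented): if every message carried the sender's Tid, a handler that only cares about the payload would need a small adapter.

import std.concurrency;

struct Foo { int payload; }

void handleFoo(Foo f) { /* pre-existing code that doesn't care who sent it */ }

// the wrapper needed just to throw away the Tid:
void handleFooFromAnyone(Tid, Foo f) { handleFoo(f); }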

Finally, I'd like recvmsg to accept both functions returning void, meaning "accept any message of this type", and functions returning bool, meaning "the passed value was a match if this returns true and not a match if it returns false", to allow dynamic pattern matching.  Seems like this should be easily possible with a static if inside the recvmsg loop, but I haven't actually tried it yet.
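
A rough sketch of that static if, again hypothetical rather than the draft's implementation (dispatchMatched is an invented name): the handler's return type alone decides whether it acts as a predicate or as an unconditional match.

import std.traits;

bool dispatchMatched(Handler, Msg)(Handler h, Msg msg) {
    static if (is(ReturnType!Handler == bool))
        return h(msg);      // handler itself decides whether this was a match
    else {
        h(msg);             // void handler: the type match alone is enough
        return true;
    }
}

unittest {
    assert( dispatchMatched((int x) => x % 2 == 0, 4));   // predicate: even numbers only
    assert(!dispatchMatched((int x) => x % 2 == 0, 5));
    assert( dispatchMatched((int x) { }, 5));             // void handler matches any int
}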

> }
> 
> void fun() {
>   for (;;) {
>      auto msg = receiveOnly!(Tid, int)();
>      if (!msg[0]) return;

Oh, so this format returns a Tuple for multiple arguments and the value for a single argument?  The Tuple is gone by the time the user code is hit with recvmsg so it would have to rewrap it for receiveOnly, but if that's okay then this would work.  I guess the other option would be a completely separate receiveOnly call instead of a wrapper, which could eliminate the extra work.
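
As a sketch of that rule (ReceiveOnlyReturn is an illustrative name, not the draft's): the result type can be chosen at compile time, a bare value for one expected type and a Tuple for several.

import std.typecons;

template ReceiveOnlyReturn(T...) {
    static if (T.length == 1)
        alias ReceiveOnlyReturn = T[0];     // single type: just the value
    else
        alias ReceiveOnlyReturn = Tuple!T;  // several types: rewrapped in a Tuple
}

static assert(is(ReceiveOnlyReturn!int == int));
static assert(is(ReceiveOnlyReturn!(string, int) == Tuple!(string, int)));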

>      writeln("Secondary thread: ", msg[1]);
>      msg[0].send(thisTid);
>   }
> }
January 12, 2010
This all looks very cool for message passing.

I'm still more interested in how shared turns out, message passing seems to be a very straightforward problem with a very straightforward solution.  Also, having never really used a language with builtin message passing or a MP library (I have implemented it several times unwittingly not knowing the pattern), I can't really add any more insightful comment except to say it does look exciting :)

A comment on the introduction, I know that the other chapters of the book don't have an initial section header, but you may want to break up this section into a brief introduction and then title this section header appropriately.  Although it is a good lesson and backs up the design choices of D, it doesn't have anything to do with D's API.  Having to read 6 pages of history before reading anything about D is puzzling.  The header "A brief history of data sharing"  before the whole thing would cue uninterested readers to jump to the meaty parts.

Pretend you're a person learning D, and you already know that message passing is the best, having dealt with some message passing library (or language that supports it).  You don't want to read through a history lesson confirming what you already know; you just want to answer the question "how does D do concurrency?"  Basically, I think you should explicitly identify the "how" and "why" parts, preferably putting some of the "how" first.

-Steve




January 12, 2010
----- Original Message ----

> From: Sean Kelly <sean at invisibleduck.org>
>
> I've considered having the sender's Tid automatically embedded in every message.  Seems like it's pretty much always wanted, though this would mean not being able to use just any old delegate for receiving messages.  ie. if you have a function already that accepts a Foo then would you want to use that directly or would it be a bother to wrap it in something to throw away the Tid?

I'm not sure it's always wanted.

In one application I developed, there was a server event thread that received events from all the other threads when an object changed, and broadcast those events to all clients of the server.  This was in C#, so all the objects were "shared", and all used synchronized methods to update.  The clients (and therefore the event thread) didn't care what server thread changed the object, they just cared what object changed.

My point is, you generally *do* want to know some "return address" for a message, but it's not always going to be the Tid.

Another thing, you may only ever expect certain messages from certain threads, or a message may be an instruction that doesn't need a response (i.e. application is closing, cleanup and exit as soon as possible).  You don't need to verify or use the sender in those cases, so you would be wasting time and resources copying the Tid in such cases.

-Steve




January 12, 2010
On Jan 12, 2010, at 10:28 AM, Steve Schveighoffer wrote:

> ----- Original Message ----
> 
>> From: Sean Kelly <sean at invisibleduck.org>
>> 
>> I've considered having the sender's Tid automatically embedded in every message.  Seems like it's pretty much always wanted, though this would mean not being able to use just any old delegate for receiving messages.  ie. if you have a function already that accepts a Foo then would you want to use that directly or would it be a bother to wrap it in something to throw away the Tid?
> 
> I'm not sure it's always wanted.
> 
> In one application I developed, there was a server event thread that received events from all the other threads when an object changed, and broadcast those events to all clients of the server.  This was in C#, so all the objects were "shared", and all used synchronized methods to update.  The clients (and therefore the event thread) didn't care what server thread changed the object, they just cared what object changed.
> 
> My point is, you generally *do* want to know some "return address" for a message, but it's not always going to be the Tid.
> 
> Another thing, you may only ever expect certain messages from certain threads, or a message may be an instruction that doesn't need a response (i.e. application is closing, cleanup and exit as soon as possible).  You don't need to verify or use the sender in those cases, so you would be wasting time and resources copying the Tid in such cases.

I suppose you're right.  And now that the API can handle multiple arguments in a single message, there isn't as much reason to make the inclusion of a Tid automatic.  Thanks!
January 12, 2010
On 2010-01-12, at 13:28, Steve Schveighoffer wrote:

> ----- Original Message ----
> 
>> From: Sean Kelly <sean at invisibleduck.org>
>> 
>> I've considered having the sender's Tid automatically embedded in every message.  Seems like it's pretty much always wanted, though this would mean not being able to use just any old delegate for receiving messages.  ie. if you have a function already that accepts a Foo then would you want to use that directly or would it be a bother to wrap it in something to throw away the Tid?
> 
> I'm not sure it's always wanted.

I was about to say the same. You often don't care where an event comes from. And when you do care, you can just add the info to the message.

-- 
Michel Fortin
michel.fortin at michelf.com
http://michelf.com/



January 12, 2010
I totally agree about the introduction. I was also thinking of prepending some text to the headless chapter intro that would read something like this:

=======================
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Concurrency}
\label{ch:concurrency}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Convergence of various factors in the hardware industry has led to qualitative changes in the way we are able to access computing resources, which in turn prompts profound changes in the ways we approach computing and in the language abstractions we use. Concurrency is now virtually everywhere, and it is software's responsibility to tap into it.

Although the software industry as a whole does not yet have ultimate responses to the challenges brought about by the concurrency revolution, \dee's youth allowed its creators to make informed decisions regarding concurrency without being tied down by large legacy code bases. A major break with the mold of concurrent imperative languages is that \dee does not foster sharing of data between threads; by default, concurrent threads are virtually isolated by language mechanisms. Data sharing is allowed but only in limited, controlled ways that offer the compiler the ability to provide strong global guarantees.

At the same time, \dee remains at heart a system programming language, so it does allow you to use a variety of low-level, maverick approaches to concurrency. (Some of these mechanisms are not, however, allowed in safe programs.)

In brief, here's how \dee's concurrency offering is layered:

\begin{itemize*}
\item The flagship approach to concurrency is to use isolated threads or processes that communicate via messages. This paradigm, known as \emph{message passing}, leads to safe and modular programs that are easy to understand and maintain. A variety of languages and libraries have used message passing successfully. Historically, message passing has been slower than approaches based on memory sharing---which explains why it was not unanimously adopted---but that trend underwent a definite and lasting reversal. Concurrent \dee programs are encouraged to use message passing and benefit from extensive infrastructure support.
\item \dee also provides support for old-style synchronization based on critical sections protected by mutexes and event variables. This approach to concurrency has recently come under heavy criticism because of its failure to scale well to today's and tomorrow's highly parallel architectures. \dee imposes strict control over data sharing, which in turn curbs lock-based programming styles. Such restrictions may seem quite harsh at first, but they cure lock-based code of its worst enemy: low-level data races.
\item In the tradition of system-level languages, \dee programs not marked as \cc{\@safe} may use casts to obtain hot, bubbly, unchecked data sharing. The correctness of such programs becomes entirely your responsibility, and is often system-dependent.
\item If that level of control is insufficient for you, you can use @asm@ statements for ultimate control of your machine's resources. To go any lower level than that, you'd need a miniature soldering iron and a very, very steady hand.
\end{itemize*}

Before getting into the thick of the topics above, let's take a brief detour in order to gain a better understanding of the hardware developments that have shaken our world.

\section{Concurrentgate}

When it comes to concurrency, we are living the proverbial interesting
times more than  ever before. ...
==============================

Works?


Andrei

Steve Schveighoffer wrote:
> This all looks very cool for message passing.
> 
> I'm still more interested in how shared turns out, message passing seems to be a very straightforward problem with a very straightforward solution.  Also, having never really used a language with builtin message passing or a MP library (I have implemented it several times unwittingly not knowing the pattern), I can't really add any more insightful comment except to say it does look exciting :)
> 
> A comment on the introduction, I know that the other chapters of the book don't have an initial section header, but you may want to break up this section into a brief introduction and then title this section header appropriately.  Although it is a good lesson and backs up the design choices of D, it doesn't have anything to do with D's API.  Having to read 6 pages of history before reading anything about D is puzzling.  The header "A brief history of data sharing"  before the whole thing would cue uninterested readers to jump to the meaty parts.
> 
> Pretend you're a person learning D, and you already know that message passing is the best, having dealt with some message passing library (or language that supports it).  You don't want to read through a history lesson confirming what you already know; you just want to answer the question "how does D do concurrency?"  Basically, I think you should explicitly identify the "how" and "why" parts, preferably putting some of the "how" first.
> 
> -Steve
> 
> 
> 
> 
January 12, 2010



----- Original Message ----
> From: Andrei Alexandrescu <andrei at erdani.com>
> 
> Before getting into the thick of the topics above, let's take a brief detour in order to gain a better understanding of the hardware developments that have shaken our world.
> 
> \section{Concurrentgate}
> 
> When it comes to concurrency, we are living the proverbial interesting
> times more than  ever before. ...
> ==============================
> 
> Works?
> 

Much better :)

-Steve




January 12, 2010
You'll probably have to edit the opening of the history section to get it to flow right, but this is a much cleaner approach to the chapter.

On Jan 12, 2010, at 11:45 AM, Andrei Alexandrescu wrote:

> I totally agree about the introduction. I was also thinking of prepending some text to the headless chapter intro that would read something like this:
> 
> [draft chapter text quoted in full -- snipped; see the previous message]
> 
> Works?
> 
> Andrei

January 12, 2010
On Tue, 12 Jan 2010 03:45:35 -0500, Andrei Alexandrescu <andrei at erdani.com> wrote:
> To be found at the usual location:
>
> http://erdani.com/d/fragment.preview.pdf
>
> I didn't add a lot of text this time around but I do have a full example of communicating threads. Skip to the last section for explanation. I paste the code below. I think it looks pretty darn cool. Sean, please let me know if it floats your boat.
>
> import std.concurrency, std.stdio;
>
> void main() {
>     auto low = 0, high = 1000;
>     auto tid = spawn(&fun);
>     foreach (i; low .. high) {
>        writeln("Main thread: ", message, i);
>        tid.send(thisTid, i);
>        enforce(receiveOnly!Tid() == tid);
>     }
>     // Signal the other thread
>     tid.send(Tid(), 0);
> }

message is undefined and doesn't match the output. Is this a typo?

> void fun() {
>     for (;;) {
>        auto msg = receiveOnly!(Tid, int)();
>        if (!msg[0]) return;
>        writeln("Secondary thread: ", msg[1]);
>        msg[0].send(thisTid);
>     }
> }
>
>
> Andrei

I know this is a bit of a bike shed, but I'd prefer something shorter for receiveOnly (like recv or receive), as A) type-checked message passing should be the easy/default way to do things and B) it's easy to define recv!()() to return the unchecked message using a variant. I'd also like to be able to use recv(tid,i); in addition to recv!(Tid, int)(); but I haven't been able to get the templates to not clash with each other.