October 11, 2012
On Oct 10, 2012, at 6:55 PM, Charles Hixson <charleshixsn@earthlink.net> wrote:
> 
> TDPL quotes the recommendation from an Erlang book "Have LOTS of threads!", but doesn't really say how to guess at an order of magnitude of what's reasonable for D's std.concurrency.  People on Erlang say that hundreds of thousands of threads is reasonable.  Is it the same for D?

Not currently.  spawn() generates a kernel thread rather than a user-space thread as in Erlang, so you really can't go too crazy with spawning before the cost of context switches starts to hurt.  There was a thread about this recently in digitalmars.D, I believe.  To summarize, the issue blocking a move to user-space threads is the technical problem of making thread-local statics local to a user-space thread instead.  That said, if you don't care about that detail it would be pretty easy to make std.concurrency use Fibers instead of Threads.
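For reference, here's a minimal sketch of the cooperative handoff core.thread's Fiber gives you (a fiber runs on its caller's kernel thread until it yields, and resumes exactly where it left off):

```d
import core.thread : Fiber;

void main()
{
    int step = 0;

    // The fiber runs until it calls Fiber.yield(), then resumes
    // where it left off on the next call() -- no preemption.
    auto f = new Fiber({
        step = 1;
        Fiber.yield();          // hand control back to main
        step = 2;
    });

    f.call();                   // runs until the yield
    assert(step == 1);
    f.call();                   // resumes after the yield, finishes
    assert(step == 2);
    assert(f.state == Fiber.State.TERM);
}
```

Many such fibers can be multiplexed over a small pool of kernel threads, which is what a fiber-based std.concurrency would do under the hood.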
October 11, 2012
On Oct 11, 2012, at 12:39 PM, thedeemon <dlang@thedeemon.com> wrote:

> My biggest concern here is that with this number of agents communicating via message passing, there would be a huge number of memory allocations for the messages.  But in the current D runtime allocation takes a lock (and so does the GC), so it may kill all the parallelism if reactions to messages are short and simple.  D is no Erlang in this regard.

I've experimented with using free lists for message data but didn't see any notable speedup.  If someone can produce an example where allocations are a limiting factor, I'd be happy to revisit this.
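For the curious, the kind of free list I mean looks roughly like this (MessageBlock and FreeList are illustrative names, not std.concurrency's actual internals):

```d
import core.sync.mutex : Mutex;

// Toy free list for fixed-size message blocks: recycle blocks
// instead of hitting the (locking) GC allocator for every message.
struct MessageBlock
{
    ubyte[64] payload;
    MessageBlock* next;
}

final class FreeList
{
    private MessageBlock* head;
    private Mutex lock;

    this() { lock = new Mutex; }

    MessageBlock* acquire()
    {
        lock.lock();
        scope(exit) lock.unlock();
        if (head is null)
            return new MessageBlock;  // empty: fall back to the GC heap
        auto blk = head;
        head = blk.next;
        return blk;
    }

    void release(MessageBlock* blk)
    {
        lock.lock();
        scope(exit) lock.unlock();
        blk.next = head;              // push back for reuse
        head = blk;
    }
}

void main()
{
    auto list = new FreeList;
    auto a = list.acquire();
    list.release(a);
    assert(list.acquire() is a);      // the block gets recycled
}
```

Note the free list still takes a lock per acquire/release, which is part of why it didn't buy much in my tests; per-thread free lists would avoid that.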
October 12, 2012
On 10/11/2012 01:49 PM, Sean Kelly wrote:
> On Oct 10, 2012, at 6:55 PM, Charles Hixson<charleshixsn@earthlink.net>  wrote:
>>
>> TDPL quotes the recommendation from an Erlang book "Have LOTS of threads!", but doesn't really say how to guess at an order of magnitude of what's reasonable for D std.concurrency.  People on Erlang say that 100's of thousands of threads is reasonable.  Is it the same for D?
>
> Not currently.  spawn() generates a kernel thread, unlike a user-space thread as in Erlang, so you really can't go too crazy with spawning before the cost of context switches starts to hurt.  There was a thread about this recently in digitalmars.D, I believe.  To summarize, the issue blocking a move to user-space threads is the technical problem of making thread-local statics instead be local to a user-space thread.  That said, if you don't care about that detail it would be pretty easy to make std.concurrency use Fibers instead of Threads.

I'm not clear on what Fibers are.  From Ruby they seem to mean co-routines, and that doesn't have much advantage.  But it also seems as if other languages have other meanings.  TDPL doesn't list fiber in the index. I just found them in core.thread... but I'm still quite confused about what their advantages are, and how to properly use them.

OTOH, it looks as if Fibers are heavier than classes, and I was already planning on using structs rather than classes mainly because classes are heavier.  And if processes are heavier still... well, then I need a different design.  Perhaps I can divvy the structs up four ways, as in std.concurrency.  Perhaps I should use a parallel foreach, as in std.parallelism.  (That one looks really plausible, but I'm not sure what the overhead is when I'm doing more than a simple multiplication.  Still, the example *looks* quite promising for this application.)  One of the advantages of std.parallelism's foreach is that I can code the application serially as normal, and then add the parallelism later.  I wasn't intending to have deterministic interaction between the pieces anyway.  (But I am intending that some of the cells will send messages to other cells, something on the order of cells[i].bumpActivity(); being issued by a cell other than cell i.)
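For what it's worth, the std.parallelism pattern I have in mind looks like this (Cell and its activity field are just placeholders for my actual structs):

```d
import std.parallelism : parallel;

// Placeholder for the real cell struct.
struct Cell { double activity = 0; }

void main()
{
    auto cells = new Cell[](1000);

    // Chunks of iterations run on worker threads from the default
    // task pool.  This is safe only because iteration i touches
    // nothing but cells[i].
    foreach (i, ref cell; parallel(cells))
        cell.activity += 1.0;

    foreach (ref cell; cells)
        assert(cell.activity == 1.0);
}
```

The catch is exactly the cross-cell messaging: a cells[i].bumpActivity() issued from some other iteration would be a concurrent write and would need synchronization or a per-cell message queue drained between passes.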
October 12, 2012
On Thu, 2012-10-11 at 20:30 -0700, Charles Hixson wrote: […]
> I'm not clear on what Fibers are.  From Ruby they seem to mean co-routines, and that doesn't have much advantage.  But it also seems as
[…]

I think the emerging consensus is that threads allow for pre-emptive scheduling whereas fibres do not.  So yes, as in Ruby, fibres are collaborative co-routines.  Stackless Python is similar.
-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


October 14, 2012
On Oct 12, 2012, at 2:29 AM, Russel Winder <russel@winder.org.uk> wrote:

> On Thu, 2012-10-11 at 20:30 -0700, Charles Hixson wrote: […]
>> I'm not clear on what Fibers are.  From Ruby they seem to mean co-routines, and that doesn't have much advantage.  But it also seems as
> […]
> 
> I think the emerging consensus is that threads allow for pre-emptive scheduling whereas fibres do not. So yes as in Ruby, fibres are collaborative co-routines. Stackless Python is similar.

Yep. If fibers were used in std.concurrency there would basically be an implicit yield in send and receive.
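Roughly like this sketch (fiberReceive and the plain-array mailbox are illustrative only, not the real std.concurrency internals):

```d
import core.thread : Fiber;

// If std.concurrency used fibers, receive() could poll the mailbox
// and yield cooperatively while it is empty, letting the scheduler
// run other fibers on the same kernel thread.
T fiberReceive(T)(ref T[] mailbox)
{
    while (mailbox.length == 0)
        Fiber.yield();              // the "implicit yield" in receive
    auto msg = mailbox[0];
    mailbox = mailbox[1 .. $];
    return msg;
}

void main()
{
    int[] mailbox;
    int got;

    auto consumer = new Fiber({ got = fiberReceive(mailbox); });
    consumer.call();                // mailbox empty: the fiber yields
    assert(consumer.state == Fiber.State.HOLD);

    mailbox ~= 42;                  // a send() would do this
    consumer.call();                // fiber resumes, takes the message
    assert(got == 42);
}
```

send() would do the symmetric thing: enqueue, then yield if the receiver's fiber should get a chance to run.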
October 14, 2012
On 14-Oct-12 20:19, Sean Kelly wrote:
> On Oct 12, 2012, at 2:29 AM, Russel Winder <russel@winder.org.uk> wrote:
>
>> On Thu, 2012-10-11 at 20:30 -0700, Charles Hixson wrote:
>> […]
>>> I'm not clear on what Fibers are.  From Ruby they seem to mean
>>> co-routines, and that doesn't have much advantage.  But it also seems as
>> […]
>>
>> I think the emerging consensus is that threads allow for pre-emptive
>> scheduling whereas fibres do not. So yes as in Ruby, fibres are
>> collaborative co-routines. Stackless Python is similar.
>
> Yep. If fibers were used in std.concurrency there would basically be an implicit yield in send and receive.
>

Makes me wonder how this will work with blocking I/O and the like.  If all of the (few) threads get blocked this way, that is going to stall all of the (thousands of) fibers.

-- 
Dmitry Olshansky
October 15, 2012
On Oct 14, 2012, at 9:59 AM, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:

> On 14-Oct-12 20:19, Sean Kelly wrote:
>> On Oct 12, 2012, at 2:29 AM, Russel Winder <russel@winder.org.uk> wrote:
>> 
>>> On Thu, 2012-10-11 at 20:30 -0700, Charles Hixson wrote: […]
>>>> I'm not clear on what Fibers are.  From Ruby they seem to mean co-routines, and that doesn't have much advantage.  But it also seems as
>>> […]
>>> 
>>> I think the emerging consensus is that threads allow for pre-emptive scheduling whereas fibres do not. So yes as in Ruby, fibres are collaborative co-routines. Stackless Python is similar.
>> 
>> Yep. If fibers were used in std.concurrency there would basically be an implicit yield in send and receive.
> 
> Makes me wonder how this will work with blocking I/O and the like.  If all of the (few) threads get blocked this way, that is going to stall all of the (thousands of) fibers.

Ideally, IO would be nonblocking with a yield there too, at least if the operation would block.
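Something along these lines, as a Posix-only sketch (fiberRead is a made-up name, not a proposed Phobos API):

```d
import core.thread : Fiber;
import core.stdc.errno : errno, EAGAIN, EWOULDBLOCK;
import core.sys.posix.unistd : read, write, pipe, close;
import core.sys.posix.fcntl : fcntl, F_SETFL, O_NONBLOCK;

// When a nonblocking descriptor has no data, yield the current fiber
// instead of blocking the kernel thread, and retry on the next resume.
ptrdiff_t fiberRead(int fd, void[] buf)
{
    for (;;)
    {
        auto n = read(fd, buf.ptr, buf.length);
        if (n >= 0)
            return n;                  // got data (or EOF)
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            Fiber.yield();             // let other fibers run
        else
            return n;                  // real error: surface it
    }
}

void main()
{
    int[2] fds;
    pipe(fds);
    fcntl(fds[0], F_SETFL, O_NONBLOCK);

    ubyte[16] buf;
    long got = -1;
    auto reader = new Fiber({ got = fiberRead(fds[0], buf[]); });

    reader.call();                     // no data yet: the fiber yields
    assert(reader.state == Fiber.State.HOLD);

    write(fds[1], "ping".ptr, 4);      // data arrives
    reader.call();                     // fiber resumes and completes
    assert(got == 4);
    close(fds[0]);
    close(fds[1]);
}
```

A real scheduler would park the fiber on an event loop (epoll/kqueue/IOCP) and resume it when the descriptor becomes readable, rather than busy-retrying like this sketch.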
October 15, 2012
On 15-Oct-12 05:58, Sean Kelly wrote:
> On Oct 14, 2012, at 9:59 AM, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:
>
>> On 14-Oct-12 20:19, Sean Kelly wrote:
>>> On Oct 12, 2012, at 2:29 AM, Russel Winder <russel@winder.org.uk> wrote:
>>>
>>>> On Thu, 2012-10-11 at 20:30 -0700, Charles Hixson wrote:
>>>> […]
>>>>> I'm not clear on what Fibers are.  From Ruby they seem to mean
>>>>> co-routines, and that doesn't have much advantage.  But it also seems as
>>>> […]
>>>>
>>>> I think the emerging consensus is that threads allow for pre-emptive
>>>> scheduling whereas fibres do not. So yes as in Ruby, fibres are
>>>> collaborative co-routines. Stackless Python is similar.
>>>
>>> Yep. If fibers were used in std.concurrency there would basically be an implicit yield in send and receive.
>>
>> Makes me wonder how this will work with blocking I/O and the like.  If all of the (few) threads get blocked this way, that is going to stall all of the (thousands of) fibers.
>
> Ideally, IO would be nonblocking with a yield there too, at least if the operation would block.

I'm wondering if it will be possible to (sort of) intercept all common I/O calls in 3rd party C libraries.  Something like using our own "wrapper" on top of the C runtime, but that still leaves BSD sockets and a ton of WinAPI/Posix primitives to take care of.

-- 
Dmitry Olshansky
October 15, 2012
On Oct 15, 2012, at 9:35 AM, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:
> 
> I'm wondering if it will be possible to (sort of) intercept all common I/O calls in 3rd party C libraries.  Something like using our own "wrapper" on top of the C runtime, but that still leaves BSD sockets and a ton of WinAPI/Posix primitives to take care of.

It's possible, but I don't know that I want to inject our own behavior into what users think is a C system call.  I'd probably put the behavior into whatever networking API is added to Phobos, though.  Still not sure whether that should be opt-out, or how it would work.