March 08, 2014
On Saturday, 8 March 2014 at 16:01:00 UTC, Bienlein wrote:
> On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:
>
>> Just out of curiosity, what did you miss in vibe.d regarding fiber based scheduling?
>
> By the way is there a way to make use of vibe.d in something like a local mode? I mean some in-memory mode without going through TCP.

Pipes maybe?
March 12, 2014
On 03.03.2014 16:55, Bienlein wrote:
> On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:
>>
>> Just out of curiosity, what did you miss in vibe.d regarding fiber
>> based scheduling?
>
> Hi Sönke,
>
> I'm thinking of developing a little actor library on top of D's
> spawn/receive model for creating threads, which is already actor-like
> but on a level of global functions. I want to mold some thin class layer
> on top of it to have actors on class level. Vibe.d would be a good
> solution for distributed actors. But for a first step I want to have
> local actors. Actors that are in the same memory space don't need to
> communicate through sockets as in case of vibe.d.
>
> Regards, Bienlein

The vibe.core.concurrency module provides the same interface as std.concurrency (with some differences in the details). Once Sean's fiber additions to std.concurrency are ready, vibe.core.concurrency will be layered on top of (and eventually replaced by) it.
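
For purely local actors, the plain std.concurrency interface already gives you spawn/send/receive without any sockets, and the same pattern should carry over to vibe.core.concurrency's tasks. A minimal sketch (not vibe.d-specific):

import std.concurrency;
import std.stdio;

void worker()
{
    // block until the owner sends a string; everything stays in-process
    auto msg = receiveOnly!string();
    writeln("worker got: ", msg);
}

void main()
{
    auto tid = spawn(&worker);
    tid.send("hello");
}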

There is also vibe.stream.taskpipe, which offers a stream interface for passing data between tasks. This works for tasks in the same or in different threads.
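
Roughly like this (a sketch, assuming TaskPipe exposes the usual vibe.d Stream read/write/finalize methods; the variable names are just illustrative):

import vibe.core.core : runTask;
import vibe.stream.taskpipe : TaskPipe;

void passDataLocally()
{
    auto pipe = new TaskPipe;

    // producer task: writes raw bytes into the pipe and closes the write end
    runTask({
        pipe.write(cast(const(ubyte)[])"hello, actor");
        pipe.finalize();
    });

    // consumer task: reads the bytes back, no TCP involved
    ubyte[12] buf;
    pipe.read(buf);
}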
March 12, 2014
On 03.03.2014 22:58, Bienlein wrote:
> On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:
>
>> Just out of curiosity, what did you miss in vibe.d regarding fiber
>> based scheduling?
>
> There is something else I forgot to mention. One scenario I'm thinking
> of is having a large number of connections, like more than 100,000, that I
> want to listen on. This results in a situation with blocking I/O for all
> those connections. Fibers in D are more like continuations that are
> distributed over several kernel threads. The way Sean Kelly has
> implemented the FiberScheduler, a fiber is invoked when it receives an
> item, like data through the connection it serves, as in my scenario. At
> least this is the way I understood the implementation. So I can have
> like 100,000 connections simultaneously as in Go, without having to use Go
> (the Go language is too simple for my taste).

In vibe.d, there are basically two modes of fiber scheduling. The usual mode is purely driven by the event loop: once a task/fiber triggers a blocking operation, let's say a socket receive operation, it registers its handle for the corresponding event and calls an internal rawYield() function. Once the event fires, the fiber is resumed.
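
In pseudo-D, the pattern looks roughly like this (not the actual vibe.d internals; registerReadEvent and the socket members are made up for illustration):

ubyte[] receiveSome(Socket sock)
{
    while (!sock.dataAvailable)
    {
        // register this fiber's handle for the read event...
        registerReadEvent(sock, Task.getThis());
        // ...and suspend the fiber; the event loop resumes it when the event fires
        rawYield();
    }
    return sock.readAvailable();
}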

The other mode happens when yield() (in vibe.core.core) is called explicitly. In this case, tasks are inserted into a singly-linked list that is processed in FIFO order, in chunks alternated with calls to processEvents(), to ensure fair scheduling and to avoid starving event processing when tasks perform continuous computations with intermittent yield() calls.
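
So a long-running computation that wants to stay cooperative would sprinkle in yield() calls, e.g. (a minimal sketch):

import vibe.core.core : yield;

void crunchNumbers()
{
    foreach (i; 0 .. 1_000_000)
    {
        // ... one slice of work ...
        if (i % 10_000 == 0)
            yield(); // reinsert this task into the FIFO list and let
                     // other tasks and processEvents() run in between
    }
}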

So the first mode, AFAICS, works just like Sean's fiber scheduler. And at least on 64-bit systems, there is nothing preventing you from handling huge numbers of connections simultaneously. 32-bit systems can also handle a lot of connections with small fiber stack sizes (setTaskStackSize), but decently sized stacks will quickly eat up the available address space.
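
For example (a sketch; the concrete size is just illustrative):

import vibe.core.core : setTaskStackSize;

shared static this()
{
    // 16 KiB per fiber instead of the default; 100,000 fibers then need
    // roughly 1.5 GiB of address space, which still fits on 32-bit
    setTaskStackSize(16 * 1024);
}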