October 28, 2014
On Monday, 27 October 2014 at 22:59:50 UTC, Brad Anderson wrote:
>
> Again, just out of curiosity, have you ever looked at Windows user-mode scheduling or Google's user-level threads[1][2] (under 200ns context-switch times)? I first heard of them from a post on the Rust forum[3] which suggested M:N may be a dead end. I believe Rust decided to try to make sure either 1:1 or M:N could be used but I don't actively follow Rust's development so I may be mistaken.

No, but I will.  Round-robin scheduling is turning out to be considerably better than using all kernel threads in some cases and far, far worse in others.  In any case where a large percentage of "threads" are waiting to be notified, round robin wastes a lot of time.  Fixing this means treating "just yield" threads, "wait forever until notified" threads, and "wait N seconds unless notified" threads all differently in terms of the CPU time devoted to checking their state.  Something like libevent would help here, but I'll see what I can do.  I suspect WaitForMultipleObjects or the like will be needed on Windows, etc.  Thanks for the references.
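
Roughly the shape I have in mind (just a sketch with made-up names, not what's in the code today):

import core.thread : Fiber;
import core.time : MonoTime;

enum WaitKind { yielded, notified, timedWait }

struct ScheduledFiber
{
    Fiber fiber;
    WaitKind kind;
    MonoTime deadline;   // only meaningful for timedWait
    bool signalled;      // set by whoever notifies this fiber
}

// One pass over the fiber list: only resume fibers that are actually ready,
// so "wait forever" fibers cost nothing until someone signals them.
void runOnce(ScheduledFiber[] fibers)
{
    auto now = MonoTime.currTime;
    foreach (ref f; fibers)
    {
        final switch (f.kind)
        {
            case WaitKind.yielded:
                f.fiber.call();     // always gets a time slice
                break;
            case WaitKind.notified:
                if (f.signalled) { f.signalled = false; f.fiber.call(); }
                break;
            case WaitKind.timedWait:
                if (f.signalled || now >= f.deadline)
                {
                    f.signalled = false;
                    f.fiber.call();
                }
                break;
        }
    }
}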
October 28, 2014
On Tuesday, 28 October 2014 at 07:59:32 UTC, Martin Nowak wrote:
> On Monday, 27 October 2014 at 16:32:25 UTC, Sean Kelly wrote:
> That's the reason why the await adapter is so powerful.
> It should be possible to await a promise (future) to let the scheduler know that it should resume the Fiber only after the promise (future) has been set.

What's in the guts of the await adapter is the important part.  I already have that from a functional standpoint within the Scheduler, but the thread is basically polling state, which is terribly inefficient.
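
To illustrate what I mean by polling, the current shape is roughly this (a simplified sketch, not the actual Scheduler code):

import core.thread : Fiber;
import core.atomic : atomicLoad;

shared bool promiseSet;   // stand-in for "the promise/future was set"

void awaitByPolling()
{
    // The waiting fiber keeps re-checking the flag and yielding,
    // burning CPU even when nothing has changed.
    while (!atomicLoad(promiseSet))
        Fiber.yield();
    // ... continue with the promise's value ...
}

What the await adapter should buy us is the opposite: the fiber gets parked, and setting the promise is what hands it back to the scheduler, so nothing spins in the meantime.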
October 28, 2014
On Tuesday, 28 October 2014 at 08:02:23 UTC, Martin Nowak wrote:
> On Monday, 27 October 2014 at 21:43:47 UTC, Sean Kelly wrote:
>> Yep.  Every logical thread is a Fiber executed in a round-robin manner by a pool of kernel threads.  Pooled threads are spun up on demand (to a set upper limit) and terminate when there are no fibers waiting to execute.  It should make for a good "millions of threads" baseline scheduler.
>
> Will you reuse std.parallelism's task scheduler for that?
> I always thought that std.parallelism and Fibers should work together, but it wasn't easily possible to adapt Fibers to Tasks.

This wasn't really a natural fit for std.parallelism.  There are very few lines of code dedicated to thread management anyway, though.  The code as-is isn't much bigger than FiberScheduler.  The complicated bit will be making scheduling efficient, which I've decided has to happen for MultiScheduler to actually be worth using.  It isn't as much of a proof of concept as FiberScheduler is.
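
For reference, this is roughly how a scheduler gets plugged into std.concurrency today with FiberScheduler; MultiScheduler would presumably be installed the same way (the worker code below is just an example):

import std.concurrency;

void worker()
{
    receive((int i) { /* runs in a fiber rather than a kernel thread */ });
}

void main()
{
    scheduler = new FiberScheduler;   // swap in the scheduler used by spawn/receive
    scheduler.start({
        auto tid = spawn(&worker);
        tid.send(42);
    });
}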
October 28, 2014
On 10/27/14 9:32 AM, Sean Kelly wrote:
> The real tricky part, which is something that even Go doesn't address as
> far as I know, is what to do about third-party APIs that block.  The
> easiest way around this is to launch threads that deal with these APIs
> in actual kernel threads instead of fibers, or try to make the scheduler
> smart enough to recognize that blocking is occurring (or more generally,
> that a given logical thread isn't playing nice) and move that fiber into
> a dedicated kernel thread automatically.  This latter approach seems
> entirely possible but will likely mean kernel calls to gather statistics
> regarding how long a given thread executes before yielding, etc.

I'm not sure, but as far as I understand it, this one issue forces Go code to have a strong network effect (it must call into Go code designed especially for cooperative threading). That forces a lot of rewriting of existing code.

Andrei
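
For concreteness, the "dedicated kernel thread" escape hatch Sean describes could look roughly like this (a sketch only; the helper name is made up and error handling is omitted):

import core.thread : Fiber, Thread;
import core.atomic : atomicLoad, atomicStore;

// Run a blocking third-party call on a real kernel thread while the calling
// fiber keeps yielding, so the other fibers on this thread stay live.
int callBlockingFromFiber(int delegate() blockingCall)
{
    shared bool done;
    int result;

    auto worker = new Thread({
        result = blockingCall();   // may block for an arbitrarily long time
        atomicStore(done, true);
    });
    worker.start();

    while (!atomicLoad(done))
        Fiber.yield();             // let the scheduler run other fibers

    worker.join();
    return result;
}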

October 28, 2014
On Tuesday, 28 October 2014 at 17:05:13 UTC, Andrei Alexandrescu wrote:
> I'm not sure, but as far as I understand it, this one issue forces Go code to have a strong network effect (it must call into Go code designed especially for cooperative threading). That forces a lot of rewriting of existing code.
>
Yes, these things only work with coroutine-aware functions, because they need to yield execution back to the scheduler when some function blocks.
This is also true for our Fibers, and it's the reason why vibe.d and libasync implement their own Socket, File and Mutex primitives.
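
As a sketch of what such a fiber-aware primitive looks like (assuming all fibers run on a single kernel thread, as with FiberScheduler; a real implementation would park the fiber and have set() notify the scheduler rather than spin-yielding):

import core.thread : Fiber;

struct FiberEvent
{
    private bool triggered;

    void wait()
    {
        // A normal blocking wait here would park the whole kernel thread
        // and starve every other fiber multiplexed onto it.
        while (!triggered)
            Fiber.yield();      // hand control back to the scheduler
    }

    void set() { triggered = true; }
}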

The await proposal mentioned in the other thread (http://forum.dlang.org/post/izosaywbnlxnbzyhjbnu@forum.dlang.org) solves this problem by using generic constructs (based on Promises and Awaitable adapters).

It should be possible to define a notion of Resumable that is compatible with Fibers, Stackless Resumable Functions and even Callbacks (.then(doThis)).

So a scheduler would only need to know that it should resume a Resumable once the associated Promise finishes. How the promise is computed is irrelevant.
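
Something along these lines (just a sketch of the idea, none of this exists yet):

import core.thread : Fiber;

// The scheduler only ever sees this hook.
interface Resumable
{
    void resume();
}

// A fiber is Resumable by resuming it via Fiber.call().
class FiberResumable : Resumable
{
    private Fiber f;
    this(Fiber f) { this.f = f; }
    void resume() { f.call(); }
}

// A plain callback is Resumable too (the .then(doThis) case).
class CallbackResumable : Resumable
{
    private void delegate() dg;
    this(void delegate() dg) { this.dg = dg; }
    void resume() { dg(); }
}

// The promise wakes whatever awaited it, without caring what kind it is
// or how its value was computed.
class Promise(T)
{
    private T value;
    private bool fulfilled;
    private Resumable waiter;

    void await(Resumable r)
    {
        waiter = r;
        if (fulfilled)
            r.resume();
    }

    void fulfill(T v)
    {
        value = v;
        fulfilled = true;
        if (waiter !is null)
            waiter.resume();
    }

    T get() { return value; }
}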