Thread overview
On Concurrency
Apr 18, 2014  Nordlöw
Apr 18, 2014  Nordlöw
Apr 21, 2014  Etienne Cimon
Apr 24, 2014  Bienlein
Apr 25, 2014  Kagamin
Apr 25, 2014  Russel Winder
April 18, 2014
Could someone please give some references to thorough explanations of these latest concurrency mechanisms:

- Go: Goroutines
- Coroutines (Boost):
  - https://en.wikipedia.org/wiki/Coroutine
  - http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html
- D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html
- D: vibe.d

and how they relate to the following questions:

1. Is D's Fiber the same as a coroutine? If not, how do they differ?

2. Typical use cases where Fibers are superior to threads/coroutines?

3. What mechanism does (or should) D's built-in thread pool ideally use to package and manage computations?

4. I've read that vibe.d has a more lightweight mechanism than what core.thread.Fiber provides. Could someone explain the difference? When will this be introduced, and will it be a breaking change?

5. And finally how does data sharing/immutability relate to the above questions?
April 18, 2014
Correction: The references I gave _are_ thorough on the theory, so I'm satisfied with them for now. I'm still interested, however, in the D-specific questions I asked.
April 21, 2014
On 2014-04-18 13:20, "Nordlöw" wrote:
> Could someone please give some references to thorough explanations of
> these latest concurrency mechanisms:
>
> - Go: Goroutines
> - Coroutines (Boost):
>    - https://en.wikipedia.org/wiki/Coroutine
>    -
> http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html
>
> - D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html
> - D: vibe.d
>
> and how they relate to the following questions:
>
> 1. Is D's Fiber the same as a coroutine? If not, how do they differ?
>
> 2. Typical use cases where Fibers are superior to threads/coroutines?
>
> 3. What mechanism does (or should) D's built-in thread pool ideally use to
> package and manage computations?
>
> 4. I've read that vibe.d has a more lightweight mechanism than what
> core.thread.Fiber provides. Could someone explain the difference?
> When will this be introduced, and will it be a breaking change?
>
> 5. And finally how does data sharing/immutability relate to the above
> questions?

I'll admit that I'm not the expert you may be expecting, but I can answer questions 1, 2, and 5 to some extent. Coroutines, fibers, threads, multi-threading, and all of this task-management "stuff" form a very complex field, and most kernels actually rely on these techniques to do their magic. The core idea is keeping stack frames and their contexts around. Working with it felt more complex to me than metaprogramming, but I've been reading up on it and getting the hang of it over the last seven months.

Coroutines give you control over exactly what you'd like to keep around once the "yield" returns. With Boost, you make a callback taking a "boost::asio::yield_context" (or something of that sort), and it will contain exactly what you're expecting; but you receive it in another function that takes it as a parameter. That makes it asynchronous, but execution can't simply resume within the same function, because it relies on a callback function, much like JavaScript.

D's fibers are much simplified (we can argue whether that makes them more or less powerful). You launch one like a thread (`Fiber fib = new Fiber(&someDelegate)`) and move from fiber to fiber with `fib.call()` and `Fiber.yield()`. A `Fiber.yield()` inside a fiber's function will stop in the middle of that function's work, if you want, and return to the caller as if the function had ended; but you can rest assured that once the fiber instance is called again, it will resume with all its stack state restored. This is made possible by some very low-level assembly magic. Look through the library; it's really impressive, and whoever wrote it must be some kind of wizard.
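To make that call/yield dance concrete, here is a minimal, self-contained sketch using `core.thread.Fiber` (the variable names are just illustrative):

```d
import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    // The fiber's delegate runs on its own stack and can suspend mid-function.
    auto fib = new Fiber({
        writeln("fiber: before yield");
        Fiber.yield();            // suspend; control returns to the caller
        writeln("fiber: after yield");
    });

    fib.call();                   // runs the delegate until the first yield
    writeln("main: fiber yielded");
    fib.call();                   // resumes right after the yield, stack intact
}
```

The second `fib.call()` picks up exactly where the `yield` left off, with all locals preserved, which is the "stack info restored" behaviour described above.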

Vibe.d's fibers are built right on top of this core.thread.Fiber (explained above), with the difference that they are packed with more power by sitting on a kernel-backed event loop, spinning in epoll or the Windows message queue to resume them (the libevent driver is vibe.d's best-developed event loop for this). So when a new "Task" is started (it holds a Fiber as a private member), you can yield it with yield() until the kernel wakes it up again via a timer, socket event, signal, etc., and it resumes right after the yield() call. This is what lets vibe.d have async I/O while remaining procedural, without shuffling mutexes: the fiber is yielded every time it needs to wait on a network socket, and woken again when packets are received, until the expected buffer length is met!
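A rough sketch of what that looks like from user code, assuming vibe.d's vibe.core.core API (`runTask`/`sleep`/`runEventLoop`; the exact names have varied across vibe.d versions, so treat this as an assumption rather than a copy-paste recipe):

```d
import vibe.core.core : runEventLoop, runTask, sleep;
import core.time : msecs;
import std.stdio : writeln;

void main()
{
    runTask({
        writeln("task: started");
        // sleep() yields the task's underlying fiber; a kernel timer
        // event later wakes it up through the event loop.
        sleep(50.msecs);
        writeln("task: resumed by the event loop");
    });

    // The event loop (epoll/kqueue/Windows messages under the hood)
    // schedules and resumes all yielded tasks.
    runEventLoop();
}
```

The same pattern applies to socket reads: a blocking-looking call yields the fiber and the event loop resumes it when data arrives.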

I believe this answer is very mediocre, and you could go on reading about everything I mentioned for months; it's a very wide subject. You can have "task message queues" and "task concurrency" with "task semaphores": it's like multi-threading in a single thread!
April 21, 2014
On Friday, 18 April 2014 at 17:20:06 UTC, Nordlöw wrote:
> Could someone please give some references to thorough explanations of these latest concurrency mechanisms

Coroutines are nothing more than explicit stack switching. Goroutines, fibers, etc. are abstractions that may be implemented using coroutines.

Threads are new execution contexts with their own register sets (and stack/stack pointer) that run in parallel (coroutines don't).

Processes have their own resource space (memory, file handles, etc.).

Supervisor mode is a state in which a core has access to all memory and hardware registers, and the ability to touch the configuration of other cores. It is typically reserved for the OS.

> 5. And finally how does data sharing/immutability relate to the above questions?

The key difference is the granularity/parallelism of the concurrency. With threads you have to consider locking mechanisms/memory barriers. With coroutines you don't, provided you switch context only when the data set is in a consistent state.

One key difference is that coroutines won't make your programs run faster. They are a modelling mechanism that can simplify programs where you would otherwise have to implement a state machine.
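As a concrete illustration of the state-machine point: D's std.concurrency ships a fiber-backed `Generator` that turns a plain loop into a lazy range; without a coroutine, the same thing would need hand-written resume state between calls.

```d
import std.concurrency : Generator, yield;
import std.stdio : writeln;

void main()
{
    // Each yield(i) suspends the fiber; iterating the range resumes it.
    // Without a coroutine, this loop would have to be unrolled into a
    // state machine that stores `i` between calls.
    auto naturals = new Generator!int({
        foreach (i; 0 .. 3)
            yield(i);
    });

    foreach (n; naturals)
        writeln(n);    // prints 0, 1, 2
}
```

Note that everything here runs on a single thread; the generator only interleaves control flow, it does not add parallelism.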
April 24, 2014
> One key difference is that coroutines won't make your programs run faster. They are a modelling mechanism that can simplify programs where you would otherwise have to implement a state machine.

This is also my impression when I look at this code (see http://www.99-bottles-of-beer.net/language-d-2547.html), which implements 99 Bottles of Beer in D with fibers. What seems to happen is an alternating handover of the CPU.

But when I run the code, all 4 cores of my machine are under load, and it looks as if the runtime were somehow able to make things run in parallel. Now I'm really confused ...

April 25, 2014
Fibers are more lightweight; they're not kernel objects. Threads are (usually) scheduled by the kernel. Fibers are better if you can get better resource usage with manual scheduling - fewer context switches - or if you don't want to consume the resources that threads need.
April 25, 2014
On Fri, 2014-04-25 at 17:10 +0000, Kagamin via Digitalmars-d-learn wrote:
> Fibers are more lightweight; they're not kernel objects. Threads are (usually) scheduled by the kernel. Fibers are better if you can get better resource usage with manual scheduling - fewer context switches - or if you don't want to consume the resources that threads need.

Or to put it another way, fibres (!) are what threads were before the hardware and kernel folks changed the game.
-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder