June 08, 2015
On Saturday, 6 June 2015 at 18:49:30 UTC, Shachar Shemesh wrote:
> Since we are talking about several tens of thousands of threads, each random fluctuation in the load resulted in the

Using an unlikely workload that the kernel has not been designed and optimized for is in general a bad idea, especially on a generic scheduler that has no knowledge of the nature of the workload and therefore is (or should be) designed to avoid worst-case starvation scenarios.
June 08, 2015
On Friday, 5 June 2015 at 18:25:26 UTC, Chris wrote:
> On Friday, 5 June 2015 at 17:28:39 UTC, Ola Fosheim Grøstad wrote:
>> On Friday, 5 June 2015 at 14:51:05 UTC, Chris wrote:
>>> I agree, but I dare doubt that a slight performance edge will make the difference. There are loads of factors (knowledge base, infrastructure, complacency, C++-Guruism, marketing etc.) why D is an underdog.
>>
>> But everybody loves the underdog when it catches up to the pack and beats the pack on the finish line. ;^)
>>
>> I now follow Pony because of this self-provided benchmark:
>>
>> http://ponylang.org/benchmarks_all.pdf
>>
>> They are communicating a focus for a domain, a good understanding of their area, and it makes me want to give it a spin even at this early stage where I obviously can't actually use it.
>>
>> I am not saying Pony is good, but it makes a good case for itself IMO.
>>
>>> no sugar, thanks." I know, as usual I simplify things and exaggerate! He he he. But programming languages are like everything else, just because something is good doesn't mean that people will buy it.
>>
>> Sure, but it is also important to make people take notice. People take notice of benchmark leaders. And too often benchmarks measure throughput while latency is just as important.
>>
>> End users don't notice peak throughput (which is measurable as a blip on the cloud server instance-count logs); they notice reduced latency. So to me latency is the most important aspect of a web service (+ programmer productivity).
>>
>> I don't find Go exciting, but they show concern for latency (concurrent GC etc). Communicating that concern is good, even before they reach whatever goals they have.
>>
>>> As regards compiler-based features, as soon as features are compiler-based, people will complain "Why is it built-in? That should be handled by a library! I want more freedom!" I know for sure.
>>
>> Heh, not if it is getting you an edge, but if it is a second-class-citizen addition. Yes, then I agree.
>>
>> Cheers!
>
> Thanks for showing me Pony. Languages like Nim and Pony keep popping up, which shows a) how important native compilation is and [...]

Which is why, after all those years, OpenJDK will eventually support AOT compilation to native code for Java 10, with some work being done in JEP 220[0], and .NET already does AOT native code on Windows Phone 8 (MDIL), with static compilation via the Visual C++ backend coming in .NET Native.

And Android also went native with the Dalvik rewrite.

The best approach, in any case, is to have a toolchain capable of both JIT and AOT compilation and to use each according to the deployment target.

[0] Which means Oracle finally accepted why almost all commercial JVM vendors offer such a feature. I read somewhere that the JIT-only approach was a kind of political issue at Sun.

April 16, 2016
Here is an interesting talk from Naughty Dog

http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

They move Fibers between threads.

A rough overview:

You create task A, which depends on task B. The task is submitted as a fiber and executed by a thread. When task A has to wait for task B to finish, you hold the fiber and put it into a queue; you also create an atomic counter that tracks its outstanding dependencies, and once the counter reaches 0 you know that all dependencies have finished.

Now you put task A into a queue and execute a different task. Once a thread completes a task, it looks into the queue and checks whether there is a task whose counter is 0, which means it can continue executing that task.

Then you move that fiber/task onto a free thread and continue executing the fiber.
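To make that concrete, here is a rough single-threaded D sketch of the counter idea (my own illustration, not code from the talk; the names WaitingFiber, readyQueue and waitList are made up, and the real engine spreads this over several worker threads with lock-free queues):

```d
import core.atomic : atomicLoad, atomicOp;
import core.thread : Fiber;
import std.stdio : writeln;

struct WaitingFiber
{
    Fiber fiber;          // the held fiber of the blocked task
    shared(int)* counter; // resume once this reaches 0
}

void main()
{
    Fiber[] readyQueue;      // fibers that can run right now
    WaitingFiber[] waitList; // fibers parked on a dependency counter

    shared int pending = 1;  // task A waits on one dependency (task B)

    // Task A: starts, then has to wait for task B, so it parks itself.
    auto taskA = new Fiber({
        writeln("A: started, waiting for B");
        waitList ~= WaitingFiber(Fiber.getThis(), &pending);
        Fiber.yield();       // hold the fiber until the counter hits 0
        writeln("A: dependency done, continuing");
    });

    // Task B: the dependency; signals completion by decrementing the counter.
    auto taskB = new Fiber({
        writeln("B: doing its work");
        atomicOp!"-="(pending, 1);
    });

    readyQueue ~= taskA;
    readyQueue ~= taskB;

    // Toy scheduler loop: run whatever is ready, and move parked fibers
    // back to the ready queue once their counters reach zero.
    while (readyQueue.length || waitList.length)
    {
        if (readyQueue.length)
        {
            auto f = readyQueue[0];
            readyQueue = readyQueue[1 .. $];
            f.call();
        }

        WaitingFiber[] stillWaiting;
        foreach (w; waitList)
        {
            if (atomicLoad(*w.counter) == 0)
                readyQueue ~= w.fiber;   // dependency satisfied, run it again
            else
                stillWaiting ~= w;
        }
        waitList = stillWaiting;
    }
}
```

The important bit is that the blocked fiber is only parked, never destroyed, so resuming it later is just another call().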

What is the current state of fibers in D? I have asked this question on SO https://stackoverflow.com/questions/36663720/how-to-pass-a-fiber-to-a-thread

April 16, 2016
On 04/16/2016 03:45 PM, maik klein wrote:
> Here is an interesting talk from Naughty Dog
> 
> http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine
> 
> They move Fibers between threads.
> 
> A rough overview:
> 
> You create task A, which depends on task B. The task is submitted as a fiber and executed by a thread. When task A has to wait for task B to finish, you hold the fiber and put it into a queue; you also create an atomic counter that tracks its outstanding dependencies, and once the counter reaches 0 you know that all dependencies have finished.
> 
> Now you put task A into a queue and execute a different task. Once a thread completes a task, it looks into the queue and checks whether there is a task whose counter is 0, which means it can continue executing that task.
> 
> Then you move that fiber/task onto a free thread and continue executing the fiber.
> 
> What is the current state of fibers in D? I have asked this question on
> SO
> https://stackoverflow.com/questions/36663720/how-to-pass-a-fiber-to-a-thread

Such a design is neither needed for good concurrency nor actually helpful. Under heavy load (and that is the only case worth optimizing for) there will be so many fibers that thread-local fiber queues will always have enough work to keep the workers busy.

At the same time, moving fibers between threads hurts raw performance: it trashes the cache and makes it impossible to share thread-local storage between fibers on the same worker thread.

Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.
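To illustrate the thread-local storage point with a trivial sketch (my own example, not from any real scheduler): two fibers executed on the same worker thread share that thread's TLS, which is exactly what silently breaks if a fiber migrates to another thread mid-run.

```d
import core.thread : Fiber;
import std.stdio : writeln;

int requestsHandled;   // module-level variables are thread-local in D

void main()
{
    auto f1 = new Fiber({ ++requestsHandled; });
    auto f2 = new Fiber({ ++requestsHandled; });

    f1.call();
    f2.call();

    // Both fibers ran on this thread, so they updated the same
    // thread-local counter; migrating one of them to another thread
    // mid-run would silently split this state across threads.
    writeln(requestsHandled);  // prints 2
}
```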
January 08, 2017
> Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.

Could you explain the difference between fibers and tasks? I have read a lot, but I still can't understand the difference.

January 08, 2017
"The type of concurrency used when logical threads are created is determined by the Scheduler selected at initialization time. The default behavior is currently to create a new kernel thread per call to spawn, but other schedulers are available that multiplex fibers across the main thread or use some combination of the two approaches" (с) dlang docs

Am I right in understanding that `concurrency` is just a wrapper that hides the implementation of tasks and fibers? So a programmer can work with threads as with fibers and vice versa?

If yes, does that mean that spawns are scheduled not by the system scheduler but by a DRuntime scheduler (or whatever it should be called?), and that they all run in user space?
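For reference, here is a minimal sketch of what I understand that docs passage to mean, using std.concurrency's FiberScheduler so that spawn multiplexes fibers in user space instead of creating kernel threads (my own attempt, so it may miss something):

```d
import std.concurrency;
import std.stdio : writeln;

void worker()
{
    // With FiberScheduler active, this "logical thread" is a fiber.
    receive((int i) { writeln("worker got ", i); });
}

void main()
{
    // Replace the default thread-per-spawn behaviour with the
    // fiber-multiplexing scheduler from std.concurrency.
    scheduler = new FiberScheduler;
    scheduler.start(delegate()
    {
        auto tid = spawn(&worker);
        tid.send(42);
    });
}
```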
January 08, 2017
On Sun, 08 Jan 2017 09:18:19 +0000, Suliman wrote:

>> Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.
> 
> Could you explain the difference between fibers and tasks? I have read a lot, but I still can't understand the difference.

A task is a unit of work to be scheduled.

A fiber is a concurrency mechanism supporting multiple independent stacks, like threads, that you can switch between. Unlike threads, a fiber continues to execute until it voluntarily yields execution.

You might have a task: send a registration message to a user who just registered. That gets scheduled onto a fiber. Your email sending stuff is vibe.d all the way down, and also you have to make some database queries. The IO involved causes the fiber that the task was scheduled on to yield execution several times. Finally, the task finishes, and the fiber can be destroyed -- or reused for another task.
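Roughly, in vibe.d-flavoured code it might look like this sketch (fetchUserEmail and sendRegistrationMail are hypothetical stand-ins for the real database and mail calls; only runTask, sleep, runEventLoop and exitEventLoop are actual vibe.d functions):

```d
import vibe.core.core : runTask, sleep, runEventLoop, exitEventLoop;
import core.time : msecs;
import std.stdio : writeln;

string fetchUserEmail(int userId)
{
    sleep(10.msecs);   // stands in for a database query; the fiber yields here
    return "user@example.com";
}

void sendRegistrationMail(string address)
{
    sleep(10.msecs);   // stands in for SMTP I/O; the fiber yields again
    writeln("mail sent to ", address);
}

void main()
{
    // The task: one unit of work, scheduled onto a fiber by runTask.
    runTask({
        auto addr = fetchUserEmail(42);
        sendRegistrationMail(addr);
        // Task done; the fiber can now be recycled for another task.
        exitEventLoop();        // demo only: stop the event loop when done
    });
    runEventLoop();             // drives all fibers until exitEventLoop()
}
```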
January 08, 2017
On Sunday, 8 January 2017 at 09:18:19 UTC, Suliman wrote:
>> Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.
>
> Could you explain the difference between fibers and tasks? I have read a lot, but I still can't understand the difference.

A fiber is a context-switching primitive very similar to a thread. It differs from a thread in that it is completely invisible to the operating system and only switches context when explicitly told to do so in code. But it can still execute arbitrary code. When we talk about fibers in D, we usually mean https://dlang.org/library/core/thread/fiber.html

A task is an abstraction over some specific piece of work to do. The simplest task one can think of is just a function to execute. Other details may vary a lot: different languages and libraries implement tasks differently, and the D standard library doesn't define it at all. The most widespread task definition in D comes from vibe.d: http://vibed.org/api/vibe.core.task/Task

To summarize: a fiber defines HOW to execute code but doesn't care which code it executes. A task defines WHAT code to execute but normally makes no assumptions about how exactly it gets run.
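A minimal sketch of that distinction using core.thread.Fiber (my own illustration): the nested function is the task (WHAT runs), the Fiber object is the execution context (HOW it runs, and when it suspends and resumes).

```d
import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    // The "task": an ordinary piece of work, here just a nested function.
    void work()
    {
        writeln("first half of the work");
        Fiber.yield();             // suspend, keeping the stack alive
        writeln("second half of the work");
    }

    // The "fiber": an execution context the task happens to run on.
    auto fib = new Fiber(&work);

    fib.call();                    // runs until the yield
    writeln("caller regained control in between");
    fib.call();                    // resumes right after the yield
}
```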
January 08, 2017
On Sun, 2017-01-08 at 09:18 +0000, Suliman via Digitalmars-d wrote:
> > Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.
> 
> Could you explain the difference between fibers and tasks? I have read a lot, but I still can't understand the difference.

A fibre is what a thread used to be before kernels supported threads directly. Having provided that historical backdrop, which seems sadly missing from the entire Web, the current status is roughly described by:

https://en.wikipedia.org/wiki/Fiber_(computer_science)

http://stackoverflow.com/questions/796217/what-is-the-difference-between-a-thread-and-a-fiber

Tasks are things that can be scheduled using threads or fibres. It's all down to thread pools and kernel processes, which probably doesn't help per se, but:

http://docs.paralleluniverse.co/quasar/

Quasar, GPars, std.parallelism, and Java Fork/Join all harness these ideas.

In the end, as a programmer, you should be using actors, agents, dataflow, data parallelism, or some similar high-level model. Anything lower level and, to be honest, you are doing it wrong.
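For example, the data-parallel route in D via std.parallelism looks roughly like this (a minimal sketch):

```d
import std.parallelism : parallel, taskPool;
import std.stdio : writeln;

void main()
{
    auto data = new double[](1000);
    foreach (i, ref x; data)
        x = i;

    // Data parallelism: the loop body is spread over a worker-thread pool;
    // no explicit threads or fibers appear in user code.
    foreach (ref x; parallel(data))
        x = x * x;

    // A parallel reduction over the same pool.
    auto total = taskPool.reduce!"a + b"(data);
    writeln(total);
}
```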


-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

January 23, 2017
On Sunday, 8 January 2017 at 09:18:19 UTC, Suliman wrote:
>> Simply picking a worker thread + worker fiber when a task is assigned and sticking to it until it finishes should work well enough. It is also important to note, though, that a "fiber" is not the same as a "task". The former is an execution context primitive, the latter a scheduling abstraction. In fact, heavy-load systems are likely to have many more tasks than fibers at certain spike points.
>
> Could you explain the difference between fibers and tasks? I have read a lot, but I still can't understand the difference.

The meaning of the word "task" is contextual:

https://en.wikipedia.org/wiki/Task_(computing)

So, yes, it is a confusing term that one should avoid using without defining it.

Ola.