March 27, 2015
On 27.03.2015 at 17:06, Dicebot wrote:
> On Friday, 27 March 2015 at 15:28:31 UTC, Ola Fosheim Grøstad wrote:
>> No... E.g.:
>>
>> On the same thread:
>> 1. fiber A receives request and queries DB (async)
>> 2. fiber B computes for 1 second
>> 3. fiber A sends response.
>>
>> Latency: 1 second even if all the other threads are free.
>
> This is a problem of having a blocking 1-second computation in the same
> fiber pool as the request handlers -> broken application design. Hiding
> that issue by moving fibers between threads just makes things worse.

Exactly. The problem remains even when fibers are moved around, because you can just as well end up with the same situation on all of the threads at the same time. It always makes sense to have dedicated threads for lengthy computations. Apart from that, long computations can call yield() every now and then to avoid this kind of issue in the first place.
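Roughly like this (a minimal sketch using druntime's core.thread.Fiber directly; vibe.d's yield() behaves analogously, and processChunk is just a stand-in for the actual work):

import core.thread : Fiber;

// Stand-in for one slice of the long-running computation.
void processChunk(int i) { /* ... expensive work ... */ }

void longComputation()
{
    foreach (i; 0 .. 1_000)
    {
        processChunk(i);
        // Hand control back after each slice so that request-handling
        // fibers on this thread get a chance to run in between.
        Fiber.yield();
    }
}

void main()
{
    auto f = new Fiber(&longComputation);
    while (f.state != Fiber.State.TERM)
        f.call(); // a real scheduler would interleave other fibers here
}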
March 27, 2015
On 27.03.2015 at 17:11, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Friday, 27 March 2015 at 16:06:55 UTC, Dicebot wrote:
>> On Friday, 27 March 2015 at 15:28:31 UTC, Ola Fosheim Grøstad wrote:
>>> No... E.g.:
>>>
>>> On the same thread:
>>> 1. fiber A receives request and queries DB (async)
>>> 2. fiber B computes for 1 second
>>> 3. fiber A sends response.
>>>
>>> Latency: 1 second even if all the other threads are free.
>>
>> This is a problem of having a blocking 1-second computation in the same
>> fiber pool as the request handlers -> broken application design. Hiding
>> that issue by moving fibers between threads just makes things worse.
>
> Not a broken design. If I have to run multiple servers just to handle an
> image upload or generate a PDF, then you are driving up the cost of the
> project, and developers would be better off with a different platform?
>
> You can create more complicated setups where multiple 200ms computations
> cause the same latency even when the CPU is 90% idle. This is simply not
> good enough; if fibers carry this cost, then it is better to just use an
> event-driven design.

So what happens if 10 requests come in at the same time? Does moving things around still help you? No.

BTW, why would an event-driven design be any better? You'd have exactly the same issue.
March 27, 2015
On Friday, 27 March 2015 at 16:09:08 UTC, Chris wrote:
> It need not be new, it needs to be good. That's all. I don't understand this obsession people have with new things, as if they were automatically good only because they are new. Why not try square wheels? Uh, it's new, you know.

New things can be cool for a toy language, but not for a production language. The latter needs polish and support (IDE etc.).

I was just pointing out the social dynamics, where the Go and D communities are not all that different. There's probably a difference between programmers who juggle 5-7 languages and programmers who stick to one language: «it is just A tool among many» vs. «it is THE tool». I think you see this expressed in both the Go and D communities.
March 27, 2015
On Friday, 27 March 2015 at 16:11:42 UTC, Ola Fosheim Grøstad wrote:
> Not a broken design. If I have to run multiple servers just to handle an image upload or generate a PDF, then you are driving up the cost of the project, and developers would be better off with a different platform?
>
> You can create more complicated setups where multiple 200ms computations cause the same latency even when the CPU is 90% idle. This is simply not good enough; if fibers carry this cost, then it is better to just use an event-driven design.

I have no interest in arguing with you, just calling out especially harmful lies that may mislead random readers.
March 27, 2015
On Friday, 27 March 2015 at 16:18:33 UTC, Sönke Ludwig wrote:
> So what happens if 10 requests come in at the same time? Does moving things around still help you? No.

Load balancing is probabilistic in nature. Caching also makes it unlikely that you get 10 successive high-computation requests.

> BTW, why would an event driven design be any better? You'd have exactly the same issue.

1. No stack.
2. Batching.

But it is more tedious.
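To illustrate the "no stack" point, a hypothetical callback-style sketch (queryDbAsync and the other names are made up): the only per-request state the server holds while the DB query is in flight is a small struct, not a full fiber stack.

import std.stdio : writeln;

// All state kept per pending request -- a few bytes instead of a
// dedicated fiber stack.
struct PendingRequest
{
    int id;
}

// Continuation invoked by the event loop once the DB result arrives.
void onDbResult(PendingRequest req, string row)
{
    writeln("request ", req.id, ": sending response for ", row);
}

void main()
{
    auto req = PendingRequest(42);
    // queryDbAsync(req, &onDbResult); // hypothetical async DB call
    onDbResult(req, "row");            // simulate the completion event
}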
March 27, 2015
On Friday, 27 March 2015 at 16:27:48 UTC, Dicebot wrote:
> I have no interest in arguing with you, just calling out especially harmful lies that may mislead random readers.

Nice one. I am sure your attitude is very helpful for D.
March 27, 2015
On Friday, 27 March 2015 at 16:20:28 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 27 March 2015 at 16:09:08 UTC, Chris wrote:
>> It need not be new, it needs to be good. That's all. I don't understand this obsession people have with new things, as if they were automatically good only because they are new. Why not try square wheels? Uh, it's new, you know.
>
> New things can be cool for a toy language, but not for a production language. The latter needs polish and support (IDE etc.).
>
> I was just pointing out the social dynamics, where the Go and D communities are not all that different. There's probably a difference between programmers who juggle 5-7 languages and programmers who stick to one language: «it is just A tool among many» vs. «it is THE tool». I think you see this expressed in both the Go and D communities.

I'd say Go fans are worse in this respect (yes, I know, probably not all of them). People in the D community are here because they have tried at least 5-7 other languages. Go programmers, if Pike's remarks are anything to go by, are probably less experienced (having just left school or college) and more susceptible to Google's propaganda. I'd say they don't know any better.
March 27, 2015
On 27.03.2015 at 17:31, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Friday, 27 March 2015 at 16:18:33 UTC, Sönke Ludwig wrote:
>> So what happens if 10 requests come in at the same time? Does moving
>> things around still help you? No.
>
> Load balancing is probabilistic in nature. Caching also makes it
> unlikely that you get 10 successive high-computation requests.

You could say the same for the non-moving case. If you have a fully loaded node and mix request handling and lengthy computations like this, you'll run into this no matter what. The simple solution is either to separate out the lengthy computations (easy) or to split them up into shorter parts using yield() (usually easy, too).
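For the first variant, a rough sketch using only std.parallelism (processImage is a placeholder; in vibe.d you would typically hand this to a worker task instead):

import std.parallelism : task, taskPool;

// Placeholder for the heavy computation (e.g. image processing).
ubyte[] processImage(ubyte[] input)
{
    // ... expensive work ...
    return input;
}

void main()
{
    auto image = new ubyte[1024];

    // Run the heavy part on a separate worker pool so the threads
    // running the request-handling fibers stay responsive.
    auto t = task!processImage(image);
    taskPool.put(t);

    // ... keep serving requests in the meantime ...

    auto result = t.yieldForce; // collect the result when needed
}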

Caching *may* make it unlikely, but that completely depends on the application. If you have some kind of server-side image processing web service with many concurrent users, you'd have a lot of computation-heavy requests with no opportunities for caching.

>
>> BTW, why would an event driven design be any better? You'd have
>> exactly the same issue.
>
> 1. No stack.

That reduces the memory footprint, but doesn't reduce latency.

> 2. Batching.

Can you elaborate?

>
> But it is more tedious.

March 27, 2015
On Friday, 27 March 2015 at 16:40:14 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 27 March 2015 at 16:27:48 UTC, Dicebot wrote:
>> I have no interest in arguing with you, just calling out especially harmful lies that may mislead random readers.
>
> Nice one. I am sure your attitude is very helpful for D.

Actually, it really is. He does a lot of useful work that has helped improve many parts of D and its ecosystem. Mostly I see you sniping from the sidelines with non-actionable comments; not because you're necessarily wrong, but because, despite what appears to be a significant body of knowledge, your arguments lack detail and are often supported by academic knowledge that, at best, you refer to in overly general terms.

Sorry if that sounds harsh, but it's frustrating seeing you throw knowledge at topics without making any of it stick.
March 27, 2015
On 3/27/2015 5:15 AM, Sönke Ludwig wrote:
> It has, that is more or less the original selling point. It also keeps an
> internal thread pool where each thread has a dynamic set of reusable fibers
> to execute tasks. Each fiber is bound to a certain thread, though, and they
> have to be, because otherwise things like thread-local storage or other
> thread-specific code (e.g. the classic OpenGL model, certain COM modes, etc.)
> would break.

It's awesome that vibe has that! How about replacing the fiber support in druntime with that?
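
As an aside, a minimal sketch of the thread-affinity point from the quote above: module-level variables in D are thread-local by default, so a fiber resumed on a different thread would silently see a different copy.

import core.thread : Fiber;
import std.stdio : writeln;

int tlsCounter; // thread-local: every thread has its own copy

void main()
{
    auto f = new Fiber({
        tlsCounter = 42;  // writes the current thread's copy
        Fiber.yield();
        // If the fiber were resumed on another thread, this would read
        // that thread's copy, which is still 0.
        writeln(tlsCounter);
    });

    f.call(); // first half runs on the main thread
    f.call(); // resuming elsewhere would break the assumption above
}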


> Apart from these concerns, it's also not clear to me that moving tasks between
> threads is necessarily an improvement. There are certainly cases where that
> leads to a better distribution across the cores, but in most scenarios the
> number of concurrent tasks should be high enough to keep all cores busy anyhow.
> There are also additional costs for moving fibers (synchronization, cache misses).

I agree that moving between threads can wait.