On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:
>On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:
>>As an example, how many requests per second can you manage if all requests have to wait 100 msecs?
>>For non-critical workloads you will probably still hit good enough performance though.
>Firstly, it depends on how many workers you have.
>Then you should consider that a lot of (most?) websites use php-fpm, which works using the same approach (but PHP is much slower than D). The same goes for cgi/fastcgi/scgi and so on.
>Let's say you have just 20 workers and 100 msecs per request (a lot of time by my standards, I would say). That means 20 workers * 10 requests/s = 200 webpages/s = 720k pages/h. I don't think your website has that much traffic...
>And I hope not every request will take 100 msecs!
100 msecs is on the upper end for sure, but once you add a database query, an external service call, etc., it is not uncommon to reach that.
The point, however, is that the architecture breaks down because it is unable to do work concurrently: every request blocks a worker from start to finish.
Unless the workload is CPU-heavy, the system will be underutilized. That is not necessarily bad; the simplicity has something going for it, but it is definitely a tradeoff that you should consider highlighting.
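
To put numbers on that ceiling: with blocking workers, the throughput cap follows directly from worker count and per-request latency, no matter how idle the CPU is. A quick sketch (hypothetical helper names, just for illustration):

```d
// Throughput ceiling of a blocking worker pool: each worker can finish
// at most 1000 / latencyMs requests per second, so the pool as a whole
// is capped at workers * (1000 / latencyMs), regardless of CPU idle time.
uint maxRequestsPerSecond(uint workers, uint latencyMs)
{
    return workers * (1000 / latencyMs);
}

unittest
{
    // 20 workers at 100 ms per request -> 200 requests/s ...
    assert(maxRequestsPerSecond(20, 100) == 200);
    // ... i.e. 720_000 pages/hour, matching the figure quoted above.
    assert(maxRequestsPerSecond(20, 100) * 3600 == 720_000);
}
```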
>@endpoint void my_end(Request r, Output o)
>{
>    if (r.uri == "/asd") // or whatever you want: regex, or checking another field
>        return; // not handled here
>}
>This is just like:
>@matchuda(uri, "/asd") void my_end(....) { ... }
>What's the difference? The first one is much more flexible, IMHO.
The difference is that with the route UDA you can only map routes 1:1, exhaustively. With your approach it is up to the programmer to avoid errors. It is also hard to reason about the flow of requests through all those functions; you have to look at the body of each one to determine what will happen.
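
To make the contrast concrete, here is a rough sketch of what I mean by an exhaustive 1:1 mapping (hypothetical types and names, not serverino's actual API): the whole route table is data, so the flow of a request is visible in one place and a miss ends up in a single, explicit fallback.

```d
// Hypothetical declarative routing: routes are data, matched exhaustively.
// You can read the full request flow off the table without opening any handler body.
struct Route
{
    string path;
    string function() handler;
}

string asdPage()   { return "asd"; }
string otherPage() { return "other"; }

immutable Route[] routes = [
    Route("/asd", &asdPage),
    Route("/other", &otherPage),
];

string dispatch(string uri)
{
    foreach (r; routes)
        if (r.path == uri)
            return r.handler();
    return "404"; // the only place an unmatched URI can end up
}

unittest
{
    assert(dispatch("/asd") == "asd");
    assert(dispatch("/missing") == "404");
}
```

With the imperative style, each handler decides on its own whether to bail out, so "which endpoint serves this URI?" has no single answer you can point at.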