May 10, 2022

On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:

> On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:
>
>> As an example, how many requests per second can you manage if all requests have to wait 100 msecs?
>>
>> For non-critical workloads you will probably still hit good-enough performance, though.
>
> Firstly, it depends on how many workers you have.
>
> Then you should consider that a lot of (most?) websites use php-fpm, which works using the same approach (but PHP is much slower than D). The same goes for cgi/fastcgi/scgi and so on.
>
> Let's say you have just 20 workers and 100 msecs for each request (a lot of time by my standards, I would say). That means 20 * 10 = 200 web pages/s = 720k pages/h. I don't think your website has that much traffic...
>
> And I hope not every request will take 100 msecs!

100 msecs is on the upper end for sure, but if you add a database call, an external service, etc., it is not uncommon to reach that.

The point, however, is that the architecture breaks down because it is unable to do work concurrently: every request blocks a worker from start to finish.

Unless the work is CPU heavy, the system will be underutilized. That is not necessarily bad, though. The simplicity has something going for it, but it is definitely a tradeoff worth highlighting.
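Spelling out that ceiling: with W blocking workers and a latency of L seconds per request, throughput maxes out at W/L. A tiny script just restating the arithmetic from the numbers above:

```
import std.stdio;

void main()
{
    // A blocking worker serves at most 1/latency requests per second,
    // so the pool's throughput ceiling is workers / latency.
    enum workers = 20;
    enum latency = 0.100; // seconds per request (the 100 msecs above)

    double perSecond = workers / latency;
    writefln("%s req/s, %s req/h", perSecond, perSecond * 3600);
    // Prints: 200 req/s, 720000 req/h -- the 720k pages/h figure above.
}
```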

>     @endpoint void my_end(Request r, Output o)
>     {
>         if (r.uri == "/asd") // or whatever you want: regex, or checking another field
>             return; // skip: let another endpoint handle it
>     }
>
> This is just like:
>
>     @matchuda(uri, "/asd") void my_end(....) { ... }
>
> What's the difference? The first one is much more flexible, IMHO.

The difference is that the route UDA forces routes to be mapped 1:1, exhaustively. With your approach it is up to the programmer to avoid errors. It is also hard to reason about the flow of requests through all those functions; you have to look at their bodies to determine what will happen.
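Something like this contrast (a made-up sketch, not serverino or any real framework; `Route`, the handlers, and the table are stand-ins):

```
import std.stdio;

// Declarative style: the whole route table is data, visible in one
// place, so gaps and overlaps can be audited up front.
struct Route { string uri; void function() handler; }

void listUsers()  { writeln("users");  }
void listOrders() { writeln("orders"); }

immutable Route[] routes = [
    Route("/users",  &listUsers),
    Route("/orders", &listOrders),
];

// Imperative style: each handler decides for itself whether it matches,
// so the effective route table is scattered across function bodies.
bool myEnd(string uri)
{
    if (uri != "/asd")
        return false; // you must read the body to learn this route
    writeln("asd");
    return true;
}

void main()
{
    foreach (r; routes)          // dispatch is a plain, exhaustive scan
        if (r.uri == "/users") { r.handler(); break; }

    myEnd("/asd");
}
```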

May 10, 2022

On Tuesday, 10 May 2022 at 08:32:15 UTC, Sebastiaan Koppe wrote:

> On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:
>
>> On Monday, 9 May 2022 at 20:08:38 UTC, Sebastiaan Koppe wrote:
>>
>>> As an example, how many requests per second can you manage if all requests have to wait 100 msecs?
>>>
>>> For non-critical workloads you will probably still hit good-enough performance, though.
>>
>> Firstly, it depends on how many workers you have.
>>
>> Then you should consider that a lot of (most?) websites use php-fpm, which works using the same approach (but PHP is much slower than D). The same goes for cgi/fastcgi/scgi and so on.
>>
>> Let's say you have just 20 workers and 100 msecs for each request (a lot of time by my standards, I would say). That means 20 * 10 = 200 web pages/s = 720k pages/h. I don't think your website has that much traffic...
>>
>> And I hope not every request will take 100 msecs!
>
> 100 msecs is on the upper end for sure, but if you add a database call, an external service, etc., it is not uncommon to reach that.

And you can still handle 700k views per hour with 20 workers!

> The point, however, is that the architecture breaks down because it is unable to do work concurrently: every request blocks a worker from start to finish.
>
> Unless the work is CPU heavy, the system will be underutilized. That is not necessarily bad, though. The simplicity has something going for it, but it is definitely a tradeoff worth highlighting.

Every server has its own target. BTW, I'm not developing serverino to be a building block of a CDN.

For real-life projects that aren't huge, I think you can use it without any problem.

You can also put it behind a reverse proxy (e.g. nginx) to handle just the requests you need to write in D.

> The difference is that the route UDA forces routes to be mapped 1:1, exhaustively. With your approach it is up to the programmer to avoid errors. It is also hard to reason about the flow of requests through all those functions; you have to look at their bodies to determine what will happen.

Sorry, I don't follow you. I don't know which framework you're using, but if you're using UDAs with matchers (something like `@matchUri("/main") void renderMain(...) { ... }`) you still have to check all the functions when a request is not handled correctly. Or am I missing something?

With my approach, if you want to see which requests escape the routing, you can just add a catch-all endpoint with low priority:

```
@priority(-1000) @endpoint
void wtf(Request r, Output o)
{
    fatal("Request NOT HANDLED: ", r.dump());
}
```

And if a request doesn't match your UDA constraints, how do you debug what's wrong with it? I think it's easier to add a checkpoint/log in the first lines of the function body to work out why the function is being skipped.

In any case, if you want to use a different routing strategy, that's quite easy to do. I really don't like libraries that force you into their own style.

So you can even drop my UDAs and write the app like this. It still works:

```
mixin ServerinoMain;

void entry(Request r, Output o)
{
    // Use your routing strategy here
    // ...
    // YourRouter router;
    // router.route(r, "/hello/world", &yourFunction);
    // router.route(r, "/bla", &hello);
}
```
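For instance, a minimal hand-rolled router could look like this (a sketch: the handlers and the table are made up, and the `output ~= ...` / `output.status` usage is assumed from serverino's examples):

```
import serverino;

mixin ServerinoMain;

void entry(Request r, Output o)
{
    // Hypothetical handlers, kept local so only entry() is exposed.
    static void helloWorld(Request rq, Output op) { op ~= "hello world"; }
    static void blaPage(Request rq, Output op)    { op ~= "bla"; }

    // A plain associative array instead of UDAs: uri -> handler.
    auto handlers = [
        "/hello/world": &helloWorld,
        "/bla":         &blaPage,
    ];

    if (auto h = r.uri in handlers)
        (*h)(r, o);     // exactly one handler per uri
    else
        o.status = 404; // nothing matched
}
```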

Andrea

May 10, 2022

On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:

> And you can still handle 700k views per hour with 20 workers!

Requests tend to come in bursts from the same client, thanks to clunky JavaScript APIs and cluttered resources (and careless web developers). For a typical D user, ease of use is probably more important at this point, though, so good luck with your project!

May 10, 2022

On Tuesday, 10 May 2022 at 12:31:23 UTC, Ola Fosheim Grøstad wrote:

> On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote:
>
>> And you can still handle 700k views per hour with 20 workers!
>
> Requests tend to come in bursts from the same client, thanks to clunky JavaScript APIs and cluttered resources (and careless web developers). For a typical D user, ease of use is probably more important at this point, though, so good luck with your project!

In my opinion, in real life that's not as big a problem as it might seem.

Again: that's just how nginx and apache handle php/cgi/fcgi/scgi requests.
Wikipedia runs MediaWiki software. Written in PHP. Running on Apache with php-fpm (and cache!).

And I'm not suggesting running Wikipedia on serverino, for now.

If you open a lot of Wikipedia pages at the same time, in a burst, they will be served (probably over a keep-alive connection) not in parallel: you're queued. And 99.9% of users will never notice. Is that a problem?

If you need more control, you can use an HTTP accelerator and/or a reverse proxy (like nginx) to manage bursts and the like.

I'm running a whole website in D using fastcgi and we have no problems at all; it's blazing fast. But it's not as easy to set up as serverino :)

Andrea

May 10, 2022

On Tuesday, 10 May 2022 at 12:52:01 UTC, Andrea Fontana wrote:

> I'm running a whole website in D using fastcgi and we have no problems at all; it's blazing fast. But it's not as easy to set up as serverino :)

Easy setup is probably the number one reason people land on a specific web-tech, so it is the best initial angle, I agree.

(By version 3.x you know what the practical weak spots are and can rethink the bottom layer.)

May 10, 2022
On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:
> The same goes for cgi/fastcgi/scgi and so on.

Well, cgi does one process per request, so there is no worker pool (it is the original "serverless" lol).

fastcgi is interesting because the Apache module for it will actually start and stop worker processes as needed. I don't think the nginx impl does that, though.

But the nicest thing about all these application models is that, if you write the code the right way, you can swap out the approach, either transparently adding the I/O event waits or just adding additional servers, without touching the application code. That's a lot harder to do when you expect shared state and the like, as other designs encourage.
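cgi.d itself is one way to see this: if I recall its build flags correctly (`-version=cgi` by default, or `-version=scgi`, `-version=fastcgi`, `-version=embedded_httpd`), the same handler compiles for any of the models without changes:

```
import arsd.cgi;

// The application code knows nothing about the process/IO model
// it will run under.
void hello(Cgi cgi)
{
    cgi.setResponseContentType("text/plain");
    cgi.write("Same code under CGI, SCGI, FastCGI, or embedded httpd.");
}

// GenericMain generates the right main() for whichever model
// the -version flag selects at build time.
mixin GenericMain!hello;
```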
May 10, 2022
On Tuesday, 10 May 2022 at 13:34:27 UTC, Adam D Ruppe wrote:
> On Monday, 9 May 2022 at 20:37:50 UTC, Andrea Fontana wrote:
>> The same goes for cgi/fastcgi/scgi and so on.
>
> Well, cgi does one process per request, so there is no worker pool (it is the original "serverless" lol).
>
> fastcgi is interesting because the Apache module for it will actually start and stop worker processes as needed. I don't think the nginx impl does that, though.

Some daemons can manage this by themselves (once again: check php-fpm's "dynamic" setting).

Serverino can do it as well. You can set the minimum and maximum number of workers in the configuration. It's easy:

```
@onServerInit auto setup()
{
    ServerinoConfig sc = ServerinoConfig.create();
    sc.setMinWorkers(5);
    sc.setMaxWorkers(100);
    return sc;
}
```

If all workers are busy, the daemon will launch a new one. You might be interested in setMaxWorkerLifetime() and setMaxWorkerIdling() too!
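For example (a sketch; assuming both setters take a core.time Duration, by analogy with the worker-count calls above):

```
@onServerInit auto setup()
{
    import core.time : hours, minutes;

    ServerinoConfig sc = ServerinoConfig.create();
    sc.setMinWorkers(5);
    sc.setMaxWorkers(100);
    sc.setMaxWorkerLifetime(1.hours);  // recycle workers after one hour
    sc.setMaxWorkerIdling(5.minutes);  // reap workers idle for five minutes
    return sc;
}
```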

> But the nicest thing about all these application models is that, if you write the code the right way, you can swap out the approach, either transparently adding the I/O event waits or just adding additional servers, without touching the application code. That's a lot harder to do when you expect shared state and the like, as other designs encourage.

I would add that if something goes wrong and a process crashes or gets stuck in an infinite loop, it's not a problem: the process is killed and respawned without pulling the whole server down.

Andrea


May 10, 2022

On Tuesday, 10 May 2022 at 13:15:38 UTC, Ola Fosheim Grøstad wrote:

> On Tuesday, 10 May 2022 at 12:52:01 UTC, Andrea Fontana wrote:
>
>> I'm running a whole website in D using fastcgi and we have no problems at all; it's blazing fast. But it's not as easy to set up as serverino :)
>
> Easy setup is probably the number one reason people land on a specific web-tech, so it is the best initial angle, I agree.
>
> (By version 3.x you know what the practical weak spots are and can rethink the bottom layer.)

Right. But it's not just marketing.

I work in R&D, and every single time I have to write even a small API or a simple HTML interface to control some strange machine, I think "omg, I have to set up nginx agaaaaaain". It's pretty annoying, especially if you're working on a shared AWS machine. (I know, Docker & co. exist, but they take a while to set up and are heavy for some simple APIs.) I'm going to love serverino in the coming months :)

May 10, 2022

On Monday, 9 May 2022 at 19:20:27 UTC, Andrea Fontana wrote:

> Thank you. Looking forward to getting feedback, bug reports and help :)

BTW I'm curious, what made you not want to use my cgi.d which has similar capabilities?

May 10, 2022

On Tuesday, 10 May 2022 at 15:01:43 UTC, Adam Ruppe wrote:

> On Monday, 9 May 2022 at 19:20:27 UTC, Andrea Fontana wrote:
>
>> Thank you. Looking forward to getting feedback, bug reports and help :)
>
> BTW I'm curious, what made you not want to use my cgi.d which has similar capabilities?

I was really tempted to start from that!
But it's difficult to fork and edit an 11 kloc project like that :)

I had already written fastcgi and scgi code in the past, so I reused some of it, and it didn't take that much time to get to serverino.

Andrea