Release: serverino - please destroy it.
May 08, 2022

Hello!

I've just released serverino. It's a small & ready-to-go http/https server.

Every request is processed by a worker running in an isolated process, no fibers/threads, sorry (or thanks?)

I did some tests and the performance looks good: on a local machine it can handle more than 100_000 reqs/sec for a simple page containing just "hello world!". Of course that's not a rigorous benchmark; if you can help me with other benchmarks it would be much appreciated (a big thanks to Tomáš Chaloupka, who ran some tests!)

I'm trying to keep it simple and easy to compile. It has no external deps in its base configuration and only one external library (libretls) is required if you need/want to enable https.

For your first project, you need just three lines of code, as you can see here:
https://github.com/trikko/serverino/

I didn't implement a traditional router for URIs, as many of you probably expected. I use a different approach. Check out this example: https://github.com/trikko/serverino/#defining-more-than-one-endpoint

This allows you to do some interesting things by giving each endpoint a higher or lower priority (for example, you can force something to always run first, like redirects, logging, or login checks...)

Instead of using a lot of different UDAs to set routing rules, you can simply write them in your endpoint's body and return early to pass control to the next endpoint.
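To make the idea concrete, here is a sketch of two endpoints with different priorities, based on the linked README. The `@endpoint`/`@priority` UDAs and the fall-through behavior are my reading of the repo and should be double-checked there:

```d
// Two endpoints: the logger has a higher priority, so it runs first for
// every request, writes nothing, and processing falls through; hello only
// answers its own URI and otherwise returns untouched, passing control on.
// Sketch based on the README linked above -- verify names against the repo.
import serverino;

mixin ServerinoMain;

@priority(10) @endpoint
void logger(Request req, Output output)
{
    import std.experimental.logger : info;
    info(req.uri);   // log and exit without writing: the next endpoint runs
}

@endpoint
void hello(Request req, Output output)
{
    if (req.uri != "/hello") return;   // not ours: pass to the next endpoint
    output ~= "Hello world!";
}
```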

Please help me test it; I'm looking forward to receiving your shiny new issues on GitHub.

Dub package: https://code.dlang.org/packages/serverino

Andrea

May 08, 2022

On Sunday, 8 May 2022 at 21:32:42 UTC, Andrea Fontana wrote:

> [...]
> Andrea

Whoops, I forgot a couple of things. This was tested on Linux only, and it should work fine on other POSIX systems (macOS included!).

I don't have Windows, but I think you need WSL to run it, since I'm using a lot of strange POSIX tricks to keep performance at a good level (like sending open file descriptors between processes through sockets).
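For the curious, the fd-passing trick mentioned here is the classic POSIX `SCM_RIGHTS` ancillary-data mechanism. Below is a minimal sketch of the sending side using druntime's POSIX bindings; it's an illustration of the technique, not serverino's actual code:

```d
// Pass an open file descriptor to another process over a Unix-domain socket.
// The descriptor travels as SCM_RIGHTS ancillary data; the kernel duplicates
// it into the receiving process. Sketch only -- not serverino's code.
import core.sys.posix.sys.socket;
import core.sys.posix.sys.uio : iovec;
import core.stdc.string : memcpy;

void sendFd(int channel, int fd)
{
    ubyte[256] ctrl;                     // enough room for CMSG_SPACE(int.sizeof)
    ubyte payload = 0;
    iovec io = iovec(&payload, 1);       // SCM_RIGHTS needs >= 1 byte of real data

    msghdr msg;                          // zero-initialized by default in D
    msg.msg_iov        = &io;
    msg.msg_iovlen     = 1;
    msg.msg_control    = ctrl.ptr;
    msg.msg_controllen = CMSG_SPACE(int.sizeof);

    cmsghdr* cm = CMSG_FIRSTHDR(&msg);
    cm.cmsg_level = SOL_SOCKET;
    cm.cmsg_type  = SCM_RIGHTS;          // payload is file descriptors
    cm.cmsg_len   = CMSG_LEN(int.sizeof);
    memcpy(CMSG_DATA(cm), &fd, int.sizeof);

    sendmsg(channel, &msg, 0);
}
```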

If you can test it on Windows with WSL, that would be appreciated a lot!

Andrea

May 08, 2022
Congratulations! :) Looking forward to watching your presentation at DConf... ;)

On 5/8/22 14:32, Andrea Fontana wrote:

> Every request is processed by a worker running in an isolated process,
> no fibers/threads, sorry (or thanks?)

That effectively uses multiple GCs. I always suspected that approach would provide better latency.

> sending opened file descriptors between processes thru sockets

Sweet!

Ali

May 08, 2022
On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
> Congratulations! :) Looking forward to watching your presentation at DConf... ;)

I wish I were able to speak publicly in English in front of an audience :)

> On 5/8/22 14:32, Andrea Fontana wrote:
>
> > Every request is processed by a worker running in an isolated process,
> > no fibers/threads, sorry (or thanks?)
>
> That effectively uses multiple GCs. I always suspected that approach would provide better latency.

I think it depends on what your server is doing, anyway.

>
> > sending opened file descriptors between processes thru sockets

I sent a pull request (merged!) to druntime to make this work on macOS too!

>
> Sweet!
>
> Ali


May 08, 2022
On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
> That effectively uses multiple GCs. I always suspected that approach would provide better latency.

My cgi.d has used a fork approach for a very long time, since it is a very simple way to spread the work out, and it works quite well.
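The fork-per-connection pattern described here fits in a few lines. This is a hypothetical minimal sketch, not cgi.d's actual code; the port and response body are made up:

```d
// Minimal fork-per-connection HTTP-ish server: each accepted client is
// handled in its own forked process, so every request gets an isolated
// heap (and its own GC), exactly the property discussed above.
import std.socket;
import core.sys.posix.unistd : fork, _exit;

void main()
{
    auto listener = new TcpSocket();
    listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, true);
    listener.bind(new InternetAddress(8080));
    listener.listen(16);

    while (true)
    {
        auto client = listener.accept();
        if (fork() == 0)      // child: isolated process, own GC heap
        {
            client.send("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            client.close();
            _exit(0);         // never fall back into the accept loop
        }
        client.close();       // parent: drop its copy of the descriptor
    }
}
```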
May 08, 2022
On 5/8/22 16:10, Adam Ruppe wrote:
> On Sunday, 8 May 2022 at 22:09:37 UTC, Ali Çehreli wrote:
>> That effectively uses multiple GCs. I always suspected that approach
>> would provide better latency.
>
> My cgi.d has used some fork approaches for a very long time since it is
> a very simple way to spread this out, it works quite well.

While we are on topic :) and as I finally understood what a generational GC is[1], are there any fundamental issues that prevent D from using one?

Ali

[1] Translating from what I wrote in the Turkish forum, here is my current understanding: Let's not waste time checking all allocated memory at every GC cycle. Instead, let's be smarter and assume that memory that survived through this GC cycle will survive the next cycle as well.

Let's put those memory blocks aside, to be reconsidered only when we really have to. This effectively makes the GC play only with short-lived objects, reducing the amount of memory touched. Some objects would live forever as a result, but the GC never promises that all finalizers will be executed.
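The idea in the footnote can be modeled as a toy promotion step. `Obj`, `youngGen`, `oldGen`, and `minorCollect` are invented names purely for illustration, not how any real collector is structured:

```d
// Toy model of generational collection: a "minor" cycle scans only the
// young generation; survivors are promoted to the old generation and are
// skipped by all later minor cycles (so their finalizers may never run).
struct Obj { bool marked; }   // marked == reachable in this toy model

Obj*[] youngGen;
Obj*[] oldGen;

void minorCollect()
{
    foreach (o; youngGen)
        if (o.marked)
            oldGen ~= o;       // survivor: promoted, never rescanned here
    youngGen.length = 0;       // everything unmarked is garbage
}
```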

May 09, 2022
On 09/05/2022 11:44 AM, Ali Çehreli wrote:
> While we are on topic :) and as I finally understood what generational GC is[1], are there any fundamental issues with D to not use one?

This is not a D issue; it's an implementation one.

We don't have write barriers, that's it.

Make them opt-in and we can have more advanced GCs.
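For readers wondering what a write barrier is concretely: every pointer store into GC-managed memory is instrumented, so the collector learns about old-to-young references without rescanning the whole old generation. A conceptual toy sketch, with invented names and nothing to do with druntime's actual internals:

```d
// Conceptual write barrier for a generational GC. A store that creates an
// old->young edge records the host object in a remembered set; a minor
// collection then scans only young objects plus that set.
struct Obj
{
    bool old;      // true once the object has survived a collection
    Obj* field;    // a single pointer slot, for illustration
}

Obj*[] rememberedSet;

void writeBarrier(Obj* host, Obj* value)
{
    if (host.old && value !is null && !value.old)
        rememberedSet ~= host;   // remember the old->young reference
    host.field = value;          // the actual store
}
```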

Oh and book recommendation for the subject: https://www.amazon.com/Garbage-Collection-Handbook-Management-Algorithms/dp/1420082795
May 08, 2022
On Mon, May 09, 2022 at 12:10:53PM +1200, rikki cattermole via Digitalmars-d-announce wrote:
> On 09/05/2022 11:44 AM, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what generational GC is[1], are there any fundamental issues with D to not use one?
> 
> This is not a D issue, its an implementation one.
> 
> We don't have write barriers, that's it.
> 
> Make them opt-in and we can have more advanced GC's.
[...]

In the past, the argument was that write barriers represented an unacceptable performance hit to D code.  But I don't think this has ever actually been measured. (Or has it?)  Maybe somebody should make a dmd fork that introduces write barriers, plus a generational GC (even if it's a toy, proof-of-concept-only implementation) to see if the performance hit is really as bad as believed to be.


T

-- 
The best way to destroy a cause is to defend it poorly.
May 08, 2022
On 5/8/22 17:25, H. S. Teoh wrote:

> somebody should make a dmd
> fork that introduces write barriers, plus a generational GC (even if
> it's a toy, proof-of-concept-only implementation) to see if the
> performance hit is really as bad as believed to be.

Ooh! DConf is getting even more interesting. :o)

Ali

May 09, 2022

On Monday, 9 May 2022 at 00:32:33 UTC, Ali Çehreli wrote:

> On 5/8/22 17:25, H. S. Teoh wrote:
>
> > somebody should make a dmd fork that introduces write barriers, plus a
> > generational GC (even if it's a toy, proof-of-concept-only
> > implementation) to see if the performance hit is really as bad as
> > believed to be.
>
> Ooh! DConf is getting even more interesting. :o)
>
> Ali

A helpful paper: "Getting to Go: The Journey of Go's garbage collector".

Positive highlights: 1) non-copying 2) no read barriers

Less friendly: 1) write barriers 2) GC-aware fiber scheduler 3) other???

It would be some (huge?) amount of work, but porting/enabling an opt-in Go-style latency-oriented GC could be a big enabler for the casual/soft "real time" crowd.

Here's a link to the paper:
https://go.dev/blog/ismmkeynote
