vibe.d + dpq2 + json trick
August 19

Sorry for the gigantic code dump, but I think it's useful to show the complete thing here.
I often find myself in a situation where I want a REST endpoint that returns JSON from a DB.

I bet all of you here know the following approach, but I didn't, so here it goes.

Functions like normalPG and oldSchool are what I usually find in my codebases.
While working on other stuff I wondered: why should I have to press the DB response into JSON in vibe.d at all, when Postgres can generate JSON itself?
Looking into it, it turns out that libpq, which dpq2 uses, returns json values as strings.
So instead of parsing the JSON, or parsing a normal Postgres row and then generating JSON, you can just get the JSON as one string in one row, put it into the HTTPServerResponse, and set the JSON content type.

import std.stdio;
import vibe.vibe;
import vibe.data.json;
import vibe.db.postgresql;

/*
CREATE TABLE Person(
	id bigserial
);

WITH Pids AS (
	SELECT *
	  FROM generate_series(1,5000) AS id
)
INSERT INTO Person(id)
SELECT id
  FROM Pids;
*/

struct Person {
	long id;
}

void main() {
    setupWorkerThreads(logicalProcessorCount());
    runWorkerTaskDist(&runServer);
    runApplication();
}

PostgresClient client;

void runServer() nothrow {
	try {
		client = new PostgresClient("dbname=postgres user=postgres", 2);
		auto settings = new HTTPServerSettings;
		settings.options |= HTTPServerOption.reusePort;
		settings.port = 8080;
		settings.bindAddresses = ["127.0.0.1"];
		auto router = new URLRouter();
		router.get("/normal", &normalPG);
		router.get("/special", &specialPG);
		router.get("/oldschool", &oldSchool);
		listenHTTP(settings, router);
	} catch(Exception e) {
		// don't swallow setup failures silently
		logError("server setup failed: %s", e.msg);
	}
}

void oldSchool(HTTPServerRequest req, HTTPServerResponse res) {
	Person[] pIds;
	client.pickConnection(
        (scope conn) {
			QueryParams query;
			query.sqlCommand = "SELECT id FROM person;";
			auto result = conn.execParams(query).rangify;
			if(!result.empty) {
				foreach(ref it; result) {
					pIds ~= Person(it["id"].as!(long)());
				}
			}
		}
	);
	res.writeJsonBody(pIds);
}

void normalPG(HTTPServerRequest req, HTTPServerResponse res) {
	Json[] pIds;
	client.pickConnection(
        (scope conn) {
			QueryParams query;
			query.sqlCommand = "SELECT id FROM person;";
			auto result = conn.execParams(query).rangify;
			if(!result.empty) {
				foreach(ref it; result) {
					Json tmp = Json.emptyObject();
					tmp["id"] = it["id"].as!(long)();
					pIds ~= tmp;
				}
			}
		}
	);
	Json r = Json(pIds);
	res.writeJsonBody(r);
}

void specialPG(HTTPServerRequest req, HTTPServerResponse res) {
	string ret;
	client.pickConnection(
        (scope conn) {
			QueryParams query;
			query.sqlCommand = `WITH Ps AS (
				SELECT json_build_object('id', id) AS d
				  FROM person
				)
				SELECT json_agg(d) AS ids
				  FROM Ps;
				`;
			auto result = conn.execParams(query).rangify;
			if(!result.empty) {
				ret = result.front["ids"].as!(string)();
			}
		}
	);
	res.writeBody(ret, 200, "application/json");
}

Testing this with ab I got nice improvements, 2x to 200x depending on the query. And yes, I know this is a simplified example and bad benchmarking. But I think the general observation that doing less work is faster than doing more work will hold.

  • ab -n 10000 -c 10 http://localhost:8080/oldschool Requests per second: 834.65
  • ab -n 10000 -c 10 http://localhost:8080/normal Requests per second: 89.29
  • ab -n 10000 -c 10 http://localhost:8080/special Requests per second: 1622.15

How are other people doing this kind of http -> vibe -> DB -> vibe -> http thing?

August 20

On Tuesday, 19 August 2025 at 14:09:09 UTC, Robert Schadek wrote:

> While working on other stuff I wondered: why should I have to press the DB response into JSON in vibe.d at all, when Postgres can generate JSON itself?

It would also be nice to measure the time it takes to get a response from Postgres.

As for me, the Postgres server is too expensive a resource for this work, even though this particular query is faster for some reason. By getting a result close to a "tabular binary form" from PG, we can convert it into JSON on the D side, where the code can run in parallel on several (or hundreds of) servers.

> Looking into it, it turns out that libpq, which dpq2 uses, returns json values as strings.

Strictly speaking, PG supports two JSON types: json and jsonb (an "indexed" form of json with additional offset values). It might be worth experimenting with switching to the jsonb type to see if it improves performance.
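For example, the aggregate from the first post could be switched to the binary form like this (untested sketch; jsonb_agg and jsonb_build_object are the jsonb counterparts of the functions used above, and the ::text cast keeps the "fetch it as one string" trick intact):

```sql
WITH Ps AS (
	SELECT jsonb_build_object('id', id) AS d
	  FROM person
)
SELECT jsonb_agg(d)::text AS ids
  FROM Ps;
```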

> foreach(ref it; result) {
> 	pIds ~= Person(it["id"].as!(long)());
> }

libpq performs the "id" -> column number lookup here on every iteration. That is fast, of course, but...
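A sketch of the same loop with that lookup avoided by using a positional index instead (untested; the rework in the next post does the same thing by replacing "id" with [0]):

```d
// Untested sketch: column 0 is the only column in "SELECT id FROM person;",
// so indexing by position skips the per-row name -> column number lookup.
foreach(ref it; result) {
	pIds ~= Person(it[0].as!(long)());
}
```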

August 20

On Tuesday, 19 August 2025 at 14:09:09 UTC, Robert Schadek wrote:

> Sorry for the gigantic code dump,

I made some code rework:

https://gist.github.com/denizzzka/124dc3827a10d29dd04cb7331f4c4ff8

Previously, a pool of connections was created for each worker; this is unnecessary, it is better to share one pool between all workers.

I also added PREPARE and row-by-row reads for /normal (just to check if it helps), and replaced "id" with [0] in the column-access code.

Nothing really helped.

My measurements:

ab -n 10000 -c 10 localhost:8080/oldschool Requests per second: 675.46 [#/sec] (mean)
ab -n 10000 -c 10 localhost:8080/normal Requests per second: 163.55 [#/sec] (mean)
ab -n 10000 -c 10 localhost:8080/special Requests per second: 3636.42 [#/sec] (mean)

August 20

On Wednesday, 20 August 2025 at 14:10:31 UTC, Denis Feklushkin wrote:

Maybe the whole thing comes down to some server-side row-level caching. If the server sees the same request and the tables haven't changed, it can simply return the same rows from a cache. And here /special returns only one row, so it is super fast.

August 20

On Wednesday, 20 August 2025 at 14:10:31 UTC, Denis Feklushkin wrote:

> I also added PREPARE and row-by-row reads for /normal (just to check if it helps), and replaced "id" with [0] in the column-access code.
>
> Nothing really helped.
>
> My measurements:
>
> ab -n 10000 -c 10 localhost:8080/oldschool Requests per second: 675.46 [#/sec] (mean)
> ab -n 10000 -c 10 localhost:8080/normal Requests per second: 163.55 [#/sec] (mean)
> ab -n 10000 -c 10 localhost:8080/special Requests per second: 3636.42 [#/sec] (mean)

I haven't checked, but I bet postgres caches parsed/optimized queries, and this test would have a nearly perfect cache hit ratio.

August 21

On Wednesday, 20 August 2025 at 16:06:10 UTC, Robert Schadek wrote:

> On Wednesday, 20 August 2025 at 14:10:31 UTC, Denis Feklushkin wrote:
>
>> [...]
>
> I haven't checked, but I bet postgres caches parsed/optimized queries, and this test would have a nearly perfect cache hit ratio.

I wonder what values valkey will show.
Another thing is that ab will send the same request, so the same ID will be requested over and over again.

Maybe if we create, say, 1000 ids and each request then computes random(1,1000) and uses it for data extraction, it will show a bit more realistic values.

August 21

On Thursday, 21 August 2025 at 10:31:18 UTC, Serg Gini wrote:

> On Wednesday, 20 August 2025 at 16:06:10 UTC, Robert Schadek wrote:
>
>> On Wednesday, 20 August 2025 at 14:10:31 UTC, Denis Feklushkin wrote:
>>
>>> [...]
>>
>> I haven't checked, but I bet postgres caches parsed/optimized queries, and this test would have a nearly perfect cache hit ratio.
>
> I wonder what values valkey will show.
> Another thing is that ab will send the same request, so the same ID will be requested over and over again.
>
> Maybe if we create, say, 1000 ids and each request then computes random(1,1000) and uses it for data extraction, it will show a bit more realistic values.

ab is old and single-threaded; use wrk, which is multithreaded and supports Lua scripts to generate requests.
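For reference, a typical wrk invocation against the /special endpoint might look like this (a sketch, not from the thread; wrk's -t/-c/-d flags set thread count, connections, and duration, and -s loads a Lua script):

```
wrk -t4 -c10 -d30s http://localhost:8080/special
wrk -t4 -c10 -d30s -s random_ids.lua http://localhost:8080/special
```

random_ids.lua is a hypothetical script name here; the Lua hook is where the randomized per-request IDs suggested above could be generated.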

August 27

On Thursday, 21 August 2025 at 12:24:47 UTC, Dmitry Olshansky wrote:

> ab is old and single-threaded; use wrk, which is multithreaded and supports Lua scripts to generate requests.

I agree. Try with wrk, wrk2, bombardier...

Andrea

August 27

On Wednesday, 27 August 2025 at 12:54:37 UTC, Andrea Fontana wrote:

> On Thursday, 21 August 2025 at 12:24:47 UTC, Dmitry Olshansky wrote:
>
>> ab is old and single-threaded; use wrk, which is multithreaded and supports Lua scripts to generate requests.
>
> I agree. Try with wrk, wrk2, bombardier...
>
> Andrea

But I think it will be more important to prepare a better experiment.
Returning JSON from the DB without any modification is, I think, a rare case.

August 28

On Wednesday, 27 August 2025 at 15:32:52 UTC, Serg Gini wrote:

> On Wednesday, 27 August 2025 at 12:54:37 UTC, Andrea Fontana wrote:
>
>> [...]
>>
>> I agree. Try with wrk, wrk2, bombardier...
>>
>> Andrea
>
> But I think it will be more important to prepare a better experiment.
> Returning JSON from the DB without any modification is, I think, a rare case.

I added the impure random() to the SQL query that builds the JSON:

WITH Ps AS (
	SELECT json_build_object('id', id + random(0, 10)) AS d
	  FROM person
)
SELECT json_agg(d) AS ids
  FROM Ps;

Now its performance is:

Requests per second: 530.44 [#/sec] (mean)

which is close to the performance of the other methods mentioned earlier.
