So since the last time I was pushing to DMD regularly (several years ago), it seems like the CI time has inflated enormously.
Maybe this happened slowly and nobody noticed, but I'd say it's a night-and-day difference compared to when I last checked in.
There are something like 50 build jobs. Of the main ones, the *nix ones take 10-15 minutes, the macOS ones take around 25 minutes, and the Windows ones are at 30 minutes and counting...

I wonder if any effort has been made to improve this?
Just from a cursory look, I noticed some low-hanging fruit:
1. Some build machines don't seem to be in an operational state by default; I noticed apt-get (or other package managers) fetching at the start of the build to get the build environment in order, which took over a minute each time. Could that be an instant ~10% improvement, just by updating the images?
2. The DMD test suite accounts for most of the time. Is there any reason the tests can't be batched?

The 'compilable' tests could probably be compiled all at once, at least to some extent? There are about 1,500 of those... it'd probably be almost instant if they were batched (see the first sketch below).
The 'runnable' ones could conceivably be batched too, but they all seem to have separate main() functions rather than using unittest blocks... if there's some way to sequence the entry points (see the second sketch below), that's another thousand or so, and they're probably the slowest ones, since each one also has to link and run.
There are almost another 2,000 'fail_compilation' tests, and they seem to move through slightly faster (I guess they don't need to link), but that's still a lot.
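
To illustrate what I mean by batching the 'compilable' tests, here's a rough sketch. It's not how the actual test runner works, and it glosses over per-test flags, which would probably force grouping tests by their required arguments:

    // Hypothetical sketch: compile a batch of 'compilable' tests with one dmd
    // process instead of spawning one process per test. Assumes every file in
    // the batch can be compiled with the same flags, which won't always hold.
    import std.process : execute;
    import std.stdio : writeln;

    void compileBatch(string[] testFiles)
    {
        // -c: compile only, -o-: don't write object files
        auto result = execute(["dmd", "-c", "-o-"] ~ testFiles);
        if (result.status != 0)
            writeln("batch failed; would need to fall back to per-file runs:\n", result.output);
    }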
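
And for the 'runnable' ones, this is the kind of thing I'm imagining (purely hypothetical; 'BatchTest' is an invented version identifier, not an existing convention in the test suite):

    // Hypothetical: a runnable test written so its entry point can either be a
    // standalone main() (as today) or a named function that a generated batch
    // driver links in and calls in sequence, amortising the link step.
    version (BatchTest)
        void runnable_foo_entry() { runTest(); }
    else
        void main() { runTest(); }

    private void runTest()
    {
        // ... original test body ...
        assert(1 + 1 == 2);
    }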

Has anyone poked at this? Are there known inhibitors?