On Monday, 10 June 2024 at 09:36:14 UTC, Dmitry Olshansky wrote:
> And on Posix libc is the systems API.
For some things, yes, but if you want to do anything with async I/O you're going to switch over to something like select or io_uring.
> > Or kludges like Photon. (Don't get me wrong, Photon is a very cool piece of work, but the nature of its implementation means that it suffers from the inherent limitations and drawbacks of fibers, and does so in a highly obfuscated way.)
I find the critique of stackful coroutines really weak, to be honest. Implementing n:m or 1:n threads in the kernel doesn’t scale, but combining async I/O with user-space fibers works beautifully. The only problem I see is stack sizes, and even there we can just reserve a lot more; on x64 it’s not a problem at all.
After all these years, Java is putting a lot of effort into supporting virtual threads, which will be introduced alongside normal threads.
Go is highly popular and doing just fine without even having normal threads.
That strikes me as more of an opinion than objective fact. I led a detailed discussion of this topic on Discord, and the end result was that the stack-size issue becomes catastrophic in non-trivial workloads. Vibe went with a 16MB stack size for precisely this reason, which means that to handle 65536 simultaneous connections, I need a server with 1TB of RAM. The reason is that, due to performance concerns, we turn off over-commit, so allocating a 16MB stack means fully committing 16MB of physical RAM. Go/Java/.NET can all handle 10x that number of connections on a server with 128GB of RAM, so that's the bar we have to meet.
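To make the arithmetic concrete, here is a quick back-of-the-envelope calculation (a sketch; the 16MB stack size and connection counts come from the discussion above, the per-task budget is just derived from them):

```python
MIB = 1024 * 1024
GIB = 1024 * MIB

# With over-commit off, each fiber stack is fully committed physical RAM.
stack_size = 16 * MIB          # Vibe's per-fiber stack size
connections = 65_536
total = stack_size * connections
print(total // (1024 ** 4), "TiB committed")   # -> 1 TiB for 65536 connections

# The bar to meet: ~10x the connections on a 128 GiB server.
budget = 128 * GIB
conns = 10 * connections
per_task = budget // conns
print(per_task // 1024, "KiB budget per connection")  # -> 204 KiB
```

At roughly 200 KiB of memory per connection, a fixed 16 MiB committed stack is off by nearly two orders of magnitude.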
No other language suffers this problem, not even Go. The reason is that every language that successfully uses fibers uses dynamically expanding stacks, but that requires a precise, stack-scanning, moving GC, something that D, so long as Walter is among the living, will never have.
Stackless coroutines also do not suffer this problem, which is why .NET and Rust use them.
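One way to see why stackless coroutines sidestep the stack-size problem: the compiler lowers each async function into a state machine whose frame holds only the locals that are live across suspension points, allocated on the heap. A hand-rolled Python sketch of that lowering (the names and the trivial driver are illustrative, not any compiler's actual output):

```python
# Hypothetical async function being lowered:
#   async def read_twice(src):
#       a = await src()   # suspension point 1
#       b = await src()   # suspension point 2
#       return a + b

class ReadTwice:
    """State-machine form: the 'frame' is just these few fields,
    not a multi-megabyte stack."""
    def __init__(self, src):
        self.src = src
        self.state = 0
        self.a = None          # only 'a' lives across a suspension point

    def resume(self, value=None):
        # Returns ('await', thunk) when suspending, ('done', result) when finished.
        if self.state == 0:
            self.state = 1
            return ("await", self.src)        # suspend at first await
        if self.state == 1:
            self.a = value                    # result of first await
            self.state = 2
            return ("await", self.src)        # suspend at second await
        if self.state == 2:
            self.state = 3
            return ("done", self.a + value)
        raise RuntimeError("resumed a finished coroutine")

def run(machine):
    """Trivial driver standing in for an event loop: completes each
    await synchronously by calling the thunk."""
    step = machine.resume()
    while step[0] == "await":
        step = machine.resume(step[1]())
    return step[1]

print(run(ReadTwice(lambda: 21)))  # -> 42
```

This is essentially what the .NET and Rust compilers emit: memory use is proportional to the live locals of each coroutine, so millions of them fit comfortably in RAM regardless of over-commit settings.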
Here are some resources, including a real-world example from CloudFlare:
https://devblogs.microsoft.com/oldnewthing/20191011-00/?p=102989
https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p1364r0.pdf
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1520r0.pdf
https://blog.cloudflare.com/how-stacks-are-handled-in-go
> On Windows I have no idea why we need to bind to libc at all. Synchronize with C’s io?
Beyond the fact that we currently use libc, I don't see any reason for us to either.
> On Posix mostly syscall interface. That and malloc/free, memcpy/memset (simply because it’s optimized to death). All of the rest is legacy garbage no one is going to touch anyway.
And in the case of malloc/free, replacing those with different allocators has become quite the rage lately. jemalloc seems particularly popular. IIRC, Deadalnix's new GC uses a different allocator from libc. So even fewer reasons.
> Being a system language D it allows anyone to use system APIs, meaning it’s easy to step on the toes of DRT if it uses lots of them.
Can you explain what you mean? I'm not sure how we'd step on DRT's toes if applications use a lot of system APIs.