June 09

On Thursday, 6 June 2024 at 18:00:56 UTC, Sebastiaan Koppe wrote:

> On Wednesday, 5 June 2024 at 23:58:14 UTC, Adam Wilson wrote:
>> [...]
>
> I think DRT only needs to concern itself with supporting language features. Anything else needs to go elsewhere.

Where then?

A runtime is a collection of code that is executed, at runtime, by applications. The compiler doesn't execute code in the DRT while it's compiling, so it's not really a compiler support library. The only thing that the compiler needs to know about the runtime is that the symbols:

  1. Exist.
  2. Follow the compiler's emitted ABI for them so that they'll link.

The compiler is only tangentially interested in the runtime insofar as it emits well-known symbols that are contained in the runtime. There is nothing stopping DMD from emitting Phobos symbols, and indeed, on a few occasions, it does (std.math, IIRC). So by this definition of "runtime", all of Phobos is part of DRT, since technically any Phobos symbol is a compiler support symbol as well.
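To illustrate (the hook below is a made-up name, not an actual druntime symbol): the compiler lowers a feature to a call on a well-known symbol, and anything that defines that symbol with the matching mangling and ABI will satisfy the link.

```d
// Hypothetical sketch: `_d_exampleHook` is a made-up name, not a real
// druntime symbol.

// What "the runtime" supplies: a symbol with a known name and C ABI.
extern (C) void* _d_exampleHook(size_t size)
{
    import core.stdc.stdlib : malloc;
    return malloc(size);
}

// What the compiler conceptually emits for some feature: a call to that
// well-known symbol. All the compiler cares about is that the symbol
// exists and that its mangling/ABI match what it emitted.
void* loweredCall(size_t n)
{
    return _d_exampleHook(n);
}
```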

Which, consequently, is why Phobos and DRT are built and shipped together. That fact should be enough to end this prattle about keeping DRT solely for the compiler. The hard reality is that the runtime is far more closely associated with the standard library than with the compiler. And it always will be. If somebody wanted to link in a new runtime, the compiler would not know or care as long as the symbols it relied on were available and mangled the same. IIRC this is what Tango's runtime did. But if somebody wants to try to make Phobos work with that same new runtime ... well, I guess we'll see them a year or two later with a lot less hair.

This becomes a blocker if the goal is to make Phobos "source-only" to ease the transition to Editions: we would have to move code out of Phobos and into DRT, since we would no longer be able to ensure that the Phobos symbols DMD emits are actually compiled into the binary.

I am not proposing that we get rid of DRT or its utility in language feature support, only that we accept its mission for what it is: to be the universal system interface. It's either that or we continue to pretend that the split between DRT and Phobos is an actual thing.

Runtimes do not serve the compiler; they serve the applications that the compiler builds. Yes, the compiler needs to be aware of the runtime, but that's it. I think we are trying too hard to keep DRT "small" to make porting easier, when all we've really done is create a situation where we have three runtimes: CRT, DRT, and Phobos.

Mission accomplished in the most uselessly restrictive and excessive way possible. Yay for D!

TL;DR: The runtime is far more important to libraries and applications than it ever will be to the compiler.

So, where should all of this system interface code go?

June 09

On Thursday, 6 June 2024 at 23:50:06 UTC, max haughton wrote:

> Secondly, I think this reasoning leads towards a trap: placing an arbitrary division between phobos and druntime, or more charitably placing that division in the wrong place, can lead to a lot of needless debate or lost productivity.

This is actually the heart of what I am trying to resolve. Right now people are treating DRT as some sort of particularly annoying compiler extension. I wrote more about this elsewhere, so I won't duplicate it here.

> Unless the compiler (i.e. containing druntime) and phobos were in a repo together, you can end up in the same situation we had before with the compiler and runtime being separate, leading to large numbers of double-PRs and so on, only squared.
>
> While we still have a runtime, I don't think phobos should have all of the nuts and bolts in it, but in contrast to "maintenance would be a nightmare" I posit that this would be preferable to many alternatives: be glad you can do the maintenance in the first place. When you split things up across projects (or worse, repos) you lose the ability to do atomic commits and to test at the same time (easily; buildkite isn't easy), for example.

Is this a critique of my idea or of the way things are today? Because I talked with Mike at DConf last year about merging Phobos into the DMD repo, and he told me then that the plan has always been to turn D into a mono-repo project, precisely for the reasons you have laid out above. My assumption is that the reason this has not been done yet is the difficulty of getting the CI infra stood up correctly in the DMD repo, and not some ideological reason.

In any case, I put this on the agenda for the Monthly meeting next week.

> Aside from that, I really do like the idea of ditching the C API where easy, e.g. purely as a gimmick I've always liked the idea of hello world inlining down to a system call where applicable.
>
> There are things where the C runtime does cover up a lot of hardware-specific details that we really don't need to care about, but other things where it's a pile of crap and worth ditching.

You and me both.

June 09

On Thursday, 6 June 2024 at 23:58:07 UTC, max haughton wrote:

> The way to get a stable foundation that can be ported easily to the likes of wasm and so on is not making druntime bigger. You end up with a good result by doing so, but it would be like taking a shortcut through a maze rather than the road - fragile.

I think you misunderstood what I am proposing. I want to split out the necessary compiler symbols into a "Mini-DRT" that represents the minimum required to make the application function. Then push the rest up higher in the onion.

I'm trying to make porting easier. And the best way to do that is layering.

June 09

On Thursday, 6 June 2024 at 18:56:59 UTC, Mathias Lang wrote:

> On Wednesday, 5 June 2024 at 23:58:14 UTC, Adam Wilson wrote:
>> Before I get into the design of the DRT I want to propose rules that will allow us to continue to evolve DRT in the future without breaking past editions. [...]
>
> We have to take into account the combinatorial explosion that comes with the rename method. For example, should you choose to change the Throwable interface (because it's old and really not good), you would need to make sure that everything that uses it is compatible with it. There are known ways to do it in other languages (https://wiki.qt.io/D-Pointer), but I think the rules you proposed are incomplete as they only work for functions, not types.

I'm actually a fan of that approach for data structures; I've seen it referred to as either a "pImpl" (pointer-to-implementation) or a "handle". Realistically, for editions this is likely the direction we need to go.
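A minimal sketch of what that could look like (the names here are hypothetical, not a proposal for actual DRT types):

```d
// Hypothetical sketch of the handle/pImpl idea; none of these names exist
// in DRT. User code only ever sees the opaque handle, so the layout behind
// it can change between editions without breaking the ABI.
struct ThrowableHandle
{
    private ThrowableImpl* impl;  // opaque: size and layout hidden from callers

    @property string msg() { return impl.msg; }
}

private struct ThrowableImpl
{
    string msg;
    // Fields can be added, removed, or reordered here in a later edition
    // without affecting code compiled against the handle.
}
```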

As for the combinatorial explosion problem: I'd rather have that than arbitrarily cut off development of DRT. Yes, it will result in an ever-increasing number of symbols; there's no way around that. In most cases we should be able to redirect to newer code and leave the old methods as forwarding stubs. Yes, there will be cases where we can't, but it doesn't have to be a geometric expansion.
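By forwarding stubs I mean something like this (hypothetical names again):

```d
// Hypothetical names; the old entry point stays alive purely as a thin
// forwarder to the new implementation.
extern (C) int _d_doThingV2(int x)
{
    // new implementation lives here
    return x * 2;
}

deprecated("superseded by _d_doThingV2")
extern (C) int _d_doThing(int x)
{
    // old symbol kept for code built against an earlier edition
    return _d_doThingV2(x);
}
```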

>> For DRT I propose a sharded design with a split between the compiler support modules and the universal system interface shards. [...]
>
> Layering things / encapsulating them is good, but you also need a way for inner layers to refer to outer layers, otherwise you severely limit the capabilities of your system.
>
> Take for example unittests: we would like to provide a much better out-of-the-box experience for unittests. You should get colors, a summary, the ability to run them in parallel, or even a filewatcher that auto-recompiles them, out of the box. We might consider unittests to be in Core, but I'm sure the rest is not.

IMO, the FileWatcher/Re-builder is something that I've always seen done outside of the compiler. The compiler compiles things; we need to avoid trying to make it the "Everything Program". For the same reason, I feel the same way about proposals that try to make the compiler responsible for dependency management.

>> Simple, so that we can move beyond the limitations imposed by the C Runtime.
>
> A resounding yes. Keeping the ability to easily bind to the C runtime is useful (e.g. being able to spin up a socket and just look at the C documentation), but because it's "good enough", it was never good at all.

Agreed.

June 09
On Friday, 7 June 2024 at 01:23:03 UTC, Walter Bright wrote:
> Is relying on the C runtime library really a problem? It's probably the most debugged library in history, and it's small and lightweight.

For example, let's say you want to do some asynchronous I/O. Forget the CRT, it just doesn't do that. So off you go to the system APIs anyways. Or kludges like Photon. (Don't get me wrong, Photon is a very cool piece of work, but the nature of its implementation means that it suffers from the inherent limitations and drawbacks of fibers, and does so in a highly obfuscated way.)

The point is more that if we want to do useful things in the modern world that exists beyond the CRT, then we have to work around it, and if we have to work around it anyways, why are we using it at all?

If you go with the system APIs, the world is your oyster. Yes, there is more work upfront, but the number of capabilities we would be able to enable is immense.
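To make that concrete, here's a rough sketch of the "straight to the system API" style on Windows, using the Win32 bindings that already ship in druntime (illustrative only; today the surrounding program would still pull in the CRT through the usual startup path):

```d
// Sketch: write to stdout via the Win32 API directly, no C stdio involved.
// Assumes the core.sys.windows bindings; illustrative, not a DRT proposal.
version (Windows)
{
    import core.sys.windows.windows;

    void rawHello()
    {
        HANDLE stdOut = GetStdHandle(STD_OUTPUT_HANDLE);
        string msg = "hello from the system API\r\n";
        DWORD written;
        WriteFile(stdOut, msg.ptr, cast(DWORD) msg.length, &written, null);
    }
}
```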
June 09
On Saturday, 8 June 2024 at 21:15:39 UTC, monkyyy wrote:
> On Friday, 7 June 2024 at 01:23:03 UTC, Walter Bright wrote:
>> Is relying on the C runtime library really a problem? It's probably the most debugged library in history, and it's small and lightweight.
>
> Depends on goals. If you're targeting moving D to a higher level, wasm+libc will just suck (they broke file I/O despite the W3C lying; it will always be a weird edge case you have to specifically support), and I think a Go/Swift approach of a std making a non-C API will probably be best.
>
> If you want to compete on the low level, Zig-style, competing with C involves competing with libc: other platforms, new chips; maybe things break weirdly or they have bad workarounds.
>
> If you want to keep D exactly where it is, I can't imagine much reason to change the libc dependence; it's fine for Windows, Linux, and fake Linux. So is there going to be a major push for wasm or embedded?

I broadly agree with this assessment.

Moving up a level necessarily means broadening out beyond the CRT.

Moving down is something we're not well equipped to handle as you start to compete on execution speed, which means esoteric back-end optimizations, which is something DMD sucks at so we use LDC/GDC. And since LLVM and GCC already exist and we're already using them, we've already admitted that we're not going down that path.

Staying where we are means stagnation. Let's not do that.

And WASM keeps coming up as a priority for DLF so I think we can all see where this is headed.
June 09
On Saturday, 8 June 2024 at 21:15:39 UTC, monkyyy wrote:

> If you want to keep D exactly where it is, I can't imagine much reason to change the libc dependence; it's fine for Windows, Linux, and fake Linux. So is there going to be a major push for wasm or embedded?

As I remember, druntime uses no more than 10 CRT calls: allocate/free memory, sockets, threads. That's all.
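Roughly the kind of surface being described (illustrative only, not the actual list druntime binds to):

```d
// Illustrative only: the rough shape of the small libc surface in question,
// not the actual set of calls druntime makes.
extern (C) nothrow @nogc
{
    // memory
    void* malloc(size_t size);
    void  free(void* ptr);
    void* memcpy(void* dst, const(void)* src, size_t n);

    // sockets (POSIX)
    int socket(int domain, int type, int protocol);
    int close(int fd);
}
// threads come in via pthread_create / CreateThread, etc.
```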

It upsets me that we have a huge core.stdc.*; it would be great to move these files somewhere else.
June 09
On Sunday, 9 June 2024 at 03:22:07 UTC, Adam Wilson wrote:
> 
> Moving down is something we're not well equipped to handle as you start to compete on execution speed, which means esoteric back-end optimizations

I'm not sure how esoteric it is; I think you could have mixins generate inline asm syscalls and just grab the tables, and then something like a data structure for SIMD that's mostly op overloads.
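For instance, a rough, untested sketch (DMD-style inline asm, x86-64 Linux; a mixin could stamp one of these out per entry in a syscall-number table):

```d
// Rough, untested sketch: raw write(2) syscall on x86-64 Linux via
// DMD-style inline asm, with no libc in the path.
version (linux) version (D_InlineAsm_X86_64)
long sysWrite(int fd, const(void)* buf, size_t len)
{
    long ret;
    asm
    {
        mov RAX, 1;     // SYS_write
        mov EDI, fd;
        mov RSI, buf;
        mov RDX, len;
        syscall;
        mov ret, RAX;
    }
    return ret;
}
```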
June 10
On Sunday, 9 June 2024 at 03:10:26 UTC, Adam Wilson wrote:
> On Friday, 7 June 2024 at 01:23:03 UTC, Walter Bright wrote:
>> Is relying on the C runtime library really a problem? It's probably the most debugged library in history, and it's small and lightweight.
>
> For example, let's say you want to do some asynchronous I/O. Forget the CRT, it just doesn't do that. So off you go to the system APIs anyways.

And on Posix, libc is the system API.

> Or kludges like Photon. (Don't get me wrong, Photon is a very cool piece of work, but the nature of its implementation means that it suffers from the inherent limitations and drawbacks of fibers, and does so in a highly obfuscated way.)

I find the critique of stackful coroutines really weak, to be honest. Implementing n:m or 1:n threads in the kernel doesn't scale, but combining async I/O with user-space fibers works beautifully. The only problem I see is stack sizes, and even there we can just reserve a lot more; on x64 it's not a problem at all.

Java, after all these years, is putting a lot of effort into supporting virtual threads, which will be introduced alongside normal threads.
Go is highly popular and doing just fine without even having normal threads.

> The point is more that if we want to do useful things in the modern world that exists beyond the CRT, then we have to work around it, and if we have to work around it anyways, why are we using it at all?

On Windows, I have no idea why we need to bind to libc at all. To synchronize with C's I/O?

On Posix it's mostly the syscall interface. That, plus malloc/free and memcpy/memset (simply because they're optimized to death). All of the rest is legacy garbage no one is going to touch anyway.

> If you go with the system APIs, the world is your oyster. Yes, there is more work upfront, but the number of capabilities we would be able to enable is immense.

Being a systems language, D allows anyone to use system APIs, meaning it's easy to step on the toes of DRT if it uses lots of them.

June 10
On 6/8/2024 8:10 PM, Adam Wilson wrote:
> On Friday, 7 June 2024 at 01:23:03 UTC, Walter Bright wrote:
>> Is relying on the C runtime library really a problem? It's probably the most debugged library in history, and it's small and lightweight.
> 
> For example, let's say you want to do some asynchronous I/O. Forget the CRT, it just doesn't do that. So off you go to the system APIs anyways. Or kludges like Photon. (Don't get me wrong, Photon is a very cool piece of work, but the nature of its implementation means that it suffers from the inherent limitations and drawbacks of fibers, and does so in a highly obfuscated way.)
> 
> The point is more that if we want to do useful things in the modern world that exists beyond the CRT, then we have to work around it, and if we have to work around it anyways, why are we using it at all?
> 
> If you go with the system APIs, the world is your oyster. Yes, there is more work upfront, but the number of capabilities we would be able to enable is immense.

I don't understand how the CRT impedes any of that, or why any of it needs to be worked around. And D is intended to work with hybrid C/D code, so the CRT support needs to be there.

The CRT also does some startup initialization things that need doing, like collecting the command line arguments to present to the program.