December 05

The D Language Foundation's monthly meeting for June 2024 was held on Friday the 13th. It lasted about an hour and 25 minutes. This was the first time Rikki Cattermole joined us.

The Attendees

The following people attended:

  • Walter Bright
  • Rikki Cattermole
  • Timon Gehr
  • Martin Kinkelin
  • Dennis Korpel
  • Mathias Lang
  • Razvan Nitu
  • Mike Parker
  • Robert Schadek
  • Steven Schveighoffer
  • Adam Wilson

The Summary

The DMD ARM backend

Razvan asked Walter about his plans for the DMD ARM backend and wondered if he'd considered alternatives, like merging the DMD frontend with the LDC backend.

Walter said people had written him telling him they liked using DMD because it was small and fast. He'd received a lot of pressure over the years to implement an ARM backend. Other people had shown an interest in writing one but then given up because it was too difficult or time-consuming. So he'd just decided to get it done.

He didn't plan on doing any optimizations initially. The ARM instruction set was ridiculously complicated, but there was a simple path through. He would stick with the simple path and not generate optimized code beyond what the global optimizer and the register allocator already did. Once the simple path was implemented and working, the micro-optimizations could be done on a per-instruction basis. So far, he'd made more progress than he'd expected to. Simple functions could be compiled and the generated code was correct.

He'd already written a disassembler for it because that made it easy to check the generated code. A big issue was the lack of an inline assembler. He didn't think it could compile all of the runtime without one. Brad Roberts had already written an ARM inline assembler and had expressed interest in adapting it to ARM64. That was independent of writing an ARM backend, so it would be great if he got it done.

Razvan felt that anyone doing serious work with D would use LDC for ARM. The only people he'd seen interested in an ARM backend for DMD wanted it for toy projects. Though it might be interesting for D's general direction, he wasn't sure if it was the best way for Walter to allocate his time.

Walter said that was a great point, but he'd been surprised by the amount of interest it had generated. Also, he thought it was good to have control over the whole process from beginning to end in the compiler. He had no idea how the LDC and GDC backends worked, so adding a new feature could be a problem.

Martin asked Walter not to implement another inline assembler syntax. We already had standardized, architecture-independent inline assembly expressions in LDC and GDC that were 100% compatible with the GCC style. The code to parse it was already in the frontend but only enabled for GDC and LDC. We already had the special case for the DMD inline assembly syntax. That was fine for legacy code. For the ARM stuff, DMD could reuse the existing parser and we wouldn't need version statements all over the place.

Walter said the Intel x86 inline assembler in DMD wasn't some kludgy thing he'd made up. It matched Intel's spec so that there was a one-to-one correspondence. The ARM disassembler, and what he intended for the assembler, matched the spec. He had no idea about the syntax used by LDC and GDC.

Martin said that for x86 there were two dialects, Intel and AT&T. Both LDC and GDC had gone with the AT&T dialect, which happened to be less commonly used. But for ARM, they happened to have the same dialect everywhere. The important thing was that it was string-based so it could be architecture-independent.
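To make the contrast concrete, here's a rough sketch of the two styles (my example, not from the meeting). DMD's inline assembler takes Intel-style mnemonics as first-class statements, while the GCC-style syntax shared by GDC and LDC is string-based, with constraints binding D variables to operands:

    void bump()
    {
        int x = 10, y;

        // DMD's built-in inline assembler (Intel dialect):
        version (D_InlineAsm_X86_64)
        {
            asm
            {
                mov EAX, x;
                inc EAX;
                mov y, EAX;
            }
        }

        // GCC-style string-based inline assembly as parsed by the
        // frontend for GDC and LDC (guarded here with GDC's version
        // identifier), using the AT&T x86 dialect:
        version (GNU)
        {
            asm
            {
                "movl %1, %0
                 incl %0"
                : "=r" (y)
                : "rm" (x);
            }
        }
    }

Because the GCC-style template is just a string handed to the backend, the same statement syntax works unchanged on any target architecture, which is the architecture-independence Martin was referring to.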

He said the inline assembler was a minor point. Regarding Razvan's point about Walter's use of time, Martin said that was up to Walter. If he thought there was value in it, he should do it. Years ago, when this discussion had come up before, Martin had believed that Walter's time was better spent fixing bugs. But he had a lot of industry experience now. Even Symmetry used DMD for debug builds simply because of its speed. He agreed with Walter on that. It was nice having everything in a single code base, not depending on huge code bases with huge backends. We had it all in DMD and it was fast. That was the main thing. So he felt if Walter wanted to do this, he should be free to.

Regarding Razvan's speculation about merging the LDC backend with DMD, Martin said that was never going to happen. The DMD frontend was in LDC, so anyone who wanted to work on stuff with the LLVM backend was free to do it in the LDC repository. Anything else, if it were about merging stuff from the LDC repository into core DMD, was just not viable. Anyone implementing a new feature that required backend changes would need to be able to implement it in C++ for LLVM. He said Walter would never have finished the bitfield stuff if he'd needed to do it for LDC as well.

Walter said he didn't want to program in C++ anymore, and he was sorry Martin had to.

Rikki said that DMD was dead on macOS without ARM support. He noted that Atila wanted unit testing to run in a daemon, and in that scenario, if you were writing object files, you'd already failed. DMD's backend would be great for a JIT.

Walter said there was another small point. Most people found it nearly impossible to understand how the DMD backend worked and how to improve or modify it. He could agree with them. He hadn't looked at much of that code in years, and in trying to implement the ARM backend he'd had to get back into it and try to figure out how it worked. He could see how it was difficult to understand. The structure of it was fairly simple, but the massively kludgy x86 instruction set, all the special cases, and all the weird instruction sequences for various things made it difficult to tease out their implementations. He was hoping the ARM implementation would look a lot simpler.

So far, it was turning out a lot simpler, but it followed the same structure as the x86 implementation. He thought it would be more tractable for people who wanted to enhance the code generator. It would also help people who wanted to modify the x86 backend. Because the structure was the same, they could look at it to understand what a given function in the x86 version was doing.

Walter agreed with Rikki that it could be used as a simple JIT. He hadn't thought of that.

Steve said he hadn't used DMD on his Mac in a couple of years because it was no longer native. He'd just been using LDC. But the main thing was that all the frontend development happened in DMD. So if we didn't have an ARM backend there, we wouldn't see any problems with new features or changes until LDC had updated to match it. He didn't know if that was a huge issue, but Mac would probably stop supporting x86 at some point. Then we'd no longer be able to say that DMD worked on Mac. So there were some huge benefits to having an ARM backend in DMD, even if it wasn't the best ARM backend in the world. As long as it worked and generated valid code, then it was worth having.

Steve thought he agreed with Martin about the inline assembler. He felt like it wasn't a hugely used feature. From that perspective, it seemed like a waste of time to implement a separate syntax. But he didn't use inline assembly much, so he didn't really understand the issues.

Walter said inline assembly was needed far, far less than he'd anticipated. But it was one of those things that when you needed it, you needed it. Otherwise, you'd be programming in hex. And that was unacceptable.

Timon said this was one of the cases where he would back Walter on all points. It was important to have an ARM backend for DMD because DMD was a very hackable compiler. It was easy to clone and easy to build. It would be great if that kept working on ARM. As for the inline assembler, he felt what DMD did for x86 was much superior to what LDC and GDC did.

Walter noted that he wouldn't attempt to adapt the existing inline assembler because it was a horrible piece of code. He wasn't the original author, and every time there was a new Intel instruction he had to support he was always worried he wouldn't be able to get it to work.

Rikki thought that if anyone was using inline assembly today, that was a compiler bug: it indicated a lack of intrinsics. Atomics, for example, should all be intrinsics. None of that should be inline assembly. So an inline assembler should be a low priority.
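As a sketch of that point: D already exposes atomics through DRuntime's core.atomic, which the compilers lower to the appropriate instructions, so no inline assembly is needed for something like an atomic counter:

    import core.atomic : atomicLoad, atomicOp;

    shared int counter;

    void hit()
    {
        // Lowered by the compiler to an atomic read-modify-write;
        // no hand-written assembly required.
        atomicOp!"+="(counter, 1);
    }

    int current()
    {
        return atomicLoad(counter);
    }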

Moving Phobos v3 to the mono repo

Adam said that after a conversation with me at DConf '23, he'd understood that there was a plan to move Phobos into the mono repo with DMD and DRuntime. He understood that the CI infrastructure had certain issues we'd have to work through, but wondered if there was a philosophical reason why it couldn't be done.

Walter said one reason was that the compiler shouldn't use Phobos because of a bootstrapping problem. It would be too hard to debug the compiler if he couldn't build it without Phobos. He was concerned that adding Phobos into the mono repo would make the temptation to use it in DMD too great. It would also be too tempting to merge DRuntime and Phobos. To build a successful compiler, he needed to be able to compile it and debug it without Phobos.

Martin said that didn't have anything to do with the location of the Phobos source code. We could still do what we did now and ensure that no DMD code was using Phobos. From his perspective, merging the DRuntime and DMD repositories had proven difficult for LDC. They had their own forks of DRuntime and Phobos, along with a repository of DMD tests extracted from the official DMD repository. He'd had to create an entirely different scheme to handle everything after the merge. Should Phobos be merged in, he'd have to do the work to adapt again. And he probably wasn't the only one depending on it.

He said the problems would mainly be downstream. He was never a friend of the idea of moving Phobos into the DMD repository simply because Phobos had nothing to do with DMD internals. The compiler and the runtime were tightly coupled, so putting them together made sense. It solved some pain points that existed when they were in separate repos. But he didn't think he'd had to create a Phobos branch and a DMD branch at the same time in years. The coupling between them was very low. He saw no good reason to move Phobos into the DMD repo.

What he found tedious was the build. For the DMD tests to succeed, he first had to go into the DMD repo and run the makefile to build DMD and DRuntime. Then he had to cd into the phobos directory to build Phobos because the DMD tests depended on libphobos.a. He said that was a pity, but he didn't know if it justified moving Phobos into the mono repo. He thought if that were to happen, we'd also need to move the dlang.org and installer repos into it for completeness, which he thought was a bad idea.

Mathias said it was never the plan to move Phobos into the mono repo. We had always pushed back on that idea. He didn't think we should do it. He understood we were moving away from distributing a precompiled Phobos for editions.

Adam said he and Rikki had had a lot of discussions about this. They felt that the coupling between DRuntime and Phobos was bound to increase if we did anything beyond what Phobos 2 did. For example, one thing that always came up when he discussed DRuntime was fibers vs. stackless coroutines, i.e., stackful vs. stackless. He'd talked with Dmitry Olshansky, and Dmitry had come to realize that fibers were probably not the way D would work in the future, and he recognized that this meant we'd need to add things to DRuntime like an event loop and some interfaces. The compiler would need to know about it and do some rewrites, but it would be Phobos primarily linking to it. The actual objects we'd need for it would be in Phobos.

So in his view, coupling was going to get worse in the future, not better, but he didn't see any way around it. Even if we were doing fibers, we'd still be adding stuff, just less of it.

Another thing that had come up in discussions Adam had with Walter was replacing the C file I/O with system file I/O. That, too, would increase the amount of coupling. It might be low now, but we shouldn't expect it to remain that way.

Walter said it was okay for Phobos to depend on the runtime, but the runtime could never depend on Phobos. Adam agreed. His point was that Phobos would depend on DRuntime much more in the future. So we'd be in a situation where implementing some things would require changes in both repositories.

Mathias noted that we were already doing that with two repositories and it worked. If we did move toward more integration, then we should certainly consider if we should merge the repositories. But he would like to discuss that on its own merits. At the moment, there was no goal or reason to do it.

Rikki said that DRuntime didn't need to be with DMD. The coupling was a lot less than people thought. To new a class, you didn't really need a GC; any function that allocated memory, like malloc, would do. He thought a better way to go would be to extract DRuntime from the DMD repository and instead have a mini-DRuntime there that we could dogfood.
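A minimal sketch of what Rikki was describing, using core.lifetime.emplace to construct a class instance in malloc'd memory with no GC involved:

    import core.lifetime : emplace;
    import core.stdc.stdlib : free, malloc;

    class Point
    {
        int x, y;
        this(int x, int y) { this.x = x; this.y = y; }
    }

    Point makePoint(int x, int y)
    {
        // Allocate raw memory for the instance and construct it in
        // place; the GC never gets involved.
        enum size = __traits(classInstanceSize, Point);
        void[] mem = malloc(size)[0 .. size];
        return emplace!Point(mem, x, y);
    }

    void disposePoint(Point p)
    {
        // Run the destructor (if any) and release the memory.
        destroy(p);
        free(cast(void*) p);
    }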

Timon noted that OpenD had merged the most popular dub libraries into the main repository. I said I could promise there was no plan for that. Robert said that taking the good ideas from those libraries and implementing them in Phobos was something he would be after.

Steve wanted to point out the different kinds of coupling we were talking about here. If you had a feature that had to be implemented in both DMD and Phobos, then you couldn't implement the compiler side of it and then later on implement the Phobos side of it once the compiler side was solid, because you couldn't test it without both sides being implemented. With DRuntime and Phobos, it was a different proposition. You could add anything you wanted to DRuntime, like a new class, that nothing else was using, and it would be fine. Then later on you could add the equivalent piece to Phobos. It wasn't the same kind of coupling.

Mathias noted that in the past the runtime interface was implemented as a set of C functions. The compiler would know to emit calls to those functions and it was great. The problem was CTFE: it couldn't execute those C functions. There were other features implemented in the compiler to match what the runtime was doing.

In recent years, we had moved to a templated interface. Instead of calling a C function, we instantiated a template with a certain set of arguments. That required the template's code in DRuntime to be known to the compiler, so the coupling had increased over the last few years. Rikki said that in his proposal, those kinds of hooks would always be with the compiler. They would never be in a separate repository.
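A simplified illustration of the two styles of hook Mathias described (the declarations here are abridged sketches, not the exact DRuntime signatures):

    // Old style: the compiler emits a call to an extern(C) runtime
    // function that works through TypeInfo at runtime and is opaque
    // to CTFE.
    extern (C) void[] _d_arrayappendT(const TypeInfo ti, ref byte[] px, byte[] y);

    // Newer style: the compiler instantiates a DRuntime template with
    // the static types, so the hook is ordinary D code that CTFE can
    // execute; the cost is that the compiler must know the template's
    // source, which tightens the coupling.
    ref Tarr _d_arrayappendcTX(Tarr : T[], T)(return ref scope Tarr px, size_t n);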

Adam said the most common complaint he'd heard so far was that DRuntime was already too big and did too much. He thought people hadn't read what was actually being proposed: that we trim down the runtime to only the essentials that the compiler needed and leave that in the compiler repository; everything else we rip out and put somewhere else. That way people doing ports would have just a small kernel to worry about without all this other stuff coming with it.

Martin said he fully agreed. He cited the standard C headers as an example. Why were they in DRuntime? He didn't know. Maybe they were used somewhere in DRuntime and so that was why they got put there. He emphasized that it was about more than just the compiler hooks. It was also about binary reflection things that were needed at runtime. It was about threading and thread-local storage. The GC needed to know about TLS things, the static data segments of each binary.

He said it was about exception handling, which depended on the exception personality function used by the compiler, and which varied on Windows. For example, DMD had its own personality function in the runtime, but LDC used the Microsoft C++ personality function and hacked the terminate handler to implement chained exceptions and other stuff. It was all hacky, but we could catch C++ exceptions in D and even throw C++ exceptions from D and catch them in C++. These were all little compiler details that just happened to be known about in DRuntime.

He said it was also about little bits of state and the protocol determining into which section the compiler emitted the ModuleInfo pointers so the runtime could get ahold of the ModuleInfos of all the linked object files, and about the ModuleInfo definition itself. There was a protocol between the runtime and the compiler that was hardcoded into the compiler. That wasn't something you could just add to your own object file or put in the runtime. The TypeInfo stuff was all hardcoded as well.

He wanted to stress that it was about so much more than hooks. But he was fully in favor of a small DRuntime. There'd been talk about it for years. Something that just had object.d with its hooks and public aliases that were required by the compiler, and then some of the runtime binary stuff that he'd just mentioned, which in some parts was only GC-specific. If we didn't have the GC, we probably wouldn't need to know about TLS ranges.

Rikki noted that exceptions depended heavily on the runtime to the point that you couldn't have stack unwinding working in BetterC. That definitely needed a rewrite, because stack unwinding should never be turned off. That was a pretty big bug that other compilers didn't have.

I asked everyone to pause here for a poll. My takeaway was that there was no way Phobos was going into the mono repo, so I asked if that was the consensus.

Steve said if the question was whether Phobos should be put into the same repository, he would say no. But it would be good to have a better delineation in DRuntime between what the compiler needed and what was intended for the library.

I said that was my next question. What emerged from this discussion was that we should slim down DRuntime to the core needed by the compiler and put the rest somewhere else. Walter said it wouldn't necessarily have to go anywhere else. It could be handled by placing the compiler stuff and the library stuff in separate packages.

Adam said there was a reason he'd brought this up. We wanted Phobos to be a source-only library, and that was great. But right now it wasn't, and looking through the source code, you could see a very, very good reason for that. So if we really wanted to do that, we would have to move what he called the "Phobos runtime" somewhere else.

For example, there were version(Windows) statements all over the place in Phobos right now. So that kind of thing should be taken out and placed somewhere else. Then we could have a source-only library. Otherwise, he didn't know what we could do to get there.

Walter said it would be nice if Phobos could be cleared of all the system-dependent code. Adam asked what then should be done with the runtime. I said I would leave it with Adam to let me know when he was ready to have a planning session about that topic and asked if everyone was good with that. They were.

Why vibe.d isn't winning benchmarks

Rikki said he had been inspired the previous month to review vibe.d because it wasn't winning benchmarks. He'd looked at the core and found it was handling events by polling 64 handles per thread (64 being the limit of the Win32 WaitForMultipleObjects call). Multiply that out and it was a huge overhead. You'd run into the issue where you could have one CPU core busy and the others idle when all 64 handles on that one thread had work on them.

He said the reason it was done that way was that 12 years ago, we understood that fibers couldn't be moved safely between threads. One way to fix this, and maybe Walter could preapprove a DIP for it today, would be to introduce an attribute, e.g., @notls, that banned TLS access, then add it to an optional overload in the fiber API. Then any fiber overloading it could be safely moved.
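A sketch of the idea (hypothetical; no such attribute exists today):

    int requests; // module-level variables are thread-local by default in D

    // Under Rikki's proposal, the compiler would reject any TLS access
    // in a @notls function, making it safe to resume a fiber running
    // it on a different thread.
    void fiberBody() @notls
    {
        requests++; // would be a compile error: TLS access is banned
    }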

Walter said he didn't have any objection to that.

Martin said he thought LDC had something like that in its runtime already. It was a little LDC extension that could be enabled to keep track of whether a fiber had migrated to another thread, and then throw something. There might be an implementation there that could be upstreamed. But that was a big discussion that would be more appropriate for a topic-specific meeting.

Walter said another solution might be making the functions pure. Pure functions didn't allow access to mutable global data, which was what TLS data was. Rikki said that wouldn't be workable for VM hooks and things like that. Banning TLS and having blacklists and whitelists would be the easiest way to handle it. And at least it provided a migration path until we got proper coroutines.

Mathias suggested running one scheduler per thread. Rikki said the problem came down to IOCP (I/O completion ports) events. We needed to be using IOCP. That meant we had to support moving fibers between threads. Otherwise, we'd be stuck with 64 handles per thread. Adam seconded that. He said everything on Windows depended on IOCP these days. You couldn't get performance, throughput, or low latency without it. We had to accept this.

Mathias asked if this was a Windows-only requirement. Adam said that io_uring on Linux was the same. Rikki said the same for epoll. Mathias said you could run an epoll on every core. Rikki said you'd still get events for a handle only on one of the threads. IOCP had been the gold standard for 25 years, since Windows 2000.

Walter said this sounded like a really important technical thing, and he didn't think we could solve it in this discussion. He needed to learn more about it. He suggested going into the Ideas forum and posting about it, and then we could relax and discuss it. He had a long list of questions about it, but it was too much for this meeting. Rikki agreed.

dpldocs.info

Steve noted that Adam Ruppe's dpldocs.info was unavailable because of a funding shortage. He explained that the site hosted documentation for packages in the dub registry, and that a link to it was inserted on each package page at code.dlang.org. So if a package maintainer didn't build and host any documentation for a given package, the "Reference" link to the docs on dpldocs.info would still be there. Even in the absence of doc comments, it would still publish the APIs. He had submitted a PR to remove that link while the site was unavailable.

He asked if it made sense for the DLF to take over funding for the server. I thought it had been there for a long time, and asked if that was correct. Steve said yes, it had been there for a while. As such, I said it was something we should try to continue maintaining, but I didn't like the idea of paying for something separate when we already had a server that probably had the resources to run something like this. I asked Mathias if he thought it was feasible to run it on our existing server. He said probably so.

Steve agreed and didn't think we should be paying for it in that case. He added that we should set up documentation somewhere showing how to create and host documentation on GitHub to encourage more package maintainers to publish their docs.

Walter said he was okay with hosting it on our server if there were no licensing restrictions. Steve said there weren't any.

(UPDATE: I have since reached out to Adam Ruppe and he agreed with the idea of migrating the service to a DLF server.)

The .di generator

Rikki had been looking into the .di generator and concluded it was currently useless for producing D interface files. He had started a PR to completely overhaul it, but he'd run into issues because of things getting lowered and information getting lost. He wasn't the right person to fix it. Generation had to be moved to after semantic analysis, otherwise you didn't get inferred attributes, and scope and return were coming out the wrong way around in the generated files. Mangling didn't even match up. He just needed to hand it off to someone.
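For context, a D interface file is the D analog of a C header, generated with the compiler's -H switch (my example, not Rikki's):

    // math.d
    module math;

    int twice(int x) { return 2 * x; }

    // `dmd -c -H math.d` emits math.di, which is supposed to expose
    // the module's public interface, roughly:
    //
    //     module math;
    //     int twice(int x);
    //
    // Templates and auto functions keep their bodies, since their
    // source is needed at the usage site.

Rikki's complaint was that because this runs before semantic analysis, inferred attributes like scope and return never make it into the generated declarations.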

I asked who he wanted to hand it off to. Rikki said he thought Razvan should have the first crack at it since the AST was his thing. Razvan said he could take a look at it, but the idea of going through special cases of lowerings and doing the right thing was something that didn't make him enthusiastic.

Martin thought it wasn't a big deal. If it hadn't been fixed by now, then apparently no one was using it. He didn't see a good use case for it. If you needed it for some closed-source thing, it would only really work if you weren't using many templates. If everything was templated, you really couldn't ship anything closed-source. It just didn't play a role in open-source development. The C headers were probably more interesting than the D headers. He had no motivation to fix it, and he suspected that would be true for others.

Adam said that if that were true, the correct thing to do would be to remove the feature. Having a broken feature that didn't do what it promised to do wasn't a good idea. He also could make the counterargument to that, because he was using DI generation heavily with ImportC. As near as he could tell, that did work, because there wasn't a lot of semantic work needed just for C headers. It was just dumping attributes.

He didn't have an answer for this. But he'd been planning to redo the headers for DRuntime, like the Windows modules, using ImportC and .di generation.

Razvan noted that parts of the DI generator were used inside DMD. For example, if you wanted to print some AST nodes, then you'd be using functions from the DI generator. You could probably still do that if it were moved after semantic, but it made the dependencies a bit weird.

Rikki said there were only three things related to lowering that he couldn't fix, but fixing them was doable. And the generator was also used for error messages, so it needed to be fixed. Razvan said he hadn't looked at the PR recently, but if the cases were detailed there, he could take a look and try to fix them.

Mathias thought that if Rikki handed it off, it wouldn't get fixed. Rikki was interested in it and had a good feedback loop, so we should do whatever we could to support him in fixing it. He wasn't hopeful it would get fixed otherwise. And he would like to use it in DUB. There was an issue that DI files could help with, but he knew it wasn't working at the moment.

Rikki said it was at the point where it just needed a complete overhaul. Walter said he didn't know what it was doing wrong because he hadn't seen any bug reports about it. Mathias asked if he'd tried using it. Walter said he had used it, but he was hearing here now that it was unusable and he didn't know what the problem was. That was why we had Bugzilla.

Rikki said that Walter should start with attribute inference. Until generation could be moved to after semantic analysis, it wouldn't work. Walter said he needed concrete examples, not handwavy stuff. He needed to see an actual generated DI file that was all wrong. Rikki explained again that scope and return were problematic because generation wasn't happening after semantic. Walter said his mind didn't work without code examples. In a conversation like this, he couldn't work out what the problem was. He needed it written down. It was just a quirk of his and not Rikki's fault.

Rikki said he could fix most of it. It was just three lines that stumped him. Walter said if it was just three lines, that sounded like a perfect Bugzilla issue. Rikki said it wasn't a bug today because it was running before semantic. It was the lowerings of foreach, anonymous classes, and assign expressions and stuff.

Walter said that had come up before and he'd put a special hack in the compiler to use the unlowered version when a .di was being generated. He asked again for a Bugzilla issue, because otherwise he'd forget about it. There were always things that needed to be done, and if something wasn't in Bugzilla it probably wouldn't be done.

Martin was curious about Adam's intention in generating .di files for the Windows headers with ImportC. Was it intended to replace the modules there already or was it just for debugging? He didn't think we could go with such a solution on POSIX because of the dynamic nature of ImportC and all the predefined macros to deal with across platforms.

Adam said that in his experience the Windows headers and API were incredibly stable, though he was sure we could find corner cases. He felt that using ImportC for DI generation was better than all the other options. It took in the post-processed version and because the D compiler was also a C compiler, it just worked. He'd used it on the ODBC headers and it worked brilliantly. Those were now part of the Windows headers in DRuntime.

He thought we could start doing more Windows headers like this. Then when there was a new Windows SDK, updating was as simple as feeding the new SDK's path to the build system and generating the DI files. This had come up because he'd been working on Phobos v3 and DRuntime, trying to figure out what needed to be done. Using ImportC and DI generation for this was better than any other tool for generating bindings from C headers.
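As a sketch of that workflow as I understand it (file names hypothetical): put the #includes in a C file, say bindings.c, and compile it with ImportC and header generation enabled:

    // Compile the C file with header generation:
    //
    //     dmd -c -H bindings.c
    //
    // ImportC preprocesses and compiles bindings.c, and -H emits
    // bindings.di, an ordinary D module of declarations. D code can
    // then import it like any other module:
    import bindings;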

Martin asked Adam if using ImportC directly was too slow for his liking, since it had to preprocess every time you imported a module that used a given header, and whether he'd seen a big performance benefit from DI generation via ImportC over just using ImportC directly.

Adam said the motivation had come from tooling. He used VSCode a lot, and code-d could read the DI files, whereas it had nothing to hang onto when you just used ImportC directly. There probably was a performance benefit from not preprocessing so often, but that wasn't the motivation.

Steve said there was also the issue that you couldn't organize ImportC files, but you could organize DI files. DI files were independent modules, but ImportC files went into one big module, so you'd have conflicts if you imported from multiple places. Adam said he'd generated DI files from ImportC for some work he did with zstd and it worked great.

I noted that we had strayed away from the original topic. I asked Rikki and Razvan if they were going to link up to talk about it further. Rikki said he thought that was the way forward. He and Razvan could try to figure out how to fix the things he hadn't been able to fix and then break up his PR, and he could do it more slowly.

Feature-focused documentation/user guide for D

Steve said a D user on Discord had been trying to get his team into D, but every time, they complained that the documentation was hard to use. He brought it up because everyone in the meeting and many D users were familiar enough with programming languages that they didn't see the docs the way newcomers did. We had some tutorials and we had Ali's book with a lot of good stuff in it, but it was all focused on syntax and how you do things in D. He wondered if we couldn't have a more feature-focused set of docs.

He pointed to the C# documentation as an example. They had a reference that listed all the functions and their parameters and so on, but they also had more beginner-level, feature-focused docs. For example, if you wanted to use classes, then here's how you use a class, here's how you build it, here's how you use this and that, and so on. He was trying to think how we could improve our situation to have something similar, like maybe through a Symmetry Autumn of Code project.

He found the D documentation really easy to use. But he couldn't see it from the perspective of someone who wasn't using D every day or wasn't heavily into programming languages.

I said he was basically describing a guide. He agreed. He knew we had the D Tour, but it was pretty short and more about how things worked in D compared to other languages.

I said that I remembered the Dart guide going into how to make specific types of programs, like an example command-line program, example this, example that. I noted that at DConf '23, Mike Shah's students cited the documentation as a big negative in their D experience. Steve said the person on Discord had pointed to that talk and said he was happy to hear someone else had the same issue.

I said this was something we could put on my task list. It would be a few months down the road before I could look into it, but this was definitely for me. Rikki suggested we needed a technical editor to evaluate this and thought it was a perfect project for SAOC. I didn't think this would work as a SAOC project, but I wasn't sure.

Walter said he was very bad at writing tutorials. He never knew what made a good tutorial and what didn't.

Dennis said the problem with documentation issues was that users often didn't report them. They'd find the documentation hard and then just try somewhere else. But the information about what was so difficult was often lost. When he came across something himself, like the docs didn't explain well enough what a function was doing, he'd try to improve it. But that was just him stumbling across things as a contributor. He didn't know what other people were stumbling over. It would be nice if it were easier for users to report that the docs were unclear about something.

Martin said it was already quite easy. On every documentation page there was a link that simplified creating a pull request for a fix. Steve said that was true for the spec pages, but not for the general documentation. And even then, that wasn't the kind of thing he was talking about. If the docs were missing some information or something, that was a different issue. He was thinking of new users wanting to know, for example, how to create a thread, how to do concurrency, and so on.

He said if we looked at the docs for other languages, we'd see things like here's how to do this, here's how to do that, here's how you'd do operator overloading, but none of that was the spec. There was an operator overloading spec that was really technical, but then there were other things that showed how to do specific tasks, like what to do when you wanted to add two objects together.

I told Steve I got where he was coming from, and I suggested that everyone go and look at the link Mathias had posted in the chat that outlined different kinds of documentation. I recalled that Paul from Ucora had brought this same thing up before.

Walter said it seemed that writing a tutorial was a rather daunting task, but there was another way to do it: the D Wiki. If someone said they needed a tutorial on operator overloading, that could become a Wiki page. Then you wouldn't have to think about the rest of the language or have a complete set of tutorials. Various tutorials could go into the Wiki as they were needed, then another page could link to them all. That might be a way to get to where we wanted to go organically without pushing someone to write a massive, book-style tutorial.

I said that what people wanted wasn't tutorials, but a how-to guide. There were examples on other language websites. Like Steve said, they had a reference, and then they had a guide. They segmented these things out. For D, we had some tutorials and we had a reference. I'd always seen Ali's book as a kind of guide, but apparently it wasn't seen that way by newcomers. We needed a guide on dlang.org.

Mathias said we did not have any how-to guides. He noted the link he shared broke documentation down into four components. The one we were sorely lacking was the how-to guide showing how to solve specific, self-contained problems.

Steve pointed at the Tutorials page on the D Wiki to show Walter we were already doing what he suggested. This had come together organically. No one sat down and said, oh we need this and we need that. It just came together as people thought of putting things there. It had nowhere near the kind of coverage we needed.

Walter said he had an answer for this: ChatGPT. Some of us laughed, and he said he wasn't joking. He'd realized that in order to run the C preprocessor efficiently from ImportC, he needed to learn how to use pipes. There were several functions in the API. The documentation explained what the functions did, but didn't show how to use them to construct pipes. So he'd gone to ChatGPT and asked it to write a program that forked another program and captured its output, or fed it input, and it barfed out a working program. It saved him a lot of time.

He'd done this a couple of times when he was unsure how to combine features to get a result. So if someone came to us looking for "how do I do operator overloading" or something, we could just ask ChatGPT, see what it came up with, fix the errors, and then put it up. It could be a huge labor saver for us. And why not? That's what it was good for.
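For the record, the D version of that pipes example is exactly the kind of self-contained how-to answer a guide would hold (a sketch using Phobos's std.process):

    import std.process : pipeProcess, wait, Redirect;
    import std.stdio : writeln;

    void main()
    {
        // Spawn a child process and capture its standard output
        // through a pipe.
        auto pipes = pipeProcess(["echo", "hello from the child"],
                Redirect.stdout);
        scope (exit) wait(pipes.pid);

        foreach (line; pipes.stdout.byLine)
            writeln("captured: ", line);
    }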

I said we needed the guide not for people asking us how to do things, but for people trying to figure out how to do things on their own, finding we didn't have that information, and then going somewhere else. Walter agreed, and said we should use ChatGPT to help us make that guide. I said I could see asking it for an outline for a comprehensive guide as a starting point and then going from there.

Steve thought we didn't need ChatGPT. We could just go look at guides from other languages and see what they covered. Walter thought ChatGPT would be easier still. Once you had the list of topics, you could ask it to show you examples on each topic. It would be a big time saver.

I said I'd go ahead and take this. That didn't necessarily mean that I'd be the one to write it, but I'd look into it and see how to approach it, come up with an outline for it, and go from there. Maybe once I had an outline we could find some contributors to help fill it out. Maybe I could do it myself using ChatGPT to help, but it wouldn't be anytime soon.

Steve said he was good with that. He suggested that once I got around to it I should ask for feedback from the community to get an idea of what the documentation pain points were. I agreed.

Steve said that was his main problem: the docs worked fine for him, so he just couldn't see what problems other people had with them. I joked that they should have seen the docs in 2003 when I first learned D. Steve said that you had to walk uphill both ways in the snow to read the docs back then.

Last-minute updates

Before we logged out, Walter said he'd been posting updates on X about his work on the ARM backend, so anyone interested in it should follow him there.

Then Mathias said he'd started working a little on DUB. He'd left it on the side for a while. He'd fixed some performance bugs and was planning on getting to work on adding support for YAML. He asked how many of us would be angry with him for introducing dub.yaml.

Steve said YAML was awful and he wouldn't be happy with that. I said I didn't care as long as SDLang support stayed in. Steve said that couldn't be removed because it would break stuff. Mathias said he wasn't getting rid of anything, but he wanted something more well-known. YAML was also user-friendly, and JSON didn't meet the bar there.
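For comparison, here is the same minimal package in the two formats dub supports today and in the proposed one (the dub.yaml form is hypothetical):

    dub.json:

        {
            "name": "myapp",
            "dependencies": { "vibe-d": "~>0.9" }
        }

    dub.sdl:

        name "myapp"
        dependency "vibe-d" version="~>0.9"

    dub.yaml (hypothetical):

        name: myapp
        dependencies:
          vibe-d: "~>0.9"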

I said I saw no pitchforks, so it seemed he was okay. Steve said he wanted his pitchfork. He said he didn't like YAML because there were too many ways to do the same thing and he could never figure out what things meant. Adam said he absolutely hated it. He had to use it on a build system and wanted to put an axe through his computer each time he did. It might be well-known, but that didn't mean it was good.

Steve suggested TOML or other simpler formats might be useful. Mathias said instead of TOML, he might as well just stay with SDLang and call it a day.

Robert said he hated to do it, but he had to agree with Mathias. People knew YAML. As bad as it was, it had won to a certain degree.

Finally, Rikki informed us he had started on a proposal for owner escape analysis.

Conclusion

We agreed to discuss trimming down DRuntime at a planning session on June 21. Our next major meeting was a quarterly meeting on July 5. Our next monthly meeting took place on July 12.

December 06
I've put together another bug report for the .di generator, this one showing scope getting emitted twice:

https://issues.dlang.org/show_bug.cgi?id=24891