D Language Foundation January 2024 Monthly Meeting Summary

The D Language Foundation's monthly meeting for January 2024 was held on Friday the 12th. There were two things of particular note about this meeting.

First, Jonathan Davis joined us for the first time and is now a permanent member. We're very happy to have him aboard.

Second, this was the first meeting where we adopted a new format. Previously, we'd go around the "room" to give each participant a turn to say everything they needed to say or as much as they could say before we had to move on in the interest of time. Sometimes people had problems they wanted to solve, other times they just had status updates or progress reports. When our membership was small, it all worked well, but as the invitation list grew, we could end up with long meetings in which important topics got cut short because they didn't get brought up until later in the meeting.

For the new process, I ask everyone to send me any agenda items they'd like to discuss that are not progress reports or status updates. At the meetings, I'll go round-robin through the lists in the order I receive them: first item from each list, second item, etc, until either there are no more items or we've hit the hour-and-a-half mark. If we get through the items early, then I'll open the floor to anyone who has something to report. If we still have agenda items at the hour-and-a-half mark, I'll ask if we should continue for another 30 minutes or call it.

This new approach has worked well for us since. It's also going to change how I write the monthly summaries. Each section heading is a topic of discussion rather than a participant's name.

The only agenda items I received from the members for this meeting came from Timon. He had quite a list, and we went through pretty much the whole thing. But I did give the floor to Robert at the beginning for a very brief update. I also had one item that came in the form of a request on the Discord server.

The Attendees

The following people attended the meeting:

  • Andrei Alexandrescu
  • Walter Bright
  • John Colvin
  • Jonathan M. Davis
  • Timon Gehr
  • Martin Kinkelin
  • Dennis Korpel
  • Mathias Lang
  • Razvan Nitu
  • Mike Parker
  • Robert Schadek
  • Steven Schveighoffer
  • Adam Wilson

The Summary

Item 1: Bugzilla to GitHub migration

As I noted in the November meeting summary, some potential issues had come up regarding the effect of the Bugzilla to GitHub migration on the dlang bot. Robert had said he'd investigate them.

As an update, he'd discovered some changes needed to be made to the changelog generator and had made good progress on that. He said Vladimir Panteleev had been working on some changes to the dlang bot related to how it closed solved issues. When that was all completed, they'd be able to test it. He wanted to put it through multiple tests to make sure it held up and worked as intended.

Item 2: D data structures

Monkyyy asked me on the Discord server to bring his article on D data structures to the meeting. I emailed a link to everyone before the meeting. I received a little bit of feedback in the email thread. In summary:

  • Monkyyy's complaint that allocators are "unmerged" highlights why Átila wants them out of std.experimental.
  • The article doesn't name any specific data structures he'd like to see in the standard library.
  • Any redesign of Phobos needs to consider containers at some point, but how comprehensive should it be? How high of a priority?
  • Some simple containers can easily be built on top of arrays (see the sketch after this list). Should we add that kind of thing to Phobos, or only more complex ones?
  • Would it be better to have a comprehensive container library in the dub registry?
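As a side note for readers, here's a minimal sketch of what "a simple container on top of arrays" can look like. The Stack type is purely illustrative, not something proposed at the meeting:

```d
// A hypothetical minimal stack built on a D dynamic array, to show
// how little code such a container needs.
struct Stack(T)
{
    private T[] items;

    void push(T value) { items ~= value; }

    T pop()
    in (!empty)
    {
        T top = items[$ - 1];
        items = items[0 .. $ - 1];
        items.assumeSafeAppend(); // let future pushes reuse the popped slot
        return top;
    }

    bool empty() const { return items.length == 0; }
}

unittest
{
    Stack!int s;
    s.push(1); s.push(2);
    assert(s.pop() == 2 && s.pop() == 1 && s.empty);
}
```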

I asked if anyone had any more feedback that they hadn't raised in the email thread. Timon suggested that someone should do what Paul did with the sumtype package: write a container library and then we could put it in Phobos if it was good enough.

Adam said he'd been having discussions about containers with Rikki and others. He told us that Rikki felt that we needed to focus on allocators first, then once that was solved we could start looking at containers. Adam said he didn't know if that was necessary or not, but based on that and other conversations he'd had on Discord, he thought that was the point of the article.

Steve said he agreed with Timon: if we were going to have something that was going into Phobos, it should be put up on dub first to prove that it was worth including. There were so many different opinions around containers and their APIs that it wasn't something we should just say, "Here's some space in Phobos, now put them in." Put it in dub first, work out the API we wanted, make sure it all works, and then put it in Phobos.

Razvan wondered what the procedure would be. There were a lot of packages on dub right now. How would you get to the point where you take a container library from dub and integrate it with Phobos? I said that if we didn't have a process, we should just implement one.

Steve said that whatever the process looked like, we should be using dub to flesh libraries out before bringing them into Phobos. Some people thought that everything should be in Phobos. Steve didn't necessarily agree with that. Anything that would be useful to use in Phobos should be in Phobos. A container library, for example. Beyond that, it wasn't so clear. Dub should be our go-to place to get things. If you needed a domain-specific thing, getting it in from dub was fine.

Timon thought we didn't need to wait on allocators before doing containers. A good container library should be adaptable to any allocator design we came up with.

Adam said that Rikki had suggested we just mainline std.experimental.allocator after fixing its defects, as its design was good enough. He said he and Walter had been discussing what should go into Phobos vs. what shouldn't. They'd agreed that the definition of "domain-specific" tended to be different for different people. Coming up with a definition of what should be in the standard library had been fraught because of all the disagreements.

Adam continued by saying that Microsoft had experimented with putting the .NET base class library on NuGet. It failed miserably, so with .NET 5, they went back to shipping it. They did a writeup somewhere describing why they moved it to NuGet and then why they moved it back. He suggested everyone read it. It might be that none of that applied to us, but we might get something out of it.

Adam thought Phobos should be bigger, but he knew Steve had a different opinion on that. It was a fraught conversation, and he and Walter had been going back and forth about it for a while.

Andrei noted that previous discussions on containers had focused on things that were kind of difficult to make work together. The first was copy semantics: should they have STL semantics or Java semantics? The second was safety, and the third was memory/GC behavior. As he understood it, these three questions had yet to be solved together satisfactorily.

Robert agreed, but he had a gut feeling that we could only have two out of three. That was a question we needed to answer. He didn't see how to use allocators in a way that the compiler could prove was @safe and have containers that weren't static in what they were and what they contained.

Razvan agreed and noted that Eduard Staniloiu had tried to get immutable and pure working with generic allocators. There had been a lot of problems the compiler couldn't figure out. He didn't know if the issues had been fixed in the meantime, but he remembered those attributes being a very big problem.

Timon said he thought this was basically because D didn't have a design for manually creating lifetimes. For mutable things, you didn't really know this because you could just mutate stuff to paper over it. But for immutable things, you needed to be able to say when the lifetime of an immutable value ended so the memory could be reused. This wasn't in the language.

I proposed that we put this aside for now as we weren't going to solve containers and allocators in this meeting. I suggested that at our next planning session, we should look at allocators and containers and any other big tasks that we had ahead of us and figure out which were our top priorities.

There were no objections.

UPDATE: I can report that after this meeting, we did a ranked choice vote on a list of projects, most of which we'd already decided we wanted. There were also a few we hadn't yet discussed. "Container Library" bubbled to the top. "Allocators" ended up further down the list, but given how closely tied together they are, both are now a high priority. I'll have more to say about them in future updates.

Item 3: Runtime compatibility of different editions in the same program

Timon started by saying that the idea with editions was basically that you import D versions from the past or the future into your program and everything just works. He thought that one major technical hurdle to making that a reality was that the runtime changes when D features change. How could we square that with editions? How could we get different versions of the runtime to work together? This seemed a bit tricky and we probably needed to think about it.

I noted that this had come up in a planning session, but I didn't remember who brought it up or what the outcome was. Martin said he'd brought it up, and the result had been that Átila was going to take that into account in his proposal.

Steve had a concrete example of where this could come up. In the release that added static initialization of associative arrays, he'd had to change the runtime's internal layout of associative arrays. If we had editions now, code compiled for an older edition would potentially have a problem here. It would link and run, but it would be the wrong memory layout.
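For context, the feature Steve referenced looks like this (a minimal sketch, assuming a compiler recent enough to support it):

```d
// Static initialization of an associative array at module scope:
// the feature whose implementation required changing the runtime's
// internal AA layout.
immutable string[string] colors = ["red": "#FF0000", "green": "#00FF00"];

void main()
{
    assert(colors["red"] == "#FF0000");
}
```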

Walter said that we'd decided a while back that compatibility is at the source code level, not the binary level. If you want to use an old library with a new compiler, the new compiler should be able to compile it so that it works with the new DRuntime. That was pretty much how C and C++ worked, too.

He said we were not going to worry about binary compatibility to mix and match libraries compiled with different compilers. They were not going to work together. They had to be compiled with the same compiler or we'd never be able to fix anything.

Timon said he thought that was fine for the AA case. His worry was more about when we wanted to make changes to the runtime API. Steve agreed. He brought up -preview=in. Let's say we made that the default in an edition and it was used in DRuntime. The problem was that the runtime was prebuilt. It was bundled with the compiler. We couldn't just change it on the fly or compile it with different flags on the fly depending on the edition. Martin agreed, and mentioned -checkaction=context.

Jonathan pointed out that regardless of what we were doing with binaries, we always had to take templates into account. Because D code in general used a lot of templates, and because we'd been trying to templatize more stuff in DRuntime, we were forced into source compatibility. And anyway, some changes could be made with better compatibility if they were templated.

Steve went back to the -preview=in case: importing a piece of DRuntime that was compiled with it when your app wasn't. Wouldn't the compiler then just recognize that the imported module used -preview=in and use that ABI for the call? He thought that should just work. With editions, it was the module you were importing that defined what the API was.

Martin said to take, for example, non-templated functions in DRuntime with in parameters. They were built into the pre-built DRuntime. If that DRuntime binary were built with -preview=in, then the signatures of some of those functions would change because they'd take those in parameters by reference. But if code from an older edition, where -preview=in wasn't the default, imported and used one of those newer DRuntime functions, you'd end up with linker errors because the signature would be different from the one the compiler expected.
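Here's a sketch of that mismatch, with a hypothetical Big struct (the exact by-reference rules under the preview depend on the type and target):

```d
struct Big { int[64] data; }

// Default rules: `in` currently means `const scope`, and Big is
// passed by value.
void process(in Big b) {}

// Under -preview=in, the same declaration may pass Big by reference
// (the compiler picks ref for types that are costly to copy), which
// changes the calling convention and the mangled name. An object file
// compiled without the preview calling a DRuntime built with it would
// therefore fail to link, as Martin described.
```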

He said another problem was with template culling. That was why we had to use the -allinst switch with -checkaction=context: otherwise, when you imported a template from DRuntime and the compiler recognized that it was already instantiated in DRuntime, it wouldn't emit that instantiation in the user code base. This stuff only worked under the assumption that everything being linked together was compiled with the same set of compiler flags.

He said that one of those problems was specific to templates, but the other one was not. He just wanted to point this out as a general problem of shipping a pre-built DRuntime, unless we wanted to bundle multiple DRuntime versions covering the combinatorial explosion of all these flags. He said it was a very, very tricky problem. He wasn't sure if the edition thing was feasible at all.

Steve said he still thought it should be feasible. The compiler would know if a module was compiled with an edition where -preview=in is the default, so it would always just use that ABI when calling into that module. I asked what would happen if I were compiling my code with an older edition and had to link with it. Steve said the compiler would still use the "preview in" ABI.

Walter said there was a simple solution here: don't use -preview=in with DRuntime. You could use const scope ref instead. Steve said it wouldn't bind to rvalues then. Walter said yes because it wouldn't have the new feature. You just wouldn't use that feature in DRuntime. You didn't have to use in for anything. You could get by fine without it.
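A sketch of that trade-off, again with a hypothetical Big struct:

```d
struct Big { int[64] data; }

// Walter's suggestion: spell out const scope ref instead of `in`.
void process(scope const ref Big b) {}

void main()
{
    Big b;
    process(b);        // fine: an lvalue binds to ref
    // process(Big()); // error: an rvalue won't bind to ref,
    //                 // which was Steve's objection
}
```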

Dennis said he'd been about to say the same thing. DRuntime was low-level stuff. We should be able to find a common denominator in DRuntime that was stable. He thought there wouldn't be much conflict that way. And anyway, it wasn't like people used extern(D) definitions to call into DRuntime functions using in on their parameters. Most DRuntime usage was through compiler rewrites of hooks, and most DRuntime development had been improving the implementation or refactoring it, but not really breaking signatures or fundamentally changing what the functions did.

Adam brought up Walter's point about C++ being source-code compatible and thought it was a key observation. C++ only had to be binary compatible with the C ABI. He asked if there was some subset of the D ABI that could say, "Okay, this is the DRuntime ABI. If you're working in DRuntime, you have to follow it. You can't use this and you can't use that."

Walter said that DRuntime was more of a primitive library than a user-facing one. So if we just stuck to a narrower ABI on function arguments, it shouldn't be a problem. We didn't need to use every advanced feature of D in the runtime.

Timon said that we should aim to get into a state with editions where it was clear for every module which flags were in effect for that module. Then -preview=in wouldn't be more of an issue for DRuntime than it was for any D code.

He then said that the DRuntime did have some user-facing things, like durations and the GC interface. So right now, if we had different editions with differences in the GC interface, we'd need to ship different versions of the GC interface for different editions. This was something we'd also want to solve for D libraries generally.

Walter said we should ask then why durations were in DRuntime. Jonathan said they were there because DRuntime used them for various things. Walter suggested that maybe they should be an internal thing, then.

Jonathan said part of the problem was that threads were in DRuntime and they used durations for things like sleep. But generally, certain user-facing things could go in Phobos but were needed by DRuntime. They ended up in DRuntime to avoid duplication.
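For example, a minimal sketch using the real APIs:

```d
import core.thread : Thread;
import core.time : msecs;

void main()
{
    // Thread lives in DRuntime and its API takes a Duration,
    // so Duration (core.time) has to live in DRuntime too.
    Thread.sleep(250.msecs);
}
```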

Walter said he'd noticed a string-to-integer conversion function in DRuntime, and there was another one in Phobos. So the question there was, why not have two versions of it? One that was in DRuntime strictly for internal use and one that was user-facing. He didn't see a big problem with that.

Jonathan noted that the problem was wider than just DRuntime. Phobos was distributed with DRuntime linked. Phobos as a whole had the same problem with any pieces that weren't templated because we were distributing it as a binary. But that didn't mean we couldn't fix it.

Martin said that was a point he wanted to make as well. He also retracted what he said earlier about flags and non-templated functions. As long as the DRuntime modules specified the edition they supported, then in that case it shouldn't matter. It was being imported, the compiler would see the edition, and then it would be able to figure out the correct signature. So it was more about templates probably, and also DRuntime and Phobos as the two pre-built binaries we shipped.

Walter didn't see how the runtime being part of the Phobos binary changed anything about the interface to the runtime. The interface could still be edition-independent. Jonathan said the issue was with Phobos: which version did you build it with? Walter said that was up to the people who implement Phobos. If a Phobos module required a different edition, then there should be a different binary for it.

Steve went back to durations, saying the problem there was that it was the API, it was the type that you passed into the runtime to do things. If you wanted to say, "Sleep for this many seconds," you didn't want to have a different duration that you passed into DRuntime than you'd pass into Phobos. The types would be different if they were duplicated. As far as types were concerned, because we had to implement operators in them, durations really should stay in the runtime.

Walter said he wasn't arguing whether duration should be in DRuntime or not. He was arguing that the one in the runtime didn't have to be the same as the one in Phobos. You have one internal to DRuntime and one that's user-facing in Phobos. And they could be different.

Also, if someone had an old Phobos module that required Edition X, then old code from before Edition X would not be able to use those Phobos functions. He didn't think that was unreasonable, as old code would be using the old APIs, not the newer ones.

Andrei said there'd been some discussion about merging DRuntime and Phobos and asked if that was still on the table. He felt the separation had always seemed odd and not principled or justified.

Walter said all we had to do was go back to the core principle that old code compiled with a new compiler should continue working. It didn't have to be binary compatible, but it did have to be source compatible. Andrei asked if he meant flag-free or subject to a flag. Walter said it should continue to work flag-free.

Mathias wanted to say that when he implemented -preview=in, he did exactly what Walter had suggested here: he removed every use case in DRuntime that would lead to the ABI change. The reason he'd done that was the same reason we'd had to jump through hoops to implement DIP1000: because we were distributing DRuntime and Phobos as binaries. He said we were backed into a corner when it came to preview flags. He was certain we could stop distributing Phobos precompiled, and he wondered if there was a way we could do that with DRuntime as well.

Walter said we should be able to distribute it precompiled as long as it was matched to the compiler version. Mathias said you'd then be fairly restricted in terms of what you could do because it wasn't only about the features used in Phobos. Say you passed a type into a template that was in Phobos, and that template was already instantiated in Phobos. You could end up with a different result. It would depend on your preview switch. He said it had been a problem for him, and he was sure Martin had seen it as well.

Martin said he had seen the problem but had been able to work around it. He was concerned about the possibility of avoiding D features in Phobos to avoid these issues. That would be fine with DRuntime, but Phobos should be a showcase of D features, showing what D code could do and what idiomatic D should look like.

But he emphasized that he'd been wrong about the preview thing and editions should work fine there. It really was just about template emission and culling. What could we do about templates in modules compiled with an edition that was different from the one containing the template declaration?

Mathias said he could think of two problems. One was with -checkaction=context, where the compiler thought a template was already instantiated in DRuntime and you got a linker error. The other was the kind of problem that had cropped up with DIP1000, where scope wasn't part of the mangling if it was inferred. That broke Phobos. We'd have the same problem whenever a preview changed the mangling or the API.

At this point, I suggested we put a pin in this for now. Átila was already supposed to address some of this in the editions proposal. We should wait to see what the initial drafts looked like, then we could have some focused meetings to resolve any of the issues raised here that weren't addressed, along with anything else that came up.

Everyone agreed.

Item 4: Named arguments have broken perfect forwarding

Timon said that it had always been possible to write a template wrapper that just took a tuple of arguments, and then use core.lifetime.forward to seamlessly forward it to another function. With named arguments, this no longer worked. The user would not be able to name the arguments of the function that you were forwarding to with the template wrapper.
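Here's a minimal sketch of the pattern (target and wrapper are illustrative names; the named-argument calls assume a compiler with that feature enabled):

```d
import core.lifetime : forward;

int target(int x, string s) { return x; }

// A classic perfect-forwarding wrapper: it accepts any argument list
// and passes it through, preserving lvalue/rvalue-ness.
auto wrapper(Args...)(auto ref Args args)
{
    return target(forward!args);
}

void main()
{
    wrapper(1, "hi");           // forwards fine
    // target(s: "hi", x: 1);   // named arguments work on target itself...
    // wrapper(s: "hi", x: 1);  // ...but can't pass through the wrapper,
    //                          // which is the regression Timon described
}
```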

Walter observed that if you didn't use named arguments for that, it wouldn't break. Timon said it still broke any library that was intended to support perfect forwarding: now that named arguments were available, such a library no longer had that feature. In that sense, it was a regression. The functionality of the library had decreased relative to what the language was able to express. It wasn't the code that was broken, but the feature.

Walter said okay, but if you just used it the way you'd always used it, then it would work. Timon said it depended. Maybe you needed named arguments for your use case and you were passing your arguments to the wrapper.

Steve wondered if we could have a new syntax that allowed you to capture the names of the arguments as well as the types. Not as the current variadic tuple, but as a new syntax like names:arg or something like that. Then you'd still be able to do perfect forwarding, you'd just use the new syntax to support forwarding named arguments.

He said it was something we didn't have to add now. We could just accept that perfect forwarding was broken for named arguments, but if we decided it was something we wanted, then we could add it down the road. He didn't think it was something that should hold named arguments back.

Timon agreed and thought it would be a good idea for a future proposal.

Item 5: @safe ergonomics

Timon said he'd noticed a trend where people were using @trusted to disable safety checks and @safe to enable them. That wasn't how these attributes were supposed to be used. They were both intended to mean the same thing about the interface of a function: no matter how you call it, it will not corrupt memory.

The issue was that this trend meant those guarantees were completely circumvented, and @safe became basically a sort of linting tool. But he understood why people were doing it. At the moment, it was almost impossible to write a template whose safety was correctly inferred when it had to do some things that were @trusted.

Therefore, he thought it would be great to have a more fine-grained way to say which aspects of a function were @trusted. And maybe even some people writing @system functions would like to have safety checks in those functions. So this was something to think about.
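For readers who haven't seen it, here's a minimal sketch of the idiom in question; the fill function is hypothetical:

```d
void fill(int[] dst, const(int)[] src) @safe
{
    assert(dst.length >= src.length);
    // The common idiom today: escape one unsafe operation through
    // an immediately-invoked @trusted lambda, keeping the rest of
    // the function under @safe checking.
    () @trusted {
        import core.stdc.string : memcpy;
        memcpy(dst.ptr, src.ptr, src.length * int.sizeof);
    }();
}
```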

Steve said everyone knew he'd been thinking about @trusted functions for a while. He thought a lot of us had a concept of @trusted that would be better than what we had now. He thought editions would be the perfect way to introduce that.

Timon agreed. He thought that people pragmatically wanted to have either safety checks or no safety checks, and it should be easy to specify the correct interface of a function independent of whether people wanted the safety checks in that function or not.

Adam said that he'd been working on his security code recently and the concept of @trusted had come up a lot because of his heavy interfacing with C libraries. But then also there were cases where you'd want to drop down to @trusted because you needed a cast somewhere.

Martin said he'd like to see @trusted blocks for this. Just blocks. No lambdas. He hated that approach. From a codegen perspective, that was a separate function, maybe with its own little context. And also it was just ugly. Plus, function calls came with one very ugly pitfall. Just imagine a function that took a non-POD argument by value. If it had a destructor and you put an @trusted lambda in there thinking you knew all the operations that were going to happen, and if that destructor was @system for some good reason, it would be very easy to mess that up. That would never be a problem with @trusted blocks, as there'd be no parameter copying.

Timon said there was also the opposite case: you wanted to trust the destructor even though it was not trusted in general, because given the way you'd instantiated the struct, you knew that destructing it was safe. It was trickier to find a syntax for that case.

Martin agreed, saying destruction was implicit. He speculated on a possible workaround involving core.lifetime.init. Timon said one issue was it wasn't possible to avoid implicit destruction. And that was something you should think about with move semantics. It should be possible to avoid the implicit destruction of something if you knew it was in the init state.

Timon indicated he was done with this one, so we moved on.

Item 6: __local variables

Timon said that when he'd implemented the static foreach DIP, he'd added the __local storage class. This allowed you to define variables that exist only during one iteration of the loop. He hadn't included this in the DIP, but he thought it would be useful to expose this to users. The reason he hadn't included it was that he wanted to avoid bikeshedding about the name of the storage class.

Dennis said that people had asked before for a random identifier generator. He asked if that could be used in static foreach as well. Timon said maybe, but then you'd have to mix in that identifier. He said that was another thing that might be useful: mixing in an identifier rather than an entire declaration if you wanted to generate one. But that was just one workaround. You could also use mixins with iteration counts as part of the name.

Walter said that assemblers had a way to generate local identifiers, and that was a handy thing. Maybe that was an idea we could have: the user types a special identifier and the compiler generates a unique identifier for it. It's something he hadn't thought about, so he didn't know what was a good idea and what wasn't.

Martin wanted to know what the expected use cases were. Was it only about static foreach? Or were there any other valid use cases? Because it hadn't been a problem for him so far.

Timon said that what people had been doing was to create a mixin template. In the body, they defined their local variables as well as the variables they wanted to expose. Then they mixed that into the static foreach body.

He said the real use case was when you wanted to construct a declaration and to do that you needed some helper variables. Like you wanted to compute something and put it into an enum or an alias. At the moment, that didn't work because the enum in one iteration would conflict with the enum in the next iteration.

He said that you could put the entire enum or alias declaration in a mixin and generate names for them, but then those names would be visible for users of your code. This would probably trip up autocompletion as they weren't meant to be visible. They were internal implementation details for your static foreach loop.
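A minimal sketch of the conflict and the workaround (the helper names are illustrative):

```d
import std.conv : to;

static foreach (i; 0 .. 3)
{
    // enum helper = i * 2; // error: `helper` conflicts with the
    //                      // declaration from the previous iteration

    // The workaround: generate a unique name per iteration. But
    // helper0, helper1, and helper2 are now visible to users of the
    // module, which is exactly Timon's complaint.
    mixin("enum helper" ~ i.to!string ~ " = i * 2;");
}
```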

Walter thought that names that existed but didn't exist would be disruptive to symbol table management. If you tried to instantiate an identifier with a local variable name, then that name ended up in the mangled name of the template. So it was going to show up somehow. So offhand, with just 10 seconds of thinking about it, he didn't think it would be easy to figure out how it was supposed to work and how the semantics would interact with the rest of the compiler.

Dennis said there was a related problem that had an open DIP in the queue. If you had a mixin template, you sometimes wanted helper variables that weren't mixed into the final result. He thought it would be great if we considered if that solution would also work here so we could kill two birds with one stone.

Timon said maybe the DIP addressed it, so we should just push that through the normal process once it restarted.

Item 7: Target architectures with dub

Timon said that you could specify your desired architecture with dub, but by default, it didn't work. It tried to use a linker from the wrong toolchain. It would be great to aspire to have this just work, where dub downloads the correct DRuntime and Phobos and links it correctly for the target architecture. He thought LDC's support for cross-compilation was advanced and worked well, but wasn't really exposed.

Martin said it was exposed. With LDC, dub's --arch switch supported not only the few extra architectures but also a full target triple. Dub used that when probing the compiler to introspect its features for that specific target. If you set up ldc2.conf correctly, it worked out of the box. You could create a Windows executable with a dub one-liner on a Linux command line and it would just work.

Timon agreed and said he was using that. But it required him to manually download the correct DRuntime and Phobos and then edit the LDC conf file to point at it.

Martin said he'd been promised a tool by different people over the years, but it hadn't happened. He refused to do it himself, as really anyone could do it. It didn't need anyone heavily involved in the LDC details. You just download the appropriate prebuilt package, extract it in some directory, and then extend the LDC config with a little section. That was it.
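A sketch of those steps for targeting, say, AArch64 Linux, with placeholder paths: download and extract the prebuilt LDC package for the target, add a section like the following to ldc2.conf, and then a one-liner such as `dub build --arch=aarch64-linux-gnu --compiler=ldc2` cross-compiles.

```
// Hypothetical ldc2.conf addition for an AArch64 Linux target
"aarch64-.*-linux-gnu":
{
    switches = [
        "-defaultlib=phobos2-ldc,druntime-ldc",
        "-gcc=aarch64-linux-gnu-gcc",
    ];
    lib-dirs = [
        "/opt/ldc2-aarch64/lib", // where the target's prebuilt libs were extracted
    ];
};
```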

Timon said that would mean dub editing the default ldc2.conf for the LDC compiler you were invoking. Martin said he hadn't thought about integrating this with dub and didn't think it should be. He was thinking of it as a little helper tool like ldc-build-runtime.

Timon said it would be nice not to have to figure out the details. He had figured it out and had done it for the things he needed to target, but it would be a nicer user experience. If the expertise for this space existed within the LDC developers, it didn't need to be replicated on the user side.

I said that Mathias was in charge of dub these days. Mathias said that was true, but his focus had been on the package management side of things rather than the building side. Anyone was welcome to contribute it, though.

Steve noted that Hipreme had been working on a replacement for dub that did fewer things. Maybe it was worth looking into it to see what the pain points were and maybe include some of this stuff there.

Timon said he thought it would be good to see what Hipreme was doing. He said that Hipreme had spent a lot of time trying to make sure there was some seamless way to compile his engine for a lot of different architectures that weren't well supported before. If any of that could be upstreamed into dub, that would be a good outcome.

Item 8: Improving rdmd by using dmd -i

Timon understood that dmd -i was introduced partially to improve rdmd because rdmd invoked the compiler twice, once to collect dependencies and once to build. With -i, that would no longer be necessary. He thought it would be an easy win to use this to speed up rdmd by basically a factor of two for the compilation part. He wondered why this hadn't yet been done. Would it break something?
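In other words (a sketch using the real -i and -run switches):

```
# rdmd today: two compiler runs when the cache is stale
rdmd app.d            # run 1 collects dependencies, run 2 builds

# dmd -i: a single run that compiles app.d plus every module it imports
dmd -i -run app.d
```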

Martin said the only drawback he could see, from thinking about it for just a few seconds, was caching. From what he knew, rdmd cached the built executable if the source code hadn't changed, so the next time you invoked it, it would just use the existing binary. dmd -i had none of that; it just compiled all the dependencies and was done. And he thought rdmd was used mainly for the script-like functionality. If a caching mechanism were in place, then sure.

Steve speculated on whether rdmd could combine both calls into one, passing dmd whatever flags it used to figure out the dependencies along with -i to do the build, then still use that information to cache the executable.

Timon thought it was a chicken and egg problem. You didn't know if you needed to build until you knew what the dependencies were, but if you called it with -i when you fetched the dependencies, it would build even if you didn't need to.

Steve said that was true, but then on that first build, rdmd could cache the list of dependencies and then check that on subsequent runs before calling dmd -i.

Martin said that was exactly what reggae did, at least when using the ninja backend. There was a new flag for all the D compilers that output a makefile-like dependency list when compiling something. On the first invocation, when you needed to build the binary anyway, you'd get that list. Then ninja handled all the checking of the timestamps and such for you. That shouldn't be built into the compiler, as it was a build system thing. Maybe we could just use ninja, but then we'd have another build dependency and we'd need the ninja binary lying around.

Steve said he'd like to see rdmd changed to use that mechanism. Martin said it should be feasible with that extra ninja dependency.
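Assuming the flag Martin meant is -makedeps, which recent dmd and ldc2 both accept, the mechanism would look roughly like this:

```
# Emit a makefile-style dependency list while building:
dmd -i -makedeps=app.dep -of=app app.d

# On later runs, a tool (ninja, or rdmd itself) compares the
# timestamps of the files listed in app.dep against the binary
# before deciding whether to rebuild.
```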

Mathias thought dub should be doing that. He said there was already an rdmd build flag, but it should just do a dmd -i build directly. Contributions were welcome.

Martin noted that in this case, we wouldn't suffer from the most obvious dub problem of it being single-threaded. -i meant everything was in a single invocation.

Timon indicated he was good to move on.

Wrapping up

At this point, we had gotten through all the agenda items. I explained that for this meeting we worked straight through Timon's list because no one else had submitted anything. I reiterated the new process and then asked if anyone had anything else to bring up in the remaining 15 minutes.

Deprecations

Walter mentioned that someone had complained in the forums that the latest release had caused a lot of deprecation messages. Walter had asked for specifics but had gotten no answer. He asked if anyone present knew of any such problem because we shouldn't be having releases spewing out deprecation messages anymore.

No one knew of anything. Martin said that as far as the Symmetry code base was concerned, they'd had zero problems bumping from 2.105 to 2.106. That was the first time in quite a long time they'd been able to make the bump without a single problem.

Walter said we needed to keep that up. No more breaking code deliberately.

Struct destructors and closures

Jonathan said he'd encountered an issue where if you had a type with a destructor, it didn't work in closures. This was causing problems with the refactoring work he was doing at Symmetry, and he was trying to find a workaround.

Steve asked if this was the issue where you return a closure, it included something that had a destructor, then you left that function and the destructor would run at that point rather than keeping it around for the closure to run with it. Jonathan thought so and said that the compiler wasn't smart enough to figure out it was escaping.

Steve said this was forbidden a long time ago. It was forbidden to have a closure over a type that had a destructor. But there'd been a change where someone had said, "This is a hack, we should fix this." And so that then allowed it to happen. So you had to know now that if you wanted to close over a struct with a destructor, you needed to allocate it on the heap.

He said the code to prohibit it was still in there. It was just commented out. But we'd probably have a ridiculous amount of breakage if we turned it back on at this point.

Jonathan said in his case the compiler was complaining about scope destruction, and that made him think it was some problem with scope not being used or something and nothing to do with what Steve was describing.
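A minimal sketch of the kind of code under discussion; the exact diagnostic and behavior depend on the compiler version:

```d
struct S
{
    int x;
    ~this() {}
}

auto makeClosure()
{
    S s = S(42);
    // Depending on the compiler version, this either fails with an
    // error like "variable s has scoped destruction, cannot build
    // closure" or compiles but runs s's destructor when makeClosure
    // returns, leaving the closure with a destroyed copy.
    return () => s.x;
}
```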

Casting string expressions to integer arrays

Dennis had previously told us he was working on making compile times faster for std.uni. He said Rikki had discovered that importing the Unicode tables took a long time, like 100ms, for even the simplest "Hello, World" program. That slowed down scripting usage with rdmd also.

So he'd been making lots of progress on cutting that in half by transforming the big array literals, which contained thousands of integer expressions, into hex strings. This was way more compact because a hex string was internally stored as a byte array instead of an array of AST nodes. But there was a limit: the elements of a string expression could only be as large as 32-bit integers.

He'd been considering extending string expressions to support an element size of 8 so that you'd be able to cast a hex string literal to an array of longs for the compiler to deal with it more efficiently.

He asked if there were any objections to that. Walter said he'd need to think about it. This led to a bit of discussion that ended up with a bit of confusion about what Dennis meant. He provided this example:

cast(uint[]) x"AABBDDEE" == [0xAABBDDEE]

He wanted to be able to use longs here. Walter said he didn't see a problem with it. Dennis said he'd submit a PR.

This was then followed by a discussion about potential issues with endianness between the host and target machines, how allowing reinterpret casts at CTFE, in general, was a bigger can of worms, and why integer literals couldn't be stored internally the same as string literals.

The last word came from Martin, who said that little-endian machines had taken over the world, so it probably wouldn't be a problem in practice. It would mean that we probably needed some extra code in there to catch it.

On a somewhat related note, Dennis said he'd uncovered an issue with template instantiations using string and array literals.

The Next Meeting

Our next monthly meeting was held on February 9, 2024. We had a couple of planning sessions in between.

And now the usual reminder: if you have anything you'd like to bring to us for discussion, please let me know. I'll get you into the earliest monthly meeting or planning session we can arrange. It's always better if you can attend yourself to participate in the discussion of your topic, but if that's not in the cards, I can still put the item on the agenda without your presence with a bit of preparation.

May 14

On Tuesday, 14 May 2024 at 13:23:17 UTC, Mike Parker wrote:

> The D Language Foundation's monthly meeting for January 2024 was held on Friday the 12th. There were two things of particular note about this meeting.
>
> [...]

I have some feedback on those points.

Item 6

I wrote my DIP mostly with intermediate values in mixin templates in mind. That would make such values not included in the mixin template's result. But yes, static foreach also needs that, so a good solution would be one that solves both.

Item 7

Redub is a tool that is basically a dub clone improving the thing dub doesn't do very smartly: building. Recently I made a simple yet major UX improvement: if dmd is not found, it tries ldc.

But the apex of UX I've done is inside my engine, and I've shared it plenty of times: the build selector.

This tool is focused on the following (items ending with a * are optional steps chosen by the user):

  • Downloads a D compiler if none is found on the user's system
  • Downloads and installs git [only on Windows is that automatic; other systems show an error requiring it to be done manually]
  • [Windows] Installs MSVC and vcruntime so D is able to build on systems that don't have them (I needed that when running in a VM)
  • Downloads the Android SDK/NDK [based on the user's system], with the supported LDC compiler, and sets up the DFLAGS required to build for Android.*
  • Sets up a custom D runtime that supports building to WASM and sets up the DFLAGS required to build it. May also get a supported compiler.*
  • Sets the DFLAGS required for building for iOS.*

Item 8

There's a project out there that is basically rdmd but faster, done by Jonathan Marler.

May 14

On Tuesday, 14 May 2024 at 14:01:28 UTC, Hipreme wrote:

> [snip]
>
> Item 8
>
> There's a project out there that is basically rdmd but faster, done by Jonathan Marler.

I was thinking the same thing! I had used it a few times, but I just don't use rdmd enough to want to use an alternative to it. If the improved functionality were built-in, then that would be a positive.

May 15
Wow, I see I was mentioned a lot at this meeting!

In saying that I do have some points to add about Item 2 data structures.

Data structures come in one of two forms generally: owning and non-owning.

### Non-owning

Non-owning is the simplest: it's an index.
It doesn't own any memory that isn't internal.

Takes in a single memory allocator for its internal state.

For its memory safety, you are pretty limited to concerns surrounding the read-only-ness of its internal state, which only exist because the GC allocator might be in use.

You pass a value in, you get a value out. No references.

### Owning

Owning data structures come with two subsets which may be mixed to form a third combined subset.

#### Cleans up

Takes in two memory allocators, one for its internal state, one for its deallocation of values.

Will deallocate any memory given to it, which also means that once memory is placed into it, it now owns it and is responsible for its cleanup.

As for memory safety, you have only one concern (as long as you don't use references out).

Owner escape analysis: this is a DIP Walter has required me to work on before reference counting. So it's going in anyway.

#### References out

I'll assume here that it does not clean up any memory passed in, that you handle this manually, that the data structure is merely an index.

In almost all ways it's similar to non-owning: one memory allocator, and it still has the memory safety concerns of the owner.

Andddddddd you also have to worry about borrows, typically from a struct acting as a reference, which once again is resolved with owner escape analysis (noticing a trend yet?).

------------------------------------------------------------------------

What this tells us is that non-owning data structures can go in pretty much immediately, although there will need to be some remedial work once we get reference counting and, with it, a solution to read-only memory.

But of course, because Walter wants owner escape analysis first, it doesn't matter: both owning and non-owning will need to go in together.

Note: concurrent data structures will need the read-only memory solution as well, due to the locks (notice a trend yet?).

We'll likely want to use long names to differentiate all these behaviors, unfortunately, as it could be quite desirable to have all the different variations.
May 19
On 14.5.2024 at 16.23, Mike Parker wrote:

> The D Language Foundation's monthly meeting for January 2024 was held on Friday the 12th. There were two things of particular note about this meeting.

Thanks for the write-up once again! Always nice to know what's cooking, even when the news comes a bit late.