I fell behind on my meeting summaries in the run-up to DConf Online. The December summary will follow in a few days.
The November 2022 monthly meeting took place on the 4th of the month at 14:00 UTC. It lasted about two-and-a-half hours. The following were present:
- Andrei Alexandrescu
- Walter Bright
- Iain Buclaw
- Ali Çehreli
- Max Haughton
- Martin Kinkelin
- Petar Kirov
- Dennis Korpel
- Razvan Nitu
Átila Neves was present at the beginning but had to leave early.
Iain started by noting that the digitalmars.com domain had gone down for about 6 hours the week before because it hadn't been renewed. He was thankful to Mathias Lang for notifying him so quickly and to Walter for being so responsive when Iain called him about it around 5:00 am PST. Walter is the one most at risk in that situation if someone else snatches up the domain, as Digital Mars is his business, but it does raise the question: why are our D pipelines failing when digitalmars.com goes down? Iain believes it's because the Digital Mars FTP site is being queried somewhere in the build scripts. It might be a good idea to have a mirror for that. Walter agreed that D services shouldn't be relying on digitalmars.com.
Also the previous week, one of the build hosts in the autotester had died for about 12 hours. When this happens, there's practically nothing we can do. The autotester is only building dmd and DRuntime and is no longer running any tests. Given that it's being used less and less, Iain proposed ripping out all the GitHub hooks for the autotester, maybe starting in January 2023. No one argued against it.
He then gave us an update on the 2.101 release. At the time, the release candidate had been announced and he expected the final release to go out in the middle of November. He subsequently announced it on November 15.
He then had a question for me. I had set up an admin group on Hetzner Cloud, and he wondered what it was intended for. I answered that this is where we would be setting up the services the foundation will be taking control of from the ecosystem. Iain said that, by coincidence, he and Mathias Lang had both bought bare-metal servers on Hetzner to use in the BuildKite CI pipelines. This reduced individual pipeline runs from one hour per pull request to 18 minutes, making BuildKite less of an issue when a big load of changes comes through. It also means we can add more projects to our BuildKite CI. (For context, Mathias reported in the October meeting that after BuildKite had gone down a few days prior, he restructured it so that it was running from his own server. This gave us control over updating the dependencies, something we didn't have before. Unfortunately, it ended up being slower. The servers he and Iain are paying for rectify that problem. Thanks to them both!)
There was then a discussion that touched on several topics. A couple of highlights: dlang.org is still with Walter's registrar, and we need to transfer it to the DLF's account; should we also transfer digitalmars.com (answer: no); in moving the downloads.dlang.org content from S3 to Backblaze, Iain realized several packages were missing, but they were present at ftp.digitalmars.com and he pulled them from there; does it matter if the main dlang.org site is hosted on Hetzner servers in Europe vs. North America (answer: probably not).
Martin let us know the next LDC release would be a bit later this time. He hadn't had much time and still needed to work out some details regarding the DMD and DRuntime repository merge.
He had just started testing the 2.101.0 RC 1 against the Symmetry codebase. So far it wasn't too bad, but it was hard to tell for sure as a few dub packages needed to be updated. One benefit that was already visible was that it had uncovered some existing bugs related to copy constructors. He had also found a regression, which he was going to file. Other than that, he congratulated Dennis on some nice new diagnostics regarding function attribute inference, which mean, e.g., that it's no longer necessary to figure out manually which line in a function causes it to be inferred as @system rather than @safe.
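As a minimal sketch of the kind of situation those diagnostics help with (the function names here are mine, not from the meeting): a template function's attributes are inferred from its body, and the compiler can now point at the exact line responsible for the @system inference.

```d
// Attributes of templated functions are inferred from their bodies.
void bump()(ref int* p)
{
    p += 1;   // pointer arithmetic is @system; this line drives the inference
}

@safe void main()
{
    int* p;
    bump(p);  // error: @safe main cannot call bump!(), inferred as @system;
              // the newer diagnostics point back at the `p += 1` line
}
```

This example is deliberately invalid; the point is the quality of the error message when it fails to compile.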
The newest dub version had shown no regressions so far in the Symmetry projects he had tested it with. That's good progress. The new colored output is also very nice.
Petar asked if Martin planned to incorporate LLVM 15 into the next release or delay to the one after. Martin said that Nicholas Wilson and someone else had already taken care of migrating to LLVM 15. According to their tests, LLVM 15 should be working, but there was some work to be done before Martin could run all of the tests with LLVM 15. First, he needed to update the LDC fork of LLVM to LLVM 15 for all targets. Then he needed to overcome an issue with testing vanilla LLVM via GitHub Actions, as the LLVM team had stopped producing certain GitHub artifacts. If he was able to find the time to handle all of that, then the next LDC release would probably support it, but he couldn't say more until he was able to run the tests himself to know the actual status.
For the past month, Razvan had been working on bugs in the noreturn implementation. He had fixed some, but five or six reported bugs remained unfixed. He had hit a roadblock in which the backend sometimes wasn't recognizing noreturn and was asserting.
Martin interjected that everything the front end defers to the back end affects the glue layer, and the noreturn stuff is a PITA if it's not handled in the front end and lowered to other constructs. For example, noreturn globals obviously can't exist, but they're still in the AST, so the glue layer needs to handle them. In LDC, they're assigned a dummy byte, but then that causes a problem in debug info: it needs a type, but what type to assign? The ideal would be to have no noreturns in the AST at all and to lower them to other constructs instead; otherwise, every issue fixed in the DMD backend needs to be duplicated by all the other compilers. Walter agreed that noreturn should not get into the backend at all. He'll need to go back and look at what's already in the backend before deciding how to resolve it.
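To illustrate why lowering matters, here is a small example of my own (not from the meeting) of how a noreturn expression ends up in ordinary code that the glue layer then has to cope with:

```d
// noreturn is D's bottom type (declared in object.d); no value of it can
// ever exist, and a noreturn expression implicitly converts to any type.
noreturn fail(string msg) { throw new Exception(msg); }

int pick(bool ok)
{
    // The front end must reconcile int and noreturn in this ternary; if
    // that isn't lowered before codegen, every backend must special-case it.
    return ok ? 42 : fail("not ok");
}

void main()
{
    assert(pick(true) == 42);
}
```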
Next, Razvan reported he had been looking at a PR to implement the throw attribute for functions. The problem with it was that it potentially breaks code that uses the getAttributes trait to test for the presence of nothrow. Walter said that the compiler internally should just view throw as the absence of nothrow rather than as a separate attribute. Its only purpose is to turn nothrow off, so ideally code using getAttributes shouldn't need to change at all. It's not like @system, which is part of a tri-state. With the binary throw state, you only need to care whether a function is nothrow or not. Martin agreed.
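For context, this is roughly the kind of introspection at stake (my sketch; note that function attributes such as nothrow are typically queried via __traits(getFunctionAttributes, ...), which yields strings):

```d
import std.algorithm.searching : canFind;

nothrow int twice(int x) { return 2 * x; }

// getFunctionAttributes yields strings such as "nothrow" and "@system".
// Under Walter's model, a throwing function would simply lack "nothrow"
// here rather than report a new `throw` attribute, so introspection code
// like this keeps working unchanged.
enum attrs = [__traits(getFunctionAttributes, twice)];
static assert(attrs.canFind("nothrow"));

void main() {}
```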
With that, Andrei brought up past debates about attribute(false). Walter finds that syntax clunky. It was easy to use throw to turn off nothrow since throw is already a keyword. Petar brought up @nogc. This led to a long discussion about attribute algebra, attribute soup, negative and positive attributes for disabling each other, attribute inference, circular dependencies, and tangential topics. Finally, Dennis gave his perspective and mentioned another proposal that had come up in the past: using the default keyword to establish a default set of attributes for non-templated, unannotated functions, which could be used to "reset" the attribute state of any function. Walter thought that using default to reset state was a great idea and that we should think about it before implementing the throw attribute. Dennis has since submitted a DIP.
Next, Razvan provided a good bit of background on Bugzilla issues related to the evaluation of __FILE__ in the context of the callee rather than the caller when it's nested in other expressions. He had a PR he wanted Walter to decide on. Walter subsequently left some feedback on the code but has yet to say anything about the overall approach.
Finally, Razvan asked about the status of the move constructor DIP. Max said he hadn't worked on it in a while, but he needed to make some revisions based on feedback he'd received from Eyal Lotem of Weka regarding "last use" inference. Martin suggested that the syntax and the introduction of the move constructor as a language feature be decoupled from the last use evaluation, as the latter is an optimization that allows the compiler to automatically do the move without the user having to call the library function under specific circumstances. He would like to see some built-ins for moving and forwarding, which should be simpler and more performant than the library solution.
Max brought up three DIPs from Garrett D'Amore aimed at simplifying D's grammar (Deprecate Trailing Decimal Point, Disallow Comments in Trailing Token Sequences, Remove Support for Small Octals). Do we need DIPs to make these changes? Martin doesn't think we do. Walter wasn't sure yet. I told him I'd email him later to remind him to review them. (And I only did so very recently.)
Dennis provided some background (Garrett had implemented a Treesitter grammar for D, and these DIPs came out of that) and raised a bigger question: how much value do we place on reducing lexical complexity? Suggested changes in a similar vein had been shot down in the past. On the one hand, some users don't want to see errors raised in their code by seemingly unimportant changes like these. On the other hand, people making tools are frustrated by the ugliness of handling these cases. Simplifying the language is in the vision document. So what's the consensus? Is it worth cleaning things up to get a simpler lexical grammar or not? Walter thinks it's worth pursuing, looked at on a case-by-case basis. Iain suggested that such changes might warrant a deprecation cycle. He noted that when he removed the -transition=complex flag, after complex numbers had been "deprecated" for almost a decade, he still started a twenty-release deprecation cycle for the change.
Max then asked if we could deprecate the existing grammar and replace it with something generated from a Treesitter grammar that would then be machine-readable. This initiated a discussion about D's grammar. There are now two Treesitter implementations (Vladimir Panteleev maintains the other one). Max thinks Garrett's implementation disallows certain things the compiler accepts that are grammatical bugs. Walter pointed out that if people are creating other grammars for D instead, that's a significant failure on our part. If the official grammar isn't machine-readable, it should be fixed so that it is. Dennis noted that Vladimir had backported some of his Treesitter fixes to the official grammar.
The first item Dennis raised was a question that came up in his work implementing named arguments. The DIP proposes that ArgumentList in the grammar be redefined to use NamedArgument in place of AssignExpression (which is now part of the NamedArgument definition). But there are places in the grammar that use ArgumentList where named arguments aren't expected (e.g., CaseStatement). He wondered if named arguments should be moved out of the parse and into the semantic phase. Walter suggested defining a new NamedArgumentList to be used where named arguments are expected and continuing to use ArgumentList where they are not. He said errors should be detected as early as possible, so we want named-argument errors to be detected during the parse. Dennis said he would submit a PR to amend the DIP.
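A sketch of the distinction (the function and parameter names are mine; at the time of the meeting, named arguments were still a DIP rather than an implemented feature):

```d
void blit(int srcX, int srcY, int dstX, int dstY) {}

void main()
{
    // A call site is where a NamedArgumentList makes sense:
    blit(srcX: 0, srcY: 0, dstX: 10, dstY: 20);

    // A CaseStatement also uses ArgumentList in the grammar, but names
    // would be meaningless here, hence Walter's suggested split:
    int n = 1;
    switch (n)
    {
        case 1, 2: break;
        default: break;
    }
}
```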
Next, Dennis brought up the stable vs. master branch distinction he had encountered in pull request discussions. It's confusing to know which branch to target, as it depends on several factors that seem controversial and complex: is the PR a breaking change? A safety bug fix? A regression? An old issue? Do other commits depend on the branch? Where are we in the release cycle? Etc. He would like some clarification documented, especially regarding the release cycle; because we're behind on the release schedule, it's hard to know when a release is being tagged. This brought on a long discussion, with Martin, Iain, and Walter primarily providing input. It could be summed up as: there can be no hard-and-fast rule. Only low-risk things should go into stable, but whether something is low-risk is a judgment call.
Two things came out of this discussion. One was that Martin and Iain agreed there should be more time for the beta and release candidate periods before a DMD release. The other was that we should set up a GitHub Action to automatically merge into master all commits that get merged into stable.
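A minimal sketch of what such a workflow might look like (entirely hypothetical; the real setup would need to account for branch protections, merge conflicts, and bot identity):

```yaml
# .github/workflows/merge-stable.yml (hypothetical sketch)
name: Merge stable into master
on:
  push:
    branches: [stable]
jobs:
  merge:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: master
          fetch-depth: 0   # full history so the merge has both branches
      - name: Merge and push
        run: |
          git config user.name "merge-bot"          # placeholder identity
          git config user.email "bot@example.com"   # placeholder identity
          git merge --no-edit origin/stable
          git push origin master
```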
Ali has started using D again on a work project. He said that when he goes from not using D at work to using it, his motivation increases dramatically. Before he knows it, hours have gone by while he's coding. He wanted to thank everyone who keeps pushing D forward.
I reported that I had four DConf videos remaining to edit and publish.
I then noted that I was preparing to push a new DIP into Community Review and had been working with the author to get it ready (that was DIP 1044, "Enum Type Inference"). I had wanted to get the DIP queue moving again. After pinging some of the authors, the author of that DIP was the only one who was ready to move forward.
Next, I said I wanted to do another Community Conversation for our YouTube channel. I had recently published the second one with Walter and planned to do a third one with him in the future. For the next episode, I wanted to chat with either Iain or Martin. I was thinking of recording in December once DConf Online was out of the way. Neither of them volunteered to go first :-) Ultimately, I didn't have time to get to it in December, but I'm pretty sure the next conversation will be with Iain. I'm trying to catch up on other things I've fallen behind with (like meeting summaries) and will arrange a session with him as soon as we're both able to meet.
I then said that the DConf Online schedule would be going out soon. We had more submissions than I'd expected, so I said we might not do an Ask Us Anything so that I could make room for more talks. As it turned out, I later had to shuffle the schedule around and we did the AUA.
Finally, I reported that my goal of getting most of the ecosystem services migrated to servers managed by the foundation by the end of the year was not going to happen. The primary people involved in the planning, who also manage some of the services themselves, were too short on time to make it happen. I wanted to pull the trigger to get at least some of the things moved first and sort the details out later. Before the meeting, I'd spoken with Vladimir Panteleev and he wisely suggested that we hold off and wait until we get the details sorted first (backups, containers, deployment, etc).
(As I'm writing this, Vladimir is preparing to move the doc tester off of his server and onto a foundation server. This should result in reduced times for the doc tester, and it will also benefit the forums and the wiki. Currently, they're all running on the same machine, and Vladimir has had problems with resource contention. With the doc tester on its own box, the forums should go down less often.)
Petar Kirov, who was present at the meeting, is one of the people involved in the migration planning. He said that we are in a better place now than we were three or four months ago. We've got accounts with several services we'll be using and have set some things up already (e.g., we're using Cloudflare's nameservers, the download archive is being served from Backblaze behind a Cloudflare proxy). He said the main thing is he, Vladimir, Iain, and anyone else involved finding the time to get the work done.
Petar said that he is aware of issues with run.dlang.io, and that he has plans for some changes and fixes. Martin said he uses it a lot for quick little tests, as it's faster than doing the tests locally, but it's currently using 2.099.1. He'd love to see it set to automatically update to the latest compiler version. One of the things Petar wants to do toward that end involves the Nix package manager. He's been using it for the past few years. There are already packages for the three D compilers, but they only target the latest stable releases. He would like to create Nix package expressions for "as old as practical" versions of the compilers. Then, once run.dlang.io (and the DLang Tour) are moved to the foundation infrastructure, we'll have a wider range of compilers to test with, and compilation with multiple compilers should be much faster (his current server is slow).
He told us he was mentoring Vlăduț Chicoș for SAOC (the QUIC protocol implementation) and that Vlad was making good progress.
Andrei asked Razvan about the status of the ProtoObject DIP. Razvan said that Adam Ruppe had raised some valid complaints in the DIP's pull request thread, and those were reinforced in the community review. Razvan and the primary author had no answer for those complaints at the time. The primary author eventually decided to move on (I have belatedly marked the DIP as "Withdrawn"). Andrei thinks it's worth making an effort to resolve any issues the proposal has.
Next, Andrei brought up memory safety and how it has become a major topic of discussion in the broader programming community. Rust has done a tremendous job selling it. He mentioned a comment by Mark Russinovich that it's time to drop C and C++ in favor of Rust for new projects. D needs an answer to this. D's GC puts it in a special category, and until we get safety solved, it's like having the disadvantages of both worlds and the advantages of neither. It's imperative in this day and age that we get to 100% memory safety. Walter agreed and said we can't afford not to get to that point.
Razvan brought up Walter's safe-by-default DIP. He thinks now is the time to revisit that DIP, revising it to make extern(C) functions always default to @system. I noted that there is a DIP in the PR queue to do just that. I read its section on extern(C/C++) functions, which Walter said was clear enough. I said that the author had been unresponsive to my emails, and Walter suggested that if that continued, I should try to find someone to take it over.
(I'm preparing to go through the queue and start contacting the authors again next week. If the safe-by-default author continues to be unresponsive, I'll be looking for someone to take it over. So if you'd like to put yourself forward for it, please let me know. Also, note that I'm planning a complete overhaul of the DIP process later this year based on feedback I've gotten from multiple sources. I'll write something up on that in a few weeks. For now, I'm dropping the Final Review round, so any DIPs going through the process before the overhaul will potentially go through only one round of review before being pushed over to Walter and Átila.)
Walter started with a bug hunt he had recently undertaken. He was doing some work to implement SIMD comparison operators for vectors and saw that nothing was being enregistered. He finally managed to track it down to a PR from a few months before that he had overlooked which disabled allocation for XMM registers because of a bug. He was able to fix the bug that prompted that PR and reenabled XMM register allocation.
He was working on the SIMD comparison operators because they were disabled. When he first implemented vector operations in D, he had a misunderstanding about them. That changed recently when he took another look at the APL language. He had looked at it decades ago and didn't fully grok it. This time, it all made sense. Then he realized that SIMD is an APL-like language in hardware and that he had misunderstood it. So he implemented the missing comparison operators. He said that Iain had some reservations, but Walter thinks it's the best way forward: it's the way GCC implements vector instructions, and GPUs want vector instructions. He's happy to make SIMD operations more of a first-class feature in D. He still needs to decide what to do about array operations. They should operate the same way, but he's going to leave that for later.
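In code, the APL-style view looks something like this (a hedged sketch of my own; whether a plain vector comparison compiles depends on the compiler version and target, which is why it's guarded):

```d
import core.simd;

void main()
{
    // With SIMD comparisons enabled, comparing two vectors doesn't yield
    // a single bool; it yields a per-lane mask (all bits set where true),
    // the APL-like behavior described above.
    static if (__traits(compiles, { int4 a, b; auto m = a > b; }))
    {
        int4 a = [1, 5, 3, 7];
        int4 b = [2, 4, 3, 8];
        auto mask = a > b;   // lane-wise compare; only the 5 > 4 lane is true
    }
}
```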
Martin said he had implemented vector operations in LDC a couple of releases back. He went into some implementation details that I'm not even going to attempt to describe. He'd had a conversation with Iain a week or so before, and Iain raised some concerns about genericity with which Martin agreed. This was followed by a rather long discussion about different approaches for implementing vector operations: what the hardware expects, what's best for the user, how it impacts generic code, what GPUs expect, auto-vectorization, masks vs. scalar booleans vs. bit vectors, etc. If anyone is curious about the details of this, perhaps Martin, Iain, Walter, or Max could provide some information.
The next thing Walter had was that he was fixing issues that Timon had identified with DIP1000. He had a couple of PRs out, so he was making steady progress on making DIP1000 bulletproof. The issues had come up in discussions that DIP1000 was hard for users to understand even though it's fundamentally simple. Walter believes that's because the syntactic sugar that D heaps on it tends to obscure what's happening with it.
On the other hand, Walter had heard that many Rust developers don't really know how the borrow checker works; they just keep tweaking the code until it compiles. DIP1000 is kind of the same: it doesn't add semantics to your code, it only subtracts them. So the idea of trying attributes until it works is not an entirely unreasonable strategy. And anyway, the compiler is pretty good at inferring attributes when it can. Expanding the kinds of functions on which the compiler can perform inference will greatly reduce the need to write function attributes. He still thinks we're in good shape with this. He thinks he can continue to find solutions to the problems people bring him as examples showing that DIP1000 is unworkable.
Dennis noted that one problem right now with the "try attributes until it works" approach is that in some cases, it only works because there's a bug and the compiler is accepting something that should be invalid. Then when we fix the bug, we get failures in random projects on Buildkite. Walter said that's a great point. He has nothing to excuse that other than to say that we need to put a priority on fixing DIP1000 bugs. He had gone through Bugzilla a few weeks before and went through all of the outstanding DIP1000 bugs he found. Most of the ones that mattered already had a PR in the queue, and he submitted PRs for some more. There may have been some new issues reported since then, but he thinks he and Dennis had made some great progress on getting DIP1000 into shape. He's annoyed that it's taking so long to get to 100%, but he's happy with the direction it's going.
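The flavor of issue being hardened here is the classic escape case. A minimal example of my own (compiled with -preview=dip1000; the first function is deliberately invalid):

```d
@safe:

int* leak()
{
    int x;
    return &x;   // rejected: escapes a reference to a stack local
}

int* pass(return scope int* p)
{
    return p;    // fine: `return scope` says p may flow into the return value
}
```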
The next meeting
The next meeting took place on December 2nd at 14:00 UTC. I'll have the summary of that one ready in a few days.
Gripes and wishes
I'd like to remind everyone to keep sending me your gripe lists and wish lists. I've gotten a great response so far. I've received emails from forum regulars, Discord regulars, forum lurkers, people who aren't active in the community anymore, people who are utterly frustrated with D, people who are utterly ecstatic with D, old-timers, newcomers, etc. And the emails keep trickling in.
I want to reiterate that I will post details about what I'm receiving, but it's going to be several weeks before that happens. I also want to emphasize that I plan to use this information to help inform us in making some major changes in process and organization, but that's also a bit farther down the road. I haven't yet discussed my ideas with the others, so I can't say for sure at this time what's going to change or how. But just know that the more information I receive through this initiative, the better.
So if you haven't emailed me yet, please do (please read the post linked above first), and if you have but you think of more to say, bring it on.