June 11

On Sunday, 9 June 2024 at 18:43:51 UTC, Vladimir Panteleev wrote:

> On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:
>
>> [...]
>
> I would just like to comment on one aspect of the DIP for now: the "Examples" section is a bit of a sore sight currently. It's difficult to get excited about the idea when the proposed actions it unlocks are "we're changing two defaults and removing three features without providing an immediate replacement".
>
> Regarding the removal of `lazy`, I'm particularly curious about the consequent fate of assert and enforce, two prominent current users of `lazy`. It seems like whichever way it goes would involve a compromise:
>
> • Will assert fully become "compiler magic", unimplementable in user code, and enforce be replaced with an explicit delegate variant?
> • Will both assert and enforce require an explicit delegate, thus making unit tests quite a bit more syntactically noisy?
> • Will both assert and enforce become "compiler magic" (and therefore unimplementable in user code)?

Good questions. Enforce would have to take a delegate, but assert could be magic. I'm not sure that's a good idea, though. In any case, that's an idea of what we could do; I'm not sure we will.
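To make the delegate option concrete, here's a minimal sketch (my own illustration, not from the DIP; `enforceDg` is a made-up name) of what a delegate-taking enforce could look like:

```d
// Hypothetical sketch: enforce without `lazy`, taking an explicit delegate.
T enforceDg(T)(T value, string delegate() msg)
{
    if (!value)
        throw new Exception(msg()); // the message is built only on failure
    return value;
}

void main()
{
    int x = 1;
    // Today's lazy-based enforce hides the delegate at the call site;
    // without `lazy`, the caller has to spell it out:
    enforceDg(x == 1, () => "x must be 1");
}
```

This also shows the syntactic noise the quoted bullet points worry about: every call site grows a `() =>`.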

June 12

On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:

> https://github.com/atilaneves/DIPs/blob/editions/editions.md

One hurdle I can see coming is the following: writing and maintaining a compiler that merely supports multiple editions is error-prone to begin with. A much, much bigger hurdle is writing and maintaining a compiler that supports all interactions between the semantics of different editions. If done as proposed, any compiler supporting 3 or more editions is doomed:

If n is the number of editions to support, the number of semantic interactions is n!/(n−2)! = n·(n−1). For n = 2, that's just 2 (one interaction forth, and another back). For n = 3, it's already 6, and for n = 4, it's 12. Handling deprecations is a lot like the n = 2 case, as the compiler must somehow support changes in semantics, or at least recognize an erroneous old way and diagnose it properly.

One might assume that the number of interactions would be n choose 2, which would render n = 3 (close to) feasible, but that disregards the difference between back and forth interactions. If functions in modules A and B of different editions call each other, it makes a difference whether A.f calls B.g or the other way around: the lexically same signature of A.f might mean something entirely different were it the signature of B.g. (That is the whole point of having editions.)
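As a purely hypothetical sketch of why direction matters (editions don't exist yet; the rule change in the comments is invented for illustration):

```d
// Suppose a newer edition Y inferred `scope` for pointer parameters of
// @safe functions, while older edition X did not. Then this lexically
// identical signature means different things in X and Y:
@safe int* f(int* p)
{
    return p; // legal in edition X (compiles today); under the invented
              // edition-Y rule, `p` would be `scope`, so escaping it
              // would be an error
}

void main() {}
```

If an X module calls such a function from a Y module, the caller is reading a stricter contract than the same signature would carry at home, and vice versa.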

For three editions X, Y, and Z, the semantics of XZ could be defined through XY and YZ (and the other way around for ZX), but without proof, my suspicion is that this cannot be done in general.

Then there's the question of how older-edition code calls newer-edition code. Even in the ideal case, there are inheritance, delegates/function pointers, and templates. My best guess is that inheritance and delegates/function pointers are, with some effort, largely doable. There's the issue of what storage classes and attributes mean exactly; that must be clearly defined. The biggest issue here is that it might not be possible without surprising the programmer.

My fear is that templates will become worse than C++ templates. It's already not that easy to reason about them. (Example: in D, one can't explicitly instantiate an arbitrary template due to auto ref parameters – and that despite the fact that D officially has no function templates, it just has templates, and IFTI is defined for a template that happens to contain just a function declaration.) Don't get me started on mixin templates; those are already next to impossible to write in a way that makes them impossible to use incorrectly.
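The auto ref example in parentheses, concretely:

```d
// A template with an `auto ref` parameter can only be instantiated via
// IFTI, because the ref-ness of `x` is decided per argument at the call.
void take(T)(auto ref T x) {}

void main()
{
    int i;
    take(i);   // fine: IFTI deduces T and that `x` binds by ref
    take(42);  // fine too: here `x` binds by value
    // alias t = take!int; // error: an explicit instantiation can't
    //                        decide whether `x` is ref or not
}
```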

In the not-so-ideal case, there's a code base where some modules are older and written for edition X. Then edition Y came out and newer modules were written for Y – which works, as Y and X interactions are well-defined. Then some X modules needed fixing and ended up calling Y module functions, because it's just practical. Rinse and repeat with edition Z.

If we have to allow interactions between editions, we should do so via a narrowly defined subset of the language. Essentially, if module A is for edition X and module B is for edition Y, then for B, declarations in A that aren't in that subset are effectively private, and vice versa.

Interactions between editions are something that even C++ does not do (its editions are called language versions, but it's conceptually the same). Using conditional compilation, you can write files that compile as both C++98 and C++23, but you can't compile a.cpp with -std=c++98 and b.cpp with -std=c++23 and expect to be able to link a.obj and b.obj. The headers a.hpp and b.hpp, which both .cpp files include, are likely different depending on language version, but even assuming they're not, the mangling can be different, what stdlib classes/functions do is different, etc., etc. What you could do, however, is use extern "C" declarations.

Back to the common subset for D. From a conservative standpoint, let's just allow extern(C) non-template declarations. Effectively, extern(C) would be the public-across-editions visibility. The reason is simple: whatever newer editions do, the meaning of extern(C) declarations won't change much. It's not a panacea either, as extern(C) declarations can carry attributes, and their meaning can change. However, there's no question what the parameter storage class `in` means on an extern(C) function: it's just not allowed. Classes are out, too. Non-POD structs are out as well. Start from a very narrow subset, then expand as needed. For compatibility across ≥ 3 editions to work, the subset must be very, very stable.
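To make that concrete, a minimal sketch, with both sides collapsed into one file so it runs as-is (the edition labels in the comments are hypothetical):

```d
// "Module A, edition X": only a plain extern(C), non-template function
// crosses the boundary — no classes, no non-POD structs, no templates.
extern(C) int editionXAdd(int a, int b)
{
    return a + b;
}

// "Module B, edition Y": calls across the boundary through the C ABI,
// which no edition is likely to redefine.
void main()
{
    assert(editionXAdd(2, 3) == 5);
}
```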

June 12

On Tuesday, 11 June 2024 at 16:08:54 UTC, Atila Neves wrote:

> On Sunday, 9 June 2024 at 18:43:51 UTC, Vladimir Panteleev wrote:
>
>> On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:
>>
>>> [...]
>>
>> I would just like to comment on one aspect of the DIP for now: the "Examples" section is a bit of a sore sight currently. It's difficult to get excited about the idea when the proposed actions it unlocks are "we're changing two defaults and removing three features without providing an immediate replacement".
>>
>> Regarding the removal of `lazy`, I'm particularly curious about the consequent fate of assert and enforce, two prominent current users of `lazy`. It seems like whichever way it goes would involve a compromise:
>>
>> • Will assert fully become "compiler magic", unimplementable in user code, and enforce be replaced with an explicit delegate variant?
>> • Will both assert and enforce require an explicit delegate, thus making unit tests quite a bit more syntactically noisy?
>> • Will both assert and enforce become "compiler magic" (and therefore unimplementable in user code)?
>
> Good questions. Enforce would have to take a delegate, but assert could be magic. I'm not sure that's a good idea, though. In any case, that's an idea of what we could do; I'm not sure we will.

I don’t see why assert would be considered “magic.” On the one hand, it’s a language primitive, so it’s not magic; it’s just defined in some way, and short-circuiting operations do exist in && and ||, and those have been around basically forever. On the other hand, why not just evaluate the message regardless of whether the condition fails? In many cases, the message is a string literal, and in almost all cases where it’s not just a literal, it has no side effects, so the compiler can refrain from evaluating the message unless the condition fails – as an optimization. Assert messages with side effects are such an anti-pattern that maybe they shouldn’t even be allowed.
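A minimal sketch of the difference: today the message is evaluated only if the condition fails, which is observable exactly when the message has side effects.

```d
int evaluations;

string message()
{
    ++evaluations; // side effect: observable depending on evaluation strategy
    return "x must be 1";
}

void main()
{
    int x = 1;
    assert(x == 1, "x must be 1"); // the common case: a plain string literal
    assert(x == 1, message());     // anti-pattern: whether `evaluations`
                                   // changes depends on the condition (and on
                                   // whether asserts are compiled in at all)
    assert(evaluations == 0);      // holds today: `message()` never ran
}
```

Eager evaluation would only change the behavior of code like `message()` above, which is arguably broken anyway.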

For lazy: yes, lazy function parameters are weird, and I’d rather have explicit delegates. One thing `lazy` could bring as a storage class applied to a parameter of delegate type (or of array/slice-of-delegate type): the caller gets the guarantee that each such delegate is evaluated at most once.
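That guarantee can be emulated by hand today; a sketch of a wrapper (`atMostOnce` is a made-up name) that gives a delegate the at-most-once behavior such a storage class would promise:

```d
// Wrap a delegate so repeated invocations evaluate it at most once and
// return the cached result thereafter.
T delegate() atMostOnce(T)(T delegate() dg)
{
    bool done;
    T cached;
    return () {
        if (!done)
        {
            cached = dg();
            done = true;
        }
        return cached;
    };
}

void main()
{
    int calls;
    auto once = atMostOnce(() { ++calls; return 42; });
    assert(once() == 42);
    assert(once() == 42);
    assert(calls == 1); // evaluated exactly once despite two invocations
}
```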

TL;DR (for the next paragraph): don’t give up `lazy` as a keyword; it might be handy.

For data members of structs or classes, lazy could be implemented so that they’re lazily evaluated. Of course, for mutable objects, one can implement lazy evaluation by hand, but in D, immutable values can’t make use of lazy evaluation except by having state that’s conceptually part of an object actually placed in a global/thread-local outside of it, which is incompatible with pure. With lazy, one could have an object be immutable with all its guarantees (maybe with select exceptions, such as placing it in the read-only section of the binary) while still allowing for lazy evaluation. If the language handles the lazy evaluation, it can make guarantees that the programmer simply can’t. It’s a design hole.

The essence of immutable is that no one can observe mutations of the object. If the programmer cannot ask “has this lazy data member been evaluated and cached yet?”, no mutations can be observed. It does not have to be a data-member storage class, either; it might be much more practical to implement as a function attribute so that the annotated function runs at most once: the first call caches the result in a (hidden) data member, and subsequent calls just return the value of that data member.
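For contrast, here is the by-hand version for a mutable object that the paragraph mentions – and why it can’t carry over to immutable: the cache write is a mutation the type system must not let anyone observe.

```d
// Hand-rolled lazy evaluation for a *mutable* object. The cache fields are
// conceptually not part of the value, but the type system can't know that.
struct Expensive
{
    private bool computed;
    private int cache;

    int value() // can't be const: it writes the cache on first use
    {
        if (!computed)
        {
            cache = 6 * 7; // stand-in for an expensive computation
            computed = true;
        }
        return cache;
    }
}

void main()
{
    Expensive e;
    assert(e.value == 42);
    // immutable Expensive ie;
    // ie.value; // error: `value` mutates `ie` — the design hole in question
}
```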

June 14

On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:

> https://github.com/atilaneves/DIPs/blob/editions/editions.md

The DIP doesn’t mention the addition of new keywords without breaking identifiers identical to them. I’m thinking of impure here, as it would be the most orthogonal way to introduce an inverse for pure.

June 17

On Friday, 14 June 2024 at 17:24:59 UTC, Quirin Schroll wrote:

> On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:
>
>> https://github.com/atilaneves/DIPs/blob/editions/editions.md
>
> The DIP doesn’t mention the addition of new keywords without breaking identifiers identical to them. I’m thinking of impure here, as it would be the most orthogonal way to introduce an inverse for pure.

I don't think that scales given how many attributes we have.

June 19

On Monday, 17 June 2024 at 22:47:03 UTC, Atila Neves wrote:

> On Friday, 14 June 2024 at 17:24:59 UTC, Quirin Schroll wrote:
>
>> On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:
>>
>>> https://github.com/atilaneves/DIPs/blob/editions/editions.md
>>
>> The DIP doesn’t mention the addition of new keywords without breaking identifiers identical to them. I’m thinking of impure here, as it would be the most orthogonal way to introduce an inverse for pure.
>
> I don't think that scales given how many attributes we have.

Of the ones I mean, we have four: @safe, nothrow, @nogc, and pure. Of those, @safe has had @system from the get-go, and a `throw` attribute was added by DIP 1029 (which seems unimplemented). I’m writing a DIP to add @gc. That leaves only pure without an inverse.

There’s also @live, but it’s not transitive, and to me it seems to make no outward guarantees (e.g., one can override a @live function with a non-@live one, or assign a non-@live function’s address to a @live function pointer variable).
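A sketch of the motivation, using the hypothetical `impure` from the earlier post (it is not an existing keyword):

```d
// `pure:` applies to every declaration that follows, and there is currently
// no attribute to opt a single function back out.
pure:

int square(int x) { return x * x; } // checked as pure, as intended

// impure void log(string msg) { import std.stdio : writeln; writeln(msg); }
// ^ hypothetical: without an inverse like `impure`, this function simply
//   cannot appear below the `pure:` label.

void main() {} // an empty main is trivially pure, so this compiles
```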

June 21

We’ve been briefly discussing editions and the complexity of their implementation on Discord, where I noted that some of the infrastructure, and one change, has already been applied.

However, a point I want to make is that the implementation has a legacy version alongside 2024.

https://github.com/dlang/dmd/blob/15f66f89c9d80bbd71274f272e58c273684905ee/compiler/src/dmd/astenums.d#L22

I’ve previously made the point that the base version (aka legacy) should be given the version number 2 and should not have DIP1000 enabled for it.

August 03

On Thursday, 30 May 2024 at 18:31:48 UTC, Atila Neves wrote:

> https://github.com/atilaneves/DIPs/blob/editions/editions.md
>
> Destroy!

Editions, as outlined and discussed, will easily solve the most problematic issues. There are no compilation failures I have run into in the past few years that would have been hard to solve with editions as described. That means my experience (and that of anyone like me) would have been ideal during that time period, at only the cost of having to specify the edition. That is a huge win.

There are lots of comments, such as:
"Mostly, yes, druntime will be kind of stuck."
"How does a newer edition with less @safe bugs treat @safe code from an older edition that has more memory safety holes?"

This is a completely reasonable fear. Suppose there is a feature that is not possible or practical to add in an edition. I don't think it's unreasonable to (rarely!) make a breaking change and start a new stream of editions. The situation will still be much better than it is today, where my code fails to compile after almost every compiler release. It would be important to put these changes off and make them all at once: so you have, for example, three years of stability and then three breaking changes at once, instead of one breaking change each year. You have to break some eggs if you want to make an omelette.

If there are a few of these over the next few years, editions will be better than the current state in the meantime. And then the lessons learned will create a future where it doesn't need to happen as often. I can think of solutions to make this less painful than it is today. This proposal is good as is, and even if it isn't perfect, it's still a step in the right direction, with the potential to be nearly perfect in the future.

Now to comment specifically on the proposal, you said:
"Modules without a module declaration would be considered to be using the latest edition unless the default edition is specified explicitly with a command line argument."

That is the obvious, sensible plan. The only issue I see with it is this scenario. Timon Gehr said:

>>> How do you compile different modules with different editions on the same command line?
>>
>> You can attribute a module with the edition you are using to build.
>
> The whole point is not having to edit modules that were written by someone else just to be able to import them.

In ten years, I don't want to have to remember the name of the first edition, not to mention that edition naming may get more complicated, as these things tend to do. Most code that is not annotated with an edition will be from the first edition, so it almost seems like that should be the default – except that this would obviously be annoying for the end user and one more language feature a new programmer would need to know. So that is a no-go. This needs a solution, and I don't have a great one. Perhaps there could be a flag like -edition=first, or an easy way to look it up. It does not need to be a part of this proposal, but I think it will need to be a part of editions within two or three years.

As for version naming, let's say this is version 2 of the language. Then the first edition could be "2.1" and the next could be 2.2. Once you need a truly breaking change, the edition becomes 3.1 (or start with 2.0/3.0, because we are programmers). This would make it intuitive that it's a different language version and needs a different tool. Although, regardless of edition naming, if you try to compile 2024 code with a 2029 (incompatible) compiler, the compiler could easily say "this needs such-and-such compiler", or there are other, simpler solutions I can think of for new users.

August 03

On Thursday, 20 June 2024 at 15:30:43 UTC, Richard (Rikki) Andrew Cattermole wrote:
> However a point I want to make is in the implementation there is a legacy version along with 2024.
>
> https://github.com/dlang/dmd/blob/15f66f89c9d80bbd71274f272e58c273684905ee/compiler/src/dmd/astenums.d#L22

This is a great solution to the problem I raised above. I should have gone back and read all the new messages before I posted. Sorry, I suck at the forums.