June 11, 2021

On Friday, 11 June 2021 at 11:27:20 UTC, Dukc wrote:

> @semantic("implicitByCodeUnit") module b;
>
> void main()
> { import b;
>   auto var = "50€".foo; //var iterated by code unit or by code point?
> }

should be

@semantic("implicitByCodeUnit") module b;

void main()
{ import a;
  auto var = "50€".foo; //"50€" iterated by code unit or by code point?
}
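For context, the two iteration modes give different answers for that string. A minimal sketch of the difference (`foo` is hypothetical, so this just iterates the string directly):

```d
import std.stdio;

void main()
{
    string s = "50€"; // '€' is 3 UTF-8 code units but 1 code point

    size_t units, points;
    foreach (char c; s)  ++units;  // element type char: by code unit
    foreach (dchar c; s) ++points; // element type dchar: decoded, by code point

    writeln(units);  // 5
    writeln(points); // 3
}
```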
June 11, 2021

On Friday, 11 June 2021 at 07:36:47 UTC, Sönke Ludwig wrote:

> This is something that should have been discussed already, but I can't remember whether that was actually the case, and it always bothers me every time there is friction with new DIP switches.
>
> [...]

I agree per-module UDAs would be nice.

We'd have to be careful about templates though: Currently the emission strategy is that if a template can be found in a root module, it will not be codegened (which is good because codegen is slow). With per-module UDAs we'd probably have to emit defensively when different attributes are used, and have to mangle the attributes into the template (to avoid them being folded despite being different).
This could become quite impactful if e.g. the standard library is shipped with a different default (try to use -allinst and you'll see a massive slowdown).
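One way to picture the mangling concern: if the per-module semantic were encoded as an extra template parameter, instantiations from differently-attributed modules would get distinct mangled symbols and could no longer be folded together. A hypothetical sketch (`Semantic` and `count` are invented for illustration, not a proposed API):

```d
enum Semantic { byCodeUnit, byCodePoint }

// Mangling the semantic into the template: the two instantiations below
// produce different symbols, so they cannot be folded into one.
size_t count(Semantic s)(string str)
{
    size_t n;
    static if (s == Semantic.byCodeUnit)
        foreach (char c; str)  ++n; // UTF-8 code units
    else
        foreach (dchar c; str) ++n; // decoded code points
    return n;
}

void main()
{
    import std.stdio;
    writeln(count!(Semantic.byCodeUnit)("50€"));  // 5
    writeln(count!(Semantic.byCodePoint)("50€")); // 3
}
```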

Another thing is that -preview switches are, for the most part, not finished, nor made compatible with libraries. I think every -preview switch should come with an "Enable by default" draft PR to see how much breaks on Buildkite, and those failures should mostly be fixed, so that users have a much better experience.

June 11, 2021
On Friday, 11 June 2021 at 11:27:03 UTC, Ola Fosheim Grøstad wrote:
> I would find it much more reassuring if a comprehensive solution was developed as a completely separate compiler branch. Basically have a stable branch (as is), and then a future branch that is considered unstable until all the corner cases have been ironed out.
> This also allows more heavy restructuring of compiler internals, like introducing an appropriate IR (which is needed for things like borrowing or ARC, if you want something solid).

This would be short-sighted. It'd mean that the experimental feature developers would then have to backport all the compiler improvements that were made while the feature was experimental. It's easier to account for the experimental features when doing the restructuring.

Second, if using an experimental feature requires compiling a separate compiler branch, not as many will use it. Hence, less real-world testing, and even fewer issues ironed out.

> The cost of moving to a more complete solution after something incomplete has been made official could break the camel's back.
>
> The piece-by-piece approach is a slippery slope.

We already have a three-round DIP process to catch issues before we make a new feature official. We probably need to be more explicit about what is still experimental and what is official, though. I think it wouldn't hurt to document that status for each of the preview switches in DMD.
June 11, 2021
On Friday, 11 June 2021 at 11:27:03 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 11 June 2021 at 07:36:47 UTC, Sönke Ludwig wrote:
>> This is something that should have been discussed already, but I can't remember whether that was actually the case, and it always bothers me every time there is friction with new DIP switches.
>>
>> Right now, new language semantics are introduced using `-preview` and `-revert` switches, which unfortunately has a massive drawback:
>
> I am troubled in general by the implementation of incomplete solutions and making them gradually available.
>
> I would find it much more reassuring if a comprehensive solution was developed as a completely separate compiler branch. Basically have a stable branch (as is), and then a future branch that is considered unstable until all the corner cases have been ironed out. This also allows more heavy restructuring of compiler internals, like introducing an appropriate IR (which is needed for things like borrowing or ARC, if you want something solid).
>
> The cost of moving to a more complete solution after something incomplete has been made official could break the camel's back.
>
> The piece-by-piece approach is a slippery slope.

I think this is a good point.

Having an unstable compiler would allow removing things that don't work so well, while still having the advantage of a big user base trying them out (and complaining about bugs). Though this could also be seen as an inconvenience for users and authors, e.g. when libraries only work with the unstable branch or change behavior with the unstable compiler.
June 11, 2021
On Friday, 11 June 2021 at 11:27:03 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 11 June 2021 at 07:36:47 UTC, Sönke Ludwig wrote:
>> [...]
>
> I am troubled in general by the implementation of incomplete solutions and making them gradually available.
>
> I would find it much more reassuring if a comprehensive solution was developed as a completely separate compiler branch. Basically have a stable branch (as is), and then a future branch that is considered unstable until all the corner cases have been ironed out. This also allows more heavy restructuring of compiler internals, like introducing an appropriate IR (which is needed for things like borrowing or ARC, if you want something solid).
>
> The cost of moving to a more complete solution after something incomplete has been made official could break the camel's back.
>
> The piece-by-piece approach is a slippery slope.

Doesn't Rust do something like this? A problem I read concerning its ecosystem is the tendency to target nightly, and thus using a more stable branch of the compiler leaves one high and dry, so to speak.
June 11, 2021
On Friday, 11 June 2021 at 11:48:22 UTC, Dukc wrote:
> This would be short-sighted. It'd mean that the experimental feature developers would then have to backport all the compiler improvements that have been done while the feature was experimental. Its easier to account for the experimental features when doing the restructuring.

I would go in the opposite direction. Improvements in the unstable branch that have proven themselves stable and semantically low-impact (like bugfixes and security fixes) would be backported as a revision of the stable branch.

> Second, if using an experimental feature requires compiling a separate compiler branch, not as many will use it. Hence, less real world testing, and even less issues ironed out.

It should be compiled and packaged as an automated nightly delivery.

June 11, 2021

We have enough features in D.
But one has to use D carefully in production, since it is not stable enough.
We need a stable D version, and then we can gradually experiment,
rather than having only the latest version. Although development is fast, there are a lot of bugs. This scares away beginners.
I think after the D team finishes ImportC, we should first create a stable version.
This stable version would last for 2 years and mainly receive bug fixes.
And in the new, latest version, the D team can make BIG changes. Either way, we'd have a stable one available.
In my opinion, the big change is to divide large files into small files, so that the granularity of the source files is smaller and the dependency relationships are clearer. The problem of the GC should be solved along the way. Then fatal errors would be less likely.
This way, many people would like to join D development.

June 11, 2021

On Friday, 11 June 2021 at 13:10:23 UTC, zjh wrote:

> We have enough features in D.

Normal users adopting features while they are immature only increases the burden on those users.
A stable D build would then shield them from immature features.

June 11, 2021

On Friday, 11 June 2021 at 13:10:23 UTC, zjh wrote:

> In my opinion, the Big change is to divide large files into small files, so that the granularity of source files is smaller and the dependency relationship is clearer.

There is really no need to start with a big change. The best approach is to:

  1. First insulate independent parts. So, insulate the backend from the frontend by putting a layer between them (most likely a high-level IR).

  2. Then move independent things like inlining out of the frontend and onto the middle layer.

  3. Modularize.

  4. Iterate between redesigning interfaces / refactoring frontend internals.

The frontend really only needs to deal with type unification, templates, and CTFE. Maybe I've missed something, but roughly.

When you have reduced the complexity of the frontend, then you can start to modularize it. Otherwise you end up doing the job twice.
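The layering in steps 1-3 might look roughly like this. Every name here is invented for illustration; none corresponds to actual DMD internals:

```d
// Hypothetical layering sketch; no name here is a real DMD internal.

// Step 1: a high-level IR is the only interface between front and back end.
struct HIR { string op; HIR[] kids; }

// Step 2: target-independent passes such as inlining live in the middle
// layer and are expressed purely in terms of HIR.
HIR inlinePass(HIR root)
{
    return root; // placeholder: a real pass would rewrite call nodes
}

// Step 3: the backend consumes HIR alone; no frontend AST types appear
// in its signatures, so it can be modularized independently.
void codegen(const HIR root) { }

void main()
{
    auto ir = HIR("call", [HIR("const")]);
    codegen(inlinePass(ir));
}
```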

June 11, 2021
On Friday, 11 June 2021 at 11:57:03 UTC, surlymoor wrote:
> Doesn't Rust do something like this? A problem I read concerning its ecosystem is the tendency to target nightly, and thus using a more stable branch of the compiler leaves one high and dry, so to speak.

I don't know. But I think some languages (Ada?) have different feature profiles. D also has a profile called "BetterC".

So, one could define a "maximal compatible" feature profile for D and use a linter to verify that a library stays within that profile.

Say, if you wanted to write a geometry library, then you might want to target BetterC as the minimum profile, but use conditional version statements to enable more features for other profiles.
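A sketch of that idea using the real predefined version identifier `D_BetterC` (set when compiling with `-betterC`); the library code itself is invented:

```d
// A geometry library targeting BetterC as its minimum profile.
struct Point { double x, y; }

// Allocation-free core: available in every profile.
double dot(Point a, Point b) { return a.x * b.x + a.y * b.y; }

version (D_BetterC)
{
    // No druntime/GC here, so only the core API is compiled.
}
else
{
    // Full profile: GC-backed conveniences may be enabled.
    Point[] midpoints(const Point[] pts)
    {
        Point[] result;
        foreach (i; 1 .. pts.length)
            result ~= Point((pts[i - 1].x + pts[i].x) / 2,
                            (pts[i - 1].y + pts[i].y) / 2);
        return result;
    }
}
```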

D as a language has the mechanisms, I think, but they aren't used effectively, as that requires someone to work out a "standard way" of writing libraries.