DLF September 2023 Monthly Meeting Summary
November 12

Well. For the first time in all my years of using these forums, I've managed to post something that exceeds the byte limit. You'll find the September 2023 Monthly Meeting Summary at the following link:

https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581

November 12

On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:

> Well. For the first time in all my years of using these forums, I've managed to post something that exceeds the byte limit. You'll find the September 2023 Monthly Meeting Summary at the following link:
>
> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581

Two notes:

  • Making the build bit of build.d simpler would be nice, although reading the source code reveals that it doesn't just build the compiler.

  • I think Martin is wrong about not upstreaming stuff from LDC into DMD. Nothing good will come of them diverging (which they already have).

The UX of having multiple compilers is already terrible — conservatively let's estimate that "oh you need to download LDC if you want a compiler with an optimizer that works to a modern standard" halves the number of people bothering to try D.

Every little detail that you have to think about other than actually writing code hurts D both in terms of adoption and massive fragmentation of the projects people have to contribute to.

Standardising proper hooks such that LDC doesn't have to use a fork of the frontend (it's not really dmd-as-a-library in any sense that would be tolerated for a smaller project) is also important dogfooding for the frontend as a codebase.

November 12

On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:

> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581
>
> There was a side discussion about how the extern(C++) interface affects dmd-as-a-library.

Personally, my number-one complaint with dmd-as-a-library is that I am forced to use extern(C++) when creating my own Visitor classes.

In dmdtags, this requirement has made it necessary for me to implement my own C++-compatible Span and Appender types, just to avoid the C++ mangling errors caused by D's built-in T[] slices.

I have no use for overriding AST nodes, but the ability to use extern(D) visitors with dmd-as-a-library would be a welcome improvement.
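To illustrate (a simplified sketch, not the actual dmdtags code): since T[] has no C++ mangling, an extern(C++) method signature needs a pointer-plus-length struct in its place, something like this:

extern (C++) struct Span(T)
{
    T* ptr;
    size_t length;

    // Recover a D slice once back in extern(D) code.
    inout(T)[] opSlice() inout { return ptr[0 .. length]; }
}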

November 13
Part of the problem with shared is that it is the complete inverse of what it should be.

It fundamentally does not annotate memory with anything extra that is useful.

At the CPU level there is no guarantee that memory is mapped to only one thread, nor is there any such guarantee at the language level.

Therefore all memory is shared between threads.

As long as people keep this inverted mindset, annotating memory as shared instead of proving it thread-owned, shared is going to be a problem for us.

Remove shared, add an atomic storage class. Figure out thread-owned/shared memory some other day.
November 13

On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:

> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581

I can't access it, please post it here.

November 13

On Monday, 13 November 2023 at 00:55:37 UTC, zjh wrote:

> On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:
>
>> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581
>
> I can't access it, please post it here.

https://paste.myst.rs/u074ali8

November 13

On Monday, 13 November 2023 at 00:55:37 UTC, zjh wrote:

> On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:
>
>> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581
>
> I can't access it, please post it here.

I can't. It's too big. That's why I posted it there.

November 13
On Monday, 13 November 2023 at 03:07:07 UTC, Mike Parker wrote:
> On Monday, 13 November 2023 at 00:55:37 UTC, zjh wrote:
>> On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:
>>
>> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581
>>
>>
>> I can't access it,please post it here.
>
> I can't. It's too big. That's why I posted it there.

Well, maybe splitting it in 2 parts? Let's try:

Part 1:

DLF September 2023 Monthly Meeting Summary

The D Language Foundation's monthly meeting for September 2023 took place on Friday the 15th at 15:00 UTC. After we'd had one of our shortest meetings ever the previous month, this one was a bit of a long one, lasting a little over two hours.

Note that this was Timon Gehr's first time joining us for a monthly. I'd spoken with him at DConf, and he'd expressed interest in joining both our monthlies and our planning sessions when he has the time for them. I'll be inviting him as a permanent member as long as he's willing to be one.
The Attendees

The following people attended the meeting:

    Walter Bright
    Timon Gehr
    Martin Kinkelin
    Dennis Korpel
    Mathias Lang
    Átila Neves
    Razvan Nitu
    Mike Parker
    Adam D. Ruppe
    Robert Schadek
    Steven Schveighoffer

Robert

Robert got us started by letting us know he had done some preliminary JSON 5 work at DConf. He also gave an update on his script for the Bugzilla to GitHub migration. He had changed it to use a "hidden" API that someone from GitHub revealed to him when he reached out for assistance. Though there are still rate limits to deal with, his script was now much faster. The previous script would have taken days to migrate the dmd issues, but for a test run, he was able to do it in one sitting at DConf. He was ready to show me how to use it so I could test it and provide him with feedback.

Other than that, he'd done a few small things on DScanner and was waiting on Jan Jurzitza (Webfreak) to merge them. He noted that Walter had asked him to write an article for the blog related to his DConf talk. Robert had an idea for an article related to the DScanner updates to supplement the talk.

(UPDATE: During a subsequent planning session, Robert reminded me that the only reason I had volunteered to do the migration was that he didn't have admin access to our repositories. That was easily rectified. He will now be doing the migration. At our most recent planning session, we talked about a migration plan. Before taking any steps, he's going to chat with Vladimir Panteleev. Vladimir raised several concerns with me a while back about the Dlang bot and other things the migration might affect. Robert wants to get up to speed with all of that before moving forward.)
Me

I told everyone I'd just gotten home from my DConf/vacation trip two days before the meeting and had spent a chunk of that time decompressing. I did manage to get a little bit of post-DConf admin out of the way the night before by going through all the receipts from everyone eligible for reimbursement to let them know how much was due to them. I went into some details on how I was going to make those payments. The big news there was that we managed to get enough in revenue from registrations and sponsorships that we came in under budget, i.e., the amount Symmetry needed to send us to make up the difference was less than the total they'd allocated for reimbursements. (Thanks to Ahrefs, Ucora, Decard, Funkwerk, and Weka for helping out with that!)

I then reported that I'd started editing Saeed's video. The venue had provided me access to all of their footage this year. Last year, they only gave me footage from one camera and wanted us to pay extra for more. This year, I have footage from three cameras ('main', 'wide', and 'left') as well as the video feed of the slides.

Next, I noted that John Colvin and I had participated in an after-action meeting with Sarah and Eden from Brightspace, our event planners (if you were at DConf, Eden was the young woman sitting out front all four days, and Sarah was with her on the first day). We all agreed that, aside from the unfortunate laptop theft and COVID outbreak, the mechanics of the conference went well this year. We went through some feedback we'd received to discuss how to improve things next year (more info on badges, an actual registration form to get that info, etc.), and tossed around some ideas on how to prevent future thefts and mitigate the risk of COVID outbreaks. One thing we agreed on there is to have an extra person at the door whose main job is to check badges. There will surely be other steps to take once we consult with the venue. They're evaluating what measures they can take to avoid a repeat at any event they host.

I also let everyone know what Sarah said about our community. Due to disruptions at Heathrow at the start of the conference, several attendees found themselves with canceled flights. A number of them had an extraordinarily difficult time arranging transportation. Sarah told us that in her years of event planning, she'd never seen so many people go to the lengths that this group went through to attend an event. She found that amazing and said we have a special community.

John and I agreed that planning for DConf '23 got moving too late and that we needed to start planning earlier for DConf '24. We'd like to push it back into September, since that would be past peak travel season, meaning cheaper airfare and lodging. Unfortunately, that's also peak conference season. CodeNode had offered us a nice rate for the off-peak period the past two years, but the cost during the peak period was prohibitive. We're going to see if we can get a later date at a reasonable price, and also look into moving back again to May.

Dennis wondered if it would be worth dropping the Hackathon day to reduce the cost, or maybe figure out a way to better utilize it, or perhaps find a more casual space for it. I replied that it was already significantly cheaper than the other three days. We budget for fewer people, which reduces both the venue and catering costs, and we don't use the A/V system. I think we should find a way to better utilize it (some people do get work done that day, and there's a good bit of discussion going on, too). This year, several people attended Saeed's IVY workshop sessions. I said that Nicholas Wilson had an idea for a workshop for next year. Brian Callahan had also suggested workshop ideas.

(UPDATE: Plans are already afoot for DConf '24. I've got a meeting with two Symmetry people this week to discuss the budget, and all three of us will be meeting with Brightspace soon to talk about planning. I hope to announce the next edition of DConf Online soon. Last year, having it in December so soon after DConf was a royal pain. So I'm pushing it into 2024 this time. I just need to get a solid idea about what time of year we're doing DConf before I can schedule it. I want to keep six months between them.)
Adam

Unreachable code

Adam started by bringing up the "statement unreachable" warning. He said it's really annoying and we should just eliminate it. One false positive is enough to destroy the value it delivers, and it doesn't actually reduce bugs. He had a PR that versioned it out.

Walter said he had initially been against adding that warning, but it had highlighted some unintentionally unreachable lines in his code. He said we could debate how much value it has, but it's not valueless.

Timon said he was in favor of getting rid of it because he found it mostly annoying.

Steve agreed that the feature has very little use, especially in templated code where something might be unreachable in one instantiation, but reachable in another. You then have to work your template around that and it causes all kinds of grief. It's valuable if you're using it to find those kinds of problems, like in a linting context. But most people have warnings turned on as errors, so when they don't want to see this one message, they either have to turn off warnings-as-errors or add a workaround in code. Dub has warnings as errors by default. He said he'd rather see the compiler just remove unreachable code rather than warn about it.
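A minimal sketch of the template situation Steve describes (illustrative names; compile with -wi to see the warning, or -w to make it an error):

import std.stdio : writeln;

void process(T)(T value)
{
    static if (is(T == int))
        return; // nothing to do for ints in this instantiation

    writeln("processing ", value); // "statement is not reachable" when T == int
}

void main()
{
    process(42);      // instantiation where the writeln is unreachable
    process("hello"); // instantiation where it is reachable
}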

Mathias didn't think it was completely useless. It had helped him a couple of times. But, as Steve said, it's a pain with templates. If you have a static foreach or a static if, you're going to hit it pretty quickly. He often ends up writing conditions that are always true just to shut the compiler up. It's not realistic to avoid warnings. He equates a false positive in the compiler to a broken CI: when people get used to a broken CI, they ignore it completely and no longer rely on it. You want your CI to always be reliable. You want the same for your compiler. If it's giving false positives, get rid of that feature.

I asked if this was the time to start talking about enabling/disabling specific warnings on the command line. Walter said that's so tempting and it's so awful. There was a general chorus of "No".

Martin said he agreed with all the points raised so far. This is a feature that should be in a configurable linter for those who want it. It had helped him a couple of times, but it had gotten in the way on many, many more occasions than where it was useful.

Walter asked if it would be possible or make sense to disable the warning in template expansions. Adam said yes, but gave the example of bisecting a function. In that case, you'll stick an assert(0) somewhere as a debugging aid and now it won't compile anymore. So what's the point? Walter agreed that's annoying, but that he usually comments out the rest of the code to silence the warning. Adam said he was just asking to look at the cost-benefit. Even if the benefit is non-zero, the cost is significant.

Walter considered it, then said it was a tough call. I told him that this sounds like it's not enabling the realization of ideas, but rather frustrating the realization of ideas (the first clause of the DLF's IVY statement is 'to enable the realization of ideas'). Walter agreed and said it's not a big deal if it's removed and consigned to the linter.

Dennis liked the decision, but he made the point that it's nice for this information to be utilized by editors. For example, code-d grays out unused parameters, so you can see it's unused, but it's not intrusive in any way. He suggested that we disable the check in dmd, but keep the code for it so that it can be used by clients of dmd-as-a-library. Walter said that's a good idea.

(UPDATE: Adam has since modified his PR so that the check is disabled but not removed, and the PR has been merged.)

DLF roles

Next, Adam asked who was responsible for decisions about Phobos, saying that it used to be Andrei. I replied that it was Átila.

Then he said he'd heard something about an ecosystem management team and asked about its status. I explained that was an idea I'd had a while back. I'd posted about it in the forums and mentioned it in my DConf '23 talk. I was trying to organize a group of volunteers who could bring order to the ecosystem (identify the most important projects, help us migrate services to new servers, etc.). Though things got off to a positive start, in the end, it never materialized. The volunteers all had time constraints that caused long delays, and I wasn't checking in with them frequently enough to coordinate them. Then we entered the IVY program and that led us to have a massive change of plans (more accurately, it caused us to start planning things where there were no plans before). So the "ecosystem management team" that I envisioned is no longer a thing. Instead, we have a high-level goal to "Enhance the Ecosystem". Mathias and Robert are in charge of that. Once they get moving at full speed, they'll be doing those things I had expected the management team to do.

Adam mentioned that Andrei used to reach out and actively solicit people to do specific tasks. That was why Adam wrote the octal thing; Andrei had put it out as a challenge to see if someone could do it. He said he kind of missed Andrei. Adam never cared for his code reviews, but Andrei did some interesting things. I noted that reaching out to people and finding new contributors is in my wheelhouse. I've reached out to a few existing contributors for discussions already (including Adam and Steve, which is how they ended up in our meetings) and will reach out to more in the future (note that Razvan, Dennis, and I have discussed different steps we can take to bring more active contributors in and implement some ideas a little further down the road; Dennis has already started a contributor tutorial series on our YouTube channel and plans to continue it).

(I should note for those who aren't aware that Razvan, Dennis, and I are each paid a small monthly stipend for our DLF work, mostly courtesy of Symmetry, for a fixed number of hours per week which we often exceed. Additionally, Symmetry allows Átila one work day each week to spend on DLF stuff. We're the only ones receiving any compensation. Any work Mathias and Robert do for the DLF is done primarily on a volunteer basis on their own time.)
Steve

Steve reported that he'd been trying to get his Raylib port to use ImportC. He'd found that ImportC had come a long way. He'd gotten to a point where it was almost working except for a few linker errors. He'd been working with Martin to solve them and expected them to be fixed in the next LDC release. It was possible that with the next version of LDC, he could have a Raylib port with internal parts that were ImportC enabled. He thought that was just exciting. If ImportC can get to the point that you can just import what you need from C, that would take a lot of headaches away from creating ports and bindings.
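A minimal sketch of the workflow he's describing (hypothetical file names):

// Given a plain C file, bindings.c, containing:
//     int add(int a, int b) { return a + b; }
// the D side can import it directly; build with: dmd app.d bindings.c

import bindings; // dmd compiles bindings.c itself via ImportC

void main()
{
    assert(add(1, 2) == 3); // calling the C function directly
}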

Next, he said he'd started his next D coding class. It reminded him that on Windows, more polish is needed so that someone who's never done software development can get everything up and running. Before, he'd set up a web page showing the required steps, but the kids still couldn't get it done. He'd like to see this sort of thing get focus and support from the community.

But everything else had been going well. He'd been really happy with the responsiveness of all the people working on D.
Timon

Timon said that at the DConf Hackathon, he had aimed to fix some bugs, but he'd given in to temptation and did something else: he'd implemented tuple unpacking in foreach loops. He then shared his screen to give us a demonstration. Then he showed us a problem he'd uncovered. The way the Phobos tuple is implemented causes multiple postblit and destructor calls on a struct instance in certain situations. With his tuple unpacking, it's even worse. Ideally, what we want is to replace all of these calls with a move. This has been a bit annoying, but he doesn't know what to do about it. It's a limitation of the language.
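A minimal sketch of the kind of extra copying he means (illustrative; the exact counts vary by compiler and Phobos version):

import std.typecons : tuple;

struct S
{
    static int postblits;
    this(this) { postblits++; } // counts every copy made
}

void main()
{
    auto pairs = [tuple(S(), 1), tuple(S(), 2)];
    foreach (pair; pairs)   // each iteration copies the element...
    {
        auto s = pair[0];   // ...and unpacking it copies again
    }
    assert(S.postblits > 0); // ideally these would all be moves
}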

Walter thanked him for working on tuples. He's unable to do anything with tuples at the moment because he's committed to stabilizing the language first, but what Timon was doing with tuples was important. Timon said he was just keeping it rebased on master. The examples he showed us were running on master.

Dennis said that move semantics and copy constructors always confuse him. He found it difficult to follow the DIPs that have tried to address this stuff. Átila said this had kind of fallen by the wayside, and it needs to be finished, as there are definitely copies happening right now that shouldn't be.

I suggested that we treat Walter's 'Copying, Moving, and Forwarding' DIP as a stabilization feature and not an enhancement. I think we must get it in. Átila agreed because we shouldn't be having copies for no reason. We claim to have moves and then sometimes they don't work.

Timon agreed and brought up a related point. He didn't see that there was currently a legal way to deallocate an immutable value, where the runtime system is doing it for you. Dennis asked if he meant "no safe way" or "just no way at all". Timon repeated "no legal way". Like if you have a function that just deallocates an immutable value, the compiler can elide it because it returns void. Because it's pure. There needs to be a way to say that every value needs to be consumed somehow. By default, this is with the destructor, but it should be possible to have a function that says, "I'm consuming this value now" and then it's gone after the function call. We have core.lifetime.move, but it doesn't actually remove the thing from consideration. It just default initializes it and then you get this pure destructor call again.
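A small sketch of the limitation:

import core.lifetime : move;

struct Resource
{
    int handle = -1;
    ~this() { /* runs even for a moved-from, .init-valued instance */ }
}

void consume(Resource r)
{
    // takes ownership; this copy is destroyed at the end of the call
}

void main()
{
    auto r = Resource(42);
    consume(move(r)); // move() blits r back to Resource.init...
} // ...but r's destructor still runs here, on the .init value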

Martin said that Timon's demo was cool stuff and suggested that his lifetime issues were coming from the glue layer, which emplaces the result of function calls in final local variables, directly if possible. It needs to be handled in the glue layer because of some special cases like struct constructors. That's pretty ugly. If we kept the interface as it is right now, tuple support would need to be reimplemented in every glue layer, not just dmd's, to make things work nicely.

Martin continued: Regarding the runtime stuff, as Timon mentioned, there are language limitations. The basic building blocks are there, but Timon's destructor example is one of the limitations. That's why Martin has been pressing for move and forward to be built-ins instead of the current library solutions. We could also get rid of some template bloat if they were built in. And sure, we don't want those extra postblit and destructor calls, but even moves can be quite expensive if the payload is big, e.g., a 4KB move isn't free. Ideally, we'd have direct emplacement, but we'd probably need to change the AST to have it.

Timon thanked Martin for the insight. He thinks this is a fundamental thing in the language that should have some sort of resolution. Then Martin went into some details about how it's hacked into the language and how move and emplace work right now. He also said he'd found multiple implementations in DRuntime and tried to streamline everything to use core.lifetime.move, but ideally move and emplace should be built-ins, not library solutions.

Walter said he'd written the forwarding and moving DIP so long ago that he couldn't remember the details, so he'd need to reacquaint himself with it. He noted that move semantics became of interest in the @live implementation, so any proposal to build move semantics into the language would probably affect @live for the better. The ownership/borrowing system is based on moving rather than copying. He didn't have anything insightful to say about it at the moment. He'd handed the DIP off to Max a while back (when we decided that Walter and Átila shouldn't judge their own DIPs anymore, which in hindsight was a mistake) and it never got to the finish line.

I told Walter that I had some feedback on that DIP from Weka. Once he's ready to look at it again, I'll forward it to him and reach out to Weka for any clarification. I suggested we come back to this two months or so later on, given that he had so many other things to work on. Walter said he had so many priorities at the moment. Átila had been pushing him to get moving on changing the semantics of shared, which he'd agreed to do. That was holding up a lot of people, too. And then there were the ImportC problems. It's an unending river of stuff. So he agreed we should push the move stuff a little further down the road for the time being. I said I'd make a note and bring it up again in a couple of months.
Mathias

Mathias brought up -preview=in. At a past meeting, Mathias had agreed to change the behavior for dmd such that it always passes by ref (Walter felt that having it sometimes pass by value and sometimes by ref was an inconsistency and wanted consistent, predictable behavior; Martin disagreed, seeing it as an optimization, and said he would keep the current behavior in LDC). Mathias hadn't gotten around to changing it, so a few weeks before the meeting, Walter had emailed him about it. He'd then started implementing it, and that had created a bug in dmd. He had been trying to track it down but hadn't had much time for it. It was one of his priorities at the moment.
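For context, a minimal sketch of the preview's semantics (illustrative):

// Compile with -preview=in: `in` means `const scope`. The point of
// contention: dmd (per Walter) will always pass `in` parameters by
// ref; LDC (per Martin) keeps passing small ones by value as an
// optimization.
struct Big { int[1024] data; } // 4KB payload

void process(in Big b) @safe
{
    // b is const and scope here: it can't be mutated or escaped.
    // With by-ref semantics, no 4KB copy is made at the call site.
}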

He then reported that he'd had a productive discussion with the Ahrefs people at DConf. They're very interested in D's C++ integration. They had a problem in that they had some classes they wanted to pass by reference, but at the moment when you have a function that takes a class argument, it manages it as a pointer. They had talked about ways to pass it by ref instead. Mathias wanted to implement a proof of concept and had already discussed it with Walter.
Martin

Martin said he'd been working on the D 2.105 bump for LDC. It had been going quite smoothly so far. He'd finally been able to test the Symmetry codebase against the latest compiler releases. Some of the 2.103 and 2.104 regressions had already been fixed by then. Some hadn't, but they had been fixed since then. He thanked Razvan for that. Everything was looking good. He was hoping to be able to test Symmetry's codebase early against the 2.106.0 release.

He closed by telling Steven that fixes for the ImportC issues he'd reported should be part of the LDC 1.35 release.
Dennis

Dennis reported that Cirrus CI no longer had unlimited free credits. On that day, he'd seen a PR that had sent us over the monthly limit, and it was only the middle of the month. He suggested we need to upgrade or reduce the number of CI jobs.

Martin said that LDC had the same problem, but was happy to see we'd made it that far. He said that in August, the LDC project had used 180 credits, so 50 credits just wasn't much. He wasn't sure what they were going to do either. It was the only native 64-bit ARM platform they could use and test.

We had a bit of discussion about how much it might cost us in total per month if we were to upgrade our plan. Martin suggested we could shuffle things around a bit. He'd been looking into CircleCI again, and they had a better offer for open-source projects. Of course, that could change at any time, but he'd done some calculations and they had a much higher service limit. He wasn't sure about macOS and whether Circle would be feasible for testing dmd on M1. But as far as he was aware, Cirrus was the only one that had support for BSD. Another option was hosting completely custom runners.

Steve said it would be good to try to work out a budget. Use what we can for free, pay for what we need. Maybe try to find volunteers to host like we used to do with the auto tester. Because one day there might not be any free options left.

Mathias advised that we tie ourselves more closely to GitHub. They have a good base offering that he didn't think would go away, but even if it did, GitHub makes it pretty easy to register any platform and any runner. A few bare-metal machines here and there could do the job. It would essentially be the same as the auto tester we had before, but on GitHub and much more open, so we could make it easier for people to register a machine if they want to test. He noted that we have two servers running the BuildKite tests, one managed by Ian and one by Mathias. They handle the work just fine.

Dennis said we don't need to run all the checks on every commit on every PR. That's wasteful. We could first do a quick test to see if the compiler builds at all and then split it off to all the platforms and BuildKite to reduce our CPU load. GitHub has pipelines, so it's possible to have different stages of CI.

Walter said that's a good idea. Martin noted one thing to keep in mind was that the latency time for the first feedback is nice if the first test works, but if it just happens to fail on ARM or something, then the latency is twice as high as it was before. That might not be a problem for dmd, but would be painful with LDC where some jobs take up to an hour and a half. Dennis said that in practice it either fails immediately or it compiles but doesn't compile the standard library, or it runs the test suite but not BuildKite. There are three stages at which either all platforms fail or all succeed.

Mathias said he would look into it.
Razvan

Template issues

Razvan started with the regressions Martin had mentioned he'd fixed. They were caused by a PR from Walter to stop the compiler from emitting lowerings that instantiate the runtime hook templates when the context is speculative. He said normally, when you find, e.g., a new-array expression, you immediately lower it to the newarray template, then instantiate the template, analyze it, and store it in some field. But if the context is CTFE, then in most cases you don't need to do that; you're going to interpret the expression anyway. But there are some situations where you still need to do the lowering even in a CTFE context. So if the lowering isn't done, you end up with instantiation errors in those situations.
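For illustration, roughly what that lowering looks like (the hook name is the druntime template hook; details simplified):

void example()
{
    int[] a = new int[](10);
    // The frontend lowers this to roughly:
    //     int[] a = _d_newarrayT!int(10);
    // In a speculative (CTFE-only) context the interpreter evaluates
    // the `new` directly, so the hook instantiation was skipped; the
    // regressions hit cases where the lowering was still needed.
}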

He said the solution here would be to always instantiate the template and save it as a lowering, but then you don't always have to emit the code for it. That approach also comes with problems. What the frontend does now is when you have a template instantiation in a speculative context, it marks the template instance as not needing codegen. But then if the instance instantiates other templates, those are no longer marked as being speculative. And if the root template is not generated but the children instances are generated, this also leads to problems. As an example, he cited one of the regressions: given a __ctfe block that instantiated map, passing it a lambda as an alias parameter, then the code did not get lowered to the hook because it was in a CTFE context. But then that lambda was passed to other templates that map instantiated, and when it was analyzed in those contexts, it caused ICEs.

His TL;DR: we need to fix this by seeing what the root instantiation is and propagating the fact that any child instantiations it triggers don't need codegen.

Martin said that everything Razvan described is supposed to happen and works in most circumstances. It's all very complex stuff. He then went into quite some detail about what happens with template instantiations triggered from a speculative instantiation. There are some issues with the current implementation that we have some workarounds for, but he felt that the regressions weren't related to that. He said that in some cases, we rely on the new template lowerings to generate errors by themselves to catch some error cases and give nice reasons for those failures. Shortcutting those in CTFE by saying, okay, we're going to skip the lowering because we assume it always works, is not going to cut it. That's the main problem. We need to remove those checks when there's a possibility that the lowered template instantiations can fail. That's what was fixed, and it's working nicely now.

Martin said maybe a more general solution was to abandon the lowering in the frontend. Walter suggested that maybe the lowering should be done the old way, in the glue code. Razvan said that doing it that way is like giving up on the issues that are surfacing right now because we're using templates in the runtime. Other template issues have come up that people hadn't been hitting so often, but now that we're using templates in the runtime they're more obvious.

He said that another issue is with attributes. Those lowerings are done from all sorts of attributed contexts. Currently, inference doesn't work if you have cyclic dependencies on template instantiations. The compiler just assumes they're @system, not pure, not @nogc, and so on. He doesn't think the template lowerings in the frontend are the problem here, but that there are some template emission bugs, or instantiation bugs, that we need to fix anyway.

Martin agreed.
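A minimal sketch of the kind of cycle in question (whether inference actually gives up here varies by compiler version):

// Neither function is annotated; inferring a!int's attributes needs
// b!int's and vice versa. In such cycles the compiler can give up and
// conservatively treat them as @system, impure, throwing, and GC-using.
void a(T)(T x) { if (x > 0) b(x - 1); }
void b(T)(T x) { if (x > 0) a(x - 1); }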

Regarding the attribute inference problem, Timon said maybe it's a matter of computing the largest fixed point and not the smallest fixed point. The compiler could infer an attribute whenever there is no reason that it cannot be there, but currently, it infers them if it thinks there is a reason for them to be there. That seems like the wrong way around. But if we don't do any introspection, it's a bit tricky to get it right.

Martin said that in the past, Symmetry's codebase had been hit by an issue where attribute inference was stopped. The compiler skips it at some points. He cited an example when an aggregate hasn't yet had semantic analysis in a particular instantiation where we need one of its member functions. Inference is just skipped completely. It's a tricky problem.

Razvan said he had just wanted to point out that he's looking into this problem when instances that don't need codegen are getting codegen. He hoped to find out what the issue was.
November 13
On Monday, 13 November 2023 at 04:46:07 UTC, matheus wrote:
> ...

Part 2:

AST nodes in dmd-as-a-library

Since DConf, Razvan had been considering how dmd-as-a-library could offer the possibility to override any kind of AST.

He gave us this example of the expression class hierarchy used by the AST implementation:

module expression;
import std.stdio;

class Expression {}

class BinExp : Expression
{
    void fun()
    {
        writeln("expression.fun");
        BinExp p = new BinExp();
        Expression e = new Expression();
    }
}

class BinAssignExp : BinExp
{}

class AddAssignExp : BinAssignExp
{
    override void fun()
    {
        writeln("AddAssignExp");
    }
}

class MulAssignExp : BinAssignExp
{}

Then in ast_family.d, the default ASTCodegen looks like this:

struct ASTCodegen
{
    public import expression;
}

To create a custom AST and override the default behavior of e.g., BinExp, you need to do something like this:

struct MyAST
{
    import expression;
    import std.stdio;

    alias Expression = expression.Expression;
    class MyBinExp : expression.BinExp
    {
        override void fun()
        {
            writeln("MyBinExp.fun");
        }
    }

    alias BinExp = MyBinExp;
    alias BinAssignExp = ?
}

The problem with this is that you now have to declare all of the AST nodes that inherit from BinExp so that they use your custom implementation. This is not a workable solution. We need the ability to specify not only that we're overriding a particular node, but that other nodes need to use it.

First, he thought about templating the AST nodes and inheriting from the templated version, but that means heavily modifying the compiler. Then he came up with a solution using mixins. You just mixin the code of the AST node that you want.

With this approach, the AST nodes are now mixin templates:

module expression_mixins;
import std.stdio;

mixin template Expression_code()
{
    class Expression {}
}

mixin template BinExp_code()
{
    class BinExp : Expression
    {
        void fun()
        {
            writeln("expression.fun");
            BinExp p = new BinExp();
            Expression e = new Expression();
        }
    }
}

mixin template BinAssignExp_code()
{
    class BinAssignExp : BinExp
    {}
}

mixin template AddAssignExp_code()
{
    class AddAssignExp : BinAssignExp
    {
        override void fun()
        {
            writeln("AddAssignExp");
        }
    }
}

mixin template MulAssignExp_code()
{
    class MulAssignExp : BinAssignExp
    {}
}

And then the expression module becomes:

module expression;
import expression_mixins;
import std.stdio;

mixin Expression_code!();
mixin BinExp_code!();
mixin BinAssignExp_code!();
mixin AddAssignExp_code!();
mixin MulAssignExp_code!();

In ast_family, ASTCodegen remains the same, but now you can do this for your custom AST:

struct MyAst
{
    import expression_mixins;
    import std.stdio;

    mixin Expression_code!();

    mixin BinExp_code!() t;
    class MyBinExp : t.BinExp
    {
        override void fun()
        {
            writeln("MyBinExp.fun");
        }
    }

    alias BinExp = MyBinExp;
    mixin BinAssignExp_code!();
    mixin AddAssignExp_code!();
    mixin MulAssignExp_code!();
}

We could have something in the frontend library to generate the boilerplate automatically. But the main thing is that now you can replace any default node in the hierarchy with your custom implementation without needing to redeclare everything. In this example, everything that inherits from BinExp is now going to inherit from MyBinExp instead. This works. He showed a runnable example.

He doesn't think this is that ugly. And for what it gives us, basically a pluggable AST, any perceived ugliness is worth it. He said it would be great if we could reach a consensus on how to go forward.

Átila said he liked it.

Razvan noted that a problem is that the semantic routine visitors aren't going to work anymore. But the cool thing is you can also put those in mixins. You mix those in with your custom AST, you inherit from and override whatever visiting nodes you want, and you get all of the functionality you need.

Timon said his main concern with this kind of scheme is that he has tried them in the past, and usually dmd dies when it tries to build itself. He thinks the current version of dmd will choke on this at some point. It always appears to work at the start, but if you scale it up enough, random stuff starts to break, like an "undefined identifier string", a general forward reference error, or an ICE, etc.

Razvan said he had encountered the undefined identifier thing, but you can work around that by inserting an alias in the problem spot. Regardless, he argued that any such error is a compiler bug that needs to be fixed.

Timon agreed, but his question was how do you navigate that when the compiler can't build itself because of a compiler bug? Razvan said he'd fix the bug.

Dennis noted that the bootstrap compiler wouldn't have the fix. Martin said you'd have to raise the bootstrap compiler version. He then said one thing that bothers him is that it works okay as long as you have one single AST. However, tools that require more than one AST would all need completely different class hierarchies. Even if you're interested in just the little BinExp, you might affect tens of those classes but not the hundreds of classes overall. You might want to share an overridden class among the ASTs.

Razvan said he thought it could be done. You'd declare the class outside of the AST family and then just use an alias inside your family.

Steve said his problem with this was that when you look at the implementation in the compiler and see that AddAssignExpression inherits from BinExpression, there's a possibility that it's actually inheriting from something completely different from the class right above it. It's going to be hard to keep track of that, especially if we have an alternate AST, e.g., for dmd-as-a-library. Keeping track of where things go and what things are inheriting from is going to be confusing.

Razvan said if you're working on the compiler, there shouldn't be any confusion. What you see in the mixins is the implementation. Nothing's going to be overriding it. For users of the compiler library, it might be a bit confusing, but still, you're going to have to select the AST nodes you want to override. He expects the interface is going to be much simpler than all of this boilerplate, all the multiple mixins. It can be automated. Then you just have to specify which classes you want to override. Yeah, it may be a bit more confusing, but the way dmd is organized now, he doesn't see how you can just preserve the current state of the code and move forward.

Mathias thought it was a pretty terrible API for a very simple problem, but it's the only solution he's seen so far that does the job. He's used something like this himself in the past. So far, it seems to be the only solution we have.

Walter said there's another way. Instead of putting the mixins outside the AST nodes, put them inside. Have the AST nodes mixin a user-defined template. That way, features can be added in without turning the whole thing inside out.

Razvan said he'd thought about that too, but then you'd still have to write out all the definitions. Walter said they could just be blanks in the main compiler. Razvan agreed, but users of the compiler library would still have to define all the AST nodes and then manually mixin the ones they're interested in. Relating to Steve's comment, this would be a much uglier approach. Because right now, looking at the expression implementation, the first instinct is that it's ugly. But it's very nicely encapsulated. You have the entire class there and you just plug it into your AST family.

He said that this has been a problem for so many years and we didn't have a solution for it. And this works. The good part is that you can implement it incrementally. You can take each AST node incrementally, put it in a mixin, and then just insert it in the AST code and see if it works. And if you have any problems because of compiler bugs, it's going to be super easy to track. Yes, it's going to require massive changes in the compiler, but right now, if you don't modify all of the AST nodes, it's impossible to override them.

Walter said he didn't understand why putting the mixins inside the classes rather than outside them would not also accomplish the same thing, but be minimally disruptive. Átila said you'd have to define all of the child classes. You still have to write them up by hand. Walter didn't think that was true. Razvan said when you define your AST, you'd still have to write class Expression and then mixin the contents. Walter said you redefine what you're mixing in. Steve said you can pass all the things you want to mixin to the class. And then it uses that to mixin the code.

Walter said in each semantic node you have, you list the inheritance and things like that, and then you mix in the features template. The features template would be different for the compiler as opposed to the user's library. Then if you want to modify an AST node, you just modify the mixin for that AST node. And that's all you'd have to do.

Razvan asked what would happen when he wanted to add another field or another function. Pass them as a parameter? Átila said that wouldn't work because you'd have to pass in as many mixins as you have AST nodes in the API for this, or you only pass one and it gets mixed into every AST node.

Walter suggested another method. We'd removed the semantic routines from the AST and now they're done separately. We could take that further and remove more of the overriding functions and all that. Have the AST tree look like the one in ASTBase, i.e., it's just the hierarchy and not much else. Then the user can modify that. It's just a thought that might be worth exploring. Just strip out all of the AST stuff that's specific to the compiler so it's more of a bare AST that the user can use without needing to modify it.

Razvan said we had this approach four years ago, but at some point, we just started pulling out stuff, and at that point, Mathias had said it wasn't obvious where this was leading us. Even if we did that, Razvan thinks it's orthogonal because you still won't have the possibility to add new fields to the AST unless you also use it in conjunction with a mixin solution.

Walter agreed it doesn't add new fields. Átila asked if you'd need to. Wouldn't it just be that you'd write your own visitor? Why would you need to modify the AST at that point?

Razvan: Because the AST is currently being used by the semantic routines. The parser expects a specific AST structure. So if you want to add some fields to just store some info for use during semantic analysis...

Timon said having mixed-in features per class works because you can do static if based on the type of this to see in which node you are, and you can inject your code exactly into the right node. But this is exactly what his experimental compiler frontend hobby project was doing and it completely broke with dmd 61.

Dennis said he wasn't sure yet what application of dmd-as-a-library really requires you to override classes. Instead of trying to statically enhance the classes, what if every AST node had either an identifier or just an extra void pointer for library things? A library could then dynamically cast and read that field without heavy template and mixin machinery.
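A sketch of the suggestion (field and helper names are hypothetical):

// Every node carries an opaque slot the compiler itself never touches.
class Expression
{
    void* libData; // a tool stores its own per-node payload here
}

// A tool attaches and reads its data without subclassing the node:
struct MyNodeInfo { bool visited; }

void tag(Expression e, MyNodeInfo* info) { e.libData = info; }
MyNodeInfo* info(Expression e) { return cast(MyNodeInfo*) e.libData; }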

Razvan cited an example from the unused import tool. There, when the compiler does name resolution, it has a search method in the scope of the symbol class. He'd just need to override that method and do something a little bit extra, like store some information or do something a bit different.

Here the conversation veered off a bit into the use of virtual/final functions in dmd, how things would need to change for dmd-as-a-library, and the possibility of breaking user code with API changes. Razvan noted Luís Ferreira was excited about the possibility of modifying AST nodes for a personal project of his that might end up being used by Weka. Right now, he has to jump through a lot of hoops because we don't provide a proper interface.

Razvan said he's willing to work on this and make some PRs to dmd to see if this can work or if he's just hitting roadblocks. He asked if Walter would approve of exploring it or thought it was just too ugly. He said of course Walter could explore the other solution, too. But it looks like it's either this one or that one. We don't have too many options.

Átila repeated he liked it. Walter asked how other compilers do it. That brought up some talk about clang and the Java compiler.

Martin said he wouldn't worry too much about breaking the compiler's API for now. We're at the complete starting point of being able to use it as a library. This is the first baby step. So we shouldn't worry about huge changes in the API. Once we have stable tools depending on it, that's a different story. LLVM has breaking API changes. Sometimes they even transition over three versions with different deprecation steps. He doesn't see a problem with D doing the same.

Walter was concerned this is a wrenching change to the compiler internals and isn't sure it's worth it. There are a lot of AST nodes in the compiler, and rewriting them all...

Razvan said that for him, this isn't any different than when we templated the parser or when we moved out semantic routines. Walter said templating the parser kind of failed. Razvan said that's because we didn't follow up with semantic. We templated the parser, but libdparse already existed, so we weren't offering anything new. There are a bunch of libraries out there doing bits of what dmd is already doing, but this is kind of a different scale.

Walter said he understands what it does, he's just concerned about it. He didn't understand what people wanted to do with dmd-as-a-library. He wasn't sure about what capabilities were needed. He was also concerned that anyone wanting to use dmd-as-a-library would have to learn too much about the compiler to use it effectively. He kind of likes the void* approach where a hook can be added without the compiler knowing anything about it. He was just reluctant to endorse something he didn't understand. And that's a large change in the compiler.

Timon asked why we didn't just template the semantic analysis on the AST family and then let the library go wild with having its own hookable AST. That shouldn't interfere with the standard compilation of the compiler. You wouldn't edit the existing AST nodes.

Walter said that if the compiler is properly modularized, it isn't necessary to template this stuff. You just import a different module for different behaviors if things are properly encapsulated. One thing we might look at is seeing if we can reduce the number of imports the AST nodes do. Try to make them more pluggable. So instead of using templates and all of that stuff, just import a different module with your own feature in it. There are lots of ways to do it.

Timon asked how you'd tell the existing module to please import my module instead of the existing one. That would still need templating. Walter said you'd do that by having your module in a different directory and using the -I switch to point the compiler to it. Then you've got a different implementation.

Razvan said that's not a library anymore. You'd have to replace the existing files with your files. That doesn't solve the issue.

Walter: So what you're saying is it should be a compiled dmd-as-a-library or a source code dmd-as-a-library.

Razvan said yes. If you're modifying the files, you might as well just be using a fork of the compiler. What's the point? You want to stay up to date with the latest compiler and you want to have all the features there and just plug in what you want to use.

Adam said you don't modify the files, you replace them entirely in the build system. But that is a pain. Razvan said he wanted to reuse as much as he could of the existing code, so a fair amount of copy-paste would be required for a specific file.

Walter said he understood what Razvan was saying, but if we better encapsulate the modules, that's much less of a problem to stay up to date with the rest of the compiler. For example, he'd finally gotten the lexer and parser to not import the rest of the compiler. Now somebody can plug in a different lexer and a different parser and then use the rest of the compiler unmodified. He didn't see any fundamental reason why it couldn't be done with AST nodes. Just import a different statement.d file if you want to change the statement nodes. If it's properly encapsulated, then it's easy to just replace the compiler's statement.d with the user's version. And they can tweak it as they see fit.

Timon said the fundamental problem to be solved is how to make semantic reusable on many different kinds of AST nodes. He wasn't convinced that replacing D files in the build system was a good interface for that. Maybe there was some way to marry the two approaches that wasn't too devastating for either of them.

Walter said there had been some discussion about replacing build.d with a single command to dmd that just simply reads all the files in the directory and compiles them all together. He thought that was an interesting idea and that it would make it much easier to build things out of the dmd source. He agreed that build.d was overly complex for what it does, which should be just compiling the files. Why is it this massive, complicated thing?

Átila said what people would ideally like to do is just declare in a dub.json/sdl file that dmd is a dependency and have it just work. Walter agreed.

Martin said this was exactly the same discussion as the one we'd had before about putting linter stuff in the frontend, and likewise for the decision to keep the unreachable-code check in the compiler just so tools can use it. Replacing the D files wasn't going to cut it. We were going to need something in between: a fork of the compiler as the base for dmd-as-a-library. There we could do the mixins, the void*, or whatever is needed so that it can be used. It would be a somewhat opened-up interface to the frontend with some slight modifications, e.g., extra state on all of the AST nodes, extra visitors, or whatever extra functionality we need that would cost us and that we don't want to see in the frontend itself.

For example, if we have to remove the final attribute from some of the performance-critical methods just to be able to use the frontend as it is for arbitrary tools, we could just do it in the fork. That would be a viable approach to have dmd-as-a-library as a separate project on GitHub. We can update with every major or minor version and tools could build on top of that. Then we don't have to add all the extra stuff in the compiler itself and make it slower or uglier in the process. That's similar to how LDC uses dmd as a frontend. They rewrite the history of the monorepo, exclude some stuff, move some files to other places so they can use them more conveniently, and have some little adaptations like extra fields in some places, replace functions, etc. It works. Keeping up with the latest dmd changes isn't too bad. If that was dealt with by the community team, similar to how we maintain dub, we should be good.

Walter said he was glad Martin had brought that up. He suddenly realized Martin and Iain were already using dmd as a library, so their experience would be extremely informative on how to better do this. And maybe by better supporting them, we'd be implicitly better supporting dmd as a library. He didn't know how they were integrating it into LDC and GDC. He said he was sorry he hadn't thought of that before.

Martin said LDC and GDC were obviously modifying some parts of the frontend, at least LDC was, but only in very few places. For the rest of the interface, they were interoperating with C++. That's a special case for them and not comparable to other D tools that would use dmd-as-a-library directly. He'd said before that it was all working quite nicely; you couldn't do much better to simplify his life in this regard. Some people had said they'd like to see LDC's modifications upstreamed, but he didn't share that view. He wasn't proud of having all of those special-case additions LDC makes to the frontend. They were probably added 10 years ago by Kai or someone. They're ugly, but they're working. He wouldn't want to see those upstreamed into dmd, nor all the LDC intrinsics upstreamed into DRuntime.

Mathias said those aren't special cases, they're use cases for dmd-as-a-library. Martin said yes, but he didn't see the connection. Mathias said they're good examples of the kinds of features people would need for dmd-as-a-library. Currently, they're hacks because you put them in an LDC version block and modify the code. But by upstreaming them, you'd be turning the hacks into proper configuration points, which is the point.

Martin agreed. His point was he didn't want to uglify the code and reduce the readability of the runtime and the compiler's source. The same thing applies to the linters or these planned extension points. So we could keep the frontend as it is and have that intermediate dmd-as-a-library project with some modifications to simplify some of the hooks, add extra state, and maybe make the whole AST replacement if we need to replace that functionality.

He continued by reiterating that Walter's suggestion of replacing D files is obviously not what you want. You want to reuse most of the functionality. Like the example Razvan brought up with the unused imports. You just need to replace one little search function. So having dmd-as-a-library as its own little project to start hacking from is quite fine. And we can see how it evolves.

There was a side discussion about how the extern(C++) interface affects dmd-as-a-library. Then I called on Steve, who had raised his hand sometime before, but I suggested we should wrap this discussion up soon, as it didn't appear there was going to be a clear point where we could do that.

Steven said this was very similar to the discussions we'd had about a new Phobos version, i.e., reusing code that is compiled in a completely different way but is the same code and we don't want to make copies of it. We should think about the experience from that and how we didn't end up with a good solution. Átila agreed.

I said we should do what Razvan had suggested and take this offline. I asked Razvan to start an email with everyone who might have feedback CC'ed. And maybe we could establish a base from which to work in a future monthly meeting or planning session. Everyone agreed.

(UPDATE: There was some progress on this. Some separate meetings focused on dmd-as-a-library were held in October. I did not participate in these, but I'll post an update on what transpired once I've caught up on the other summaries.)
November 13
On Monday, 13 November 2023 at 04:46:44 UTC, matheus wrote:
> ...

Part 3:

Átila

Átila said he'd asked Roy after DConf if he wanted to write a spec for the new shared semantics but hadn't yet received a reply. He thought we could go ahead with it anyway, as it's probably going to be obvious in all the cases what we should do.

He'd also been thinking about Robert's suggestions regarding safe code, and the more he'd done so, the more they made sense. He didn't think we needed to wait for editions, because this way we could solve all the problems we've had with DIP 1000, possibly in one fell swoop. The whole issue was that we were making people add scope everywhere because they use @safe, even if their functions weren't doing anything "unscopey". They wouldn't have to anymore, because it would only apply to @trusted. He thought we should go ahead with that, too.

Another thing: he'd been trying to play around with Timon's idea of pointers that have a compile-time enum saying if they're isolated or not. He wanted to do a proof of concept with a vector type and some allocators to see what that looks like. That might inform some decisions on how to add isolated to the language.
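A very rough sketch of the shape of the idea (all names hypothetical):

enum Ownership { isolated, aliased }

struct Ptr(T, Ownership own = Ownership.aliased)
{
    T* raw;

    static if (own == Ownership.isolated)
    {
        // Only a provably unaliased pointer may be frozen to immutable.
        immutable(T)* freeze() { return cast(immutable(T)*) raw; }
    }
}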

He'd also been thinking about how we could do editions.

I brought up an email Átila had sent recently about which features should go in the next edition. I suggested that's not what we need to be thinking about right now. We should first define exactly how editions would be implemented. For example, Walter had suggested attaching editions to module declarations, and there are other possibilities, so we need to define what that looks like first. We should also be thinking about which features need to be stabilized in the current language, which is going to be the default edition. What's broken that we're not going to fix now, but want to defer until the first new edition?

Átila agreed. He said he'd wanted to know what we might do with the new edition so that it could inform which potential issues could arise, rather than dealing with it in the abstract. Not even "this is what we're going to do", but "what would we do if we could do anything", so that we could think through what could go wrong.

I thought this should be a high priority for us. I proposed that we put it on the agenda of our next planning session. Everyone agreed.

(SPOILER: I'll post the September planning update a day or so after this summary, but in that session, we agreed that Átila would write up the proposal for editions by November 1st. We've since extended that to December.)

Before yielding his time, Átila said he'd like to write a program that fetches all of the dub packages and checks which ones still build. That would give him a list of projects for testing modifications like the shared and safe changes against as many dub packages as possible, instead of the hand-selected, curated list we have now.

Dennis thought even Phobos would break. He said that DIP 1000 itself was not a breaking change; the breaking change was fixing the accepts-invalid bug whereby slicing a local static array had so far been allowed. Robert's proposal was strictly more breakage than DIP 1000.
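
For reference, the accepts-invalid case Dennis is referring to looks roughly like this (a hypothetical reconstruction):

```d
// Without -preview=dip1000 this compiles, yet the returned slice points
// at expired stack memory.
@safe int[] escape()
{
    int[4] buf = [1, 2, 3, 4];
    int[] slice = buf[];   // slicing a local static array: long accepted
    return slice;          // rejected under -preview=dip1000
}
```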

Átila said he understood that, but such code was obviously wrong anyway; it was broken, the compiler just didn't tell people. And the issue with DIP 1000 was adoption, and the barrier to adoption was having to add scope everywhere.

I said that's the kind of thing I was talking about earlier. If fixing shared or scope or anything was going to break a lot of code, shouldn't we defer fixing them to a new edition and not do it in the default language? Walter said DIP 1000 clearly had to be in an edition.

Átila said, "Okay." But the shared thing is a no-brainer. The code is completely wrong and we're not going to break anything by lowering it to atomics. Walter agreed.

Timon asked if the idea was to do operations on shared variables atomically. Walter said yes, for the ones where the CPU supports an atomic operation. If the CPU doesn't support an atomic operation on a variable, then it won't be allowed. It varies by target.
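
As a hedged sketch of the proposal (a design under discussion at this point, not an implemented feature), the idea is that plain reads and writes of a shared variable would compile as if rewritten to sequentially consistent atomics:

```d
import core.atomic : atomicLoad, atomicStore;

shared int counter;

void f()
{
    int y = counter;   // would behave like: int y = atomicLoad(counter);
    counter = y + 1;   // would behave like: atomicStore(counter, y + 1);
}
```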

Timon said yes, but an atomic operation is not the only way to synchronize something. Átila said that's true, but there's always an opt-out: cast shared away yourself. If you're going to do something with shared directly, you should either pass it to a function that takes shared, or cast shared away and lock a mutex, or whatever you want.
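
The opt-out he's describing looks roughly like this (a hypothetical example): lock a mutex, then cast shared away and work on the data directly.

```d
import core.sync.mutex : Mutex;

shared int total;
shared Mutex guard;

shared static this() { guard = new shared Mutex; }

void add(int n)
{
    synchronized (guard)
    {
        // Within the lock we are the only accessor, so casting away
        // shared and doing plain operations is sound.
        auto p = cast(int*) &total;
        *p += n;
    }
}
```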

Timon brought up Manu's push to just reject all of those operations. Átila said that doesn't work; he'd been trying to fix it this year, and it had been one bug after another. Timon asked what the issue with that was. Átila said there were many, but gave one example: synchronized(mutex) doesn't compile with -preview=nosharedaccess because you're accessing the mutex. The runtime doesn't build with it, and Phobos won't either. He'd been trying to plug the holes and realized at DConf that it's better not to try at all. Instead, just lower every one of those accesses to an atomic. If it doesn't compile, then it doesn't compile. What are you trying to do with it anyway?
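
A rough reconstruction of that failure mode (hypothetical code, not the actual runtime source):

```d
import core.sync.mutex : Mutex;

shared Mutex mtx;

void f()
{
    synchronized (mtx)   // error with -preview=nosharedaccess: merely
    {                    // naming `mtx` to lock it is a shared access
        // ...
    }
}
```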

Walter told Timon that the problem was that you can't pass a shared value as an argument. If you've created a shared pointer to something, you can't do anything with it because it's shared. If you want to access it, you have to pass it by reference to core.atomic. However, the CPU has instructions to atomically load a pointer and other basic types. Why not take advantage of that and allow shared operations when the target CPU supports them directly?

Timon said it seems that we'll always synchronize. Átila said that's better than the world right now. And again, if that's the case and your profile shows that's your problem, cast shared away.

Walter said it's also less efficient to always be calling the core.atomic functions when the compiler can just use the CPU instructions where they exist. Timon said the solution to that should be intrinsics; the proposal seemed like intrinsics with nicer syntax. He thought there were some decisions about exactly how to do the synchronization that this default syntax would lock in.

Átila said the default is to do something sensible; if you want something else, you can have it. Timon said that what counts as sensible semantics for an atomic load or store may depend on the context. Átila agreed, but said for most cases it's going to be sequential consistency. Again, if you want something else, use something else. If you explicitly say "atomic load with some other ordering", that's fine; that's what happens. You should still be able to write code that does explicit atomic load, store, fetch-add, whatever, and it should just work. The lowering wouldn't add anything on top of that, because then we would have failed.
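
That escape hatch already exists in core.atomic, which lets you name a memory order other than the sequentially consistent default (a hypothetical usage example):

```d
import core.atomic : atomicLoad, atomicStore, MemoryOrder;

shared int ready;

// Explicitly weaker than the seq-cst default:
void publish() { atomicStore!(MemoryOrder.rel)(ready, 1); }      // release store

bool poll() { return atomicLoad!(MemoryOrder.acq)(ready) != 0; } // acquire load
```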

Timon said it could just contribute to the confusion, because the code would do synchronization implicitly, yet you have to reason about it from both threads participating in it. If it's implicit, you no longer see it in the code. To him, that's a worse solution than failing compilation when you don't do it right.

Walter said it's the same issue we have with vector instructions. If the instruction exists for the operation you requested in the code, the compiler generates the vector instruction for you. If it doesn't exist on the target CPU, compilation fails and you get to decide what alternate path you want to take. What that means is: if it compiles, it works; if it fails to compile on your target machine, you have to figure out a workaround.

Walter continued, saying that if the CPU has instructions to do an atomic load of an integer, the compiler will support that and it will give you atomic loads. If you're compiling for a CPU that does not support atomic loads of an integer, it will fail to compile. This is exactly the way vector instructions work. He thought we'd been successful with that. He's happy with it.

He said that means that when you move to different platforms, maybe your shared code will break, and maybe not. But that's kind of how it should be, because shared is implicitly wrapped up with how the CPU is constructed. So that's what he thought we should do.
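
For comparison, the vector behavior Walter is drawing the analogy to looks like this (a hedged illustration; the function here is hypothetical):

```d
import core.simd;

version (D_SIMD)
{
    int4 addFour(int4 a, int4 b)
    {
        return a + b;   // emits a SIMD add; fails to compile on targets
    }                   // without a suitable vector instruction
}
```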

Timon asked what happens if you're in a heterogeneous system that has different kinds of hardware operating on the same memory, with different coherency guarantees and features. How does the compiler know about this?

Walter and Átila both said that it doesn't. Átila said you cast away shared and do your weird stuff yourself, because he didn't know how we could support that. Walter agreed. Timon said that was his point in saying that, if it's shared, you have to do something sensible on your own; the compiler should not assume that doing certain things is sensible. Walter said the compiler assumes what's sensible based on the CPU it's targeting.

Timon said yes, but in a concurrent program, the compiler knows what it's generating the code for now, but it doesn't know what other devices are on the same system accessing the same memory. Walter said Timon was suggesting that the lock instruction on a CPU was completely useless. Timon said he wasn't saying that; you should probably specify what you want. Átila said you can. Timon said you definitely can; it's just that, to him, the reasoning that there is no sensible default made sense. But of course, they could disagree with that and say the sensible default is sequential consistency, assuming all the threads are running on the CPU you're compiling for. It's just a different trade-off.

Walter said he's going to assume that the CPU maker hasn't screwed up the implementation of shared loads. Timon emphasized he was just saying there is no single way to synchronize access. Átila reiterated that you can do your own synchronization instead, and if you're somebody who cares about this, that's what you're going to be doing anyway.

Given that we'd already gone over two hours and a few participants had dropped out, Walter suggested we cut this particular discussion here and consider putting a cap on long discussions in the future.

Walter

Walter said he'd been able to fix a bunch of things that had come up at DConf. He was very disappointed that he'd been forced to adopt Microsoft assembly syntax for ImportC because they offered no path through their header files that didn't end up using their version of inline assembler. We'll see how that works out.

He'd spent a lot of time converting his DConf presentation into an article.

He'd also been working on 'The Didacticus' for D, a set of general design principles congruent with our vision of changing the way programmers think. We'd never really sat down and made a list of the general principles that make D a unique and useful language, and he thought it was long past time. He'd written up a draft and sent it to a couple of people. He hoped everyone would be happy with it.

(UPDATE: Walter is currently working on what I think is the fourth, or maybe fifth, draft of the principles. He started with a list of the principles themselves, then refined that list, and has since been expanding on defining and describing them. Once he's finished, we'll publish this on the website alongside the DLF's vision statement and that of each DLF associate.)

Conclusion

This meeting was followed by a planning session on the 23rd. We then had a quarterly meeting with industry reps on October 6th. Our next monthly was on October 13th.

That's it!

Matheus.