On Wed, 28 Aug 2024 at 20:11, Dennis via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
On Wednesday, 28 August 2024 at 08:59:49 UTC, Manu wrote:
> Oh wow... that's bad. Is there any plan to fix this?

I want to, but with dmd's current architecture, I don't know how.
The tricky case is:

```D
// example of @safe inference; the same applies to pure,
// nothrow and @nogc

void systemFunc() @system;

void fun1()()
{
     fun2();
     systemFunc();
}

void fun2()()
{
     fun1();
}

void main0() @system
{
     fun1();
}

void main1() @safe
{
     fun2(); // should error
}
```

fun1 gets analyzed first, which gets interrupted when it sees the
call to fun2.
Then fun2 gets analyzed, but that sees a call to fun1, which at
that point is still in the process of inferring attributes. The
current implementation gives up here and infers fun2 as `@system`.
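
(For what it's worth, a hypothetical benign cycle — invented here purely 
for illustration, not part of the example above — shows why that pessimism 
hurts: neither function below does anything unsafe, yet if I'm reading the 
above right, the second one analyzed still gets inferred `@system`, which 
then drags the other down with it and rejects the `@safe` caller.)

```D
// Hypothetical benign cycle (invented for illustration): no @system
// operation is reachable, but the give-up-in-a-cycle fallback described
// above would still infer these as @system.

void even()(int n)
{
    if (n > 0)
        odd(n - 1);
}

void odd()(int n)
{
    if (n > 0)
        even(n - 1);
}

void caller() @safe
{
    even(10); // rejected under the pessimistic fallback, despite being safe
}
```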

So I tried replacing that pessimistic assumption with an
optimistic assumption, but in this case, `fun1` will turn out to
be `@system` because of the `systemFunc()` call. But at the time
`fun2` ends its analysis, this is completely unknown. Until
`fun1` ends its analysis, the needed information isn't there.

I could start by inferring `fun2` as `@safe` and then retract
that once `fun1` finishes analysis, but currently the compiler
assumes a function type is final once its body has been
analyzed, so mutating the type later is going to mess things up.
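
(A minimal sketch of why that retraction is hazardous, with invented names 
that have nothing to do with dmd's actual internals: once a provisional 
`@safe` answer has been published and callers have been checked against it, 
flipping it to `@system` later leaves those already-checked call sites stale 
unless something re-visits them.)

```D
// Hypothetical sketch, not dmd's real data structures: the problem with
// retracting an attribute after the type has been treated as final.

enum Safety { safe, system }

struct FuncInfo
{
    Safety inferred;            // provisional answer handed out to callers
    bool bodyAnalyzed;          // after this, the type is assumed immutable
    string[] callersChecked;    // @safe callers already verified against `inferred`
}

void retract(ref FuncInfo f)
{
    assert(f.bodyAnalyzed);
    // Every name in callersChecked passed its @safe check against the old
    // value; nothing in the current pipeline goes back and re-checks them.
    f.inferred = Safety.system;
}
```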

So without re-architecturing dmd's semantic analysis, I don't see
a way out.

I reckon that while analysing a function X, you could gather whatever evidence is available that invalidates the inference, and in the absence of any invalidation, when you encounter a call to Y that isn't itself resolved yet, place a token on X recording that it's waiting on Y (and another token that it's waiting on Z, etc.), and also place a token on Y and Z saying that when each finishes its own resolution, it should poke the result back to X.
What will happen then is that Y may run its inference and be invalidated, in which case it reports that invalidation back to X, which may in turn invalidate a cascade of pending inferences... or Y may itself remain unfinalised, waiting optimistically on the inference of X (or on some other cycle).
At the end, once everything has had a go, any outstanding tokens must be involved in optimistic cycles, since nothing in any of those functions invalidated their inferences; and because they're all outstanding on an optimistic cycle, I think they naturally all satisfy each other in the optimistic case. So just close out all outstanding tokens optimistically... does that sound right?
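
To make that concrete, here's a self-contained sketch of the bookkeeping I have in mind, on a toy call graph rather than dmd's real data structures; every name in it (`Node`, `markSystem`, `analyze`, `closeOptimisticCycles`, the `hasSystemOp` flag) is invented for illustration. Direct evidence invalidates immediately and cascades through the waiters; anything still pending at the end is on a purely optimistic cycle and gets closed out as `@safe`.

```D
import std.stdio;

enum Inference { pending, safe, system }

struct Node
{
    string name;
    bool hasSystemOp;   // direct invalidating evidence in the body
    string[] calls;     // calls to other attribute-inferred functions
    Inference result = Inference.pending;
    string[] waiters;   // tokens: poke these back if we resolve to @system
}

// Invalidate `name` and cascade the result to everything waiting on it.
void markSystem(Node[string] graph, string name)
{
    auto n = name in graph;
    if (n is null || n.result == Inference.system)
        return;
    n.result = Inference.system;
    foreach (w; n.waiters)
        markSystem(graph, w);
}

// Gather evidence for one function: record waiting tokens on unresolved
// callees, and invalidate immediately on direct @system evidence.
void analyze(Node[string] graph, string name)
{
    auto n = name in graph;
    if (n.result != Inference.pending)
        return;
    foreach (callee; n.calls)
    {
        auto c = callee in graph;
        if (c is null)
            continue;
        if (c.result == Inference.system)
        {
            markSystem(graph, name);
            return;
        }
        if (c.result == Inference.pending)
            c.waiters ~= name;  // poke us back if `callee` turns out @system
    }
    if (n.hasSystemOp)
        markSystem(graph, name);
    // Otherwise stay pending: either a callee pokes us later, or we're on a
    // purely optimistic cycle and get closed out at the end.
}

// Anything still pending had no invalidating evidence anywhere in its cycle,
// so the optimistic answer is consistent: close them all out as @safe.
void closeOptimisticCycles(Node[string] graph)
{
    foreach (ref n; graph)
        if (n.result == Inference.pending)
            n.result = Inference.safe;
}

void main()
{
    auto graph = [
        "fun1": Node("fun1", true,  ["fun2"]),  // calls systemFunc() directly
        "fun2": Node("fun2", false, ["fun1"]),
        "fun3": Node("fun3", false, ["fun4"]),  // a genuinely safe cycle
        "fun4": Node("fun4", false, ["fun3"]),
    ];
    foreach (name; ["fun1", "fun2", "fun3", "fun4"])
        analyze(graph, name);
    closeOptimisticCycles(graph);
    foreach (name; ["fun1", "fun2", "fun3", "fun4"])
        writefln("%s: %s", name, graph[name].result);
    // prints fun1/fun2 as system, fun3/fun4 as safe
}
```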