1 day ago

On Sunday, 6 July 2025 at 23:53:54 UTC, Timon Gehr wrote:

> The insane part is that we are in fact allowed to throw in `@safe nothrow` functions without any `@trusted` shenanigans. Such code should not be allowed to break any compiler assumptions.

Technically, memory safety is supposed to be guaranteed by not letting you _catch_ unrecoverable throwables in `@safe`. When you do catch them, you're supposed to verify that any code you have in the try block (including called functions) doesn't rely on destructors or similar for memory safety.
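
The hazard can be sketched like this (a minimal illustration with made-up names, not code from the thread; the idea is that the catcher inherits responsibility for anything the skipped destructors were supposed to do):

```d
import core.stdc.stdlib : malloc, free;

struct Owner {
    int* p;
    this(int) @trusted { p = cast(int*) malloc(int.sizeof); }
    ~this() @trusted { free(p); }
}

void mayFail() @safe nothrow { assert(false, "invariant violated"); }

void caller() @trusted {
    try {
        auto o = Owner(0);
        mayFail(); // AssertError; with finally elision, ~this may be skipped
    } catch (Throwable) {
        // Whoever catches here must verify the try block didn't depend on
        // ~this running. Here the allocation merely leaks; with more complex
        // ownership schemes it could dangle instead.
    }
}
```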

I understand this is problematic, because in practice pretty much all code is guarded by a top-level Pokemon catcher, meaning destructor-relying memory safety isn't going to fly anywhere. I guess we should just learn not to do that, or else give up on all `nothrow` optimisations. I tend to agree with Dennis that a switch is not the way to go, as that might cause incompatibilities when different libraries expect different settings.

An idea: what if we retained the `nothrow` optimisations, but changed the finally blocks so they are never executed for non-`Exception` `Throwable`s unless there is a catch block for one? Still skipping destructors, but at least the rules between `try x; finally y;`, `scope(exit)` and destructors would stay consistent, and `nothrow` wouldn't be silently changing behaviour, since the assert failures would be skipping the finalisers regardless. `scope(failure)` would also catch only `Exception`s.

This would have to be done over an edition switch though since it also breaks code.

1 day ago

On Monday, 7 July 2025 at 21:44:49 UTC, Dukc wrote:

> I understand this is problematic, because in practice pretty much all code is guarded by a top-level Pokemon catcher, meaning destructor-relying memory safety isn't going to fly anywhere. I guess we should just learn not to do that

I meant that we should learn not to rely on destructors (or similar finalisers) for memory safety.

Not that we should give up Pokemon catching at or near the main function.

1 day ago

On Monday, 7 July 2025 at 21:54:23 UTC, Dukc wrote:

> On Monday, 7 July 2025 at 21:44:49 UTC, Dukc wrote:
>
>> I understand this is problematic, because in practice pretty much all code is guarded by a top-level Pokemon catcher, meaning destructor-relying memory safety isn't going to fly anywhere. I guess we should just learn not to do that
>
> I meant that we should learn not to rely on destructors (or similar finalisers) for memory safety.

I can see a perfect storm with destructors being skipped in combination with having stack memory in a multi-threaded program, so that the very act of skipping destructors is what causes memory corruption. It breaks the structure the programmer diligently created.

If D can't gracefully shut down a multi-threaded program when an Error occurs - i.e. catch the Error at the entry point of a thread, send it upwards to the main thread and cancel any threads or other execution contexts (e.g. GPU) - then the only sane recommendation is to avoid all asserts, or to call abort on the spot. Which would be very unfortunate.
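
The graceful-shutdown pattern described here could look roughly like this (a sketch using std.concurrency; work() and the shutdown step are illustrative stand-ins, not a proposed API):

```d
import std.concurrency;

// Hypothetical worker body; anything in here may exit with an Error.
void work() { assert(false, "worker invariant violated"); }

void workerEntry() {
    try {
        work();
    } catch (Throwable t) {
        // Catch the Error at the thread's entry point and report it
        // upwards instead of letting the thread die silently.
        try ownerTid.send("worker died: " ~ t.msg);
        catch (Throwable) {} // if even reporting fails, nothing more to do
    }
}

void main() {
    spawn(&workerEntry);
    // The main thread learns of the failure and can cancel the other
    // threads / execution contexts before exiting.
    auto report = receiveOnly!string;
}
```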

1 day ago
On 08/07/2025 9:17 AM, Dukc wrote:
> On Sunday, 29 June 2025 at 18:04:51 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> Should an assert fail, the most desirable behaviour for it to have is to print a backtrace if possible and then immediately kill the process.
> 
> No, this breaks code a bit too hard as written by many.

We've confirmed it.

> I think that ideally, when you wait for or poll a message from a thread (or fiber) that has exited with an unrecoverable error, that error would get rethrown from the waiting point. That way, unless the error is handled every thread would eventually get killed.

Threads quite often are never joined. This is a very real problem: this newsgroup thread was started because a thread wasn't joined, and it died without killing the process.
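
The failure mode is easy to reproduce (a minimal sketch; as far as I can tell, druntime only rethrows a thread's stored Throwable when the thread is joined):

```d
import core.thread;
import core.time;

void main() {
    auto t = new Thread({
        assert(false, "worker blew up"); // AssertError kills only this thread
    });
    t.start();
    Thread.sleep(1.seconds);
    // t is never joined, so the stored AssertError is never rethrown and
    // the process carries on as if nothing happened.
}
```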

The default for a thread should be to consume the Error and kill the process, but this should be configurable in case people do handle it appropriately.

1 day ago
On 08/07/2025 9:44 AM, Dukc wrote:
> On Sunday, 6 July 2025 at 23:53:54 UTC, Timon Gehr wrote:
>> The insane part is that we are in fact allowed to throw in `@safe nothrow` functions without any `@trusted` shenanigans. Such code should not be allowed to break any compiler assumptions.
> 
> Technically, memory safety is supposed to be guaranteed by not letting you _catch_ unrecoverable throwables in `@safe`. When you do catch them, you're supposed to verify that any code you have in the try block (including called functions) doesn't rely on destructors or similar for memory safety.
> 
> I understand this is problematic, because in practice pretty much all code often is guarded by a top-level pokemon catcher, meaning destructor-relying memory safety isn't going to fly anywhere. I guess we should just learn to not do that, or else give up on all `nothrow` optimisations. I tend to agree with Dennis that a switch is not the way to go as that might cause incompatibilities when different libraries expect different settings.

Currently threads in D do not have this guarantee.

More often than not, you'll find that people use threads in D without joining them or handling Errors. People don't think about this stuff, so the default behavior should be good enough on its own.

> In idea: What if we retained the `nothrow` optimisations, but changed the finally blocks so they are never executed for non-`Exception` `Throwable`s unless there is a catch block for one? Still skipping destructors, but at least the rules between `try x; finally y;`, `scope(exit)` and destructors would stay consistent, and `nothrow` wouldn't be silently changing behaviour since the assert failures would be skipping the finalisers regardless. `scope(failure)` would also catch only `Exception`s.

Long story short, there are no nothrow-specific optimizations taking place.

The compiler does a simplification rewrite from a finally statement to a plain sequence if it determines the finally isn't needed.

The unwinder has no knowledge of Error vs Exception, let alone a differentiation when running the cleanup handler. Nor does the cleanup handler know what the exception is. That would require converting finally statements to a catch-all, which would have implications.

1 day ago
On 08/07/2025 6:20 PM, Sebastiaan Koppe wrote:
> If D can't gracefully shut down a multi-threaded program when an Error occurs - i.e. catch the Error at the entry point of a thread, send it upwards to the main thread and cancel any threads or other execution contexts (e.g. GPU) - then the only sane recommendation is to avoid all asserts, or to call abort on the spot. Which would be very unfortunate.

Not just asserts, this also includes things like bounds checks...

The entire Error hierarchy would need to go and that is not realistic.
1 day ago

On Tuesday, 8 July 2025 at 07:47:30 UTC, Richard (Rikki) Andrew Cattermole wrote:

> Long story short, there are no nothrow-specific optimizations taking place.

Wrong! There are, like Dennis wrote:

> Agreed, but that's covered: they are both lowered to finally blocks, so they're treated the same, and no-one is suggesting to change that. Just look at the `-vcg-ast` output of this:

void start() nothrow;
void finish() nothrow;

void normal() {
    start();
    finish();
}

struct Finisher { ~this() {finish();} }
void destructor() {
    Finisher f;
    start();
}

void scopeguard() {
    scope(exit) finish();
    start();
}

void finallyblock() {
    try {
        start();
    } finally { finish(); }
}

When removing nothrow from start(), you'll see finally blocks in all functions except normal(), but with nothrow, they are all essentially the same as normal(): two consecutive function calls.

This means that the three latter functions execute finish() on an unrecoverable error from start() if start() is designated throwing, but not if it's nothrow.

My suggestion would make it so that finish() isn't executed in either case. You would have to write

try start();
catch(Throwable){}
finally finish();

instead, as you might already want to do if you wish the nothrow analysis not to matter.

> The unwinder has no knowledge of Error vs Exception, let alone a differentiation when running the cleanup handler. Nor does the cleanup handler know what the exception is. That would require converting finally statements to a catch-all, which would have implications.

I'm assuming such a conversion is done somewhere in the compiler anyway. After all, the finally block is essentially higher-level functionality on top of catch. try a(); finally b(); is pretty much the same as

Throwable temp;
try a();
catch (Throwable th) temp = th;
try b();
catch (Throwable th)
{   // Not sure how exception chaining really works but it'd be done here
    th.next = temp;
    temp = th;
}
if(temp) throw temp;
1 day ago
On 08/07/2025 10:39 PM, Dukc wrote:
> On Tuesday, 8 July 2025 at 07:47:30 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> Long story short, there are no nothrow-specific optimizations taking place.
>>
> 
> Wrong! There are, like Dennis wrote:

I've found where the compiler implements this, and verified it.

It's not nothrow-specific.

It's Exception-specific, not nothrow-specific. A subtle but very distinct difference.

1 day ago

On Sunday, 6 July 2025 at 23:53:54 UTC, Timon Gehr wrote:

> `nothrow` does not actually have this meaning, otherwise it would need to be `@system`

I'd say nothrow de facto has this meaning (per Walter's intention), and catching Error is currently @system, although that's only implied by documentation and not enforced by the compiler.

> Here's the output with opend-dmd:

So I take it opend changed that, being okay with the breaking change? Because @safe constructors of structs containing fields with @system destructors will now raise a safety error even with nothrow.

> This is great stuff! But it's not enabled by default, so there will usually be at least one painful crash. :(

Indeed. I tried enabling it by default in my original PR, but it broke certain Fiber tests, and there were some (legitimate) doubts about multi-threaded programs and interference with debuggers, so that's still to do.

> Well, you were asking for practical experience.

Specifically about stack unwinding. The logic "throw Error() should consistently run all cleanup code, because a null dereference just segfaults and that sucks" escapes me, but:

> Taking error reporting that just works and turning it into a segfault outright, or even just requiring some additional hoops to be jumped through that were not previously necessary to get the info, is just not what I need or want; it's most similar to the current default segfault experience.

That tracks. I guess the confusion came from two discussions happening at once: doubling down on throw Error() and doubling down on abort() on error.

> I don't really need it, but compiler-checked documentation that a function has no intended exceptional control ...

Some people in this thread argue that code should be Exception safe even in the case of an index out of bounds error. If that's the case, the throw / nothrow distinction seems completely redundant. Imagine the author of a library you use writes this:

mutex.lock();
arr[i]++;
mutex.unlock();

Instead of this:

mutex.lock();
scope(exit) mutex.unlock();
arr[i]++;

The idea that best-case Errors (caught right before UB) function just like Exceptions breaks down. nothrow is useless if you don't want anyone to write different code based on its presence/absence, right?

> In any case, doing this implicitly anywhere is the purest form of premature optimization.

While I can't say I have the numbers to prove that its performance is important to me, I currently like the idea that scope(exit)/destructors are a zero-cost abstraction when Exceptions are absent. Having in the back of my mind that it would be more efficient to manually write free() at the end of my function instead of using safe scoped destruction might just (irrationally) haunt me 😜.
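
For illustration, the two styles being weighed look like this (a sketch, not code from the thread):

```d
import core.stdc.stdlib : malloc, free;

void manual() nothrow @nogc {
    auto p = malloc(64);
    // ... use p ...
    free(p); // hand-written at the end of the function
}

void scoped() nothrow @nogc {
    auto p = malloc(64);
    scope(exit) free(p); // with nothrow callees the compiler can lower this
                         // to the same straight-line code as manual()
    // ... use p ...
}
```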

> Correctness trumps minor performance improvements.

I'd usually agree wholeheartedly, but in this situation it's "correctness after something incorrect has already happened" vs. "performance of the correct case" which is more nuanced.

1 day ago

On Tuesday, 8 July 2025 at 10:47:52 UTC, Richard (Rikki) Andrew Cattermole wrote:

> I've found where the compiler implements this, and verified it.
>
> It's not nothrow-specific.

Whether a function is nothrow affects whether a call expression 'can throw'

https://github.com/dlang/dmd/blob/9610da2443ec4ed3aeed060783e07f76287ae397/compiler/src/dmd/canthrow.d#L131-L139

Which affects whether a statement 'can throw'

https://github.com/dlang/dmd/blob/9610da2443ec4ed3aeed060783e07f76287ae397/compiler/src/dmd/blockexit.d#L101C23-L101C31

And when a 'try' statement can only fall through or halt, then a (try A; finally B) gets transformed into (A; B). When the try statement 'can throw' this doesn't happen.

https://github.com/dlang/dmd/blob/9610da2443ec4ed3aeed060783e07f76287ae397/compiler/src/dmd/statementsem.d#L3421-L3432

Through that path, nothrow produces better generated code, which you can easily verify by looking at the assembler output of:

void f();
void testA() {try {f();} finally {f();}}

void g() nothrow;
void testB() {try {g();} finally {g();}}

> It's Exception-specific, not nothrow-specific. A subtle but very distinct difference.

I have no idea what this distinction is supposed to mean, but "there are no nothrow-specific optimizations taking place" is either false or pedantic about words.