On Sunday, 6 July 2025 at 23:53:54 UTC, Timon Gehr wrote:
> The insane part is that we are in fact allowed to throw in `@safe nothrow` functions without any `@trusted` shenanigans. Such code should not be allowed to break any compiler assumptions.
Technically, memory safety is supposed to be guaranteed by not letting you catch unrecoverable throwables in `@safe`. When you do catch them, you're supposed to verify that any code in the try block (including called functions) doesn't rely on destructors or similar for memory safety.
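A minimal sketch of that rule (assuming current D semantics, where catching a non-`Exception` `Throwable` is a `@system` operation; the function names here are made up for illustration):

```d
import std.stdio;

void mayFail() @safe
{
    // A failed assert throws an AssertError, an unrecoverable Throwable.
    assert(false, "invariant violated");
}

// Catching Error is @system, so it cannot appear in @safe code; the
// caller has to take on the obligation the post describes: verify that
// nothing in the try block relied on destructors for memory safety.
void caller() @system
{
    try
    {
        mayFail();
    }
    catch (Error e) // not allowed in @safe without @trusted shenanigans
    {
        // Destructors / scope(exit) along the unwound path may have
        // been skipped before control reached this handler.
        writeln("caught unrecoverable error: ", e.msg);
    }
}
```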
I understand this is problematic, because in practice pretty much all code is guarded by a top-level pokemon catcher, meaning memory safety that relies on destructors isn't going to fly anywhere. I guess we should just learn to not do that, or else give up on all `nothrow` optimisations. I tend to agree with Dennis that a switch is not the way to go, as that might cause incompatibilities when different libraries expect different settings.
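For concreteness, this is the kind of top-level pokemon catcher meant above (a hedged sketch; `run` is a hypothetical entry point):

```d
import std.stdio;

void run() nothrow
{
    // A nothrow function can still throw an Error (e.g. a failed assert).
    // Under nothrow optimisations the compiler may omit cleanup code here,
    // so destructors in this function can be skipped during unwinding.
    assert(false, "boom");
}

void main()
{
    try
    {
        run();
    }
    catch (Throwable t) // "gotta catch 'em all": Errors included
    {
        // Logging / crash reporting is fine, but any cleanup that relied
        // on destructors along the unwound path may never have run.
        writeln("fatal: ", t.msg);
    }
}
```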
An idea: what if we retained the `nothrow` optimisations, but changed `finally` blocks so they are never executed for non-`Exception` `Throwable`s unless there is a catch block for one? Still skipping destructors, but at least the rules between `try x; finally y;`, `scope(exit)` and destructors would stay consistent, and `nothrow` wouldn't be silently changing behaviour, since the assert failures would skip the finalisers regardless. `scope(failure)` would also catch only `Exception`s.
This would have to be done behind an edition switch though, since it also breaks code.