1 day ago

On Sunday, 6 July 2025 at 14:04:46 UTC, Richard (Rikki) Andrew Cattermole wrote:

> And then there are contracts that apparently need to catch AssertError.

That's an anomaly that should be solved on its own. It doesn't work with -checkaction=C, and there's a preview switch for new behavior requiring you to explicitly create in contracts that are more lenient than the parent: https://dlang.org/changelog/2.095.0.html#inclusive-incontracts
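To make the mechanism concrete, here's a minimal sketch (class and method names invented) of `in` contract inheritance: a call through the base class must satisfy at least one contract in the override chain, which druntime implements by catching the AssertError thrown by a failing contract and trying the next one.

```d
class Base
{
    void f(int x) in (x > 0) { }
}

class Derived : Base
{
    // More lenient than the parent: additionally accepts x == 0.
    // With -preview=inclusiveincontracts you must write this explicitly
    // as a contract that passes whenever the parent's does.
    override void f(int x) in (x >= 0) { }
}
```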

Perhaps we can open a new thread if there's more to discuss about that, since this thread is already quite big and discussing multiple things at the same time isn't making it easier to follow ;-)

> Why would it affect inference?
>
> Leave the frontend alone.
>
> Do this in the glue layer. If flag is set and compiler flag is set to a specific value don't add unwinding.

The frontend produces a different AST based on nothrow. Without any changes, field destructors are still skipped when an error bubbles through a constructor that inferred nothrow based on the assumption that range errors / assert errors are nothrow.
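As a hedged illustration of that case (all names made up):

```d
struct Guard
{
    ~this() nothrow { /* release some resource */ }
}

struct Wrapper
{
    Guard guard;
    // `nothrow` is valid here because an AssertError doesn't count as
    // throwing; inference hands templates/auto functions the same attribute
    // silently. Under it, the frontend emits no cleanup, so `guard`'s
    // destructor is skipped when the AssertError bubbles out.
    this(int i) nothrow { assert(i > 0); }
}
```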

23 hours ago
Timon's method is reasonable as his particular situation requires it, and is an example of how flexible D's response to failures can be customized.

It's not reasonable if the software is controlling the radiation dosage on a Therac-25, or is empowered to trade your stocks, or is flying a 747.

Executing code after the program crashes is always a risk, and the more code that is executed, the more risk. If your software is powering the remote for a TV, there aren't any consequences for failure.
20 hours ago

On Sunday, 6 July 2025 at 15:34:37 UTC, Timon Gehr wrote:

> Also it seems you are just ignoring arguments about rollback that resets state that is external to your process.

I deliberately am, but not in bad faith. I'm just looking for an answer to a simple question to anyone with a custom error handler: how does the compiler skipping 'cleanup' in nothrow functions concretely affect your error log?

But the responses are mostly about:

  • It's unexpected behavior
  • I don't need performance
  • Some programs want to restore global system state
  • Contract inheritance catches AssertError
  • The stack trace generation function isn't configurable
  • Not getting a trace on segfaults/null dereference is bad
  • Removing Error is a breaking change
  • Differences between destructors/scope(exit)/finally are bad
  • Separate crash handler programs are over-engineered

Which are all interesting points! But if I address all of that the discussion becomes completely unmanageable. However, at this point I give up on this question and might as well take the plunge. 😝

> It's a breaking change. When I propose these, they are rejected on the grounds of being breaking changes.

I might have been too enthusiastic in my wording :) I'm not actually proposing breaking everyone's code by removing Error tomorrow, I was just explaining to Jonathan that it's not how I'd design it from the ground up. If we get rid of it long term, there needs to be something at least as good in place.

> A nice thing about stack unwinding is that you can collect data in places where it is in scope. In some assert handler that is devoid of context you can only collect things you have specifically and manually deposited in some global variable prior to the crash.

That's a good point. Personally I don't mind using global variables for a crash handler too much, but that is a nice way to access stack variables indeed.

> It's an inadmissible conflation of different abstraction levels and it is really tiring fallacious reasoning that basically goes: Once one thing went wrong, we are allowed to make everything else go wrong too.
>
> Let's make 2+2=3 within a `catch(Throwable){ ... }` handler too, because why not, nobody whose program has thrown an error is allowed to expect any sort of remaining sanity.

Yes, I wouldn't want the compiler to deliberately make things worse than they need to be, but the compiler is allowed to do 'wrong' things if you break its assumptions. Consider this function:

```d
__gshared int x;
int f()
{
    assert(x == 2);
    return x + 2;
}
```

LDC optimizes that to `return 4;`, but what if through some thread/debugger magic I change `x` to 1 right after the assert check, making 2+2=3. Is LDC insane to constant fold it instead of just computing `x + 2`, because how many CPU cycles is that addition anyway?

Similarly, when I explicitly tell 'assume nothing will be thrown from this function' by adding `nothrow`, is it insane that the code is structured in such a way that finally blocks will be skipped when the function, in fact, does throw?

I grant you that `nothrow` is inferred in templates/auto functions, and there's no formal definition of D's semantics that explicitly justifies this, but skipping cleanup doesn't have to be insane behavior if you consider `nothrow` to have that meaning.

> Adding a subtle semantic difference between destructors and other scope guards I think is just self-evidently bad design, on top of breaking people's code.

Agreed, but that's covered: they are both lowered to finally blocks, so they're treated the same, and no-one is suggesting to change that. Just look at the `-vcg-ast` output of this:

```d
void start() nothrow;
void finish() nothrow;

void normal() {
    start();
    finish();
}

struct Finisher { ~this() { finish(); } }
void destructor() {
    Finisher f;
    start();
}

void scopeguard() {
    scope(exit) finish();
    start();
}

void finallyblock() {
    try {
        start();
    } finally { finish(); }
}
```

When removing `nothrow` from `start`, you'll see finally blocks in all functions except `normal()`, but with `nothrow`, they are all essentially the same as `normal()`: two consecutive function calls.

> I am talking about actual pain I have experienced, because there are some cases where unwinding will not happen, e.g. null dereferences.

That's really painful, I agree! Stack overflows are my own pet peeve, which is why I worked on improving the situation by adding a Linux segfault handler: https://github.com/dlang/dmd/pull/15331
I also have a WIP handler for Windows, but couldn't get it to work with stack overflows yet. Either way, this has nothing to do with how the compiler treats `nothrow` or `throw Error()`, but with code generation of pointer dereferencing operations, so I consider that a separate discussion.
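(For illustration only, not the PR's actual code: the core of such a handler is a few lines of POSIX signal plumbing. The sketch below makes assumptions about what to print and how to exit; a real handler runs on an alternate stack and produces a trace.)

```d
import core.sys.posix.signal;
import core.sys.posix.unistd : write;
import core.stdc.stdlib : _Exit;

extern(C) void onSegfault(int sig)
{
    // Only async-signal-safe calls belong in a signal handler; write(2) is.
    enum msg = "Caught SIGSEGV\n";
    write(2, msg.ptr, msg.length);
    _Exit(139); // conventional 128 + SIGSEGV exit status
}

void installSegfaultHandler()
{
    sigaction_t sa;
    sa.sa_handler = &onSegfault;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, null);
}
```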

> You are talking about pie-in-the-sky overengineered alternative approaches that I do not have any time to implement at the moment.

Because there seems to be little data from within the D community, I'm trying to learn how real-world UI applications handle this problem. I'm not asking you to implement them, ideally druntime provides all the necessary tools to easily add appropriate crash handling to your application. My question is whether always executing destructors even in the presence of nothrow attributes is a necessary component for this, because this whole discussion seems weirdly specific to D.

> We can, make unsafe cleanup elision in `nothrow` a build-time opt-in setting. This is a niche use case.

The frontend makes assumptions based on nothrow. For example, when a constructor calls a nothrow function, it assumes the destructor doesn't need to be called, which affects the AST as well as attribute inference (for example, the constructor can't be @safe if it might call a @system field destructor because of an Exception).
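A hedged sketch of that inference interaction (all identifiers invented; the constructor is templated so attribute inference applies):

```d
struct Field
{
    ~this() @system { }
}

void helper() nothrow @safe { }

struct S
{
    Field f;
    // helper() being nothrow means there is no unwind path on which the
    // @system field destructor would run, so the constructor can infer
    // @safe. If helper() could throw an Exception, it couldn't.
    this()(int) { helper(); }
}
```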

But also, I thought the whole point of nothrow was better code generation. If it doesn't do that, it can be removed as far as I'm concerned.

> it is somehow in the name of efficiency.

It's an interesting question of course to see how much it actually matters for performance. I tried removing `nothrow` from dmd itself, and the (-O3) optimized binary increased by 54 KiB in size, but I accidentally also removed a "nothrow" string somewhere, causing some errors, so I haven't benchmarked a time difference yet. It would be interesting to get some real world numbers here.

I hope that clarifies some things, tell me if I missed something important.

20 hours ago
On Sunday, 6 July 2025 at 15:34:37 UTC, Timon Gehr wrote:
>
> Contracts rely on catching assert errors. Therefore, a custom handler may break dependencies and is not something I will take into account in any serious fashion.

Given the spec for contracts, isn't this just an unspecified implementation detail which could be changed?

i.e. although the in and out expressions are 'AssertExpressions', they do not need to be implemented by calling assert().  So one could define a new form of Throwable, say ContractFail, and have that thrown from the contract failure cases.
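A hedged sketch of what that could look like (only the name ContractFail comes from the suggestion above; everything else is assumed):

```d
class ContractFail : Throwable
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}
```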

Similarly with the asserts within class invariant blocks, and for asserts within unittest blocks; but here there is a weaker argument, as those use explicit calls to assert.  However, even then, the spec says that they have different semantics in unittest blocks and in contracts.

Each of these could then have its own definition of whether the call stack is unwound, which allows raw asserts to have different behaviour if desired.

Otherwise, the only apparent way to have a 'traditional' assert would be for it to be a library routine (under whatever new name) which simply prints the message and then calls abort(). It strikes me that this is essentially the only way at the moment to guarantee that behaviour.
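Such a routine is easy to sketch (name and output format invented); the point is that nothing is thrown, so no unwinding semantics apply:

```d
void hardAssert(bool cond, string msg = "assertion failed",
                string file = __FILE__, size_t line = __LINE__) nothrow @nogc
{
    import core.stdc.stdio : fprintf, stderr;
    import core.stdc.stdlib : abort;

    if (!cond)
    {
        // Print "file(line): message" like a D assert, then die immediately.
        fprintf(stderr, "%.*s(%zu): %.*s\n",
                cast(int) file.length, file.ptr,
                line, cast(int) msg.length, msg.ptr);
        abort();
    }
}
```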


20 hours ago
On 07/07/2025 3:48 AM, Dennis wrote:
> On Sunday, 6 July 2025 at 14:04:46 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> And then there are contracts that apparently need to catch AssertError.
> 
> That's an anomaly that should be solved on its own. It doesn't work with -checkaction=C, and there's a preview switch for new behavior requiring you to explicitly create in contracts that are more lenient than the parent: https://dlang.org/changelog/2.095.0.html#inclusive-incontracts

Not possible due to function calls.

I did suggest to Walter that we might be able to change how contracts work for this in an edition, but trying to fix it will be a breaking change.

> Perhaps we can open a new thread if there's more to discuss about that, since this thread is already quite big and discussing multiple things at the same time isn't making it easier to follow ;-)
> 
>> Why would it affect inference?
>>
>> Leave the frontend alone.
>>
>> Do this in the glue layer. If flag is set and compiler flag is set to a specific value don't add unwinding.
> 
> The frontend produces a different AST based on nothrow. Without any changes, field destructors are still skipped when an error bubbles through a constructor that inferred nothrow based on the assumption that range errors / assert errors are nothrow.

```d
void unknown() nothrow;
void function(int) nothrow dtorCheck2b = &dtorCheck!();

void dtorCheck()(int i)
{
    static struct S
    {
        this(int i) nothrow {
            assert(0);
        }

        ~this()
        {
        }
    }

    S s;
    s = S(i);
    unknown;
}
```

Disable the if statement at: https://github.com/dlang/dmd/blob/3d06a911ac442e9cde5fd5340624339a23af6eb8/compiler/src/dmd/statementsem.d#L3428

Took me two hours to find what to disable. Inference remains in place.

Should be harmless to disable this rewrite, even if -betterC is on (which supports finally statements).

Do note that this rewrite doesn't care about ``nothrow``, or unwinding at all. This works in any function, which makes this rewrite a lot more worrying than I thought it was.

A better way to handle this would be to flag the finally statement as being able to be sequential, then let the glue layer decide whether to make it sequential or keep the unwinding.

On this note, I've learned that dmd is doing a subset of control flow graph analysis for Error/Exception handling; it's pretty good stuff.
16 hours ago
On 7/6/25 21:17, Dennis wrote:
> On Sunday, 6 July 2025 at 15:34:37 UTC, Timon Gehr wrote:
>> Also it seems you are just ignoring arguments about rollback that resets state that is external to your process.
> 
> I deliberately am, but not in bad faith. I'm just looking for an answer to a simple question to anyone with a custom error handler: how does the compiler skipping 'cleanup' in nothrow functions concretely affect your error log?
> ...

I think the most likely thing that may happen is e.g. a segfault during unwinding, so that the log is not recorded in the first place.

The second most likely issue is some part of the error collection logic not being evaluated because `nothrow` was inferred.

> But the responses are mostly about:
> 
> - It's unexpected behavior
> - I don't need performance
> - Some programs want to restore global system state
> - Contract inheritance catches AssertError
> - The stack trace generation function isn't configurable
> - Not getting a trace on segfaults/null dereference is bad
> - Removing Error is a breaking change
> - Differences between destructors/scope(exit)/finally are bad
> - Separate crash handler programs are over-engineered
> 
> Which are all interesting points! But if I address all of that the discussion becomes completely unmanageable. However, at this point I give up on this question and might as well take the plunge. 😝
> ...

Well, it's similarly a bit hard to know exactly what would happen to all my programs that I have written and will write in the future if `2+2` were to evaluate to `3`, but I do know that it will be bad.

>> It's a breaking change. When I propose these, they are rejected on the grounds of being breaking changes.
> 
> I might have been too enthusiastic in my wording :) I'm not actually proposing breaking everyone's code by removing Error tomorrow, I was just explaining to Jonathan that it's not how I'd design it from the ground up. If we get rid of it long term, there needs to be something at least as good in place.
> ...

Probably, though I think it is tricky to get equivalently useful behavior without unwinding. This is true in general, but I think especially in a multi-threaded setting.

Note that right now you can make your code properly report errors when exceptions reach your main function, and it just works the same way with other throwables.
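For instance (a sketch, with run() standing in for the real program):

```d
void run() { assert(0, "boom"); }

void main()
{
    try
        run();
    catch (Throwable t)
    {
        import std.stdio : stderr;
        // One reporting path for Exceptions and, given reliable unwinding,
        // for AssertError and friends as well.
        stderr.writeln("fatal: ", t.msg);
        throw t; // rethrow so the process still terminates abnormally
    }
}
```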

>> A nice thing about stack unwinding is that you can collect data in places where it is in scope. In some assert handler that is devoid of context you can only collect things you have specifically and manually deposited in some global variable prior to the crash.
> 
> That's a good point. Personally I don't mind using global variables for a crash handler too much, but that is a nice way to access stack variables indeed.
> ...

Well, it's global variables that would need to be kept updated at all times so they contain current data in case of a crash. I don't really want to design my programs around the possibility of a crash. I don't actually want or need crashes to happen.

>> It's an inadmissible conflation of different abstraction levels and it is really tiring fallacious reasoning that basically goes: Once one thing went wrong, we are allowed to make everything else go wrong too.
>>
>> Let's make 2+2=3 within a `catch(Throwable){ ... }` handler too, because why not, nobody whose program has thrown an error is allowed to expect any sort of remaining sanity.
> 
> Yes, I wouldn't want the compiler to deliberately make things worse than they need to be, but the compiler is allowed to do 'wrong' things if you break its assumptions. Consider this function:
> 
> ```D
> __gshared int x;
> int f()
> {
>      assert(x == 2);
>      return x + 2;
> }
> ```
> 
> LDC optimizes that to `return 4;`, but what if through some thread/debugger magic I change `x` to 1 right after the assert check, making 2+2=3. Is LDC insane to constant fold it instead of just computing `x + 2`, because how many CPU cycles is that addition anyway?
> ...

Well, this is my issue. An assert failing is not supposed to break the compiler's assumptions, neither is throwing any other error. This does not need to be UB, the whole point of e.g. bounds checks is to catch issues before they lead to UB.

> Similarly, when I explicitly tell 'assume nothing will be thrown from this function' by adding `nothrow`, is it insane that the code is structured in such a way that finally blocks will be skipped when the function, in fact, does throw?
> ...

The insane part is that we are in fact allowed to throw in `@safe nothrow` functions without any `@trusted` shenanigans. Such code should not be allowed to break any compiler assumptions.

The compiler should not be immediately breaking its own assumptions. In `@safe` code, no less. Yes, this is insane.

> I grant you that `nothrow` is inferred in templates/auto functions, and there's no formal definition of D's semantics that explicitly justifies this, but skipping cleanup doesn't have to be insane behavior if you consider nothrow to have that meaning.
> ...

`nothrow` does not actually have this meaning, otherwise it would need to be `@system`, or it would need to enforce its own semantics via the type system.

>> Adding a subtle semantic difference between destructors and other scope guards I think is just self-evidently bad design, on top of breaking people's code.
> 
> Agreed, but that's covered: they are both lowered to finally blocks, so they're treated the same, and no-one is suggesting to change that.

Well, at that point you are left with the following options:

- to not let me collect data in `scope(failure)` (sketched after this list). Not a fan.

- to run cleanup consistently, whether it is a destructor, finally, or some other scope guard. This is what I want.
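To be concrete, the kind of `scope(failure)` collection at stake in the first option looks like this (identifiers invented); it only reports anything if cleanup also runs for Errors:

```d
import std.stdio : stderr;

void process(int requestId)
{
    // requestId is right here in scope; a handler-plus-globals design would
    // need it deposited somewhere before the crash instead.
    scope(failure) stderr.writeln("failed while processing request ", requestId);
    assert(requestId != 42, "stand-in for the failing work");
}
```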

> Just look at the `-vcg-ast` output of this:
> 
> ```D
> void start() nothrow;
> void finish() nothrow;
> 
> void normal() {
>      start();
>      finish();
> }
> 
> struct Finisher { ~this() {finish();} }
> void destructor() {
>      Finisher f;
>      start();
> }
> 
> void scopeguard() {
>      scope(exit) finish();
>      start();
> }
> 
> void finallyblock() {
>      try {
>          start();
>      } finally { finish(); }
> }
> ```
> 
> When removing `nothrow` from `start`, you'll see finally blocks in all functions except `normal()`, but with `nothrow`, they are all essentially the same as `normal()`: two consecutive function calls.
> ...

Here's the output with opend-dmd:

```d
import object;
nothrow void start();
nothrow void finish();
void normal()
{
	start();
	finish();
}
struct Finisher
{
	~this()
	{
		finish();
	}
	alias __xdtor = ~this()
	{
		finish();
	}
	;
	ref @system Finisher opAssign(Finisher p) return
	{
		(Finisher __swap2 = void;) , __swap2 = this , (this = p , __swap2.~this());
		return this;
	}
}
void destructor()
{
	Finisher f = 0;
	try
	{
		start();
	}
	finally
		f.~this();
}
void scopeguard()
{
	try
	{
		start();
	}
	finally
		finish();
}
void finallyblock()
{
	try
	{
		{
			start();
		}
	}
	finally
	{
		finish();
	}
}
RTInfo!(Finisher)
{
	enum immutable(void)* RTInfo = null;

}
NoPointersBitmapPayload!1LU
{
	enum ulong[1] NoPointersBitmapPayload = [0LU];

}
```

>> I am talking about actual pain I have experienced, because there are some cases where unwinding will not happen, e.g. null dereferences.
> 
> That's really painful, I agree! Stack overflows are my own pet peeve, which is why I worked on improving the situation by adding a linux segfault handler: https://github.com/dlang/dmd/pull/15331

This reboot also adds proper stack unwinding: https://github.com/dlang/dmd/pull/20643

This is great stuff! But it's not enabled by default, so there will usually be at least one painful crash. :( Also, the only user who is running my application on linux is myself and I actually do consistently run it with a debugger attached, so this is less critical for me.

> I also have a WIP handler for Windows, but couldn't get it to work with stack overflows yet.

Nice! Even just doing unwinding with cleanup for other segfaults would be useful, particularly null pointer dereferences, by far the most common source of segfaults.

> Either way, this has nothing to do with how the compiler treats `nothrow` or `throw Error()`, but with code generation of pointer dereferencing operations, so I consider that a separate discussion.
> ...

Well, you were asking for practical experience. Taking error reporting that just works and turning it into an outright segfault, or even just requiring some additional hoops to be jumped through that were not previously necessary to get the info, is just not what I need or want; it's most similar to the current default segfault experience.

FWIW, opend-dmd:

```d
import std;

class C{ void foo(){} }
void main(){
    scope(exit) writeln("important info");
    C c;
    c.foo();
}
```

```
important info
core.exception.NullPointerError@test_null_deref.d(7): Null pointer error
----------------
  ??:? onNullPointerError [0x6081275c6bda]
  ??:? _d_nullpointerp [0x6081275a0479]
  ??:? _Dmain [0x608127595b56]
```

>> You are talking about pie-in-the-sky overengineered alternative approaches that I do not have any time to implement at the moment.
> 
> Because there seems to be little data from within the D community, I'm trying to learn how real-world UI applications handle this problem. I'm not asking you to implement them, ideally druntime provides all the necessary tools to easily add appropriate crash handling to your application. My question is whether always executing destructors even in the presence of `nothrow` attributes is a necessary component for this, because this whole discussion seems weirdly specific to D.
> ...

I'd be happy enough to not use any `nothrow` attributes ever, but the language will not let me do that easily, and it may hide in dependencies.

>> We can, make unsafe cleanup elision in `nothrow` a build-time opt-in setting. This is a niche use case.
> 
> The frontend makes assumptions based on nothrow.

Yes, it infers `nothrow` by itself with no way to turn it off and then makes wrong assumptions based on it. It's insane. opend does not do this.

> For example, when a constructor calls a nothrow function, it assumes the destructor doesn't need to be called, which affects the AST as well as attribute inference (for example, the constructor can't be @safe if it might call a @system field destructor because of an Exception).
> 
> But also, I thought the whole point of nothrow was better code generation. If it doesn't do that, it can be removed as far as I'm concerned.
> ...

I don't really need it, but compiler-checked documentation that a function has no intended exceptional control path is not the worst thing in the world.

>> it is somehow in the name of efficiency.
> 
> It's an interesting question of course to see how much it actually matters for performance. I tried removing `nothrow` from dmd itself, and the (-O3) optimized binary increased by 54 KiB in size, but I accidentally also removed a "nothrow" string somewhere, causing some errors, so I haven't benchmarked a time difference yet. It would be interesting to get some real world numbers here.
> 
> I hope that clarifies some things, tell me if I missed something important.
> 

My binaries are megabytes in size. I really doubt lack of `nothrow` is a significant culprit, and the binary size is not even a problem. In any case, doing this implicitly anywhere is the purest form of premature optimization. Correctness trumps minor performance improvements.
6 hours ago
On Sunday, 6 July 2025 at 16:21:59 UTC, Walter Bright wrote:
> It's not reasonable if the software is controlling the radiation dosage on a Therac-25, or is empowered to trade your stocks, or is flying a 747.

The amount of software written by volume that falls into this category is minuscule. At best.

> Executing code after the program crashes is always a risk, and the more code that is executed, the more risk. If your software is powering the remote for a TV, there aren't any consequences for failure.

This is the vast majority of software written in general, and in D specifically.

In general, there is no value in enforcing the strictures of the former onto the latter. It's a business decision for them, and they simply don't need to pay that cost.

You won't make that software any better, because nobody will use your language to write it; they'll use something that has the sane default (for them).

You can't improve the world's overall software quality by throwing a hissy-fit when they won't do it your perfect way, because they'll just walk away and use something else altogether.

Is it not better to get some improvements out there in general use, even if the result is less than perfect?

You have just discovered another one of D's "big back doors" in terms of adoption. You're being unreasonable and people just quietly leave to find something reasonable.
1 hour ago

On Sunday, 6 July 2025 at 16:21:59 UTC, Walter Bright wrote:

> Timon's method is reasonable as his particular situation requires it, and is an example of how flexible D's response to failures can be customized.
>
> It's not reasonable if the software is controlling the radiation dosage on a Therac-25, or is empowered to trade your stocks, or is flying a 747.
>
> Executing code after the program crashes is always a risk, and the more code that is executed, the more risk. If your software is powering the remote for a TV, there aren't any consequences for failure.

The optimal behavior varies with the context, so the programmer should decide. The default behavior should, IMO, favor safety.
