3 days ago
On Saturday, 5 July 2025 at 06:57:21 UTC, Jonathan M Davis wrote:
> On Friday, July 4, 2025 5:09:27 PM Mountain Daylight Time Timon Gehr via Digitalmars-d wrote:
>> A destructor can do anything, not just call `free`. Not calling them is way more likely to leave behind an unexpected state than even the original error condition. The state can be perfectly fine, it's just that the code that attempted to operate on it may be buggy.
>
> [...]
>
> So, yeah, there's no reason to assume that destructors have anything to do with allocating or freeing anything. They're just functions that are supposed to be guaranteed to be run when a variable of that type is destroyed. They can be thought of as just being another form of scope(exit) except that they're tied to the type itself and so every object of that type gets that code instead of the programmer having to type it out wherever they want it.
>
> - Jonathan M Davis

Absolutely. In today's distributed world, that hourglass could also be something remote, leading to downstream issues.

For example, it is not uncommon for key-value stores to support a lock operation. You will want your program to try unlocking it during shutdown.
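A minimal sketch of that idea in D, where `KVClient` and its `lock`/`unlock` methods are hypothetical stand-ins for a real key-value store client:

```d
// RAII guard for a remote lock: the destructor unlocks, so the lock is
// released on any scope exit, including stack unwinding.
import std.stdio;

struct KVClient
{
    string[] log;                       // records calls so the demo can check them
    void lock(string key)   { log ~= "lock:" ~ key; }
    void unlock(string key) { log ~= "unlock:" ~ key; }
}

struct RemoteLock
{
    KVClient* client;
    string key;

    this(KVClient* c, string k) { client = c; key = k; client.lock(key); }
    ~this() { client.unlock(key); }     // runs on scope exit, including unwinding
    @disable this(this);                // a guard should not be copied
}

void main()
{
    KVClient kv;
    {
        auto guard = RemoteLock(&kv, "inventory");
        // ... work under the lock ...
    } // destructor releases the remote lock here

    assert(kv.log == ["lock:inventory", "unlock:inventory"]);
    writeln("remote lock released on scope exit");
}
```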
3 days ago
On 05/07/2025 6:57 PM, Jonathan M Davis wrote:
> On Friday, July 4, 2025 5:09:27 PM Mountain Daylight Time Timon Gehr via Digitalmars-d wrote:
> 
> 
>     A destructor can do anything, not just call `free`. Not calling them
>     is way more likely to leave behind an unexpected state than even the
>     original error condition. The state can be perfectly fine, it's just
>     that the code that attempted to operate on it may be buggy.
> 
> 
> This is particularly true if RAII is used. For instance, the way that MFC implemented turning the cursor into an hourglass was with RAII, so that you just declared the thing, so when the variable was created, the cursor turned into an hourglass, and when the scope exited, the variable was destroyed, and the cursor went back to normal.
> 
> 
> RAII is used less in D than in C++ (if nothing else, because we have scope statements), but it's a design pattern that D supports, and programmers can use it for all kinds of stuff that has absolutely nothing to do with memory allocations.

Don't forget there is also COM, which if not cleaned up properly will affect other processes including the Windows shell itself.
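The MFC hourglass pattern quoted above translates directly to D RAII. A minimal sketch, with `setCursor` as a hypothetical stand-in for a real UI call:

```d
// RAII wait cursor: declaring the variable switches the cursor; the
// destructor restores it when the scope exits, even via an exception.
import std.stdio;

string currentCursor = "arrow";
void setCursor(string shape) { currentCursor = shape; }

struct WaitCursor
{
    this(int) { setCursor("hourglass"); }  // dummy parameter: D structs have no user-defined default ctor
    ~this()   { setCursor("arrow"); }      // restored on scope exit
    @disable this(this);
}

void longOperation()
{
    auto busy = WaitCursor(0);
    assert(currentCursor == "hourglass");  // busy cursor while working
    // ... slow work ...
} // destructor restores the cursor here

void main()
{
    longOperation();
    assert(currentCursor == "arrow");
    writeln("cursor restored after scope exit");
}
```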
3 days ago

On Friday, 4 July 2025 at 07:13:35 UTC, Adam Wilson wrote:

> On Friday, 4 July 2025 at 06:58:30 UTC, Walter Bright wrote:
>
>> Malware continues to be a problem. I wound up with two on my system last week. Ransomware seems to be rather popular. How does it get on a system?
>
> Ahem. You run Windows 7. That is the sum total of information required to answer your own question. I haven't had a malware attack on my system since Windows 8.1 came out, but I keep my systems running current builds. Yeah, I may have to deal with a bit of graphics driver instability, but I don't get my files locked up for ransom. This has been a solved problem for a decade now.

Currently, ransomware is installed in corporate environments through domain policy deployment, which is compatible with all versions of Windows. This attack vector is also unfixable, because it's a feature, not a bug.

I suspect the graphics driver bugs were introduced by Windows 10 2022; earlier versions don't have them.

3 days ago

On Friday, 4 July 2025 at 23:09:27 UTC, Timon Gehr wrote:

> In principle it can happen. That's a drawback.

Well, in principle the opposite can also happen when we enter a state that's by definition unexpected. I'm looking for real-world data that shows which strategy is best.

> I'd prefer to be able to make this call myself.
> (...)
> There is no upside for me. Whatever marginal gains are achievable by e.g. eliding cleanup in nothrow functions, I don't need them.

I understand, and I'd gladly grant you an easy way to disable nothrow inference or nothrow optimizations. But having D users split into two camps, those who want 'consistent' finally blocks and those who want efficient finally blocks, is a source of complexity that hurts everyone in the long run.

> Anyway, it is easy to imagine a situation where you e.g. have a central registry of all instances of a certain type that you need to update in the destructor so no dangling pointer is left behind.
> (...)
> Not calling them is way more likely to leave behind an unexpected state than even the original error condition.

And this is still where I don't get why your error handler would even want an 'expected' state. I'd design it as defensively as possible: I'm not going to traverse all my program's data structures under the assumption that invariants hold and pointers are still valid. The error could have happened in the middle of rehashing a hash table or rebalancing a tree for all I know. And if that's the case, I'd rather have my crash reporter show me the broken data structure right before the crash than a version that has 'helpfully' been corrected by scope guards or destructors.

3 days ago

On Saturday, 5 July 2025 at 07:07:00 UTC, Jonathan M Davis wrote:

> If a mutex is locked and freed using RAII (or scope statements are used, and any of those are skipped), then you could get into a situation where a lock is not released like it was supposed to be, and then code higher up the stack which does run while the stack is unwinding attempts to get that lock

Why would your crash handler infinitely wait on one of your program's mutexes? I'd design a crash reporter for a UI application as follows:

  • Defensively collect traces/logs up to the point of the crash
  • Store it somewhere
  • Launch a separate process that lets the user easily send the data to the developer
  • Exit the crashed program

I think that's how most of them work (see https://en.wikipedia.org/wiki/Crash_reporter), except that they don't even collect the data inside the crashed program; they let the crash handler attach a debugger like gdb to the process and collect it that way, which is even more defensive.

I still don't see how a missed scope(exit)/destructor/finally block (they're interchangeable in D) failing to put the hourglass cursor back to a normal cursor on the crashed window would hurt the usability of a crash handler, or the quality of the log.
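The flow above can be sketched in D. The file name is made up for illustration, and the separate reporter process is only indicated in a comment:

```d
// Last-resort crash handler following the four steps described above.
static import std.file;
import std.stdio;

void handleCrash(string trace)
{
    // 1. Defensively collect what we can (here: just the message).
    // 2. Store it somewhere.
    std.file.write("crash-report.txt", trace);
    // 3. A real handler would now spawn a separate reporter process, e.g.
    //    spawnProcess(["crash-reporter", "crash-report.txt"]).
    // 4. Exit the crashed program (omitted so this demo can finish normally).
}

void main()
{
    try
        throw new Exception("boom");
    catch (Throwable t)                 // catch everything as a last resort
        handleCrash(t.msg);

    assert(std.file.readText("crash-report.txt") == "boom");
    writeln("crash report written");
}
```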

2 days ago
On 7/4/25 23:21, Timon Gehr wrote:
> On 7/4/25 09:24, Walter Bright wrote:
>>
>> 2. code that is not executed is not vulnerable to attack
> 
> ```d
> void foo(){
>      openDoor();
>      performWork();
>      scope(exit){
>          closeDoor();
>          lockDoor();
>      }
> }
> ```

Should have been:

```d
void foo(){
    scope(exit){
        closeDoor();
        lockDoor();
    }
    openDoor();
    performWork();
}
```
2 days ago
On Saturday, July 5, 2025 7:30:01 AM Mountain Daylight Time Dennis via Digitalmars-d wrote:
> On Saturday, 5 July 2025 at 07:07:00 UTC, Jonathan M Davis wrote:
> > If a mutex is locked and freed using RAII (or scope statements are used, and any of those are skipped), then you could get into a situation where a lock is not released like it was supposed to be, and then code higher up the stack which does run while the stack is unwinding attempts to get that lock
>
> Why would your crash handler infinitely wait on one of your program's mutexes? I'd design a crash reporter for a UI application as follows:
>
> - Defensively collect traces/logs up to the point of the crash
> - Store it somewhere
> - Launch a separate process that lets the user easily send the
> data to the developer
> - Exit the crashed program
>
> I think that's how most work on
> https://en.wikipedia.org/wiki/Crash_reporter
> Except that they don't even collect the data inside the crashed
> program, but let the crash handler attach a debugger like gdb to
> the process and collect it that way, which is even more defensive.
>
> I still don't see how a missed scope(exit)/destructor/finally block (they're interchangeable in D) not putting the hourglass cursor back to a normal cursor on the crashed window would hurt the usability of a crash handler, or the quality of the log.

Arbitrary D programs aren't necessarily using crash handlers, and the way that Errors work affects all D programs. Also, the fact that Errors unwind the stack at all actually gets in the way of crash handlers, because it throws away program state. For instance, a core dump won't give you where the program was when the error condition was hit like it would with a segfault, and a D program that throws an Error doesn't even give you a core dump, because it still exits normally - just with a non-zero error code.

Honestly, I don't think that it makes any sense whatsoever for Errors to be Throwables and yet not have all of the stack unwinding code run properly.

If an Error is such a terrible condition that we don't even want the stack unwinding code to be run properly, then instead of throwing anything, the program should have just printed out a stack trace and aborted right then and there, which would avoid running any code that might cause any problems while shutting down and giving crash handlers the best opportunity to get information about the state of the process at the point of the error, because the program would have terminated at that point.

On the other hand, unwinding the stack and running all of the cleanup code gives the program a chance to terminate more gracefully as well as to get information about the state of the program as it unwinds, which can help programmers debug what went wrong and get information on how the program got to where it was when the error condition occurred. And for that to work at all safely, the cleanup code needs to be run.

The logic of the language rules potentially falls apart if the cleanup code is skipped, and the logic that the programmer intended _definitely_ falls apart at that point, because the language rules are written around the idea that the cleanup code is run, and code in general is going to have been written with the assumption that the cleanup code will all have been run properly. And that could affect whether code is memory safe, because code that's normally guaranteed to run wouldn't run. It would be very easy for a decision to have been made about whether something was memory safe based on the assumption that all of the code that's normally guaranteed to run would have run (be it an assumption built into the language itself and @safe or an assumption that the programmer relied on to ensure that it was reasonable to mark their code as @trusted).

If we skip _any_ cleanup mechanisms while unwinding the stack, we're throwing normal language guarantees out the window and skipping code that could have been doing just about anything that that program relied on for proper operations (be it logging, cleaning up files, communicating with another service about it shutting down, etc.). We don't know what programmers decided to do in any of that code, but it was code that they wanted run when the stack was unwound, because that's what that code was specifically written for. Sure, maybe in some cases, if they'd thought about it, the programmer would have preferred that some of it be skipped with an Error as opposed to an Exception, but aside from catch(Exception) vs catch(Error), we don't have a way to distinguish that. And I think that in the general case, code is simply written with the idea that cleanup code will be run whenever the stack is unwound, since that's the point of it.

Either way, by skipping any cleanup code, we're putting the program into an invalid state and risking that whatever code does run during shutdown then behaves incorrectly. And just because an Error was thrown doesn't even necessarily mean that any of that code was in an invalid state. It could have simply been that there was a bug which resulted in a bad index, and then a RangeError was thrown before anything bad could actually happen. So, the Error actually prevented a problem from happening, and then if the clean up code is skipped, it proceeds to cause problems by skipping code that's supposed to run when the stack unwinds.

I can understand not wanting any stack unwinding code to run if an Error occurs on the theory that the condition is bad enough that there's a risk that some of what the stack unwinding code would do would make the situation worse, but IMHO, then we shouldn't even have Errors. We should have just terminated the program and thrown nothing, both avoiding running any of that code and giving crash handlers their best chance at getting information on the program's state. But since we do have Errors, and they're Throwables, the program should actually run the cleanup code properly and attempt to shutdown as cleanly as it can. Trying to both throw Errors and skip the cleanup code is the worst of both worlds, and I don't see how it makes any sense whatsoever.

And maybe we should make the behavior configurable so that programmers can choose which they want rather than mandating that it work one way or the other, but what we have right now is stuck in a very bizarre place in the middle where we throw Errors and run _most_ of the cleanup code, but we don't run all of it.

- Jonathan M Davis

2 days ago

On Sunday, 6 July 2025 at 02:08:43 UTC, Jonathan M Davis wrote:

> If an Error is such a terrible condition that we don't even want the stack unwinding code to be run properly, then instead of throwing anything, the program should have just printed out a stack trace and aborted right then and there (...)

Yes! Hence the proposal in the opening thread.

> I can understand not wanting any stack unwinding code to run if an Error occurs on the theory that the condition is bad enough that there's a risk that some of what the stack unwinding code would do would make the situation worse, but IMHO, then we shouldn't even have Errors.

Yes! If it were up to me, Error would have been removed from D yesterday. But there's pushback because users apparently rely on it, and I can't figure out why. From my perspective, a distilled version of the conversation here is:

Proposal: make default assert handler 'log + exit'

> That's bad! In UI applications users can't report the log when the program exits

Then use a custom assert handler?

> I do, but the compiler needs to ignore `nothrow` for it to work

What is your handler doing that it needs that?

> log + system("pause") + exit

Why does that depend on cleanup code being run?

> ...

And this is where I get nothing concrete, only that 'in principle' it's more correct to run the destructors because that's what the programmer intended. I find this unconvincing because we're talking about unexpected error situations, appealing to 'the correct intended code path according to principle' is moot because we're not in an intended situation.

What would be convincing is if someone came forward with a real example "this is what my destructors and assert handler do, because the cleanup code was run the error log looked like XXX instead of YYY, which saved me so many hours of debugging!". But alas, we're all talking about vague, hypothetical scenarios which you can always create to support either side.

> And maybe we should make the behavior configurable so that programmers can choose which they want rather than mandating that it work one way or the other

Assert failures and range errors just call a function, and you can already swap that function out for whatever you want through various means. The thing that currently isn't configurable is whether the compiler considers that function nothrow.
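For reference, that swappable function is druntime's `core.exception.assertHandler`. A minimal 'log + exit' sketch (the exit code 0 is only so this demo terminates cleanly; a real handler would use a nonzero code):

```d
// Replace the default assert handler: on failure, log and exit without
// throwing AssertError or unwinding the stack.
import core.exception : assertHandler;
import core.stdc.stdlib : exit;
import core.stdc.stdio : printf;

void logAndExit(string file, size_t line, string msg) nothrow
{
    printf("assert failed at %.*s:%d: %.*s\n",
           cast(int) file.length, file.ptr, cast(int) line,
           cast(int) msg.length, msg.ptr);
    exit(0);    // demo only; a real handler would exit with a nonzero code
}

void main()
{
    assertHandler = &logAndExit;
    assert(false, "demonstration failure");  // calls logAndExit instead of throwing
}
```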

The problem with making that an option is that this affects nothrow inference, which affects mangling, which results in linker errors. In general, adding more and more options like that just explodes the complexity of the compiler and ruins compatibility. I'd like to avoid it if we can.

2 days ago
On 07/07/2025 1:54 AM, Dennis wrote:
> On Sunday, 6 July 2025 at 02:08:43 UTC, Jonathan M Davis wrote:
>> If an Error is such a terrible condition that we don't even
>> want the stack unwinding code to be run properly, then instead of throwing anything, the program should have just printed out a stack trace and aborted right then and there (...)
> 
> Yes! Hence the proposal in the opening thread.
> 
>> I can understand not wanting any stack unwinding code to run if an Error occurs on the theory that the condition is bad enough that there's a risk that some of what the stack unwinding code would do would make the situation worse, but IMHO, then we shouldn't even have Errors.
> 
> Yes! If it were up to me, Error would have been removed from D yesterday. But there's pushback because users apparently rely on it, and I can't figure out why. From my perspective, a distilled version of the conversation here is:
> 
> Proposal: make default assert handler 'log + exit'

And then there are contracts, which apparently need to catch AssertError.

Which killed that particular idea.

It's one thing to break code; it's another to break a language feature.

>> That's bad! In UI applications users can't report the log when the program exits
> 
> Then use a custom assert handler?
> 
>> I do, but the compiler needs to ignore `nothrow` for it to work
> 
> What is your handler doing that it needs that?
> 
>> log + system("pause") + exit
> 
> Why does that depend on cleanup code being run?
> 
>> ...
> 
> And this is where I get nothing concrete, only that 'in principle' it's more correct to run the destructors because that's what the programmer intended. I find this unconvincing because we're talking about unexpected error situations, appealing to 'the correct intended code path according to principle' is moot because we're not in an intended situation.
> 
> What would be convincing is if someone came forward with a real example "this is what my destructors and assert handler do, because the cleanup code was run the error log looked like XXX instead of YYY, which saved me so many hours of debugging!". But alas, we're all talking about vague, hypothetical scenarios which you can always create to support either side.

The only way to get this is to implement the changes needed.

However we can't change the default without evidence that it is both ok to do and preferable.

>> And maybe we should make the behavior configurable so that programemrs can choose which they want rather than mandating that it work one way or the other
> 
> Assert failures and range errors just call a function, and you can already swap that function out for whatever you want through various means. The thing that currently isn't configurable is whether the compiler considers that function `nothrow`.

You also can't configure how the unwinder works. It isn't just "one function". There are multiple implementations, and it's entire modules, not just ones in core.

> The problem with making that an option is that this affects nothrow inference, which affects mangling, which results in linker errors. In general, adding more and more options like that just explodes the complexity of the compiler and ruins compatibility. I'd like to avoid it if we can.

Why would it affect inference?

Leave the frontend alone.

Do this in the glue layer: if the flag is set and the compiler flag is set to a specific value, don't add unwinding.
2 days ago
On 7/6/25 15:54, Dennis wrote:
> On Sunday, 6 July 2025 at 02:08:43 UTC, Jonathan M Davis wrote:
>> If an Error is such a terrible condition that we don't even
>> want the stack unwinding code to be run properly, then instead of throwing anything, the program should have just printed out a stack trace and aborted right then and there (...)
> 
> Yes! Hence the proposal in the opening thread.
> ...

It's a breaking change. When I propose these, they are rejected on the grounds of being breaking changes.

I think unwinding is the only sane approach for almost all use cases.

The least insane alternative approach is indeed to abort immediately when an error condition occurs (with support for a global hook to run before termination no matter what causes it), but that is just not a panacea.

>> I can understand not wanting any stack unwinding code to run if an Error occurs on the theory that the condition is bad enough that there's a risk that some of what the stack unwinding code would do would make the situation worse, but IMHO, then we shouldn't even have Errors.
> 
> Yes! If it were up to me, Error would have been removed from D yesterday. But there's pushback because users apparently rely on it, and I can't figure out why.

Because it actually works and it is the path of least resistance. It's always like this. You can propose alternative approaches all you like, the simple fact is that this is not what the existing code is doing.

> From my perspective, a distilled version of the conversation here is:
> 
> Proposal: make default assert handler 'log + exit'
> 
>> That's bad! In UI applications users can't report the log when the program exits
> 
> Then use a custom assert handler?
> ...

Asserts are not the only errors. I should not have to chase down all the different and changing ways that the D language and runtime will try to ruin my life, where for all I know some may not even have hooks. `Throwable` is a nice generic indicator of "something went wrong".

Contracts rely on catching assert errors. Therefore, a custom handler may break dependencies and is not something I will take into account in any serious fashion.

Also, having to treat uncaught exceptions and errors differently by default is busywork. I have never experienced a situation where unwinding caused additional issues, but I have experienced multiple instances where lack of unwinding caused a lot of pain.

A nice thing about stack unwinding is that you can collect data in places where it is in scope. In some assert handler that is devoid of context you can only collect things you have specifically and manually deposited in some global variable prior to the crash.

I don't want to write my program such that it has to do additional bookkeeping for something that happens at most once every couple of months across all users and be told it is somehow in the name of efficiency.

Also it seems you are just ignoring arguments about rollback that resets state that is external to your process.

>> I do, but the compiler needs to ignore `nothrow` for it to work
> 
> What is your handler doing that it needs that?
> 
>> log + system("pause") + exit
> 
> Why does that depend on cleanup code being run?
> ...

Because it is itself in an exception handler that catches Throwable, or in a scope guard. Whatever variables are referenced in "log" are likely not available in a hook function.

>> ...
> 
> And this is where I get nothing concrete, only that 'in principle' it's more correct to run the destructors because that's what the programmer intended. I find this unconvincing because we're talking about unexpected error situations, appealing to 'the correct intended code path according to principle' is moot because we're not in an intended situation.
> ...

The language does not have to give up on its own promises just because the user made an error. It's an inadmissible conflation of different abstraction levels and it is really tiring fallacious reasoning that basically goes: Once one thing went wrong, we are allowed to make everything else go wrong too.

Let's make 2+2=3 within a `catch(Throwable){ ... }` handler too, because why not, nobody whose program has thrown an error is allowed to expect any sort of remaining sanity.

> What would be convincing is if someone came forward with a real example "this is what my destructors and assert handler do, because the cleanup code was run the error log looked like XXX instead of YYY, which saved me so many hours of debugging!".

It's not about saving hours of debugging, it's getting information that allows reproducing the crash in the first place.

I don't actually write programs that crash every time, or even crash frequently. I want to reduce crashes from almost never to never, not from frequently to a bit less frequently.

As things stand, I just save the interaction log in a `scope(failure)` statement, on the level of unwound stack where that interaction log is in scope.
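A minimal sketch of that pattern (the file name is made up for illustration): the log is saved from the stack frame where it is in scope, as the error unwinds through it.

```d
// The interaction log lives on the stack; scope(failure) saves it while
// the thrown Error unwinds through runSession.
static import std.file;
import std.stdio;

void runSession()
{
    string interactionLog;
    scope(failure) std.file.write("session.log", interactionLog);

    interactionLog ~= "clicked button\n";
    interactionLog ~= "opened dialog\n";
    assert(false, "simulated crash");   // unwinding triggers the scope guard
}

void main()
{
    try runSession();
    catch (Throwable) {}                // keep the demo running after the Error

    assert(std.file.readText("session.log") == "clicked button\nopened dialog\n");
    writeln("interaction log saved during unwinding");
}
```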

It is indeed the case that I have not run into an issue with destructors (but not `scope(exit)`/`scope(failure)`/`finally`) being skipped in practice, but this is because they were not skipped. It caused zero issues for them to not be skipped.

Adding a subtle semantic difference between destructors and other scope guards I think is just self-evidently bad design, on top of breaking people's code.

> But alas, we're all talking about vague, hypothetical scenarios which you can always create to support either side.
> ...

I am talking about actual pain I have experienced, because there are some cases where unwinding will not happen, e.g. null dereferences.

You are talking about pie-in-the-sky overengineered alternative approaches that I do not have any time to implement at the moment.

Like, do you really want me to have to start a separate process, somehow dump the process memory and then try to reconstruct my internal data structures and data on the stack from there? It's a really inefficient workflow, and just doing the unwinding _works now_ and gives me everything I need.

For all I know, Windows Defender will interfere with this and then I get nothing again.

>> And maybe we should make the behavior configurable so that programemrs can choose which they want rather than mandating that it work one way or the other
> 
> Assert failures and range errors just call a function, and you can already swap that function out for whatever you want through various means. The thing that currently isn't configurable is whether the compiler considers that function `nothrow`.
> ...

Yes, these are functions. They have no context.

> The problem with making that an option is that this affects nothrow inference, which affects mangling, which results in linker errors. In general, adding more and more options like that just explodes the complexity of the compiler and ruins compatibility. I'd like to avoid it if we can.
> 

We can: make unsafe cleanup elision in `nothrow` a build-time opt-in setting. This is a niche use case.