Arafel 
On 14.04.25 16:22, Richard (Rikki) Andrew Cattermole wrote:
> On 15/04/2025 1:51 AM, Atila Neves wrote:
>>
>> I would like to know why one would want this.
>
> Imagine you have a web server that is handling 50k requests per second.
>
> It makes you $1 million a day.
>
> In it, you accidentally have some bad business logic that results in a null dereference or indexing a slice out of bounds.
>
> It kills the entire server, losing you potentially the full $1 million before you can fix it.
>
> How likely are you to keep using D, or willing to talk about using D positively afterwards?
>
> ASP.NET guarantees that this will kill the task and return the right response code. No process death.
>
I won't get into the merits of the feature itself, but I have to say that this example is poorly chosen, to say the least. In fact, it looks to me like a case of "when all you have is a hammer, everything looks like a nail": not everything should be handled by the application itself.
As somebody coming from the "ops" side of "devops", let me tell you that there is a wide range of tools you should be using **on top of your application** if you have an app that makes you $1M a day, including but not limited to:
* A monitoring process that makes sure the server is running (and healthy). Among its tasks: making sure that, in case of a failure, the main process is fully stopped (killing any leftover tasks, removing lock files, ensuring data sanity, etc.) before restarting the main server again; see the sketch after this list.
* An HA system routing queries to a pool of several servers that are regularly polled for health status, assuming failures happen seldom enough that they are very unlikely to affect several backend servers at the same time.
* Some meatbag on-call 24/7 (or even on-site) who can at the very least restart the affected server (including the hardware) if it comes to that.
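To make the first point concrete, here is a minimal sketch of such a watchdog in D (the binary name `./myserver` and the two-second back-off are placeholders I made up for illustration):

```d
import std.process : spawnProcess, wait;
import std.stdio : stderr;
import core.thread : Thread;
import core.time : seconds;

void main()
{
    while (true)
    {
        // Run the real server as a child process.
        auto pid = spawnProcess(["./myserver"]);
        const status = wait(pid); // blocks until the child exits

        if (status == 0)
            break; // clean shutdown was requested, stop supervising

        // The child died (segfault, uncaught Error, ...). A real
        // supervisor would now kill any leftover tasks, remove lock
        // files and check data sanity before respawning.
        stderr.writeln("server died with status ", status, ", restarting");
        Thread.sleep(2.seconds); // back off a little to avoid a crash loop
    }
}
```

In practice you would usually delegate this to systemd, runit or your orchestrator rather than rolling your own, but the shape is the same: the supervisor, not the application, owns the restart policy.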
I mean, a service can fail for any number of reasons, including hardware issues, and among those, dereferencing a null pointer should rank quite low on the scale of probabilities.
Having a $1M/day operation depend on your application continuing to run after dereferencing a null pointer would seem to me... rather risky and short-sighted.
On top of that, there's the "small" issue that you can't really be sure what state the application has been left in. I certainly wouldn't want to risk any silent data corruption, and would rather kill the process ASAP and start it again from a known good state.
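For what it's worth, this is easy to see in D itself: out-of-bounds indexing throws a `RangeError`, which is an `Error`, not an `Exception`, and the language doesn't guarantee that destructors or `scope` guards run while an `Error` propagates. A minimal sketch of why catching one and carrying on is a gamble:

```d
import core.exception : RangeError;
import std.stdio : writeln;

struct Resource
{
    // With an Error in flight, this destructor is not guaranteed to run,
    // so any lock, file or buffer owned here may be left dangling.
    ~this() { writeln("cleanup ran"); }
}

void handle(int[] data, size_t i)
{
    auto r = Resource();
    writeln(data[i]); // out of bounds: throws RangeError in non-release builds
}

void main()
{
    try
    {
        handle([1, 2, 3], 10);
    }
    catch (RangeError e)
    {
        // The catch compiles and may even "work", but the spec gives no
        // guarantee the stack was properly unwound on the way here, so the
        // program state is now suspect: exactly the silent-corruption risk.
        writeln("caught: ", e.msg);
    }
}
```

(And a null dereference on most platforms is just a segfault, which no `catch` will see at all.)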
Again, I'm not arguing for or against the feature itself; I just think this example doesn't do it any favors.