June 04, 2022

On 6/4/22 9:43 PM, Steven Schveighoffer wrote:

>

But I think you are still not supposed to continue execution. I'm not sure what a compiler might assume at this point, and I unfortunately can't find in the language specification where it states this. It might not be in there at all, the spec is sometimes lacking compared to the implementation.

BTW, I think this is the main reason why it keeps coming up for D learners, and why I wrote the article in the first place. It would be good (if it's not already in the spec) to have something mentioned about the pitfalls of catching Errors.

-Steve

June 05, 2022

On Saturday, 4 June 2022 at 22:03:08 UTC, kdevel wrote:

>

On Saturday, 4 June 2022 at 14:05:14 UTC, Paul Backus wrote:

>

This is entirely a question of API design. If it should be the caller's responsibility to check for some condition before calling the function, then you can throw an Error when that condition does not hold (or more likely, use an assert or an in contract to check for it).

Provided one does not catch Errors, this means one has to isolate such an API design in a subprocess. This is what one usually tries to avoid.

See here:

http://joeduffyblog.com/2016/02/07/the-error-model/#bugs-arent-recoverable-errors

And also the following section:

http://joeduffyblog.com/2016/02/07/the-error-model/#reliability-fault-tolerance-and-isolation
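The process-isolation approach being discussed could be sketched like this (in Go, since that is the comparison running through this thread; the `-worker` flag and the worker function are hypothetical, just to show the shape):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// doFragileWork stands in for an API whose failure mode is process
// death; it only ever runs in the child process.
func doFragileWork() {
	fmt.Println("worker ok")
}

// runIsolated re-executes the current binary with a hypothetical
// "-worker" flag so the fragile code runs in its own address space:
// if it crashes, it cannot corrupt the parent's memory.
func runIsolated() (string, error) {
	out, err := exec.Command(os.Args[0], "-worker").Output()
	return string(out), err
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "-worker" {
		doFragileWork()
		return
	}
	if out, err := runIsolated(); err != nil {
		fmt.Println("worker crashed, parent still alive")
	} else {
		fmt.Print("parent got: ", out)
	}
}
```

This buys the isolation Duffy describes, at the cost of the IPC and serialization overhead that people in this thread object to.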

> >

If it should be the callee's responsibility to check, you should throw an Exception (or use enforce).

If the library always throws exceptions, it can be used in both API "designs". In the case that the implementor of the caller expects Errors instead of Exceptions, she could use a small wrapper which catches the Exceptions and rethrows them as Errors. Likewise for error codes.

Using contracts and invariants impedes this approach.

See here:

https://bloomberg.github.io/bde-resources/pdfs/Contracts_Undefined_Behavior_and_Defensive_Programming.pdf
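The wrapper idea above could be sketched as follows (again in Go terms, where an error value plays the role of an Exception and a panic plays the role of an Error; `parsePort` is a made-up library function):

```go
package main

import (
	"errors"
	"fmt"
)

// parsePort is a library-style function that always reports failure
// as a recoverable error value (the "always throw exceptions" design).
func parsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil || p < 1 || p > 65535 {
		return 0, errors.New("invalid port: " + s)
	}
	return p, nil
}

// mustParsePort is the thin wrapper for callers who prefer the other
// API design: it rethrows the recoverable error as a hard failure.
func mustParsePort(s string) int {
	p, err := parsePort(s)
	if err != nil {
		panic(err) // escalate at the boundary, caller's choice
	}
	return p
}

func main() {
	fmt.Println(parsePort("8080"))
	fmt.Println(parsePort("zzz"))
}
```

The point is that the escalation direction (error value to hard failure) is cheap to wrap, whereas the reverse direction is exactly what contracts and asserts make impossible.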

June 05, 2022

On Sunday, 5 June 2022 at 00:40:26 UTC, Ali Çehreli wrote:

>

Errors are thrown when the program is discovered to be in an invalid state. We don't know what happened and when. For example, we don't know whether the memory has been overwritten by some rogue code.

That is not very probable in 100% @safe code. You are basically saying that D cannot compete with Go and other «safe» languages. Dereferencing a null pointer usually means that some code failed to create an instance and check for it.

My code can detect that the failure is local, under the assumption that the runtime isn't a piece of trash.

>

What happened? What can we assume? We don't know, and we cannot assume any state.

So D will never be able to provide actors and provide fault tolerance.

>

Is the service in a usable state?

Yes, the actor code failed. The actor code changes frequently, not the runtime kernel.

>

Possibly. Not shutting down might produce incorrect results. Do we prefer up but incorrect or dead?

I prefer that the service keeps running: chat service, game service, data delivered with a hashed «checksum». Not all software is a database engine where you have to pessimize about bugs in the runtime kernel.

If the data delivered is good enough for the client and better than nothing then the service should keep running!!!

>

I hope there is a way of aborting the program when there are invariant

Invariants are USUALLY local. I don't write global spaghetti code. As a programmer you should be able to distinguish between local and global failure.

You are assuming that the programmer is incapable of making judgements. That is assuming way too much.

June 05, 2022

On Sunday, 5 June 2022 at 03:43:16 UTC, Paul Backus wrote:

>

See here:

https://bloomberg.github.io/bde-resources/pdfs/Contracts_Undefined_Behavior_and_Defensive_Programming.pdf

Not all software is a banking application. If an assert fails, that means that the program logic is wrong, not that the program is in an invalid state. (Invalid state is a stochastic consequence, and detection can happen much later.)

So that means that you should just stop the program. It means that you should shut down all running instances of that program on all computers across the globe. That is the logical consequence of this perspective, and it makes sense for a banking database.

It does not make sense for the constructor of Ants in a computer game service.

It is better to have an enjoyable game delivered with fewer ants than a black screen all weekend.

You can make the same argument for an interpreter: if an assert fails in the interpreter code, then that could be a fault in the interpreter; therefore you should shut down all programs being run by that interpreter.

The reality is that software is layered. Faults at different layers should have different consequences at the discretion of a capable programmer.

June 05, 2022

On Sunday, 5 June 2022 at 06:31:42 UTC, Ola Fosheim Grøstad wrote:

>

On Sunday, 5 June 2022 at 00:40:26 UTC, Ali Çehreli wrote:
That is not very probable in 100% @safe code. You are basically saying that D cannot compete with Go and other «safe» languages.

Go has panic. Other languages have similar constructs.

>

So D will never be able to provide actors and provide fault tolerance.

I guess it depends on the programmer.

> >

Is the service in a usable state?

Yes, the actor code failed. The actor code changes frequently, not the runtime kernel.

The actor code is free to call anything, including @system code.

How would the actor framework know when an error is due to a silly bug in an isolated function or if it is something more serious?

June 05, 2022

On Sunday, 5 June 2022 at 07:21:18 UTC, Ola Fosheim Grøstad wrote:

>

You can make the same argument for an interpreter: if an assert fails in the interpreter code then that could be a fault in the interpreter therefore you should shut down all programs being run by that interpreter.

Typo: if an assert fails in the interpreted code, then that could be a sign that the interpreter itself is flawed. Should you then stop all programs run by that interpreter?

The point: in the real world you need more options than the nuclear option. Pessimizing everywhere does not give higher satisfaction to the end user.

(Iphone keyboard)

June 05, 2022

On Sunday, 5 June 2022 at 07:28:52 UTC, Sebastiaan Koppe wrote:

>

Go has panic. Other languages have similar constructs.

And recover.
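What "recover" buys an actor framework might be worth spelling out. A minimal sketch (the `runActor` wrapper and the buggy work function are hypothetical, just to illustrate the containment):

```go
package main

import "fmt"

// runActor executes one actor's work function and converts any panic
// into an ordinary error, so a bug in one actor cannot take down the
// whole server: this is exactly what catching Error would do in D.
func runActor(work func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("actor failed: %v", r)
		}
	}()
	work()
	return nil
}

func main() {
	// A buggy actor: the out-of-bounds access panics inside the actor only.
	err := runActor(func() {
		s := []int{1, 2, 3}
		i := 5
		_ = s[i] // index out of range: panics, caught by recover above
	})
	fmt.Println(err) // actor failed: runtime error: index out of range ...
	fmt.Println("service still running")
}
```

The supervisor can then restart the actor or drop the request, which is the fault-tolerance model being argued for in this thread.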

> >

So D will never be able to provide actors and provide fault tolerance.

I guess it depends on the programmer.

But it isn't up to the programmer if you cannot prevent an Error from propagating.

> > >

Is the service in a usable state?

Yes, the actor code failed. The actor code changes frequently, not the runtime kernel.

The actor code is free to call anything, including @system

Via @trusted code. How is this different from FFI in other languages? As a programmer you make a judgment. Is the D argument that the programmer must be prevented from making a judgment?

>

How would the actor framework know when an error is due to a silly bug in an isolated

How can I know that a failure in Python code isn’t caused by C

There is no difference in the situation. I make a judgment as a programmer.

June 05, 2022

On Sunday, 5 June 2022 at 00:18:43 UTC, Adam Ruppe wrote:

>

Run it in a separate process with minimum shared memory.

That is a workaround that makes other languages more attractive. It does not work when I want to have 1000+ actors in my game server (which at this point most likely will be written in Go, sadly).

So a separate process is basically a non-solution. At this point Go seems to be the best technology of all the bad options! A pity, as it is not an enjoyable language IMO, but the goals are more important than the means…

The problem here is that people are running an argument as if most D software is control software for chemical processes or database kernels. Then people quote writings on safety measures that evolved in the context/mindset of control software in the 80s and 90s. That makes no sense when only Funkwerk (and possibly one or two others) write such software in D.

The reality is, most languages call C libraries and have C code in their runtime, under the assumption that those C libraries and runtimes have been hardened and proven reliable, with a low probability of failure.

Correctness is probabilistic, even in the case of 100% verified code, as there is a possibility that the spec is wrong.

Reliability measures depend on the context of use. What «reliable» means depends on skilled judgment applied when evaluating the software in that context; «reliable» is not a context-independent absolute.

June 05, 2022
On Sunday, 5 June 2022 at 10:38:44 UTC, Ola Fosheim Grøstad wrote:
> That is a workaround that makes other languages more attractive.

It is what a lot of real world things do since it provides additional layers of protection while still being pretty easy to use.

> *Correctness **is** probabilistic.* Even in the case of 100% verified code, as there is a possibility that the spec is wrong.

I'm of the opinion that the nothrow implementation right now is misguided. It is a relatively recent change to dmd (merged December 2017).

My code did and still does simply catch Error and proceed. Most Errors are completely harmless; RangeError, for example, is thrown *before* the out-of-bounds write, meaning it prevented the damage, not just notified you of it. It was fully recoverable in practice before that silly Dec '17 dmd change, and tbh still is after it in a lot of code.
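Go's bounds check has the same preventative character being described for D's RangeError, which may make the point concrete: the check fires before any memory is touched, so recovering leaves the data intact.

```go
package main

import "fmt"

func main() {
	data := []int{1, 2, 3}

	func() {
		// Contain the bounds panic; the slice was never written to.
		defer func() { recover() }()
		i := 10
		data[i] = 99 // the bounds check fires BEFORE any write happens
	}()

	fmt.Println(data) // still [1 2 3]: the check prevented the damage
}
```

The error here reports a write that was stopped, not one that already corrupted memory, which is the basis of the "fully recoverable in practice" claim.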

If it was up to me, that patch would be reverted and the spec modified to codify the old status quo. Or perhaps redefine RangeError into RangeException, OutOfMemoryError as OutOfMemoryException, and such for the other preventative cases and carry on with joy, productivity, and correctness.
June 05, 2022

On Sunday, 5 June 2022 at 11:13:48 UTC, Adam D Ruppe wrote:

>

On Sunday, 5 June 2022 at 10:38:44 UTC, Ola Fosheim Grøstad wrote:

>

That is a workaround that makes other languages more attractive.

It is what a lot of real world things do since it provides additional layers of protection while still being pretty easy to use.

Yes, it is not a bad solution in many cases. It is a classic solution for web servers, but web servers typically don't retain a variety of mutable state of many different types (a web server can do well with just a shared memcache).

>

My code did and still does simply catch Error and proceed. Most Errors are completely harmless; RangeError, for example, is thrown before the out-of-bounds write, meaning it prevented the damage, not just notified you of it. It was fully recoverable in practice before that silly Dec '17 dmd change, and tbh still is after it in a lot of code.

Yes, this is pretty much how I write validators of input in Python web services. I don't care if the validator failed or if the input failed, in either case the input has to be stopped, but the service can continue. If there is a suspected logic failure, log and/or send notification to the developer, but for the end user it is good enough that they "for now" send some other input (e.g. avoiding some unicode letter or whatever).
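The validator pattern described above could look like this (sketched in Go rather than Python to stay with the thread's running comparison; the concrete checks are made up):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// validate rejects an input both when the input fails the checks and
// when the validator code itself panics: either way this one request
// is dropped, and the service keeps handling other requests.
func validate(input string) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false // validator bug: log for the developer, reject input
		}
	}()
	return utf8.ValidString(input) && len(input) <= 1024
}

func main() {
	fmt.Println(validate("hello"))              // true
	fmt.Println(validate(string([]byte{0xff}))) // false: invalid UTF-8
}
```

Whether the false came from bad input or a buggy check is irrelevant to the end user; it only matters to the developer reading the logs.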

>

Or perhaps redefine RangeError into RangeException, OutOfMemoryError as OutOfMemoryException, and such for the other preventative cases and carry on with joy, productivity, and correctness.

For a system-level language such decisions ought to be in the hands of the developer, so that he can write his own runtime. Maybe some kind of transformers, so that libraries can produce Errors but have them transformed to something else at boundaries.

If I want to write an actor-based runtime and do all my application code as 100% @safe actors that are fully «reliable», then that should be possible in a system level language.

The programmer's definition and evaluation of «reliability» in the context of a «casual game server» should carry more weight than some out-of-context reasoning about «computer science» (which it actually isn't).