Timon Gehr
Posted in reply to Walter Bright
On 1/1/23 19:18, Walter Bright wrote:
> On 12/31/2022 7:06 PM, Timon Gehr wrote:
>> No, it absolutely, positively does not... It only ensures no null dereference takes place on each specific run. You can have screwed it up and only notice once the program is published. I know this happens because I have been a _user_ of software with this kind of problem. Notably this kind of thing happens in released versions of DMD sometimes...
>
> You're absolutely right. And if I do a pattern match to create a non-nullable pointer, where the null arm does a fatal error if it can't deal with the null, it's the same thing.
> ...
_IF_. It's a very big IF. If you can't deal with the null, it should have been a non-null pointer in the first place. While this is not absolutely _always_ possible without even more powerful language features, it is _usually_ possible. This should be very easy to understand from an "engineering point of view".
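The two positions can be made concrete with a minimal sketch. The thread is about D, but the idea is language-independent, so this uses TypeScript with strict null checking; `findUser`, the ids, and the names are invented for illustration:

```typescript
// A hypothetical lookup that may miss; null models the missing case.
function findUser(id: number): string | null {
  return id === 1 ? "alice" : null; // made-up data for illustration
}

// The pattern-match approach: narrow `string | null` to `string`,
// with a fatal error in the null arm.
function getUserOrDie(id: number): string {
  const name = findUser(id);
  if (name === null) {
    // Equivalent in effect to a segfault: the program dies at runtime.
    throw new Error(`no user with id ${id}`);
  }
  return name; // statically non-null from here on
}

// The counterpoint above: if no caller can meaningfully handle the
// null arm, the API should produce a non-null value in the first
// place, so the fatal arm never needs to be written at all.
function greet(name: string): string {
  // `name` is non-null by construction; no check, no crash path.
  return `hello, ${name}`;
}
```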
> But we've both stated this same thing several times now.
>
>
>> That's great. However, it's somewhat aggravating to me that I am currently not actually convinced you understand what's needed to achieve that. This is because you are making statements that equate nonnull pointers in the type system to runtime hardware checking with segmentation faults.
>
> Yes, I am doing just that.
>
> Perhaps I can state our difference thusly. You are coming from a type theory point of view, and your position is quite right from that point of view.
> ...
I am approaching this from a practical angle, as a user and creator of software.
> I'm not saying you are wrong. You are right. But I am coming from an engineering point of view, saying that for practical purposes, the hardware check produces the same result.
> ...
This is wrong from any point of view that includes occasionally running the software. As a user I have suffered from many bugs whose underlying cause is exceedingly easy to guess as "somebody forgot about null", and a proper type system would obviously have prevented most of them _during the initial design of the system, when everyone's memory of the code base was still fresh_.
If your software crashes with a fatal segmentation fault, that's an engineering failure. Using the right tools, such as type systems, is _part_ of software engineering.
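As a sketch of how such a check works at design time rather than in production, here is what a strict-null-checking compiler does with a forgotten null check (TypeScript with `strictNullChecks` standing in for the hypothetical D feature; `len` is an invented example):

```typescript
// With strict null checking, forgetting about null is rejected while
// the code is being written, not discovered by a user after release.
function len(s: string | null): number {
  // return s.length;  // <-- compile error: 's' is possibly 'null'
  return s === null ? 0 : s.length; // the compiler forces a decision
}
```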
Let me translate the previous discussion to the bridge setting:
TG: I observe bridges collapsing. Maybe we should actually calculate the statics during the planning phase, before building them?
WB: From a computational standpoint you are right. However, from an engineering standpoint, you can just keep building bridges. You will then learn their flaws as they collapse, which gives you exactly the same end result.
I just don't think good engineers argue in this fashion. This is exactly the kind of out-of-touch ivory tower reasoning that theorists are sometimes accused of.
> If the hardware check wasn't there, I'd be all in on your approach. Which is why I'm excited about sumtypes being used for error states.
Sure. And on some targets there is not even a hardware check. (WASM in particular would be useful to me.)