December 29, 2022
On 12/29/2022 12:46 PM, Timon Gehr wrote:
> The bad thing is allowing programs to enter unanticipated, invalid states in the first place...

We both agree on that one. But invalid states happen in the real world.


> Not all disasters are silent. Maybe you are biased because you only write batch programs that are intended to implement a very precise spec.

I'm biased from my experience designing aircraft systems. You never, ever want an avionics program to proceed if it has entered an invalid state. It must fail instantly, fail hard, and allow the backup to take over.

The idea that a program should soldier on once it is in an invalid state is very bad system design. Perfect programs cannot be made. The solution is to not pretend that the program is perfect, but be able to tolerate its failure by shutting it down and engaging the backup.

I think Chrome was the first browser to do this. It's an amalgamation of independent processes. The processes do not share memory; they communicate via interprocess protocols. If one process fails, its failure is isolated, it is aborted, and a replacement is spun up.
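
Roughly, the supervising side of such a design looks like this. A minimal sketch in D using std.process; the worker executable path is hypothetical:

import std.process : spawnProcess, wait;
import std.stdio : writeln;

// Keep an isolated worker process running; if it dies, start a fresh one.
void supervise(string workerPath)
{
    while (true)
    {
        auto pid = spawnProcess([workerPath]); // separate address space, no shared memory
        const status = wait(pid);              // block until the worker exits
        if (status == 0)
            break;                             // clean shutdown, nothing to restart
        writeln("worker exited with status ", status, "; spinning up a replacement");
    }
}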

The hubris of "can't be allowed to fail" software is what allowed hackers to manipulate a car's engine and brakes remotely by hacking in via the keyless door lock. (Saw this on an episode of "60 Minutes".)
December 29, 2022
All that said, we'll get non-nullable pointers with sum types. How well that will work wrt bugs, we'll see!
December 30, 2022
On Friday, 30 December 2022 at 02:17:58 UTC, Walter Bright wrote:
>
> The hubris of "can't be allowed to fail" software is what allowed hackers to manipulate a car's engine and brakes remotely by hacking in via the keyless door lock. (Saw this on an episode of "60 Minutes".)

I'd blame that more on the hubris of adding computers to a physical system that could be simple.

I don't understand why it's such a rare opinion to think about software as fail-safe or fail-dangerous depending on context; most software that exists should be fail-safe, where every attempt is made to keep it going. Airplanes, NASA, and maybe even hard drive drivers: triple-check every line of code, turn on every safety check, and have meetings about each and every type; fine. Code I realistically will write? Nah
December 30, 2022
On 12/30/22 04:02, Walter Bright wrote:
> All that said, we'll get non-nullable pointers with sum types. How well that will work wrt bugs, we'll see!

Well, if we get compile-time checking for them, it will work better; otherwise it won't.
December 30, 2022
On 12/30/22 03:17, Walter Bright wrote:
> On 12/29/2022 12:46 PM, Timon Gehr wrote:
>> The bad thing is allowing programs to enter unanticipated, invalid states in the first place...
> 
> We both agree on that one. But invalid states happen in the real world.
> ...

That's certainly not a reason to introduce even _more_ opportunities for bad things to happen...

> 
>> Not all disasters are silent. Maybe you are biased because you only write batch programs that are intended to implement a very precise spec.
> 
> I'm biased from my experience designing aircraft systems. You never, ever want an avionics program to proceed if it has entered an invalid state. It must fail instantly, fail hard, and allow the backup to take over.
> ...

That's context-specific and for the programmer to decide. You can't have the backup take over if you blow up the plane.

> The idea that a program should soldier on once it is in an invalid state is very bad system design.

Well, here it's the language that is encouraging people to choose a design that allows invalid states.

> Perfect programs cannot be made. The solution is to not pretend that the program is perfect, but be able to tolerate its failure by shutting it down and engaging the backup.
> ...

Great, so let's just give up I guess. All D programs should just segfault on startup. They were not perfect anyway.

> I think Chrome was the first browser to do this. It's an amalgamation of independent processes. The processes do not share memory; they communicate via interprocess protocols. If one process fails, its failure is isolated, it is aborted, and a replacement is spun up.
> 
> The hubris of "can't be allowed to fail" software is what allowed hackers to manipulate a car's engine and brakes remotely by hacking in via the keyless door lock. (Saw this on an episode of "60 Minutes".)

I am not saying software can't be allowed to fail, just that it should fail at compile time, not at run time.

December 30, 2022
On 12/30/22 03:03, Walter Bright wrote:
> On 12/29/2022 12:45 PM, Adam D Ruppe wrote:
>> The alternative is the language could have prevented this state from being unanticipated at all, e.g. nullable vs. non-null types.
> 
> It can't really prevent it. What happens is people assign a value, any value, just to get it to compile.

No, if they want a special state, they just declare that special state as part of the type. Then the type system makes sure they don't dereference it.
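
In today's D you can already approximate this with std.sumtype. A sketch; Maybe and None are illustrative names, not a language proposal:

import std.sumtype : SumType, match;

struct None {}
alias Maybe(T) = SumType!(None, T);

// match forces both cases to be handled, so the "absent" case
// cannot be dereferenced by accident.
size_t lengthOf(Maybe!string s)
{
    return s.match!(
        (None _)   => size_t(0),
        (string x) => x.length
    );
}

// usage: lengthOf(Maybe!string(None())) == 0, lengthOf(Maybe!string("hi")) == 2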

> I've seen it enough to not encourage that practice.
> ...

There is ample experience with that programming model, and languages are generally moving in the direction of not allowing null dereferences. This is because it works. You can claim otherwise, but you are simply wrong.

> If there are no null pointers, what happens to designate a leaf node in a tree?

E.g. struct Node{ Node[] children; }

> An equivalent "null" object is invented.

No, probably it would be a "leaf" object.

E.g.:

data BinaryTree = Inner BinaryTree BinaryTree | Leaf

Now neither of the two cases is special. You can pattern match on a BinaryTree to figure out whether it is an inner node or a leaf. The compiler checks that you cover all cases. This is not complicated.

size tree = case tree of
   Inner t1 t2 -> size t1 + size t2 + 1
   Leaf -> 1

No null was necessary.

For example:

size (Inner Leaf (Inner Leaf Leaf))  -- evaluates to 5

> Nothing is really gained.
> ...

Nonsense. Compile-time checking is really gained. This is just a question of type safety.

> Null pointers are an excellent debugging tool. When a seg fault happens, it leads directly to the mistake with a backtrace. The "go directly to jail, do not pass go, do not collect $200" nature of what happens is good. *Hiding* those errors happens with non-null pointers.
> ...

Not at all. You simply get those errors at compile time. As you say, it's an excellent debugging tool.

> Initialization with garbage is terrible.

Of course.

> I've spent days trying to find the source of those bugs.
> 
> Null pointer seg faults are as useful as array bounds overflow exceptions.
> ...

Even array bounds overflow exceptions would be better as compile-time errors. If you don't consider that practical, that's fine; I guess it will take a couple of decades before people accept that this is a good idea. But it's certainly practical today for null dereferences.
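
D already does a limited form of this for fixed-size arrays indexed with compile-time constants; the general case (arbitrary runtime indices) would need more type-system machinery, such as dependent or refinement types. A tiny illustration:

void example()
{
    int[3] a;

    // a[5] = 0;  // rejected at compile time: index 5 is out of bounds for int[3]
    a[2] = 0;     // fine; an index known only at run time gets a runtime bounds check instead
}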

> NaNs are another excellent tool. They enable, for example, dealing with a data set that may have unknown values in it from bad sensors. Replacing that missing data with "0.0" is a very bad idea.

This is simply about writing code that does not lie.

Current way:

Object obj; // <- this is _not actually_ an Object

Much better:

Object? obj; // <- Object or null made explicit

if(obj){
    static assert(is(typeof(obj)==Object)); // ok, checked
    // can dereference obj here
}

obj.member(); // error, obj could be null

The same is true for floats. It would in principle make sense to have an additional floating-point type that does not allow NaN. This is simply about type system expressiveness: you can still do everything you were able to do before, but the type system will be able to catch your mistakes early, because you are making your expectations explicit across function call boundaries.
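
As a sketch of what such a type could look like as a library type today (Finite is a made-up name, and the check happens once at the construction boundary, in non-release builds, rather than being proven by the compiler):

import std.math : isNaN;

struct Finite
{
    private double value;

    @disable this(); // no default construction, so no NaN-initialized instances

    // The only way in is through this constructor, so every Finite that exists
    // is known not to be NaN; functions taking a Finite need no further checks.
    this(double v)
    in (!isNaN(v), "Finite cannot hold a NaN")
    {
        value = v;
    }

    double get() const { return value; }
    alias get this; // reads back as an ordinary double
}

double average(Finite a, Finite b) { return (a.get + b.get) / 2; }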

It just makes no sense to add an additional invalid state to every type and defer to runtime where it may or may not crash, when instead you could have just given a type error.
December 30, 2022
On Tuesday, 27 December 2022 at 22:53:45 UTC, Walter Bright wrote:
> On 12/27/2022 1:41 AM, Max Samukha wrote:
>> If T.init is supposed to be an invalid value useful for debugging, then variables initialized to that value... are not initialized.
>
> It depends on the designer of the struct to decide on an initialized value that can be computed at compile time. This is not a failure; it's a positive feature. It means struct instances will *never* be in a garbage state.

Yes, they will be in an invalid state.

>
> C++ does it a different way, not a better way.

C++ is looking for a principled solution, as that presentation by Herb Sutter suggests. By saying "no dummy values", he must be referring to T.init :)

If you don't want a proper fix as Timon and others are proposing, can we at least have nullary constructors for structs to be consistent with the "branding" vs construction ideology?

struct S { this(); }
S x; // S x = S.init;
S x = S(); // S x = S.init; x.__ctor();

There is no reason to require the use of factory functions for this. Constructors *are* "standard" factory functions.

People have resorted to all kinds of half-working hacks to work around this in generic code. The latest one I've seen looks like this:

mixin template Ctors()
{
    static typeof(this) opCall(A...)(auto ref A a) {
        import core.lifetime: forward;

        typeof(this) r;
        r.__init(forward!a);
        return r;
    }
}

struct S
{
    // fake ctors
    void __init() {}
    void __init(...) {}
    mixin Ctors;
}

void foo(T)() { T x = T(); } // no need to pass around a factory function anymore

December 30, 2022
On Friday, 30 December 2022 at 03:02:04 UTC, Walter Bright wrote:
> All that said, we'll get non-nullable pointers with sum types. How well that will work wrt bugs, we'll see!

A lot has happened in this thread, but I would like to say that although many people here didn't like the idea of ImportC, I believe it can be a great game changer by attracting people who want to integrate D into existing C code without having to port anything; read that as less up-front investment.

I would just like to see ImportC working in real life before you start extending it to do things it wasn't supposed to do. No one here wants more half-assed features. I just wish we would focus on finishing ImportC first and then work on fixing C's biggest mistake.
December 30, 2022
On 12/29/2022 8:01 PM, Timon Gehr wrote:
> Even array bounds overflow exceptions would be better as compile-time errors. If you don't consider that practical, that's fine, I guess it will take a couple of decades before people accept that this is a good idea,

The size of the array depends on the environment. I don't see how to do that at compile time.

> but it's certainly practical today for null dereferences.

Pattern matching inserts an explicit runtime check, rather than using the hardware memory protection to do the check. All you get with pattern matching is (probably) a better error message, and a slower program. You still get a fatal error if the pattern-match arm for the null pointer is fatal.

You can also get a better error message with a seg fault if you code a trap for that error.
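
For example, something along these lines at the C-runtime level prints a friendlier message than the bare "Segmentation fault". Just a sketch: fprintf is not strictly async-signal-safe, and a production handler would be more careful:

import core.stdc.signal : signal, SIGSEGV;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : _Exit;

extern (C) void onSegv(int) nothrow @nogc
{
    fprintf(stderr, "invalid memory access (likely a null dereference); aborting\n");
    _Exit(1); // returning would just re-execute the faulting instruction
}

void main()
{
    signal(SIGSEGV, &onSegv);

    int* p = null;
    *p = 42; // faults; the handler above runs instead of the default core dump
}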

Isn't it great that the hardware provides runtime null checking for you at zero cost?

If a seg fault resulted in memory corruption, then I'd agree with you. But it doesn't; it comes at zero cost, and your program runs at full speed.

P.S. In the bad old DOS days, a null pointer write would scramble DOS's interrupt table, which had unpredictable and often terrible effects. Fortunately, microprocessors have since evolved to provide hardware memory protection, so that is no longer an issue. As soon as I got a machine with memory protection, I switched all my development to it. Only as a last step did I recompile for DOS.

December 30, 2022
On 12/29/2022 7:04 PM, monkyyy wrote:
> I don't understand why it's such a rare opinion to think about software as fail-safe or fail-dangerous depending on context; most software that exists should be fail-safe, where every attempt is made to keep it going.

Please reconsider your "every attempt" statement. It's a surefire recipe for disaster.


> Airplanes, NASA, and maybe even hard drive drivers: triple-check every line of code, turn on every safety check, and have meetings about each and every type; fine.

Sorry, but again, that is attempting to write perfect software. It is *impossible* to do. Humans aren't capable of it, and from what I've read about the Space Shuttle software, all that checking is terrifyingly expensive, so it does not scale.

The right way is not to imagine one can write perfect software. It is to have a plan for what to do *when* the software fails. Because it *will* fail.

For example, a friend of mine told me years ago that he was using a password manager to keep his hundreds of passwords safe. I told him that the password manager was a single point of failure, and that when it failed it would compromise all of his passwords. He dismissed the idea, saying he trusted the password manager company.

Fast forward to today. LastPass, which is what he was relying on, failed. Now all his hundreds of passwords are compromised.