November 21, 2017
On Wednesday, November 22, 2017 00:19:51 codephantom via Digitalmars-d wrote:
> On Tuesday, 21 November 2017 at 20:02:06 UTC, Timon Gehr wrote:
> > I'm confident that you would be able to use null safe languages properly if that is what had been available for most of your career.
>
> You do realise, that all of the issues you mention can just be handled by coding correctly in the first place.

While I definitely don't think that it's generally very hard to avoid bugs with null pointers/references, telling someone to code correctly in the first place isn't very useful. Of course, it's better to do that, but people make mistakes all the time. The real question is whether the problem is big enough in general or bad enough when it happens to add something to the language to mitigate it - e.g. no one should be failing to initialize variables, but it happens sometimes, and default-initializing variables like D does helps prevent a certain class of bugs. The programmer still needs to make sure that they deal with initialization correctly, but the problems that they have when they screw it up are less drastic than they are in C/C++ where variables don't get default-initialized unless they're classes with default constructors.

Personally, I don't think that null pointer dereferencing is enough of a problem to start insisting on non-nullable pointers or references (especially at this point in D's development), and when it happens, it's very clear what went wrong, so you avoid subtle problems like you'd get with something like initializing a variable to garbage. So, I don't think that there's enough value in having non-nullable pointers or references to add them. In my experience, it just isn't hard to avoid problems with null. But at the same time, I think that it's perfectly legitimate to be looking to mitigate a source of bugs, and if you have a pointer or reference that really never should be null, having that guaranteed by the type system prevents mistakes, which is useful.

> It seems to me that you prefer to rely on the type system, during compilation, for safety. This is very unwise.

Any time the type system can prevent a bug, it's useful. I don't see why that would be a problem or unwise. That's part of why many of us prefer statically typed languages to dynamically typed languages. The compiler catches more bugs for us that way. The question isn't whether we should use the type system to prevent bugs. The question is which set of problems really make sense to prevent with the type system.

- Jonathan M Davis

November 22, 2017
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
> While I definitely don't think that it's generally very hard to avoid bugs with null pointers/references, telling someone to code correctly in the first place isn't very useful.

Fair enough...perhaps I'm being too explicit with my argument.

However, my point is that one should not rely too heavily on some magical compiler to tell you what is 'true'.

How can a compiler know that a statement G is true if it cannot prove that G is true?

You need to take this into account during your coding. Otherwise the runtime system is your last line of defence.

November 22, 2017
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
> The question isn't whether we should use the type system to prevent bugs. The question is which set of problems really make sense to prevent with the type system.
>
> - Jonathan M Davis


Those that can be proven.

November 22, 2017
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
> While I definitely don't think that it's generally very hard to avoid bugs with null pointers/references, telling someone to code correctly in the first place isn't very useful.

By 'correct code', I mean code that assists the compiler, so that it can determine what the truth is (or is meant to be).


November 21, 2017
On Wednesday, November 22, 2017 01:25:48 codephantom via Digitalmars-d wrote:
> On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
> > The question isn't whether we should use the type system to prevent bugs. The question is which set of problems really make sense to prevent with the type system.
> >
> > - Jonathan M Davis
>
> Those that can be proven.

Sure. If it can't be proven that something is a bug, then the compiler shouldn't be giving an error in that case (and IMHO, it shouldn't be warning about it either, since any good programmer doesn't leave warnings in their project, effectively making warnings errors).

In the case of null, you _can_ prove it if you have non-nullable types. If it's not legal for a pointer or reference to be null, then the compiler can guarantee that it's not null. But then you either have the extra complication of having both nullable and non-nullable pointers/references in the language, or you force all pointers/references to be wrapped in something like std.typecons.Nullable (or a language construct that does the same) in order to be treated as nullable. The latter arguably doesn't make much sense, given that under the hood all pointers and references are nullable, even if the type system forbids making them null. Either way, it would reduce the amount of code where you have to worry about potentially having null values.

However, if you don't have non-nullable pointers/references, then you really can't prove that a pointer or reference is non-null in the general case. You can prove it under certain circumstances, but ultimately you're going to end up with an algorithm that only works part of the time. So, best case, it gives you an error when it definitively knows that you're trying to dereference null, but that would likely generally be in the cases where you would very quickly find it yourself as soon as you ran your code. So, while the compiler check might be useful, I doubt that it would ultimately help much with preventing bugs in practice.

- Jonathan M Davis

November 22, 2017
On Wednesday, 22 November 2017 at 01:48:55 UTC, Jonathan M Davis wrote:
> In the case of null, you _can_ prove it if you have non-nullable types.

True (well...you can at least 'assert' it anyway).

But if the intention is to 'assist the compiler towards knowing the truth/correctness of your statement', then this can easily be done without introducing a new nullable reference type, i.e.:

    if (object != null)
        use(object);

Either way, checks are made.

So I still don't see the point of adding a new nullable reference type to a language, unless one is asserting that it is ok to not already be checking for null (which seems to be the case for a large number of C# programmers - hence the proposal).

November 22, 2017
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
> Any time the type system can prevent a bug, it's useful. I don't see why that would be a problem or unwise.

That is not unwise.

What is 'unwise' is what I said was unwise... that is, putting your trust in the compiler's capacity to always know what the truth is.

Consider the Goldbach Conjecture: that every even integer greater than 2 is the sum of two (not necessarily distinct) primes. According to the principle of bivalence, this statement should be either true or false.

But where is the proof that this is either true, or false?

There is a fundamental error in assuming that something can always be shown to be either true or false. Some things require too much effort to prove, or may simply be unprovable. How much time should the compiler spend trying to prove something?

> The question isn't whether we should use the type system to prevent bugs. The question is which set of problems really make sense to prevent with the type system.
>

No, the question should be: what can the compiler prove to be true/false, correct/incorrect about your code, and what effort have you made in your code to assist the compiler in making that determination?

If you've made no effort to provide the compiler with the context it needs to make a useful determination, then don't complain when the compiler gets it wrong. That is my first point.

My second point is that it is already possible to provide such context to the compiler, without having to make reference types non-nullable, and therefore without having to introduce a new nullable reference type.

Which makes more sense? Knowing that a reference type could potentially be null, and therefore checking for null, or dealing with all the flow-on consequences of making reference types non-nullable by default?

Even with such a change, the Goldbach Conjecture still cannot be resolved.

November 22, 2017
On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
>
> btw. what was the last compiler you wrote?

https://github.com/eth-srl/psi
https://github.com/tgehr/d-compiler
November 22, 2017
On Wednesday, 22 November 2017 at 08:55:03 UTC, Petar Kirov [ZombineDev] wrote:
> On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
>>
>> btw. what was the last compiler you wrote?
>
> https://github.com/eth-srl/psi
> https://github.com/tgehr/d-compiler

touché  ;-)

Nonetheless, I stand by my arguments.

November 22, 2017
On Wednesday, November 22, 2017 09:28:47 codephantom via Digitalmars-d wrote:
> On Wednesday, 22 November 2017 at 08:55:03 UTC, Petar Kirov [ZombineDev] wrote:
> > On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
> >> btw. what was the last compiler you wrote?
> >
> > https://github.com/eth-srl/psi
> > https://github.com/tgehr/d-compiler
>
> touché  ;-)

LOL. I assumed that you were legitimately asking what the name of his compiler was, because I knew that he was writing a D compiler, whereas you were questioning his knowledge/credentials. Timon is a very smart guy. He knows a lot and has lots of great things to say. I certainly don't always agree with him, but he generally knows what he's talking about.

- Jonathan M Davis