November 22, 2017
On 22.11.2017 01:19, codephantom wrote:
> On Tuesday, 21 November 2017 at 20:02:06 UTC, Timon Gehr wrote:
>>
>> I'm confident that you would be able to use null safe languages properly if that is what had been available for most of your career.
>>
> 
> You do realise, that all of the issues you mention can just be handled by coding correctly in the first place.
> ...

Yes, just like everyone else, I realize that if correct code is written, we end up with correct code, but thanks for pointing it out.

BTW of course you must realize that you can make the compiler brutally obsolete by just quickly writing down the most efficient possible correct machine code in a hex editor, so I'm not too sure why you participate in a discussion on the forums of a compiled language at all.

> If your program calls 'std.math.log' with an argument of '-123.4', then that's probably NOT a bug. It's more likely to be incorrect code.

https://en.wikipedia.org/wiki/Software_bug

> Why not bounds-check the argument before passing it to the function?
> ...

Walter said NaN is underused, not me.

> If you access a field of an invalid instance of an object, that's probably NOT a bug. It's more likely to be incorrect code.

https://en.wikipedia.org/wiki/Software_bug

> Before you access a field of an object, check that the object is valid.
> ...

If I know that it is valid, I might not want to check it.
Then if, say, you come along and read my code, I do not need you to point out that I didn't check the field access. If you do anyway, I can either explain to you why the check is unnecessary, which wastes my time and does not guarantee that you will buy it, or I can write the code in a language that requires me to provide the proof up front, so that you will not have to bother me. And even if you still doubt that the proof is actually correct, it will not be my problem; you'll need to take it up with the guy who wrote the compiler. This is one of the reasons why Walter does not like non-null types. ;o)
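(To illustrate what "providing the proof up front" can look like, here is a minimal TypeScript sketch, using `strictNullChecks` as a stand-in for a language with non-null reference types; the names are made up:)

```typescript
// With strictNullChecks, `User` and `User | null` are distinct types.
interface User { name: string }

function greet(user: User): string {
  // No null check needed here: the parameter's type IS the proof
  // that `user` is non-null, checked by the compiler at every call site.
  return `hello, ${user.name}`;
}

function lookup(id: number): User | null {
  return id === 1 ? { name: "timon" } : null;
}

const u = lookup(1);
// greet(u);              // rejected by the compiler: `u` may be null
if (u !== null) {
  console.log(greet(u)); // accepted: the check is the proof, written once
}
```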

> Its seems to be,

Spelling mistakes can be avoided by just spelling correctly.

> that you prefer to rely on the type system, during compilation, for safety.

No, I ideally want the type system to point out when the code is not obviously correct. That does not mean I assume that the code is correct when it compiles (given that I'm using a language that does not require me to prove absence of all bugs, and even if it did I'd at most assume that either the language implementation is incorrect or my code is correct, with a certain margin of error due to undetected hardware failures).

> This is very unwise.
> ...

Thanks for pointing that out.

> btw. what was the last compiler you wrote?
> 

Embarrassing questions can be avoided by just coming up with the correct answer yourself.
November 22, 2017
On 22.11.2017 02:09, codephantom wrote:
> On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
>> While I definitely don't think that it's generally very hard to avoid bugs with null pointers/references, telling someone to code correctly in the first place isn't very useful.
> 
> Fair enough...perhaps I'm being too explicit with my argument.
> 
> However, my point is, that one should not overly rely on some magical compiler for telling you what is 'true'.
> ...

That is not the role of the compiler here. The task of the compiler in this circumstance is to tell you what is obvious, not what is true.

> How can a compiler know that G is true if it cannot prove that G is true?
> ...

Because you proved it to the compiler.

> You need to take this into account during your coding. Otherwise the runtime system is your last line of defence.
> 

You seem to assume that Rice's theorem applies to compilers, but not programmers. Why is that?
November 22, 2017
On 22.11.2017 05:55, codephantom wrote:
> ...
>> The question isn't whether we should use the type system to prevent
>> bugs. The question is which set of problems really make sense to prevent with the type system.
>>
> 
> No, the question should be, what can the compiler prove to be true/false, correct/incorrect about your code, and what effort have you made in your code to assist the compiler to make that determination.
> 
> If you've made no effort to provide the compiler with the context it needs to make a useful determination, then don't complain when the compiler gets it wrong. That is my first point.
> 
> My second point, is that it is already possible to provide such context to the compiler, without having to make reference types non nullable, and therefore having to introduce a new nullable reference type.
> ...

It's really not.

> Which make more sense? Knowing that a reference type could potentially be null, and therefore check for null,

You are saying this as if there were always a reasonable thing to do if the reference is in fact null. This is just not the case, i.e. this option sometimes makes no sense. Also, if checking for null is always required, why wouldn't the compiler complain if it is missing?
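(As a hedged illustration, again in TypeScript with `strictNullChecks` in mind; `requireNonNull` is a hypothetical helper, not a real library function. When null arrives at a point where no value makes sense, the "check" can do little more than recreate the crash with a nicer message:)

```typescript
// A "check" for a reference that should never be null by this point.
function requireNonNull<T>(value: T | null, what: string): T {
  if (value === null) {
    // Often there is nothing sensible to do here: we cannot invent a
    // value, so the check just turns one crash into another.
    throw new Error(`${what} was null`);
  }
  return value;
}

const port: number | null = 8080; // pretend this came from a config lookup
console.log(requireNonNull(port, "port")); // 8080
```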

> or dealing with all the flow-on consequences of making a reference type non nullable by default?
> 
> Even with such a change, the Goldbach Conjecture still cannot be resolved.
> 

If the correctness of a program depends on the Goldbach Conjecture, that's still something one might want to know about. We could then just add the correctness of the Goldbach Conjecture as an assumption, and then verify that, under the given assumption, the program is actually correct. Once the Goldbach Conjecture gets resolved, we can get rid of the assumption.
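(Sketched in Lean 4, assuming Mathlib's `Nat.Prime`, the assumption could literally be an axiom that is deleted once the conjecture is proved; `#print axioms` would show exactly which theorems depend on it:)

```lean
-- Take the Goldbach Conjecture as an explicit, named assumption.
axiom goldbach : ∀ n : Nat, 4 ≤ n → n % 2 = 0 →
  ∃ p q, Nat.Prime p ∧ Nat.Prime q ∧ p + q = n

-- Anything proved from here on is correct *given* `goldbach`.
-- Once the conjecture is resolved, replace the axiom with the proof.
theorem even_is_sum_of_two_primes (n : Nat) (h4 : 4 ≤ n) (he : n % 2 = 0) :
    ∃ p q, Nat.Prime p ∧ Nat.Prime q ∧ p + q = n :=
  goldbach n h4 he
```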
November 22, 2017
On Wednesday, 22 November 2017 at 13:47:19 UTC, Timon Gehr wrote:
> On 22.11.2017 05:55, codephantom wrote:
>> No, the question should be, what can the compiler prove to be true/false, correct/incorrect about your code, and what effort have you made in your code to assist the compiler to make that determination.
>> 
>> If you've made no effort to provide the compiler with the context it needs to make a useful determination, then don't complain when the compiler gets it wrong. That is my first point.
>> 
>> My second point, is that it is already possible to provide such context to the compiler, without having to make reference types non nullable, and therefore having to introduce a new nullable reference type.
>> ...
>
> It's really not.
>


Your arguments need a little more work.
November 22, 2017
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
> On 22.11.2017 01:19, codephantom wrote:
>
> No, I ideally want the type system to point out when the code is not obviously correct. That does not mean I assume that the code is correct when it compiles (given that I'm using a language that does not require me to prove absence of all bugs, and even if it did I'd at most assume that either the language implementation is incorrect or my code is correct, with a certain margin of error due to undetected hardware failures).
>
>> This is very unwise.
>> ...
>
> Thanks for pointing that out.
>

You're welcome.

November 22, 2017
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
>> 
>> You do realise, that all of the issues you mention can just be handled by coding correctly in the first place.
>> ...
>
> Yes, just like everyone else, I realize that if correct code is written, we end up with correct code, but thanks for pointing it out.

You're welcome.
November 22, 2017
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
> BTW of course you must realize that you can make the compiler brutally obsolete by just quickly writing down the most efficient possible correct machine code in a hex editor, so I'm not too sure why you participate in a discussion on the forums of a compiled language at all.
>

I've participated in order to counter the proposition put forward in the subject of this thread.

The core language of D does NOT need what C# is proposing - that is my view.

If, over time, a large number of D programmers develop the same laissez-faire approach towards checking for null as C# programmers have, then maybe they'll start demanding the same thing - but even then, I'll argue the same points I've argued thus far.

I also think that relying too much on sophisticated IDEs and AI-like compilers really changes the way you think about and write code. I don't rely on either. Perhaps that's why I've never considered nulls to be an issue: I take proactive steps to protect my code before the compiler ever sees it. And actually, I cannot recall any null-related error in any code I've deployed. It's just never been an issue.

And that's another reason why this topic interests me - why is it such an issue in the C# community? From Mads' blog about it, it seems to be because they're just not doing null checks. And so the language designers are being forced to step in. If that's not the reason, then I've misunderstood, and await the correct explanation.

November 22, 2017
On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim Grostad wrote:
> Runtime checks are part of the type system though

I wouldn't say that, particularly if we are talking about a statically typed language (which Java is).

November 22, 2017
On Wednesday, 22 November 2017 at 14:51:02 UTC, codephantom wrote:
>
> The core language of D does NOT need what C# is proposing - that is my view.

"Need"?  Perhaps not.  But so far, I haven't seen any arguments that refute the utility of mitigating patterns of human error.

> If, over time, a large number of D programmers have the same laissez-faire approach towards checking for null, as C# programmers, then maybe they'll start demanding the same thing - but even then, I'll argue the same points I've argued thus far.

Null references have been a problem in every language that has them.  Just because D is much nicer than its predecessors (and contemporaries, IMO) doesn't mean the "bad old days" (still in progress) of C and C++ didn't happen or that we cannot or should not learn from the experience.  Tony Hoare doesn't call null his sin and "billion-dollar mistake" merely in a fit of pique.  In other words, "Well don't do that, silly human!" ends up being an appeal to tradition.

> Perhaps that's why I've never considered nulls to be an issue. I take proactive steps to protect my code, before the compiler ever sees it. And actually, I cannot recall any null related error in any code I've deployed. It's just never been an issue.

Oh, that explains it.  He's a _robot_! ;)

(The IDE thing is entirely irrelevant to this discussion; why did you bring that up?)

> And that's another reason why this topic interests me - why is it such an issue in the C# community? From Mads blog about it, it seems to be because they're just not doing null checks. And so the language designers are being forced to step in. If that's not the reason, then I've misunderstood, and await the correct explanation.

Again, it's never _not_ been a problem.  That C# is nearly old enough to vote in general elections but they're only just now finally doing this should be telling. (And I fully expect this conversation has been going for at least half of that time.)  It's probably galvanised by the recent proliferation of languages that hold safety to a higher standard and the community realising that the language can and _should_ share the burden of mitigating patterns of human error.

-Wyatt
November 22, 2017
On Wednesday, 22 November 2017 at 17:17:07 UTC, Mark wrote:
> On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim Grostad wrote:
>> Runtime checks are part of the type system though
>
> I wouldn't say that, particularly if we are talking about a statically typed language (which Java is).

Very few imperative programming languages are fully statically typed.