November 17, 2017
On Friday, 17 November 2017 at 10:45:13 UTC, codephantom wrote:
> Sounds to me, like too many people are writing incorrect code in the first place,

Also known as "writing code".

> and want to offload responsibility onto something other than themself.

That's the whole point of using a safe language, otherwise we'd be fine with C.

> This is why we have bloated runtime checks these days.

The post was talking about compile-time checks. I'll take bloated runtime checks over memory corruption any day of the week and twice on Sundays.

Atila
November 17, 2017
On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin wrote:
> It piqued my interest, because when I first started studying D, the lack of any warning or error for this trivial case surprised me.

You wanna get freaked out?

Try that very same trivial example with the `-O` option to dmd.

$ dmd -O pp
pp.d(20): Error: null dereference in function _Dmain


Yes, the optimizer has a compile-time null check... but the main compiler doesn't. Walter has explained it is because the optimizer does some flow analysis that the semantic step doesn't. But still, sooooo weird.
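For reference, a minimal reconstruction of the kind of program being discussed (the file name pp.d and the exact error line number are from the post above; the source here is an assumption based on the example quoted later in the thread):

```d
// pp.d -- hypothetical reconstruction of the trivial example
class Test
{
    int Value;
}

void main()
{
    Test t;        // class references default to null in D
    t.Value++;     // plain `dmd pp.d` compiles this without complaint,
                   // then segfaults at runtime; `dmd -O pp.d` instead
                   // rejects it at compile time with a null-dereference error
}
```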

November 17, 2017
On 17.11.2017 12:22, Jonathan M Davis wrote:
> On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:
>> I know your aversion towards C#, but this is not about C#, it's
>> about safety. And safety is one of the D taglines.
> 
> Completely aside from whether having the compile-time checks would be good
> or not, I would point out that this isn't actually a memory safety issue.

Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.

> If
> you dereference a null pointer or reference, your program will segfault. No
> memory is corrupted, and no memory that should not be accessed is accessed.
> If dereferencing a null pointer or reference in a program were a memory
> safety issue, then we'd either have to make it illegal to dereference
> references or pointers in @safe code or add additional runtime null checks
> beyond what already happens with segfaults, since aside from having
> non-nullable pointers/references, in the general case, we can't guarantee
> that a pointer or reference isn't null.

There are type systems that do that, which is what is being proposed for C#. It's pretty straightforward: If I have a variable of class reference type C, it actually contains a reference to a class instance of type C.
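A rough sketch of what such a type-system guarantee can look like, expressed as a library type in D. NonNull is a made-up name for illustration, not an existing Phobos type; a real language feature would enforce this at compile time rather than at construction:

```d
import std.exception : enforce;

// Hypothetical wrapper: a value of NonNull!C is known to hold a real instance.
struct NonNull(C) if (is(C == class))
{
    private C payload;

    @disable this();            // no way to default-construct a null one

    this(C c)
    {
        enforce(c !is null, "null passed to NonNull");
        payload = c;
    }

    // After construction, dereferencing never needs another null check.
    @property inout(C) get() inout { return payload; }
    alias get this;
}

class C { int x = 42; }

void main()
{
    auto nn = NonNull!C(new C);
    assert(nn.x == 42);          // the type guarantees there is an instance
    // NonNull!C bad;            // compile error: default construction disabled
}
```

The single runtime check at the construction boundary is the price of retrofitting this as a library type; a type-system feature like the one proposed for C# pushes even that to compile time.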
November 17, 2017
On 17.11.2017 11:19, Jonathan M Davis wrote:
>   If the compiler can't guarantee that your code is
> wrong, then that check should be left up to a linter.

I.e., you think the following code should compile:

class C{}

void main(){
    size_t a = 2;
    C b = a;
    size_t c = b;
    import std.stdio;
    writeln(c); // "2"
}


November 17, 2017
On 17.11.2017 03:25, codephantom wrote:
> On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin wrote:
>>
>> It piqued my interest, because when I first started studying D, the lack of any warning or error for this trivial case surprised me.
>>
>> // Example A
>> class Test
>> {
>>     int Value;
>> }
>>
>> void main(string[] args)
>> {
>>     Test t;
>>     t.Value++;  // No compiler error, or warning.  Runtime error!
>> }
> 
> 
> Also, if you start with nothing, and add 1 to it, you still end up with nothing, cause you started with nothing. That makes complete sense to me. So why should that be invalid?
> 

Because, for example, 'int' does not have a special null value, and we don't want it to have one.

The code starts with nothing, and tries to increment an 'int' Value that is associated with nothing. What is this value? There is no null in int. And anyway, the code does not say that t is nothing; it says that t is a Test. It just does not say what kind of Test it is. The new features allow you to specify that t may be nothing, and they add a type int? that carries the cost of a special null value for those who are into that kind of thing.
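D already ships the value-type half of this as a library type; a small example with std.typecons.Nullable, the analogue of C#'s int?:

```d
import std.typecons : Nullable;

void main()
{
    Nullable!int n;          // starts out "nothing": no valid int inside
    assert(n.isNull);

    // n.get here would fail at runtime: there is no int to increment.

    n = 1;                   // now it holds an actual int
    assert(!n.isNull);
    assert(n.get == 1);

    n = n.get + 1;           // "add 1 to it" only works once there is an it
    assert(n.get == 2);
}
```

Note that the nullability is opt-in: a plain int never pays for a null state, which is exactly the property the post above is defending.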
November 17, 2017
On Friday, November 17, 2017 15:05:48 Timon Gehr via Digitalmars-d wrote:
> On 17.11.2017 12:22, Jonathan M Davis wrote:
> > On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:
> >> I know your aversion towards C#, but this is not about C#, it's about safety. And safety is one of the D taglines.
> >
> > Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue.
> Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.

This is definitely not how it is viewed in D. Walter has repeatedly stated that dereferencing a null pointer is considered @safe, because doing so will not corrupt memory or access memory that it should not access - and that's all that @safe cares about. Whether there's an object of the correct type at that location or not is irrelevant, because it's null. You do have a memory safety issue if you somehow make the pointer or reference refer to an object of a different type than the reference or pointer is allowed to point to, but doing that requires getting around the type system via casting, which would not be allowed in @safe code, and badly written @trusted code can always screw up @safe code. Regardless, given that dereferencing null will segfault, it does not present an @safety problem.

The only issue with dereferencing a null pointer in @safe code is that if the type is sufficiently large (larger than a page of memory IIRC), you don't actually get a segfault, and that hole does need to be plugged by having the compiler add runtime checks where needed. But most null pointers/references do not have that problem.
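A sketch of that hole. The field size here is illustrative, and whether the access actually escapes the OS's unmapped guard region depends on the platform's page size and minimum-mapping policy, which is why the behavior is left commented out rather than claimed:

```d
class Huge
{
    ubyte[1_000_000] pad;   // pushes later fields ~1 MB past the object's base
    int tail;
}

void main()
{
    Huge h;                  // null reference

    // h.tail lives at address null + ~1_000_000. A segfault is only
    // guaranteed if the OS keeps that address unmapped; for offsets past
    // the guard region the load could silently touch mapped memory, which
    // is why the compiler would need to insert an explicit null check here.
    // h.tail++;             // left commented: behavior is platform-dependent
}
```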

- Jonathan M Davis

November 17, 2017
On Friday, 17 November 2017 at 10:45:13 UTC, codephantom wrote:
>
> I've always thought writing the correct code was the better option anyway.

It is interesting that you mention this. Our product manager was talking to our senior developer about this very thing. He explained that it was a method of development that an employee at his previous company came up with, and that the approach was very effective once implemented.

Our senior developer has really taken charge on this and is really pushing the other developers to just stop writing bugs into the program (it really hasn't been helping the company make money). It's been a bit of a rocky start, but what new policy isn't? I really think this is going to be a savior to our company and that others should adopt it. codephantom, go forth and spread the knowledge that we should stop writing bugs into our programs and instead start with correct code; you won't lose your job over it.
November 17, 2017
On Friday, 17 November 2017 at 14:53:40 UTC, Jonathan M Davis wrote:
> [snip] Regardless, given that dereferencing null will segfault, it does not present an @safety problem.
>

@safe is really more like @memorysafe. Null safety is orthogonal to memory safety.

I don't really use null much in D currently, so this isn't all that important to me ATM. Regardless, I could imagine that if one were writing a language from scratch, you could have a default of @nullsafe where there is a compile-time error if you violate some null safety rules. You could then have a @nullunsafe where those compile-time errors are disabled and violations instead throw an exception at run time. @nullunsafe is effectively the default in most languages.
November 18, 2017
On Friday, 17 November 2017 at 12:18:47 UTC, Atila Neves wrote:
> That's the whole point of using a safe language, otherwise we'd be fine with C.
>

Personally, I would prefer to teach new students to program in C first - precisely because it's an unsafe language - or at least, can be used unsafely.

(That's how I first learnt to program - and actually I taught myself.)

Because of C, I 'had to' learn how to write code in a defensive manner.

These days people often start with a safe language instead, and often use it within an overly sophisticated IDE (a bit like having your mother hold your hand every time you cross the road). I think that encourages laziness, in terms of defensive programming/thinking. Programmers become complacent and leave too much stuff up to compile-time checks.

I think people can write more correct code in the beginning, by simply changing the way they think about the code and how it might interact in the wider ecosystem...and, maybe even by not relying on sophisticated IDE's (at least at the early stages).

Of course compile time checks are needed. But they should not be at the expense of writing code correctly in the first place. They should come in at the latter stage of defensive programming, not the first stage.

If you check the validity of an object before going on to reference/modify it, then no compile time check is ever needed.
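The defensive pattern being described, applied to the thread's running example:

```d
class Test
{
    int Value;
}

void main()
{
    Test t;                      // may or may not have been initialised

    // Validate before use: no compile-time machinery required.
    if (t is null)
        t = new Test;            // or report the error, per your policy

    t.Value++;
    assert(t.Value == 1);
}
```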

Nice Dr. Dobbs article about it here:

http://www.drdobbs.com/defensive-programming/184401915

November 18, 2017
On Friday, 17 November 2017 at 14:53:40 UTC, Jonathan M Davis wrote:
> Regardless, given that dereferencing null will segfault, it
> does not present an @safety problem.

"A notion of safety is always relative to some criterion".

If your code dereferences a null pointer, and the program segfaults, and that program is supplying me with the oxygen I need to survive... then it's probably not safe ;-)