November 20, 2017
On Monday, 20 November 2017 at 08:55:54 UTC, Biotronic wrote:
> On Monday, 20 November 2017 at 08:49:41 UTC, rumbu wrote:
>> In fact, this is the introduction of a new operator "!", probably named "I know better" operator.
>
> It's called the "bang" operator, because of how things blow up when you're wrong.

aka the 'dig your own grave' operator.

November 20, 2017
On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright wrote:
> On 11/18/2017 6:25 PM, Timon Gehr wrote:
>> I.e., baseClass should have type Nullable!ClassDeclaration. This does not in any form imply that ClassDeclaration itself needs to have a null value.
>
> Converting back and forth between the two types doesn't sound appealing.

Converting isn't necessary - one can instead map over nullable types, with the mapped function simply not being called when the value is null. I.e.:

import std.traits : ReturnType;

struct Nullable(T) {
    private T _value;
    private bool _isNull = true;
    //...
    this(T value) { _value = value; _isNull = false; }
    bool isNull() const { return _isNull; }

    // F is never called when the value is null; the result
    // falls back to the .init of F's return type.
    auto map(alias F)() {
        return isNull
            ? ReturnType!F.init
            : F(_value);
    }
}
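With the constructor above, usage would look something like this (addOne is just a stand-in function of my own):

unittest {
    static int addOne(int x) { return x + 1; }

    auto some = Nullable!int(41);
    assert(some.map!addOne == 42);

    Nullable!int none;
    assert(none.map!addOne == int.init); // addOne is never called
}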


>>> What should the default initializer for a type do?

There shouldn't be one - any usage of a non-nullable type that hasn't been initialised should be a compile-time error. Similar to C++ references, which must be initialised, but relaxed to allow the initialising assignment at a place other than the declaration.


> Interestingly, `int` isn't nullable, and we routinely use rather ugly hacks to fake it being nullable, like reserving a bit pattern like 0, -1 or 0xDEADBEEF and calling it INVALID_VALUE, or carrying around some other separate flag that says if it is valid or not. These are often rich sources of bugs.

Nullable!int.

> As you can guess, I happen to like null, because there are no hidden bugs from pretending it is a valid value - you get an immediate program halt - rather than subtly corrupted results.

The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.

> Yes, my own code has produced seg faults from erroneously assuming a value was not null. But it wouldn't have been better with non-nullable types, since the logic error would have been hidden and may have been much, much harder to recognize and track down.

No, it would have been a compile-time error instead.


Atila
November 20, 2017
On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright wrote:
> Interestingly, `int` isn't nullable, and we routinely use rather ugly hacks to fake it being nullable, like reserving a bit pattern like 0, -1 or 0xDEADBEEF and calling it INVALID_VALUE, or carrying around some other separate flag that says if it is valid or not. These are often rich sources of bugs.
>
> As you can guess, I happen to like null, because there are no hidden bugs from pretending it is a valid value - you get an immediate program halt - rather than subtly corrupted results.

I don't deny these. Null is an excellent way to denote "empty" or "invalid". That's just what std.typecons.Nullable!T is for. Granted, it is not quite as elegant as naturally nullable types.

But that does not mean nullables are always good. Consider:

struct TimeOfDay
{
    byte hours;
    byte minutes;
    byte seconds;
}

While it might make sense to make TimeOfDay nullable as a whole, you definitely do not want each of its fields to have a null value of its own. You know statically that if the struct is valid, then all its members are valid.
Checking them for null would be nothing but a performance hit. You could skip those null checks by convention, but you surely would not always remember, causing sub-optimal performance.
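For instance, with std.typecons's Nullable the struct above could be used like this - one isNull check then covers all three fields (a sketch; useTime and maybeTime are stand-in names):

import std.typecons : Nullable;

void useTime()
{
    Nullable!TimeOfDay maybeTime; // the whole value is nullable
    if (!maybeTime.isNull)
    {
        // One check covers everything: hours, minutes and
        // seconds are plain bytes, known to be valid here.
        auto t = maybeTime.get;
        // ... use t.hours, t.minutes, t.seconds directly
    }
}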

Ideally you would leave it up to the user of the type whether it has a null value or not, just like C# does. Whether it's worth its weight is a different question, though.

As for what the default-initialised value of an abstract type should be were it non-nullable, I think the definer of the type should decide that. A library solution sounds credible to me: a type that wraps a reference type and behaves like a value type. The default-initialised value and what to do on copy would be passed as template parameters. Perhaps I should try...
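A rough sketch of what I mean, using only current D - NonNull, makeDefault and Widget are names I made up, and the copy policy is left out:

struct NonNull(T, alias makeDefault)
    if (is(T == class))
{
    private T _ref;

    // Substitute the user-supplied default whenever the
    // reference would otherwise be observed as null.
    T get()
    {
        if (_ref is null) _ref = makeDefault();
        return _ref;
    }

    alias get this; // behave like the wrapped reference
}

class Widget { int id; }

unittest
{
    NonNull!(Widget, () => new Widget) w;
    assert(w.id == 0); // never observed as null
}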
November 20, 2017
On Monday, 20 November 2017 at 10:07:08 UTC, Atila Neves wrote:
> The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.

Not sure if it breaks the type system, but it would be cleaner to construct types with null explicitly, as "int|null", "float|null" etc. Then again, that would make for a high-level language, and floats already have many NaN values (two semantic NaN values, but many encodings that might be used for conveying extra information).

>> assuming a value was not null. But it wouldn't have been better with non-nullable types, since the logic error would have been hidden and may have been much, much harder to recognize and track down.
>
> No, it would have been a compile-time error instead.

Yes, but you don't need non-nullable types; you could have subtyping of nullable types instead. For floats that would be very useful, e.g. constraining a float to the range [0.0, 1.0), to integer values, or to not-infinity/not-NaN etc.
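As a library-level approximation in current D (a sketch; UnitInterval is a made-up name, and the constraint is checked at run time rather than by the type system):

struct UnitInterval
{
    private double _value = 0.0;

    this(double v)
    {
        assert(v >= 0.0 && v < 1.0, "value outside [0.0, 1.0)");
        _value = v;
    }

    double value() const { return _value; }
    alias value this; // widening to plain double is always safe
}

unittest
{
    auto u = UnitInterval(0.5);
    double d = u; // implicit conversion via alias this
    assert(d == 0.5);
}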

November 20, 2017
On Monday, 20 November 2017 at 10:45:20 UTC, Dukc wrote:
> A type that wraps a reference type behaving like a value type. Default initialized value and what to do on copy would be passed as template parameters. Perhaps I should try...

Just realized Unique!T is already pretty close. A few (non-breaking) modifications to it could do the trick.
November 20, 2017
On 20.11.2017 11:07, Atila Neves wrote:
> 
> 
>> As you can guess, I happen to like null, because there are no hidden bugs from pretending it is a valid value - you get an immediate program halt - rather than subtly corrupted results.
> 
> The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.

In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
November 20, 2017
On Monday, 20 November 2017 at 11:27:15 UTC, Timon Gehr wrote:
> On 20.11.2017 11:07, Atila Neves wrote:
>> 
>> 
>>> As you can guess, I happen to like null, because there are no hidden bugs from pretending it is a valid value - you get an immediate program halt - rather than subtly corrupted results.
>> 
>> The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
>
> In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)

Are you thinking about this?

https://dl.acm.org/citation.cfm?id=2984004

I don't think it says that it is unsound because of null as such, but rather that later features came into conflict with it?

November 20, 2017
On Sunday, 19 November 2017 at 22:54:38 UTC, Walter Bright wrote:
> There's also an issue of how to derive a class from a base class.

If you want null, use a nullable type:

Base b = ...;
Derived? d = cast(Derived?) b;
if (d !is null) d.method();
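For comparison, in today's D a failed class cast already yields null, so the null check doubles as the type test (a self-contained sketch with stand-in names):

class Base {}
class Derived : Base { void method() {} }

void use(Base b)
{
    // cast(Derived) evaluates to null when b is not actually
    // a Derived, so checking the result covers both concerns.
    if (auto d = cast(Derived) b)
        d.method();
}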

> This implies one must know all the use cases of a type before designing it.

Start off with a non-nullable reference. If later you need null, change to T?. T? is implicitly convertible to T where flow analysis can tell that it is not null (e.g. after it is assigned a non-nullable T).

>> It was your own design decision to hide the error.
>
> No, it was a bug. Nobody makes design decisions to insert bugs :-) The issue is how easy the bug is to have, and how difficult it would be to discover it.

The compiler would nag you when you try to dereference a nullable type: you have to act to confirm you didn't forget to check for null, a common mistake in reference-heavy APIs.

>> No, it would have been better because you would have been used to the more explicit system from the start and you would have just written essentially the same code with a few more compiler checks in those cases where they apply, and perhaps you would have suffered a handful fewer null dereferences.
>
> I'm just not convinced of that.

Maybe you use nullable types a lot and rarely use references that aren't meant to have null as a valid value. Most programmers have plenty of statements where a reference is not supposed to be null, and would appreciate having the compiler enforce this. Popular new programming languages make nullable opt-in, not the default, to reduce the surface area for null dereference bugs.

>> I'm not fighting for explicit nullable in D by the way.
>
> Thanks for clarifying that.

There is a way to avoid breaking existing code, however: instead of marking nullable references, we would have a sigil (such as '$') for non-nullable ones:

T nullable;
T$ nonNullable = new T;

typeof(new T) for full compatibility would still be T, but the compiler would know it safely converts to T$.

November 20, 2017
On 11/20/2017 3:27 AM, Timon Gehr wrote:
> On 20.11.2017 11:07, Atila Neves wrote:
>> The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
> 
> In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)

I'm curious. Can you expand on this, please?

(In D, casting null to any other pointer type is marked as @unsafe.)
November 21, 2017
On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright wrote:
> On 11/20/2017 3:27 AM, Timon Gehr wrote:
>> On 20.11.2017 11:07, Atila Neves wrote:
>>> The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
>> 
>> In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
>
> I'm curious. Can you expand on this, please?
>
> (In D, casting null to any other pointer type is marked as @unsafe.)

This blog post seems to summarize the paper he linked to:
https://dev.to/rosstate/java-is-unsound-the-industry-perspective