January 19, 2014
On 01/19/2014 12:01 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Saturday, 18 January 2014 at 02:59:43 UTC, Walter Bright wrote:
>> On 1/17/2014 6:42 PM, "Ola Fosheim Grøstad"
>> <ola.fosheim.grostad+dlang@gmail.com> wrote:
>>> But then you have to define "invalid state",
>>
>> An unexpected value is an invalid state.
>
> It is only an invalid state for a subsystem; if your code is written to
> handle it, it can contain it and recover (or disable that subsystem).
> That assumes you know it is unlikely to be caused by memory corruption.
> ...

This is not a plausible assumption. What you tend to know is that the program is unlikely to fail, because otherwise, being safety critical, it would not have been shipped. I.e. when it does fail, you cannot know that memory corruption is an unlikely cause. It could be a hardware failure, and even a formal correctness proof does not help with that.

> The problem with being rigid on this definition is ...

He is not.

> What is the essential difference between insisting on stopping a program
> with bugs and insisting on not starting a program with bugs? There is no
> difference.
> ...

Irrelevant. He is arguing for stopping the system once it has _become clear_ that the _current execution_ might not deliver the expected results.
January 19, 2014
On Saturday, 18 January 2014 at 22:12:09 UTC, bearophile wrote:
> Walter Bright:
>
>> I don't think a new syntax is required. We already have the template syntax:
>>
>>   RangedInt!(0,10)
>>
>> should do it.
>
> Is this array literal accepted, and can D spot the out-of-range bug at compile time (the Ada language allows both)?
>
> RangedInt!(0, 10)[] arr = [1, 5, 12, 3, 2];

Even though the syntax would be less lean, D can already do this with templates and/or CTFE quite easily.
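
A minimal sketch of the idea (illustrative only; as noted, the array literal from the question needs explicit construction, hence the less lean syntax):

	struct RangedInt(int min, int max)
	{
		int value;

		this(int v)
		{
			// A failed assert during CTFE is a compile-time error, so an
			// out-of-range constant is rejected before the program runs.
			assert(v >= min && v <= max, "value out of range");
			value = v;
		}

		alias value this; // usable as a plain int afterwards
	}

	enum ok = RangedInt!(0, 10)(5);      // fine
	// enum bad = RangedInt!(0, 10)(12); // fails to compile

	// The array from the question, spelled explicitly:
	// RangedInt!(0, 10)[] arr = [RangedInt!(0, 10)(1), RangedInt!(0, 10)(5), ...];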
January 19, 2014
On 2014-01-19 00:34, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:

> I have not argued for not having a null. I have argued for trapping
> null, instantiating a type-specific default, and recovering if the
> type's metadata says so. That default could be to issue a
> NullException. At that point you should be able to log null
> dereferences too.

I think nil (null) works quite nicely in Ruby. nil is a singleton instance of the NilClass class. Since it's an object, you can call methods on it, like to_s, which returns an empty string. It works quite well when doing web development with Ruby on Rails: if you're trying to render something that's nil, you get nothing instead of crashing the whole page. Sure, there might be a small icon or similar that isn't rendered, but that's usually a minor detail. If the page works and the main content is rendered, that's preferable.
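
A loose D analogue of that null-object idea, just to illustrate the pattern (the names are made up):

	class Widget
	{
		string render() { return "<the widget's markup>"; }
	}

	// Plays the role of Ruby's NilClass for widgets: rendering a
	// missing widget yields an empty string instead of a crash.
	class NullWidget : Widget
	{
		override string render() { return ""; }
	}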

-- 
/Jacob Carlborg
January 19, 2014
Jacob Carlborg:

> I think nil (null) works quite nicely in Ruby. nil is a singleton instance of the NilClass class. Since it's an object, you can call methods on it, like to_s, which returns an empty string. It works quite well when doing web development with Ruby on Rails: if you're trying to render something that's nil, you get nothing instead of crashing the whole page. Sure, there might be a small icon or similar that isn't rendered, but that's usually a minor detail. If the page works and the main content is rendered, that's preferable.

Walter is arguing against this solution in D.
While that can be OK for Ruby used for web development, for a statically typed language meant for safe coding styles, there are better type-based solutions to that problem.

Bye,
bearophile
January 19, 2014
On 2014-01-19 11:30:00 +0000, "bearophile" <bearophileHUGS@lycos.com> said:

> Jacob Carlborg:
> 
>> I think nil (null) works quite nicely in Ruby. nil is a singleton instance of the NilClass class. Since it's an object, you can call methods on it, like to_s, which returns an empty string. It works quite well when doing web development with Ruby on Rails: if you're trying to render something that's nil, you get nothing instead of crashing the whole page. Sure, there might be a small icon or similar that isn't rendered, but that's usually a minor detail. If the page works and the main content is rendered, that's preferable.
> 
> Walter is arguing against this solution in D.
> While that can be OK for Ruby used for web development, for a statically typed language meant for safe coding styles, there are better type-based solutions to that problem.

It won't work in D's type system anyway. Ruby is dynamically typed; that's why it can work there.

Interestingly, in Objective-C calling a method on a null object pointer just does nothing. That's a feature that is often useful when chaining calls that might return null (you don't have to do all these extra checks) or with weak pointers, but it can on occasion lead to subtle bugs.
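
Something similar can be approximated in D as a library helper rather than a language rule; a rough sketch (the name ifNotNull is invented):

	auto ifNotNull(alias fun, C)(C obj)
	{
		// Apply fun only when obj is non-null; otherwise yield the
		// .init value of fun's result type, much like Objective-C's
		// nil-swallowing message sends.
		return obj is null ? typeof(fun(obj)).init : fun(obj);
	}

	// Usage, assuming some class Person with a 'name' member:
	// auto n = person.ifNotNull!(p => p.name); // null person => null string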

-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

January 19, 2014
On 2014-01-19 07:56:06 +0000, Timon Gehr <timon.gehr@gmx.ch> said:

> On 01/18/2014 10:05 PM, Walter Bright wrote:
>> Aside from that, non-null is only one case of a universe of types that
>> are subsets of other types.
> 
> This is not true. The main rationale for "non-null" is to eliminate null dereferences. A? in his proposal is different from current nullable references in that the compiler does not allow them to be dereferenced.

Actually, 'A?' would implicitly convert to 'A' where the compiler can prove control flow prevents its value from being null. So you can dereference it in a branch that checked for null:

	class A { int i; void foo(); }
	void bar(A a); // non-nullable parameter

	void test(A? a, A? a2)
	{
		a.i++; // error, 'a' might be null
		a.foo(); // error, 'a' might be null
		bar(a); // error, 'a' might be null
	
		if (a)
		{
			a.i++; // valid, 'a' can't be null here
			a.foo(); // valid, 'a' can't be null here
			bar(a); // valid, 'a' can't be null here
		}
	}

Obviously, the compiler has to be pessimistic, which means that if your control flow is too complicated you might have to use a cast, or add an extra "if" or assert. Personally, I don't see that as a problem. If I have to choose between dynamic typing and static typing, I'll choose the latter, even if sometimes the type system forces me to do a cast. Same thing here with null.
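
For instance, in the same hypothetical syntax, the escape hatch might look like this:

	void test2(A? a)
	{
		// ...control flow too tangled for the checker...
		assert(a !is null); // or: A b = cast(A) a;
		a.foo();            // accepted: 'a' is known non-null here
	}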


> If we just had a solution for arbitrary subset types, we'd _still_ be left with a type of references that might be null, but are not prevented from being dereferenced.

And I left out that point while converting the example to numeric ranges earlier. It's an important point.


-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

January 19, 2014
On Sunday, 19 January 2014 at 07:40:09 UTC, Walter Bright wrote:
> On 1/18/2014 6:33 PM, Walter Bright wrote:
>> You elided the qualification "If it is a critical system". dmd is not a safety critical application.
>
> And I still practice what I preach with DMD. DMD never attempts to continue running after it detects that it has entered an invalid state - it ceases immediately. Furthermore, when it detects any error in the source code being compiled, it does not generate an object file.

I think the whole "critical system" definition is rather vague. For safety-critical applications you want proven implementation technology, proper tooling, and a methodology to go with it. And it is very domain specific: simple algorithms can be proven correct, some types of signal processing can be proven correct/stable, and some types of implementations (like an FPGA) afford exhaustive testing (testing all combinations of input). In the case of D, I find that a somewhat theoretical argument: D is not a proven technology, and D does not have tooling with a methodology to go with it. But yes, you want backups due to hardware failure even for programs that are proven correct. In a telephone exchange you might want a backup system to handle emergency calls.

If you take a theoretical position (which I think you do), then I also think you should accept a theoretical argument. And the argument is that there is no theoretical difference between allowing programs with known bugs to run and allowing programs with anticipated bugs to run (e.g. catching "bottom" in a subsystem). There is also no theoretical difference between allowing DMD to generate code that does not follow the spec 100% and allowing DMD to generate code when an anticipated "bottom" occurs. It all depends on what degree of deviance from the specified model you accept. It is quite acceptable to catch "bottom" in an optimizer and generate less optimized code for that function, or to turn off that optimizer setting. However, in a compiler you can defer to "the pilot" (the user), so that is generally easier. In a server you can't.

January 19, 2014
On Sunday, 19 January 2014 at 10:32:36 UTC, Jacob Carlborg wrote:
> I think nil (null) works quite nicely in Ruby. nil is a singleton instance of the NilClass class. Since it's an object, you can call methods on it, like to_s, which returns an empty string. It works quite well when doing web development with

I have no experience with Ruby, but JavaScript also does this (undefined is an object). I don't think it is more work to debug Python and JavaScript null exceptions than C-like code.

I think it was a mistake to let zero represent null; it was probably done to make null tests fast. A different bit pattern could prevent conflation of null with memory corruption: it is highly improbable that an address like $F1234324 is the result of memory corruption.

Another advantage of null objects as a pattern, with mechanisms to support them, is that you can have multiple variants and differentiate between (see the sketch after this list):

- undefined (not initialized)
- null (deliberately not having an attribute)
- application-specific null values (like try-again or lazy-evaluation) that, depending on context, evaluate to undefined or null, or fetch the value by computation
- and more
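
A hedged sketch of how that differentiation could look as a library type in D (all names invented):

	enum Absence { none, undefined, deliberateNull, tryAgain }

	struct Maybe(T)
	{
		Absence why = Absence.undefined; // "not initialized" by default
		T payload;

		static Maybe present(T v)      { return Maybe(Absence.none, v); }
		static Maybe absent(Absence a) { return Maybe(a); }

		bool ok() const { return why == Absence.none; }
	}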
January 19, 2014
On Sunday, 19 January 2014 at 12:29:14 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 19 January 2014 at 10:32:36 UTC, Jacob Carlborg wrote:
>
> I have no experience with Ruby, but JavaScript also does this (undefined is an object). I don't think it is more work to debug Python and JavaScript null exceptions than C-like code.

Having had heavy experience with Python, and having used D since D1, I would say the contrary: it's easier to handle null exceptions with D. That's only experience-based, naturally; I don't pretend this to be science.

Going down that hill, we should then love PHP, as its mantra seems to be "go marching in", and we all know what a horrible mess it is.
---
Paolo
January 19, 2014
On Sunday, 19 January 2014 at 12:20:42 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 19 January 2014 at 07:40:09 UTC, Walter Bright wrote:
>> On 1/18/2014 6:33 PM, Walter Bright wrote:
>>> You elided the qualification "If it is a critical system". dmd is not a safety critical application.
>>
>> And I still practice what I preach with DMD. DMD never attempts to continue running after it detects that it has entered an invalid state - it ceases immediately. Furthermore, when it detects any error in the source code being compiled, it does not generate an object file.
>
> I think the whole "critical system" definition is rather vague. For safety-critical applications you want proven implementation technology, proper tooling, and a methodology to go with it. And it is very domain specific: simple algorithms can be proven correct, some types of signal processing can be proven correct/stable, and some types of implementations (like an FPGA) afford exhaustive testing (testing all combinations of input). In the case of D, I find that a somewhat theoretical argument: D is not a proven technology, and D does not have tooling with a methodology to go with it. But yes, you want backups due to hardware failure even for programs that are proven correct. In a telephone exchange you might want a backup system to handle emergency calls.
>
> If you take a theoretical position (which I think you do), then I also think you should accept a theoretical argument. And the argument is that there is no theoretical difference between allowing programs with known bugs to run and allowing programs with anticipated bugs to run (e.g. catching "bottom" in a subsystem). There is also no theoretical difference between allowing DMD to generate code that does not follow the spec 100% and allowing DMD to generate code when an anticipated "bottom" occurs. It all depends on what degree of deviance from the specified model you accept. It is quite acceptable to catch "bottom" in an optimizer and generate less optimized code for that function, or to turn off that optimizer setting. However, in a compiler you can defer to "the pilot" (the user), so that is generally easier. In a server you can't.

I'm trying to understand your motivations, but why can't you do that in a server? I still can't grasp that point.
--
Paolo