January 18, 2014
Walter Bright:

> I don't think a new syntax is required. We already have the template syntax:
>
>    RangedInt!(0,10)
>
> should do it.

Is this array literal accepted, and can D spot the out-of-range bug at compile time? (The Ada language allows both.)

RangedInt!(0, 10)[] arr = [1, 5, 12, 3, 2];

Probably there are other semantic details that should be handled.

Bye,
bearophile
January 18, 2014
On 2014-01-18 21:57:21 +0000, Walter Bright <newshound2@digitalmars.com> said:

> On 1/18/2014 1:38 PM, Michel Fortin wrote:
>> The closest thing I can think of is range constraints. Here's an example
>> (invented syntax):
> 
> I don't think a new syntax is required. We already have the template syntax:
> 
>     RangedInt!(0,10)
> 
> should do it.

It works, up to a point.

	void foo(RangedInt!(0, 5) a);

	void bar(RangedInt!(0, 10) a)
	{
		if (a < 5)
			foo(a); // what happens here?
	}

In that "foo(a)" line, depending on the implementation of RangedInt you either get a compile-time error that RangedInt!(0, 10) can't be implicitly converted to RangedInt!(0, 5) and have to explicitly convert it, or you get implicit conversion with a runtime check that throws.

Just like with pointers, not knowing the actual control flow pushes range-constraint enforcement to runtime in situations like this one. It's better than nothing, since it'll throw immediately when an out-of-range value is passed to a function and thus the wrong value won't propagate further, but static analysis would make this much better.
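
To make that concrete, here is a minimal sketch (the name and layout are invented, not a real library type) of a RangedInt that takes the explicit-conversion route:

	// Minimal sketch: runtime-checked constructor, explicit narrowing.
	struct RangedInt(int min, int max)
	{
		private int value;

		this(int v)
		{
			// The check runs when the constructor executes, not at compile time.
			assert(v >= min && v <= max, "value out of range");
			value = v;
		}

		alias value this; // the wrapped value converts back to int
	}

	void foo(RangedInt!(0, 5) a) {}

	void bar(RangedInt!(0, 10) a)
	{
		if (a < 5)
			foo(RangedInt!(0, 5)(a)); // explicit, runtime-checked narrowing
			// foo(a);                // with no conversion defined: compile error
	}

The implicit-conversion-plus-runtime-throw variant would need some conversion machinery on the RangedInt type itself; either way the check only runs when the call executes.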

In fact, even the most obvious case can't be caught at compile-time with the template approach:

	void baz()
	{
		foo(999); // runtime error here
	}

-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

January 18, 2014
Michel Fortin:

> In fact, even the most obvious case can't be caught at compile-time with the template approach:
>
> 	void baz()
> 	{
> 		foo(999); // runtime error here
> 	}

The constructor of the ranged int needs an "enum precondition":

http://forum.dlang.org/thread/ksfwgjqewmsxsribenzq@forum.dlang.org
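
(An "enum precondition" is a proposal, not something D has today. As a rough approximation, the check on a literal can already be pushed into CTFE; for example, with an invented helper name:)

	// Validates the literal during compile-time function evaluation, so an
	// out-of-range element becomes a compile error instead of a runtime throw.
	int[] checkedLiteral(int min, int max)(int[] values)
	{
		foreach (v; values)
			assert(v >= min && v <= max, "value out of range");
		return values;
	}

	enum ok = checkedLiteral!(0, 10)([1, 5, 3, 2]);          // fine
	// enum bad = checkedLiteral!(0, 10)([1, 5, 12, 3, 2]);  // error during CTFE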

Bye,
bearophile
January 18, 2014
On 1/18/2014 2:16 PM, Michel Fortin wrote:
> It works, up to a point.
>
>      void foo(RangedInt!(0, 5) a);
>
>      void bar(RangedInt!(0, 10) a)
>      {
>          if (a < 5)
>              foo(a); // what happens here?
>      }
>
> In that "foo(a)" line, depending on the implementation of RangedInt you either
> get a compile-time error that RangedInt!(0, 10) can't be implicitly converted to
> RangedInt!(0, 5) and have to explicitly convert it, or you get implicit
> conversion with a runtime check that throws.

Yes, and I'm not seeing the problem. (The runtime check may also be removed by the optimizer.)


> Just like with pointers, not knowing the actual control flow pushes
> range-constraint enforcement to runtime in situations like this one.

With pointers, the enforcement only happens when converting a pointer to a nonnull pointer.

> In fact, even the most obvious case can't be caught at compile-time with the
> template approach:
>
>      void baz()
>      {
>          foo(999); // runtime error here
>      }

Sure it can. Inlining, etc., and appropriate use of compile time constraints.
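
One speculative reading of "compile time constraints" (not necessarily what is meant here): when the argument is a literal, it can be passed as a template parameter so a constraint rejects it before the program runs, at the cost of a different call syntax:

	// Speculative sketch: a compile-time-checked entry point for literal arguments.
	void foo(int v)() if (v >= 0 && v <= 5)
	{
		// v is a compile-time constant already known to be in range
	}

	void baz()
	{
		foo!3();      // fine
		// foo!999(); // rejected at compile time: constraint not satisfied
	}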

January 18, 2014
The point being, there is a whole universe of subset types. You cannot begin to predict all the use cases, let alone come up with a special syntax for each.
January 18, 2014
Walter Bright:

> The point being, there is a whole universe of subset types. You cannot begin to predict all the use cases, let alone come up with a special syntax for each.

On the other hand, there are some groups of types that are very common, lead to a good percentage of bugs and troubles, and need refined type semantics & management to be handled well (so they're hard to implement as library types).

So there are some solutions:
1) Pick the most common types, like pointers/class references, integral values, and a few others, and hard-code their handling very well in the language.
2) Try to implement some barely working versions using the existing language features.
3) Add enough tools to the language to allow the creation of "good enough" library defined features. (This is hard to do. Currently you can't implement "good enough" not-nullable reference types or ranged integers in D).
4) Give up and accept using a simpler language, with a simpler compiler, that is easier to create and develop.

Bye,
bearophile
January 18, 2014
> 1) Pick the most common types, like pointers/class references, integral values, and a few others, and hard-code their handling very well in the language.

This is what Ada usually does.


> 2) Try to implement some barely working versions using the existing language features.

This is what D has often done.


> 3) Add enough tools to the language to allow the creation of "good enough" library defined features. (This is hard to do. Currently you can't implement "good enough" not-nullable reference types or ranged integers in D).

This is what some languages, such as ATS, try to do.


> 4) Give up and accept using a simpler language, with a simpler compiler, that is easier to create and develop.

This is what Go often does (in other cases it hard-codes a solution, like the built-in associative arrays).

Bye,
bearophile
January 18, 2014
On 1/18/2014 2:43 PM, bearophile wrote:
> Walter Bright:
>
>> The point being, there is a whole universe of subset types. You cannot begin
>> to predict all the use cases, let alone come up with a special syntax for each.
>
> On the other hand, there are some groups of types that are very common, lead
> to a good percentage of bugs and troubles, and need refined type semantics &
> management to be handled well (so they're hard to implement as library types).

That would be a problem with D's ability to define library types, and we should address that rather than take the Go approach and add more magic to the compiler.


> Currently you can't implement
> "good enough" not-nullable reference types or ranged integers in D).

This is not at all clear.


> 4) Give up and accept to use a simpler language, with a simpler compiler, that
> is more easy to create and develop.

Applications are complicated. A simple language tends to push the complexity into the source code. Java is a fine example of that.

January 18, 2014
On Saturday, 18 January 2014 at 02:59:43 UTC, Walter Bright wrote:
On 1/17/2014 6:42 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
>> But then you have to define "invalid state",
>
> An unexpected value is an invalid state.

It is only an invalid state for a subsystem; if your code is written to handle it, it can contain it and recover (or disable that subsystem), assuming you know it is unlikely to be caused by memory corruption.

The problem with being rigid about this definition is that most non-trivial programs are constantly in an invalid state and therefore should not be allowed to even start. Basically, you should stop making DMD available: it contains bugs, so it is constantly in an invalid state versus the published model. State is not only variables; state is code too (e.g. a state machine).

What is the essential difference between insisting on stopping a program with bugs and insisting on not starting a program with bugs? There is no difference.

Still, most companies ship software with known non-fatal bugs.
January 18, 2014
Walter Bright:

>> Currently you can't implement
>> "good enough" not-nullable reference types or ranged integers in D).
>
> This is not at all clear.

A good Ranged should allow syntax like this, and it should catch this error at compile time (with an "enum precondition"):

Ranged!(int, 0, 10)[] arr = [1, 5, 12, 3, 2];

It also should use the CPU overflow/carry flags to detect integer overflows efficiently enough on a Ranged!(uint, 0, uint.max) type. It should handle conversions to the super-type nicely and allow using a ranged int as an array index. And array bounds tests should be disabled when you are using a ranged size_t that is statically known to lie within the bounds of the array, because this is one of the main purposes of ranged integrals.
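
For the overflow part, a sketch of what that could look like, assuming a druntime that provides core.checkedint (its functions are designed to let the compiler use the CPU's overflow/carry flags); the wrapper itself is invented:

	import core.checkedint : addu;

	// Invented sketch: overflow-checked addition for a full-range unsigned value.
	struct RangedUint
	{
		uint value;

		RangedUint opBinary(string op : "+")(RangedUint rhs) const
		{
			bool overflow = false;
			immutable sum = addu(value, rhs.value, overflow); // flags a wraparound
			assert(!overflow, "unsigned overflow");
			return RangedUint(sum);
		}
	}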

And D arrays should have optional strongly-typed index types, as in Ada, because this makes the code safer, easier to reason about, and even faster (thanks to removing some now-unnecessary array bounds tests).
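
A rough library-level sketch of typed indices (invented names; it does not recover the bounds-check elision Ada gets from this):

	// Invented sketch: an array whose opIndex only accepts one dedicated index type.
	struct Index(string kind) { size_t value; }

	struct TypedArray(T, IndexType)
	{
		T[] data;
		ref T opIndex(IndexType i) { return data[i.value]; }
	}

	alias RowIndex = Index!"row";
	alias ColIndex = Index!"col";

	unittest
	{
		auto rows = TypedArray!(double, RowIndex)(new double[4]);
		rows[RowIndex(3)] = 1.0;    // ok
		// rows[ColIndex(3)] = 1.0; // compile error: wrong kind of index
	}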

Similarly, not-nullable pointers and class references have some semantic requirements that are not easy to implement in D today.

Bye,
bearophile