June 07, 2011
KennyTM~ Wrote:

> On Jun 7, 11 20:11, foobar wrote:
> > I agree with Ary above and would also like to add that in the ML family of languages all the variables are also default auto typed:
> > E.g.:
> > fun add a b = a + b
> >
> > 'add' would have the type ('a, 'a) ->  'a and the type inference engine will also infer that 'a must provide the + operator.
> > I feel that this is more natural than having a dedicated function template syntax.
> > Better yet, instead of auto parameters, just make parameter types optional (as in ML) and let the compiler generate the template.
> >
> 
> I don't think HM type inference (ML, Haskell) supports implicit casts, i.e. you can't write { int a; double b; a + b; } anymore.

As I've answered to Andrei, this is a good thing(tm) - it's a feature.
I don't know if it has anything to do with the HM algorithm itself, but it is one of ML's core principles. ML is strictly typed, unlike C (glorified assembly).

> 
> > foo(a, b) { return a + b; } // will be lowered to:
> > T foo(T) (T a, T b) { return a + b; }
> >
> > Types are already inferred for delegate literals so why not extend this to regular functions too?
> >
> 
> There is no type inference in delegate literals as powerful as HM; it's just "I call this function with parameters 'int' and 'float', so instantiate a new function with 'int' and 'float'." Also, type inference only works for template parameters
> 
>      struct F(alias f) {
>        ...
>      }
>      alias F!((a,b){return a+b;}) G;
> 
> but not regular delegate literals
> 
>      auto c = (a,b){return a+b;};
>      /* Error: undefined identifier a, b */

June 07, 2011
foobar wrote:
> KennyTM~ Wrote:
>
> > On Jun 7, 11 20:11, foobar wrote:
> > > I agree with Ary above and would also like to add that in the ML family of languages all the variables are also default auto typed:
> > > E.g.:
> > > fun add a b = a + b
> > >
> > > 'add' would have the type ('a, 'a) -> 'a and the type inference engine will also infer that 'a must provide the + operator.
> > > I feel that this is more natural than having a dedicated function template syntax.
> > > Better yet, instead of auto parameters, just make parameter types optional (as in ML) and let the compiler generate the template.
> > >
> >
> > I don't think HM type inference (ML, Haskell) supports implicit casts, i.e. you can't write { int a; double b; a + b; } anymore.
>
> As I've answered to Andrei, this is a good thing(tm) - it's a feature.
> I don't know if it has anything to do with the HM algorithm itself, but it is one of ML's core principles. ML is strictly typed, unlike C (glorified assembly).
> [snip.]

Assembly does not have implicit conversions ;). I wonder how much code would get broken if D changed to strict typing. I seldom rely on implicit casts.


Timon
June 07, 2011
Andrei:

> There are multiple issues. One is we don't have Hindley-Milner polymorphism. The D compiler doesn't really "infer" types as "propagate" them.

- Even Scala, which has a very powerful type system (far more complex than D's), doesn't use H-M; I think that's because its designers prefer type inference inside methods but explicit specification of interfaces.
- H-M is good, but it's not perfect. Haskell has been fighting for years against the limitations imposed by H-M. There are many extensions to Haskell, but they often overflow the inferencing capabilities of H-M.
- I am no expert, but I think H-M doesn't work well with C++-style polymorphism. Haskell uses type classes instead.
- My experience with Haskell is still limited, but while I like its type inference, I generally prefer to give some kind of type to functions. Even in Haskell code written by expert people I see several explicit type signatures. In the end I don't feel a need for whole-program type inference in D.
- I don't remember what the ATS language uses, or whether it performs whole-program type inference; I doubt it.


> Another is, such inference would make separate compilation difficult.

I think there are ways to solve this problem, introducing more powerful module interfaces. But such modules are not easy to use (see ML).

-------------------------------

foobar:

> We don't have Hindley-Milner _yet_. I can hope, can't I?

I don't think you will see H-M in D; I think it goes against D templates.

Bye,
bearophile
June 07, 2011
On 2011-06-07 09:01, foobar wrote:
> Andrei Alexandrescu Wrote:
> > On 6/7/11 7:11 AM, foobar wrote:
> > > I agree with Ary above and would also like to add that in the ML family
> > > of languages all the variables are also default auto typed: E.g.:
> > > fun add a b = a + b
> > > 
> > > 'add' would have the type ('a, 'a) -> 'a and the type inference engine will also infer that 'a must provide the + operator. I feel that this is more natural than having a dedicated function template syntax.
> > 
> > I agree it would be nice to further simplify generic function syntax. One problem with the example above is that the type deduction didn't go all that well - it forces both parameter types to be the same so it won't work with adding values of different types (different widths, mixed floating point and integrals, user-defined +). In a language without overloading, like ML, things are a fair amount easier.
> 
> ML is strictly typed, unlike C-like languages. This is a *good* thing and is a feature, while C's implicit casts are a horrible hole in the language. Also, ML has only two numeric types: integers and floating point. There are no short vs. long problems. Yes, both arguments will have the same type, but this is the correct default. When adding a floating point and an integral, the user should be required to specify what kind of operation is being performed: either the double is converted to an integral (how? floor, round, etc.?) or the integral is converted to a floating point, which can cause a loss of precision. Although overloading complicates things, it doesn't make this impossible.
> 
> ML is explicit but only in the correct places. C-like languages have shortcuts but those are in the wrong places where it hurts and it's verbose in other places. I prefer to let the compiler infer types for me but require me to be explicit about coercion which is type safe vs. the reverse which is both more verbose and less safe.
> 
> > > Better yet, instead of auto parameters, just make parameter types optional (as in ML) and let the compiler generate the template.
> > > 
> > > foo(a, b) { return a + b; } // will be lowered to:
> > > T foo(T) (T a, T b) { return a + b; }
> > > 
> > > Types are already inferred for delegate literals so why not extend this to regular functions too?
> > 
> > There are multiple issues. One is we don't have Hindley-Milner polymorphism. The D compiler doesn't really "infer" types as "propagate" them. Another is, such inference would make separate compilation difficult.
> > 
> > 
> > Andrei
> 
> We don't have Hindley-Milner _yet_. I can hope, can't I?
> Again, difficult != impossible. AFAIK it is working in Nemerle, isn't it?

I don't think that it generally makes sense to _add_ Hindley-Milner type inference to a language. That's the sort of design decision you make when you initially create the language. It has _huge_ repercussions on how the language works. And D didn't go that route. That sort of choice is more typical of a functional language.

And while D might arguably allow too many implicit conversions, it allows fewer than C or C++. I actually would expect that bugs due to implicit conversions would be fairly rare in D. And requiring more conversions to be explicit might actually make things worse, because it would become more frequently necessary to use casts, which tend to hide various types of bugs. So, while it might be better to require casting in a few more places than D currently does, on the whole it works quite well. And certainly expecting a major paradigm shift at this point is unrealistic. Minor improvements may be added to the language, and perhaps major backwards-compatible features may be added, but for the most part, D is currently in the mode of stabilizing and completing the implementation of its existing feature set. It's _far_ too late in the game to introduce something like Hindley-Milner type inference, regardless of whether it would have been a good idea in the beginning (and honestly, given D's C and C++ roots, I very much doubt that it ever would have been a good idea to have Hindley-Milner type inference in it - that would have made for a _very_ different sort of language, which definitely wouldn't be D).

- Jonathan M Davis
June 07, 2011
Jonathan M Davis wrote:
> ...
> And while D might arguably allow too many implicit conversions, it allows
> fewer than C or C++. I actually would expect that bugs due to implicit
> conversions would be fairly rare in D. And requiring more conversions to be
> explicit might actually make things worse, because it would become more
> frequently necessary to use casts, which tend to hide various types of bugs.
> So, while it might be better to require casting in a few more places than D
> currently does, on the whole it works quite well. And certainly expecting a
> major paradigm shift at this point is unrealistic. Minor improvements may be
> added to the language, and perhaps major backwards-compatible features may be
> added, but for the most part, D is currently in the mode of stabilizing and
> completing the implementation of its existing feature set. It's _far_ too late
> in the game to introduce something like Hindley-Milner type inference,
> regardless of whether it would have been a good idea in the beginning (and
> honestly, given D's C and C++ roots, I very much doubt that it ever would have
> been a good idea to have Hindley-Milner type inference in it - that would have
> made for a _very_ different sort of language, which definitely wouldn't be D).
>
> - Jonathan M Davis


Widening implicit casts are very convenient.
What I think is annoying is that D allows implicit conversions that are narrowing.
Eg:
int -> float
long -> double
real -> double
double -> float

Especially questionable is the real -> double -> float chain. This and implicitly casting an integer value to a floating point type whose mantissa is too small should imho be disallowed.

BTW: You can have safe 'casts'. I think std.conv.to does/will allow only type conversions that are safe.

Timon
June 07, 2011
On 2011-06-07 10:20, Timon Gehr wrote:
> Jonathan M Davis wrote:
> > ...
> > And while D might arguably allow too many implicit conversions, it allows
> > fewer than C or C++. I actually would expect that bugs due to implicit
> > conversions would be fairly rare in D. And requiring more conversions to
> > be explicit might actually make things worse, because it would become
> > more frequently necessary to use casts, which tend to hide various types
> > of bugs. So, while it might be better to require casting in a few more
> > places than D currently does, on the whole it works quite well. And
> > certainly expecting a major paradigm shift at this point is unrealistic.
> > Minor improvements may be added to the language, and perhaps major
> > backwards-compatible features may be added, but for the most part, D is
> > currently in the mode of stabilizing and completing the implementation
> > of its existing feature set. It's _far_ too late in the game to
> > introduce something like Hindley-Milner type inference, regardless of
> > whether it would have been a good idea in the beginning (and honestly,
> > given D's C and C++ roots, I very much doubt that it ever would have
> > been a good idea to have Hindley-Milner type inference in it - that
> > would have made for a _very_ different sort of language, which
> > definitely wouldn't be D).
> > 
> > - Jonathan M Davis
> 
> Widening implicit casts are very convenient.
> What I think is annoying is that D allows implicit conversions that are
> narrowing. Eg:
> int -> float
> long -> double
> real -> double
> double -> float
> 
> Especially questionable is the real -> double -> float chain. This and implicitly casting an integer value to a floating point type whose mantissa is too small should imho be disallowed.

Hmmm. That's a bit odd given that narrowing conversions require casts for integral types. Casting from an integral type to a floating point type of the same size seems fine to me (floating point values aren't exactly exact anyway), but I would have thought that narrowing conversions between floating point types would have required casts.

> BTW: You can have safe 'casts'. I think std.conv.to does/will allow only type conversions that are safe.

Yes. But just because a type conversion is safe doesn't mean that it's the right thing to do (e.g. narrowing conversions are _always_ safe but often incorrect). Casts hide things, and while to does it less, that doesn't mean that it doesn't do it. So, it's not like to solves all of the problems that you might have with casting. And there are plenty of programmers who will just use casts when they should be using to, because casting is the normal thing to do in most C-based languages. So, requiring casts in situations where they could reasonably be avoided can be problematic. The trick is determining which implicit conversions ultimately make sense and which don't.

- Jonathan M Davis
June 07, 2011
On 6/7/2011 5:11 AM, foobar wrote:
> Types are already inferred for delegate literals so why not extend this to regular functions too?

Because of this:

int foo(T1 a, T2) { ... do something with a ... }

which is a common idiom for the second parameter existing but not being used in the function body.

In other words, existing practice holds that identifier T2 is presumed to be a type name, not a parameter name.

June 07, 2011
On 6/7/2011 9:01 AM, foobar wrote:
> Also ML has only two types: integers and floating point.

That changes everything. C has 11 integer types and 3 floating point types, which makes it ugly (and surprisingly buggy) to be forced to explicitly cast when doing mixed type expressions.
June 07, 2011
On 6/3/2011 8:19 AM, Matthew Ong wrote:
> Welcome to the D forum, where new ideas are squashed and maybe re-discussed later.
> Look up my name as Matthew Ong. Avoid asking the same questions.

Yes, I can understand you feeling that way, being new to the forum.

But consider the following:

1. Proposals for changing the language come in daily. Yes, I mean day after day, week after week, month after month, etc. An army of programmers could not possibly implement them, and simply reading about them is in itself a full time job. This pretty much forces us to be "Doctor No" to everything but the very, very best of them.

2. By comparison, C++0x took 10 years to settle on a couple dozen new features. It's the opposite extreme, sure, but is another point on the graph.

3. A constant barrage of implementation of new/incompatible/disruptive features makes the language unstable and unusable.

4. D is a large language, and it takes a while to grok the D way of doing things. People new to D naturally try to use it like the language they came from (and come up short because D isn't their old, comfortable language). Heck, my first year of Fortran programs looked like Basic programs. My first year of C programs looked like Fortran programs. My early C++ programs looked like C programs. And etc.

5. While sometimes someone with fresh eyes can see obvious improvements we all missed, it is a high bar for that to happen.

6. Going from a proposal a few lines long to implementing it is many hours of work - designing, interaction with other features, performance, backwards compatibility, rewriting the specification, writing a test suite, all in addition to actually coding up the change.

7. D's implementation is all up on github now. This means that anyone can fork D and try out new language features. Anyone can grab those forks and try the new feature out and provide real world feedback on it. Having such experience makes it easier to see if a feature is worth while or not.
June 07, 2011
On 6/7/11 12:20 PM, Timon Gehr wrote:
>
> Jonathan M Davis wrote:
>> ...
>> And while D might arguably allow too many implicit conversions, it allows
>> fewer than C or C++. I actually would expect that bugs due to implicit
>> conversions would be fairly rare in D. And requiring more conversions to be
>> explicit might actually make things worse, because it would become more
>> frequently necessary to use casts, which tend to hide various types of bugs.
>> So, while it might be better to require casting in a few more places than D
>> currently does, on the whole it works quite well. And certainly expecting a
>> major paradigm shift at this point is unrealistic. Minor improvements may be
>> added to the language, and perhaps major backwards-compatible features may be
>> added, but for the most part, D is currently in the mode of stabilizing and
>> completing the implementation of its existing feature set. It's _far_ too late
>> in the game to introduce something like Hindley-Milner type inference,
>> regardless of whether it would have been a good idea in the beginning (and
>> honestly, given D's C and C++ roots, I very much doubt that it ever would have
>> been a good idea to have Hindley-Milner type inference in it - that would have
>> made for a _very_ different sort of language, which definitely wouldn't be D).
>>
>> - Jonathan M Davis
>
>
> Widening implicit casts are very convenient.
> What I think is annoying is that D allows implicit conversions that are narrowing.
> Eg:
> int ->  float
> long ->  double

I'm a bit wary about these too.

> real ->  double
> double ->  float
>
> Especially questionable is the real ->  double ->  float chain. This and implicitly
> casting an integer value to a floating point type whose mantissa is too small
> should imho be disallowed.

I used to think the same but Walter convinced me otherwise. Automatic floating point conversions allow the compiler and libraries to store intermediate results at the optimal precision without impacting user code.

> BTW: You can have safe 'casts'. I think std.conv.to does/will allow only type
> conversions that are safe.

Currently to!T for implicitly-convertible types does not get in the way.


Andrei