Thread overview
Types A!1 and A!1u not considered equal?
Oct 21, 2011
Tobias Brandt
Oct 21, 2011
Don
Oct 21, 2011
kenji hara
Oct 21, 2011
Gor Gyolchanyan
Oct 21, 2011
Jonathan M Davis
Oct 21, 2011
Gor Gyolchanyan
Oct 21, 2011
Tobias Brandt
Oct 21, 2011
David Nadlinger
Oct 21, 2011
Jonathan M Davis
October 21, 2011
Consider the following program:

 class A(uint N) {}
 void foo(uint N)(A!N) {}

 void main()
 {
     auto a = new A!1;                            // compiles
     foo(new A!1);                                // error
     foo(new A!1u);                               // compiles
     foo(cast(A!1u) new A!1);                     // compiles, but may
                                                  //   crash at runtime
     assert(typeid(new A!1) == typeid(new A!1u)); // compiles, fails at runtime
 }

The second line in main gives the following error:

Error: cannot implicitly convert expression (new A) of type
       test.A!(1).A to test.A!(N).A

Adding the 'u' makes the code compile without errors. Explicitly instantiating foo with !1 or !1u does not change anything.

From the first line, it is clear that instantiating A!1 is not the
problem. Apparently A!1 and A!1u are considered distinct types, although the template parameter must in both cases be of type uint and have value 1 and thus be identical.

What's going on here?
October 21, 2011
That's because implicit casts in D are much stricter than those in C/C++.
Such seemingly intuitive casts, e.g. from long to int are not performed
due to potential loss of data.
Casting from int to uint has the same effect of potential loss of data.
Probably, the compile-time versions of those casts could use some
checks to enable the casts when the actual value can fit into the new
type.
In any case, I'd recommend always specifying the type of your literals,
because the next time this kind of thing happens, the bug may
be silent (especially with templates, which cast types internally).

On Fri, Oct 21, 2011 at 4:31 AM, Tobias Brandt <tob.brandt@googlemail.com> wrote:
> Consider the following program:
>
>  class A(uint N) {}
>  void foo(uint N)(A!N) {}
>
>  void main()
>  {
>      auto a = new A!1;                           // compiles
>      foo(new A!1);                               // error
>      foo(new A!1u);                              // compiles
>      foo(cast(A!1u) new A!1);                    // compiles, but may
>                                                  //   crash at runtime
>      assert(typeid(new A!1) == typeid(new A!1u)); // compiles, fails at runtime
>  }
>
> The second line in main gives the following error:
>
> Error: cannot implicitly convert expression (new A) of type
>        test.A!(1).A to test.A!(N).A
>
> Adding the 'u' makes the code compile without errors. Explicitly instantiating foo with !1 or !1u does not change anything.
>
> From the first line, it is clear that instantiating A!1 is not the problem. Apparently A!1 and A!1u are considered distinct types, although the template parameter must in both cases be of type uint and have value 1 and thus be identical.
>
> What's going on here?
>
October 21, 2011
On Friday, October 21, 2011 11:57:50 Gor Gyolchanyan wrote:
> That's because implicit casts in D are much stricter than those in
> C/C++. Such seemingly intuitive casts, e.g. from long to int are not
> performed due to potential loss of data.
> Casting from int to uint has the same effect of potential loss of data.

In D, integral types implicitly convert to their unsigned counterparts and vice versa. D does not consider those conversions to be narrowing conversions which require a cast (though they _are_ narrowing conversions and do risk messing up the number if it's too large or too small).
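
A minimal example of that rule (the function names below are made up just for
illustration):

  void takesInt(int x) {}
  void takesUint(uint x) {}

  void main()
  {
      int  i = -1;
      uint u = uint.max;

      takesUint(i);          // fine: int -> uint converts implicitly
      takesInt(u);           // fine: uint -> int converts implicitly

      long l = 42;
      // takesInt(l);        // error: long -> int is narrowing and needs a cast
      takesInt(cast(int) l); // fine with an explicit cast
  }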

- Jonathan M Davis
October 21, 2011
On 21.10.2011 02:31, Tobias Brandt wrote:
> Consider the following program:
>
>   class A(uint N) {}
>   void foo(uint N)(A!N) {}
>
>   void main()
>   {
>       auto a = new A!1;                           // compiles
>       foo(new A!1);                               // error
>       foo(new A!1u);                              // compiles
>       foo(cast(A!1u) new A!1);                    // compiles, but may
>                                                   //   crash at runtime
>       assert(typeid(new A!1) == typeid(new A!1u)); // compiles, fails at runtime
>   }
>
> The second line in main gives the following error:
>
> Error: cannot implicitly convert expression (new A) of type
>         test.A!(1).A to test.A!(N).A
>
> Adding the 'u' makes the code compile without errors. Explicitly
> instantiating foo with !1 or !1u does not change anything.
>
> From the first line, it is clear that instantiating A!1 is not the
> problem. Apparently A!1 and A!1u are considered distinct types,
> although the template parameter must in both cases be of type uint
> and have value 1 and thus be identical.
>
> What's going on here?
It's a bit similar to bug 1641.
October 21, 2011
Unlike long <-> int conversions, signed <-> unsigned conversions are
narrowing in both directions.
If anything, those conversions should be even stricter.
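
To make that concrete, a quick sketch using nothing but plain int and uint:

  import std.stdio;

  void main()
  {
      int  i = -1;
      uint u = i;       // accepted implicitly; the bit pattern is reinterpreted
      writeln(u);       // prints 4294967295

      uint big = uint.max;
      int  j = big;     // accepted implicitly the other way as well
      writeln(j);       // prints -1
  }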

On another note, I had a thought about fundamental types.
What I thought about is making fundamental types a library solution.
"What? Are you out of your mind? How are you going to make int a library type?"
The way I see it, fundamental types could be implemented as structs
with lots of assembly code written in their operator overloads.
The emitted binary would be exactly the same, and there would be much
less magic in the language.
The literals for fundamental types would be interpreted as string
literals, which would be parsed at compile time in their constructors.
AND one would be able to make custom types akin to the fundamental ones
with absolutely no difference in look-and-feel or performance.
Since the integral literal would be translated to a string, one would
be able to write a meter-long literal and use it with BigInt, which
would no longer feel like a weird outcast in the world of integral
types.
The only shortcoming that I see right now is the fact that the
implementation of fundamental types is completely defined by the
underlying C implementation. This could be dealt with by introducing
compiler intrinsic functions for fundamental type manipulations.
The only fundamental types would be void* and void[], which would, in
fact, be used to implement e.g. int and float.
This would also reduce the number of magical properties, like min and
max, making them all nice and visible.
And yes, I know that the compiler benefits from them being built-in
and can perform optimizations. It could still do those optimizations,
because they're in druntime. Besides, this is a good opportunity to
improve the general optimization capabilities of the compiler.
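
A very rough sketch of the kind of wrapper I have in mind (the names and
details are made up here, nowhere near an actual proposal):

  struct MyInt
  {
      int value; // in the full scheme this would sit on top of void[]/intrinsics

      this(int v) { value = v; }

      // the literal is handed over as a string and parsed at compile time (CTFE)
      this(string literal)
      {
          foreach (c; literal)
          {
              assert(c >= '0' && c <= '9', "not a decimal digit");
              value = value * 10 + (c - '0');
          }
      }

      MyInt opBinary(string op)(MyInt rhs)
          if (op == "+" || op == "-" || op == "*")
      {
          return MyInt(mixin("value " ~ op ~ " rhs.value"));
      }
  }

  unittest
  {
      enum a = MyInt("12"); // parsed at compile time
      enum b = MyInt("30");
      static assert((a + b).value == 42);
  }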

On Fri, Oct 21, 2011 at 12:01 PM, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> On Friday, October 21, 2011 11:57:50 Gor Gyolchanyan wrote:
>> That's because implicit casts in D are much stricter than those in
>> C/C++. Such seemingly intuitive casts, e.g. from long to int are not
>> performed due to potential loss of data.
>> Casting from int to uint has the same effect of potential loss of data.
>
> In D, integral types implicitly convert to their unsigned counterparts and vice versa. D does not consider those conversions to be narrowing conversions which require a cast (though they _are_ narrowing conversions and do risk messing up the number if it's too large or too small).
>
> - Jonathan M Davis
>
October 21, 2011
On 21 October 2011 10:01, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> On Friday, October 21, 2011 11:57:50 Gor Gyolchanyan wrote:
>> That's because implicit casts in D are much stricter than those in
>> C/C++. Such seemingly intuitive casts, e.g. from long to int are not
>> performed due to potential loss of data.
>> Casting from int to uint has the same effect of potential loss of data.
>
> In D, integral types implicitly convert to their unsigned counterparts and vice versa. D does not consider those conversions to be narrowing conversions which require a cast (though they _are_ narrowing conversions and do risk messing up the number if it's too large or too small).

Obviously, the conversion does happen implicitly, otherwise 'new A!1' wouldn't compile (A expects a uint as parameter). But then, why are A!1 and A!1u different types?
October 21, 2011
On Friday, October 21, 2011 12:20:02 Tobias Brandt wrote:
> On 21 October 2011 10:01, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> > On Friday, October 21, 2011 11:57:50 Gor Gyolchanyan wrote:
> >> That's because implicit casts in D are much stricter than those in
> >> C/C++. Such seemingly intuitive casts, e.g. from long to int are not
> >> performed due to potential loss of data.
> >> Casting from int to uint has the same effect of potential loss of
> >> data.
> > 
> > In D, integral types implicitly convert to their unsigned counterparts and vice versa. D does not consider those conversions to be narrowing conversions which require a cast (though they _are_ narrowing conversions and do risk messing up the number if it's too large or too small).
> 
> Obviously, the conversion does happen implicitly, otherwise 'new A!1' wouldn't compile (A expects a uint as parameter). But then, why are A!1 and A!1u different types?

I believe that it's a bug in the compiler.

- Jonathan M Davis
October 21, 2011
On 10/21/11 12:20 PM, Tobias Brandt wrote:
> Obviously, the conversion does happen implicitly, otherwise
> 'new A!1' wouldn't compile (A expects a uint as parameter).
> But then, why are A!1 and A!1u different types?

Because of a compiler bug, and contrary to the other answers, implicit conversion stuff is not (primarily) to blame here. I also hit it before, but never really had time to track it down. Sometimes (but I never really found out when), it also occurs if you are only using the »right« literals and use casts all over the place – see e.g. https://github.com/klickverbot/phobos/blob/units/std/units.d#L1913.

David
October 21, 2011
2011/10/21 Don <nospam@nospam.com>:
> On 21.10.2011 02:31, Tobias Brandt wrote:
>>
>> Consider the following program:
>>
>>  class A(uint N) {}
>>  void foo(uint N)(A!N) {}
>>
>>  void main()
>>  {
>>      auto a = new A!1;                           // compiles
>>      foo(new A!1);                               // error
>>      foo(new A!1u);                              // compiles
>>      foo(cast(A!1u) new A!1);                    // compiles, but may
>>                                                  //   crash at runtime
>>      assert(typeid(new A!1) == typeid(new A!1u)); // compiles, fails at
>> runtime
>>  }
>>
>> The second line in main gives the following error:
>>
>> Error: cannot implicitly convert expression (new A) of type
>>        test.A!(1).A to test.A!(N).A
>>
>> Adding the 'u' makes the code compile without errors. Explicitly instantiating foo with !1 or !1u does not change anything.
>>
>> From the first line, it is clear that instantiating A!1 is not the
>> problem. Apparently A!1 and A!1u are considered distinct types, although the template parameter must in both cases be of type uint and have value 1 and thus be identical.
>>
>> What's going on here?
>
> It's a bit similar to bug 1641.

Bug 1641 has already been fixed; that issue does not occur with the newest dmd.

This is bug 3467, and I have already posted a dmd patch: http://d.puremagic.com/issues/show_bug.cgi?id=3467 https://github.com/D-Programming-Language/dmd/pull/449

Kenji Hara