Thread overview: typedef behavior question

Scott Pigman (Jan 19, 2003)
Alix Pexton (Jan 19, 2003)
Scott Pigman (Jan 19, 2003)
Mike Wynn (Jan 19, 2003)
Scott Pigman (Jan 19, 2003)
Mike Wynn (Jan 19, 2003)
Scott Pigman (Jan 19, 2003)
Mike Wynn (Jan 19, 2003)
Scott Pigman (Jan 21, 2003)
Mike Wynn (Jan 21, 2003)
Ilya Minkov (Jan 21, 2003)
Scott Pigman (Jan 21, 2003)
Ilya Minkov (Jan 20, 2003)
Walter (Jan 20, 2003)
Ilya Minkov (Jan 20, 2003)
January 19, 2003
Is the error on line 2 a bug or a feature?


typedef double money;
money m = 0.0; // ERROR "cannot implicitly convert double to money"
m = (money)0.0; // Correct


-scott
January 19, 2003
Feature!

In D typedef makes a completely new type, unlike in C/C++.

C-style typedefs can be created using alias instead.
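For example (a quick sketch of the difference; the names are just illustrative):

typedef double money;   // money is a distinct type
alias double price;     // price is only another name for double

money m = cast(money)9.99;  // needs the explicit cast, as in the error above
price p = 9.99;             // fine, price really is a double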

Alix Pexton...
Webmaster, www.thedjournal.com

Scott Pigman wrote:
> Is the error on line 2 a bug or a feature?
> 
> 
> typedef double money;
> money m = 0.0; // ERROR "cannot implicitly convert double to money"
> m = (money)0.0; // Correct
> 
> 
> -scott

January 19, 2003
i understand that typedef really means "Define a new type", and i see that
having to cast a numeric constant to the type of the typedef is logically
consistent and follows what happens with class hierarchies, but i don't
like it.  if i said "double d = 0", i wouldn't have any problem, even
though "0" is an int, not a double.  who would want to always type
"double d = (double)0.0"?  so why have to cast 0.0 to type
"money", when it really doesn't add any information or safety so far as i
can see?  when it comes to assigning variables to one another or passing
them to functions i see the benefit, but i would expect to be able to
assign numeric constants to my typedefed variable without having to cast.

-scott


On Sun, 19 Jan 2003 15:26:01 +0000, Alix Pexton wrote:

> Feature!
> 
> In D typedef makes a completely new type, unlike in C/C++.
> 
> C-style typedefs can be created using alias instead.
> 
> Alix Pexton...
> Webmaster, www.thedjournal.com
> 
> Scott Pigman wrote:
>> Is the error on line 2 a bug or a feature?
>> 
>> 
>> typedef double money;
>> money m = 0.0; // ERROR "cannot implicitly convert double to money"
>> m = (money)0.0; // Correct
>> 
>> 
>> -scott

January 19, 2003
"Scott Pigman" <scottpig1@attbi.com> wrote in message news:b0effv$2rqk$1@digitaldaemon.com...
> Is the error on line 2 a bug or a feature?
>
>
> typedef double money;
> money m = 0.0; // ERROR "cannot implicitly convert double to money"
> m = (money)0.0; // Correct
>
if you don't want a new type you can use alias

alias double money;
money m = 0.0;




January 19, 2003
i want a new type. i don't want an alias.  i support the current model 100% when it comes to assigning variables to each other or for parameter types.  but i think it's a pain and not terribly useful to have to cast a literal number (not a variable) to the type of my typedef.  it seems to be an exercise in typing - what else could be your intention for that number other than to be assigned to that lvalue?  what does it really mean if you cast a number to the wrong type?  why does it make the program any better/clearer/safer to require a cast in this case?

i guess i would propose that numbers are automatically considered to be of any type derived from their actual type, or of any type derived from a type they can be implicitly cast to.  i.e. "0" would be a valid rvalue for any lval typedef'd from int or float.
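something like this rough sketch (reusing the example from my first post) shows the distinction i'm after:

typedef double money;

money m1 = cast(money)0.0;  // accepted today
money m2 = 0.0;             // rejected today: "cannot implicitly convert double to money"
double d = m1;              // accepted today: a typedef converts implicitly to its base type

// under the proposal, the bare literal in the m2 line would be accepted too,
// since a literal carries no conflicting typedef intent of its own.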

-s

On Sun, 19 Jan 2003 16:52:36 +0000, Mike Wynn wrote:


> "Scott Pigman" <scottpig1@attbi.com> wrote in message news:b0effv$2rqk$1@digitaldaemon.com...
>> Is the error on line 2 a bug or a feature?
>>
>>
>> typedef double money;
>> money m = 0.0; // ERROR "cannot implicitly convert double to money"
>> m = (money)0.0; // Correct
>>
> if you don't want a new type you can use alias
> 
> alias double money;
> money m = 0.0;
January 19, 2003
"Scott Pigman" <scottpig1@attbi.com> wrote in message news:b0eop9$302q$1@digitaldaemon.com...
> i want a new type. i don't want an alias.  i support the current model 100% when it comes to assigning variables to each other or for parameter types.  but i think it's a pain and not terribly useful to have to cast a literal number (not a variable) to the type of my typedef.  it seems to be an exercise in typing - what else could be your intention for that number other than to be assigned to that lvalue?  what does it really mean if you cast a number to the wrong type?  why does it make the program any better/clearer/safer to require a cast in this case?

why do you draw a distinction between a literal and a variable?
why should literals be less type safe than variables?
and you've said you want a typedef and not an alias, but the behaviour you
have mentioned is what you get from an alias rather than a typedef; can you
explain why you need a typedef?

> i guess i would propose that numbers are automatically considered to be of any type derived from their actual type, or of any type derived from a type they can be implicitly cast to.  i.e. "0" would be a valid rvalue for any lval typedef'd from int or float.

it depends on what your new type is,
does 'typedef int myint;'
mean that myint is a specialised form of 'int' (thus supports fewer values)
or that myint is an extension to 'int' (thus supports a greater range).
or either?
as it stands, conceptually 'typedef int myint;' is the former: you can
always assign a myint to an int because it is a specialised form

typedef int foo;
const foo a = cast(foo)2;
int b = a;

the cast is required because not all int's are foo's; but all foo's are ints. the fact that there is no int that is not a foo is not the point, because conceptually there exists an int that is not a foo.

in the D docs you have:
A typedef can be implicitly converted to its underlying type, but going the
other way requires an explicit conversion.

what is missing is the ability to write 'foo.cast(int)' or tell the compiler that an implicit cast foo->int exists and it should use that.
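as a small illustration of the documented rule (same typedef as above):

typedef int foo;

foo f = cast(foo)2;  // int -> foo: the explicit conversion the rule requires
int i = f;           // foo -> int: implicit, no cast needed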





January 19, 2003
Mike -
i'm not convinced yet.  i think you're correct in a theoretical sense, but
in a practical sense, i don't see the benefit of having to be so explicit
with literals.  anyway, comments are embedded within.

-scott


On Sun, 19 Jan 2003 19:24:37 +0000, Mike Wynn wrote:

> why do you draw a distinction between a literal and a variable?
> why should literals be less type safe than variables?
> and you've said you want a typedef and not an alias, but the behaviour you
> have mentioned is what you get from an alias rather than a typedef; can you
> explain why you need a typedef?

i draw a distinction because i see typedefs as a means of clarifying and enforcing programmer intent.  but i don't think it helps the programmer's intent any to force him or her to cast a literal to a typedef.  i think the intent of somebody writing "money m = 1000" is pretty clear and doesn't need casts to clutter it up.  but something like

	printMyIntVal( getYourIntVal() ) // probably an error...

does benefit from checking the types.  but i can't see how forcing

	printMyIntValue( (MyIntVal)1000 )
over
	printMyIntValue( 1000 )

benefits anybody.


can  you show me a bug introduced by using
	money m = 0.0;
instead of
	money m = (money)0;
???

>> i guess i would propose that numbers are automatically considered to be of any type derived from their actual type, or of any type derived from a type they can be implicitly cast to.  i.e. "0" would be a valid rvalue for any lval typedef'd from int or float.
> 
> it depends on what your new type is,
> does 'typedef int myint;'
> mean that myint is a specialised form of 'int' (thus supports fewer values)
> or that myint is an extension to 'int' (thus supports a greater range).
> or either?

but they *can't* support more or less values with anything i see in the doc - they're still ints underneath it all. they're not new classes which could restrict or extend the possible range of values.  the only extra functionality a typedef type can have over a regular type is the default initializer -- which isn't at issue here.


> as it stands, conceptually 'typedef int myint;' is the former: you can always assign a myint to an int because it is a specialised form
> 
> typedef int foo;
> const foo a = cast(foo)2;
> int b = a;
> 
> the cast is required because not all int's are foo's;

what literal integer value is illegal to assign to foo?  and if there is one, you couldn't cast it to anything that was legal, could you?

> but all foo's are ints.
> the fact that there is no int that is not a foo is not the point, because
> conceptually there exists an int that is not a foo.

conceptually, maybe - but practically?


> in the D docs you have:
> A typedef can be implicitly converted to its underlying type, but going the
> other way requires an explicit conversion.
> 
> what is missing is the ability to write 'foo.cast(int)' or tell the compiler that an implicit cast foo->int exists and it should use that.

that would help, and i could possibly go along with it for implicit conversions between myInt and myOtherInt, but i still think literals deserve to be a special case: it will lead to nothing but unintended errors if they aren't, and won't lead to any unintended errors (i think ;-) if they are.

-scott

January 19, 2003
"Scott Pigman" <scottpig1@attbi.com> wrote in message news:b0f11t$37h$1@digitaldaemon.com...
> Mike -
> i'm not convinced yet.  i think you're correct in a theoretical sense, but
> in a practical sense, i don't see the benefit of having to be so explicit
> with literals.  anyway, comments are embedded within.
>


I still don't understand why you are using a typedef and not an alias ('alias' is almost directly equivalent to C's 'typedef').

typedef in D creates a conceptually new type.

> On Sun, 19 Jan 2003 19:24:37 +0000, Mike Wynn wrote:
>
> > why do you draw a distinction between a literal and a variable?
> > why should literals be less type safe than variables?
> > and you've said you want a typedef and not an alias, but the behaviour you
> > have mentioned is what you get from an alias rather than a typedef; can you
> > explain why you need a typedef?
>
> i draw a distinction because i see typedefs as a means of clarifying and enforcing programmer intent.  but i don't think it helps the programmer's intent any to force him or her to cast a literal to a typedef.  i think the intent of somebody writing "money m = 1000" is pretty clear and doesn't need casts to clutter it up.  but something like

but that's exactly what it does: you want money to be a new type that is
really a double;
1000 is an int. is that really what the programmer wanted to assign to the
variable?
the compiler is not sure, so you have to tell it that yes, you really do want
to assign the int/float/double value to the money type.

> printMyIntVal( getYourIntVal() ) // probably an error...
>
> does benefit from checking the types.  but i can't see how forcing
>
> printMyIntValue( (MyIntVal)1000 )
> over
> printMyIntValue( 1000 )
>
> benefits anybody.
>
>
> can  you show me a bug introduced by using
> money m = 0.0;
> instead of
> money m = (money)0;

no, but just because that code does not introduce a bug does not mean that it's right (or wrong).

I use numeric typedefs rarely; I don't see how 'typedef double money' benefits
the code in any way.
I think I've only ever used the equivalent of 'typedef uint UID;' when I needed a
unique ID field for some comms (Delphi, not D),
but never found it that useful; its only use was warning me that I should
take care with the operations I was performing.

and in your example
you have to put `money m = cast(money)0;`
// "C" casts, i.e. `(type)var`, should be banned; they cause problems for the
// parser because, unlike C, the D compiler cannot tell if `(id)` is a
// typecast or not, since the type 'id' can be declared after the statement.

if you just put `money m = 0;` you get an error, which alerts you to check
that 0 is really a valid value for m.

> >> i guess i would propose that numbers are automatically considered to be of
> >> any type derived from their actual type, or of any type derived from a type they can be implicitly cast to.  i.e. "0" would be a valid rvalue for any lval typedef'd from int or float.
> >
> > it depends on what your new type is,
> > does 'typedef int myint;'
> > mean that myint is a specialised form of 'int' (thus supports fewer values)
> > or that myint is an extension to 'int' (thus supports a greater range).
> > or either?
>
> but they *can't* support more or less values with anything i see in the doc
> - they're still ints underneath it all. they're not new classes which could
> restrict or extend the possible range of values.  the only extra functionality a typedef type can have over a regular type is the default initializer -- which isn't at issue here.

whether they can or cannot support a greater or lesser range of values in
reality is not relevant.
a typedef is a concept: you have created a new type, and the rules of that
new type are:
all values of the new type are valid values of the underlying type;
NOT all values of the underlying type are valid values of the new type.

> > but all foo's are ints.
> > the fact that there is no int that is not a foo is not the point, because
> > conceptually there exists an int that is not a foo.
>
> conceptually, maybe - but practically?

write in asm or C then, if you don't like the constructs of a programming language that has such rules.

> > in the D docs you have:
> > A typedef can be implicitly converted to its underlying type, but going the
> > other way requires an explicit conversion.
> >
> > what is missing is the ability to write 'foo.cast(int)' or tell the
> > compiler that an implicit cast foo->int exists and it should use that.
>
> that would help, and i could possibly go along with it for implicit conversions between myInt and myOtherInt, but i still think literals deserve to be a special case: it will lead to nothing but unintended errors if they aren't, and won't lead to any unintended errors (i think ;-) if they are.
>

if you look at it right you will see it is you that should bend and not the spoon!
(even if the spoon is imperfect)
creating special cases leads to more special cases and in the end leads to confusion.




January 20, 2003
Scott Pigman wrote:
> Mike -
> i'm not convinced yet.  i think you're correct in a theoretical sense, but
> in a practical sense, i don't see the benefit of having to be so explicit
> with literals.  anyway, comments are embedded within.
> 
> -scott
> 

If you want to create a type you need to be proficient in typing :>
(pun not originally mine)

I really think that casting constants has no purpose, because it doesn't give the "think again" effect that it does with variables and especially expressions, and it might lead to the programmer typing these casts without further thought, even on expressions, which are the most dangerous case.

Let's see. If it were a struct, you could write a method that would allow you to read the type safely and make the required conversions. This is useful, for example, to define a fixed-point type. It is built out of an integer, but represents data in a "shifted" manner, so you cannot assign an integer to it directly. You write an assignment method then, which performs the shift. You can also write one for floats, as they can be converted into fixed-point.
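A rough sketch of what I mean (the 16-bit shift and the names are just illustrative):

struct Fixed16
{
    int raw;  // the integer underneath, stored shifted left by 16 bits

    // conversions in: an int or a double is not assigned directly,
    // it goes through a method that performs the shift
    static Fixed16 fromInt(int i)       { Fixed16 f; f.raw = i << 16; return f; }
    static Fixed16 fromDouble(double d) { Fixed16 f; f.raw = cast(int)(d * 65536.0); return f; }

    // conversions out
    int toInt()       { return raw >> 16; }
    double toDouble() { return raw / 65536.0; }
}

// usage: Fixed16 price = Fixed16.fromDouble(3.25);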

Say, is operator overloading possible on typedef-s? IMO it should be.

-i.

January 20, 2003
"Ilya Minkov" <midiclub@8ung.at> wrote in message news:b0h1eb$15pl$1@digitaldaemon.com...
> Say, is operator overloading possible on typedef-s? IMO it should.

It is, and you're right. Overloading on typedefs is one of the major points of it.
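For instance (an illustrative sketch, not from the thread): because 'money' below is its own type, it participates in overload resolution separately from plain double.

typedef double money;

void print(double d) { /* general-purpose formatting */ }
void print(money m)  { /* currency-aware formatting */ }

void main()
{
    money m = cast(money)9.99;
    print(m);     // resolves to the money overload
    print(9.99);  // resolves to the double overload
}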

