November 21, 2005
Var Types
Why not also include these variable types in D?
int1 - 1 byte
int2 - 2 bytes
int4 - 4 bytes
intN - N bytes (experimental)

It must also be guaranteed that these types will always, on every machine, have
the same size.
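
For the first three, something along these lines is roughly what I mean; a
rough sketch using the existing alias facility (the mapping to the built-in
types is only illustrative):

alias byte  int1;   // 1 byte
alias short int2;   // 2 bytes
alias int   int4;   // 4 bytes
// intN for arbitrary N has no direct equivalent; a fixed-size array of
// bytes would be the closest thing today.
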
November 21, 2005
Re: Var Types
I think int8, int16, int32, int64 is more comfortable.

"John Smith" <John_member@pathlink.com> 
wrote:dlsq7f$30hv$1@digitaldaemon.com...
> Why not also include these variable types in D?
> int1 - 1 byte
> int2 - 2 bytes
> int4 - 4 bytes
> intN - N bytes (experimental)
>
> It must also be guaranteed that these types will always, on every machine, 
> have
> the same size.
>
>
November 21, 2005
Re: Var Types
In article <dlss62$13b$1@digitaldaemon.com>, Shawn Liu says...
>
>I think int8, int16, int32, int64 is more comfortable.

In the interest of hearing this idea out, I'll play the devil's advocate on this
one. :)

What is wrong with the documented conventions laid out for the byte sizes of the
current values?  Would it be enough to incorporate those definitions into the
(eventual) D ABI, to ensure that all D compiler vendors adhere to the same
sizes?

While I think there's value in having a standard that is easily grasped, I don't
think it's necessary to clutter things up with more keywords for already
well-defined types.
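
For reference, the spec already pins the sizes down; a quick compile-time
check (just a sketch) shows what I mean:

static assert(byte.sizeof  == 1);   // 8 bits, everywhere
static assert(short.sizeof == 2);   // 16 bits
static assert(int.sizeof   == 4);   // 32 bits
static assert(long.sizeof  == 8);   // 64 bits
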

- EricAnderton at yahoo
November 21, 2005
Re: Var Types
"pragma" <pragma_member@pathlink.com> wrote in message 
news:dlstrd$2i4$1@digitaldaemon.com...
> What is wrong with the documented conventions laid out for the byte sizes 
> of the
> current values?

Because although they're documented and strictly defined, they don't make 
much sense.  For example, long makes sense on a 32-bit machine, but on 
64-bit machines (to which everything is moving relatively soon), 64 bits is 
the default size.  So "long" would be the "normal" size.  Then there's 
short, which I suppose makes sense on both platforms, and int, but neither 
gives any indication of the size.  The only type that does is "byte."

I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
new, larger types.  What comes after int64?  int128, of course.  But what 
comes after "long?"  Why, "cent."  What?!  Huh?
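
For what it's worth, nothing stops anyone from spelling it that way today
with aliases; a sketch, not a proposal for the core language:

alias byte  int8;
alias short int16;
alias int   int32;
alias long  int64;
// and if a 128-bit type ever arrives, int128 slots right in
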

But of course, none of this will ever happen / even be considered, so it's 
kind of an exercise in futility.
November 21, 2005
Re: Var Types
In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>
>"pragma" <pragma_member@pathlink.com> wrote in message 
>news:dlstrd$2i4$1@digitaldaemon.com...
>> What is wrong with the documented conventions laid out for the byte sizes 
>> of the
>> current values?
>
>Because although they're documented and strictly defined, they don't make 
>much sense.  For example, long makes sense on a 32-bit machine, but on 
>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>the default size.  So "long" would be the "normal" size.

Maybe D's bit-length specifications could be relative (I don't know the downsides
of this approach, but I'm all ears).
For example:
____________________________________________________________________________,
TYPE     | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |

After all, isn't it ugly and less abstract to code assuming a certain size for
the integral types (and maybe for other types as well)?  'sizeof' brings that
information to the programmer, and the programmer should code relative to a
type's 'sizeof' rather than assuming the size in advance.
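
(A tiny sketch of what I mean by coding against 'sizeof' instead of a
hard-coded byte count:)

int[16] buf;
size_t nbytes = buf.length * int.sizeof;   // not buf.length * 4
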

>Then there's 
>short, which I suppose makes sense on both platforms, and int, but neither 
>gives any indication of the size.  The only type that does is "byte."

Don't know if a type should be THAT explicit with its size.

>I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
>new, larger types.  What comes after int64?  int128, of course.  But what 
>comes after "long?"  Why, "cent."  What?!  Huh?
>
>But of course, none of this will ever happen / even be considered, so it's 
>kind of an exercise in futility. 

Hehe, I agree.

Tom
November 21, 2005
Re: Var Types
In article <dlt3c9$87f$1@digitaldaemon.com>, Tomás Rossi says...
>
>In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>>
>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>news:dlstrd$2i4$1@digitaldaemon.com...
>>> What is wrong with the documented conventions laid out for the byte sizes 
>>> of the
>>> current values?
>>
>>Because although they're documented and strictly defined, they don't make 
>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>the default size.  So "long" would be the "normal" size.
>
>Maybe if D bit-length specifications were relative (don't know the downsides of
>this approach but I'm all ears).
>For example:
>____________________________________________________________________________,
>TYPE     | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>         | (relative to 1 | (in bits)              | (in bits)              |
>         | CPU word)      |                        |                        |
>         | (register size)|                        |                        |
>---------+----------------+------------------------+------------------------+
>(u)short | 1/2            | 16                     | 32                     |
>(u)int   | 1              | 32                     | 64                     |
>(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |

This is exactly one of the things D was designed to avoid.
But it would be nice to have an official alias for the system's native
register-sized type.
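
Something built from the size_t/ptrdiff_t aliases the runtime already
provides would do, for example (the 'word'/'uword' names are just my
invention):

alias ptrdiff_t word;    // signed, pointer sized, i.e. register width on the usual targets
alias size_t    uword;   // unsigned counterpart
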

/ Oskar
November 21, 2005
Re: Var Types
Tomás Rossi wrote:
> In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
> 
>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>news:dlstrd$2i4$1@digitaldaemon.com...
>>
>>>What is wrong with the documented conventions laid out for the byte sizes 
>>>of the
>>>current values?
>>
>>Because although they're documented and strictly defined, they don't make 
>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>the default size.  So "long" would be the "normal" size.
> 
> 
> Maybe if D bit-length specifications were relative (don't know the downsides of
> this approach but I'm all ears).
> For example:
> ____________________________________________________________________________,
> TYPE     | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>          | (relative to 1 | (in bits)              | (in bits)              |
>          | CPU word)      |                        |                        |
>          | (register size)|                        |                        |
> ---------+----------------+------------------------+------------------------+
> (u)short | 1/2            | 16                     | 32                     |
> (u)int   | 1              | 32                     | 64                     |
> (u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
> 
> After all, isn't it ugly and less abstract to code assuming a certain sizeof
> for integral types (and also maybe with other types)? (sizeof brings that
> information to the programmer and the programmer should code relative to the 
> 'sizeof' of a type and not assuming that size with premeditation).
> 

The problem is this:  people need different guarantees about their 
types' sizes for different purposes.

In one instance, you may need a set of types that are absolutely 
fixed-size for use in reading/writing out binary data to files or 
streams, etc.  In another instance, you may need a set of types that 
match the processor's supported native word sizes for fast processing.
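
To illustrate (the names here are made up):

// fixed-size types for an on-disk or wire layout that must never change:
struct FileHeader
{
    uint  magic;    // exactly 32 bits, per the D spec
    ulong length;   // exactly 64 bits
}

// versus a word-sized counter where you just want whatever the machine
// handles fastest:
size_t i;
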

> 
>>Then there's 
>>short, which I suppose makes sense on both platforms, and int, but neither 
>>gives any indication of the size.  The only type that does is "byte."
> 
> 
> Don't know if a type should be THAT explicit with its size.
> 
> 
>>I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
>>new, larger types.  What comes after int64?  int128, of course.  But what 
>>comes after "long?"  Why, "cent."  What?!  Huh?
>>
>>But of course, none of this will ever happen / even be considered, so it's 
>>kind of an exercise in futility. 
> 

...unless certain ones designing new languages happen to be listening...

I do like the int8, int16, int32, int64 names.  They make sense.  Very 
easy to scale the language up for 128-bit and 256-bit processing.

> 
> Hehe, I agree.
> 
> Tom
November 21, 2005
Re: Var Types
In article <dlt4tj$9va$1@digitaldaemon.com>, Oskar Linde says...
>
>In article <dlt3c9$87f$1@digitaldaemon.com>, Tomás Rossi says...
>>
>>In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>>>
>>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>>news:dlstrd$2i4$1@digitaldaemon.com...
>>>> What is wrong with the documented conventions laid out for the byte sizes 
>>>> of the
>>>> current values?
>>>
>>>Because although they're documented and strictly defined, they don't make 
>>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>>the default size.  So "long" would be the "normal" size.

What's your opinion on the above? 

>>
>>Maybe if D bit-length specifications were relative (don't know the downsides of
>>this approach but I'm all ears).
>>For example:
>>____________________________________________________________________________,
>> TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |  
>>         | (relative to 1 | (in bits)              | (in bits)              |
>>         | CPU word)      |                        |                        |
>>         | (register size)|                        |                        | 
>>---------+----------------+------------------------+------------------------+
>>(u)short | 1/2            | 16                     | 32                     |
>>(u)int   | 1              | 32                     | 64                     |
>>(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
>
>This is exactly one of the things D was designed to avoid.

And why is that? (I don't really know; is it in the D presentation or the docs?)

>But it would be nice to have an official alias for the system native register
>sized type.

Yap.

Tom
November 21, 2005
Re: Var Types
Oskar Linde wrote:
> In article <dlt3c9$87f$1@digitaldaemon.com>, Tomás Rossi says...
> 
>>In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>>
>>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>>news:dlstrd$2i4$1@digitaldaemon.com...
>>>
>>>>What is wrong with the documented conventions laid out for the byte sizes 
>>>>of the
>>>>current values?
>>>
>>>Because although they're documented and strictly defined, they don't make 
>>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>>the default size.  So "long" would be the "normal" size.
>>
>>Maybe if D bit-length specifications were relative (don't know the downsides of
>>this approach but I'm all ears).
>>For example:
>>____________________________________________________________________________,
>>TYPE     | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>>         | (relative to 1 | (in bits)              | (in bits)              |
>>         | CPU word)      |                        |                        |
>>         | (register size)|                        |                        |
>>---------+----------------+------------------------+------------------------+
>>(u)short | 1/2            | 16                     | 32                     |
>>(u)int   | 1              | 32                     | 64                     |
>>(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
> 
> 
> This is exactly one of the things D was designed to avoid.
> But it would be nice to have an official alias for the system native register
> sized type.

I don't believe it would be nice. The language already has the most-needed 
data types. It's really a pain in the ass to test these variable-length 
types on different architectures. Maybe they would give better C 
interoperability, but they would still make porting D programs harder. Of 
course you might say that you don't have to use such a type, but I have a 
feeling that not everyone would get it right.
November 21, 2005
Re: Var Types
In article <dlss62$13b$1@digitaldaemon.com>, Shawn Liu says...
>
>I think int8, int16, int32, int64 is more comfortable.
>
>"John Smith" <John_member@pathlink.com> 
>wrote:dlsq7f$30hv$1@digitaldaemon.com...
>> Why not also include these variable types in D?
>> int1 - 1 byte
>> int2 - 2 bytes
>> int4 - 4 bytes
>> intN - N bytes (experimental)
>>
>> It must also be guaranteed that these types will always, on every machine, 
>> have
>> the same size.
>>

I think this is a nice idea. Most projects have network data formats to define
and/or hardware to interface with, both of which require you to once again
write a "prim_types.h" file (in C/C++) to define yet again what
Uint8/Int8/Uint16/Int16 etc. are on the platform you're using this time round.
It doesn't take that long to do, and it's an obvious thing to use "alias"
for... but it's another thing that would be really nice to have built into the
language instead of having to do by hand.
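
Something like this little module is what keeps getting rewritten, for
instance (the capitalised names are just the convention I happen to use):

alias byte   Int8;      alias ubyte  Uint8;
alias short  Int16;     alias ushort Uint16;
alias int    Int32;     alias uint   Uint32;
alias long   Int64;     alias ulong  Uint64;
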

A key word for me in the suggestion is "also". We definitely need to be able to
specify "int" as the platform-native (i.e. fastest) integer type.

Just my tuppence!

Munch