November 21, 2005
In article <dlt9n6$dr6$1@digitaldaemon.com>, Jari-Matti Mäkelä says...
>It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but it would still make porting D programs harder.

Actually, the whole point of such types is to increase ease of portability. It's so that you can say "int32" and know that your code is going to work on any platform, instead of discovering that "int" on your new platform is a different length, and oh my gosh, that data I'm sending across the network is no longer the same format that it used to be, so the different versions of the code running on different machines can no longer talk to each other... and that interface spec I just sent out is now completely worthless - nobody else can talk to my application anymore either.

It's not impossible to write your own types - but I've lost count of the number of times I've written a types.h file. You lose the "builtin type" highlighting in your favourite editor, and everybody has slightly different naming conventions (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys you when you come in half way through a project where they do it differently to you. It would be nice (and I'm assuming, easy) to have this in D. It won't really matter if it's missing, but it's polish - another fix for another niggle that's been there since the year dot. And of course you don't have to use those types if you only ever write single-platform, pure-software applications with no networking capability.

Just my tuppence

Munch


November 21, 2005
Munchgreeble@bigfoot.com wrote:
> In article <dlt9n6$dr6$1@digitaldaemon.com>,
> Jari-Matti Mäkelä says...
> 
>>It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but it would still make porting D programs harder.
> 
> 
> Actually, the whole point of such types is to increase ease of portability. It's
> so that you can say "int32" and know that your code is going to work on any
> platform, instead of discovering that "int" on your new platform is a different
> length, and oh my gosh, that data I'm sending across the network is no longer
> the same format that it used to be, so the different versions of the code
> running on different machines can no longer talk to each other... and that
> interface spec I just sent out is now completely worthless - nobody else can
> talk to my application anymore either.

I can't see your point here. You see, "int" is always 32 bits in D. You can alias it to int32 if you want. Please read http://www.digitalmars.com/d/type.html. As you can see, "real" is the only implementation-dependent type.
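These guarantees can even be checked right in the code (a quick sketch, assuming your compiler version already supports static assert):

  // D fixes these sizes on every platform:
  static assert (byte.sizeof  == 1);
  static assert (short.sizeof == 2);
  static assert (int.sizeof   == 4);
  static assert (long.sizeof  == 8);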

> 
> It's not impossible to write your own types - but I've lost count of the
> number of times I've written a types.h file. You lose the "builtin type"
> highlighting in your favourite editor, and everybody has slightly different
> naming conventions (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys
> you when you come in half way through a project where they do it differently
> to you. It would be nice (and I'm assuming, easy) to have this in D. It
> won't really matter if it's missing, but it's polish - another fix for
> another niggle that's been there since the year dot. And of course you don't
> have to use those types if you only ever write single-platform, pure-software
> applications with no networking capability.

True, but I'm still saying that D already has these types. If you don't like the current naming convention, you can always

  alias byte int8;
  alias short int16;
  alias int int32;
  alias long int64;

and so on...
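The aliases then behave exactly like the built-in names, e.g. (a trivial sketch):

  int32 count = 42;      // identical to: int count = 42;
  int64 total = count;   // widens implicitly, just like long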

In C you would have to use implementation-specific sizeof logic.

I think Walter has chosen these keywords because they are widely used in other languages. They're also closer to natural language.
November 21, 2005
Jari-Matti Mäkelä wrote:
> Oskar Linde wrote:
> 
>> In article <dlt3c9$87f$1@digitaldaemon.com>, Tomás Rossi says...
>>
>>> In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>>>
>>>> "pragma" <pragma_member@pathlink.com> wrote in message news:dlstrd$2i4$1@digitaldaemon.com...
>>>>
>>>>> What is wrong with the documented conventions laid out for the byte sizes of the
>>>>> current values?
>>>>
>>>>
>>>> Because although they're documented and strictly defined, they don't make much sense.  For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size.  So "long" would be the "normal" size.
>>>
>>>
>>> Maybe if D bit-length specifications were relative (don't know the downsides of
>>> this approach but I'm all ears).
>>> For example:
>>> ____________________________________________________________________________,
>>>  TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>>>          | (relative to 1 | (in bits)              | (in bits)              |
>>>          | CPU word)      |                        |                        |
>>>          | (register size)|                        |                        |
>>> ---------+----------------+------------------------+------------------------+
>>> (u)short | 1/2            | 16                     | 32                     |
>>> (u)int   | 1              | 32                     | 64                     |
>>> (u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
>>
>>
>>
>> This is exactly one of the things D was designed to avoid.
>> But it would be nice to have an official alias for the system's native
>> register-sized type.
> 
> 
> I don't believe it would be nice. The language already has the most needed data types. It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but it would still make porting D programs harder. Of course you might say that you don't have to use this type, but I have a feeling that not all people will get it right.

Sorry, I was wrong. This is a big performance issue. Currently "int" is probably the fastest integer type for most x86 users, but one must use compile-time constructs (version/static if/etc.) to ensure that the chosen data type will be fast in other environments too.
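For example, something like this could pick a register-sized alias at compile time (a rough sketch; the X86_64 version identifier and the nativeInt name are only illustrative):

  // Choose a "native word" integer per target:
  version (X86_64)
      alias long nativeInt;   // 64-bit registers
  else
      alias int nativeInt;    // assume a 32-bit target otherwise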
November 21, 2005
In article <dlt5jv$aj6$1@digitaldaemon.com>, Tomás Rossi says...
>>>Maybe if D bit-length specifications were relative (don't know the downsides of
>>>this approach but I'm all ears).
>>>For example:
>>>____________________________________________________________________________,
>>> TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>>>         | (relative to 1 | (in bits)              | (in bits)              |
>>>         | CPU word)      |                        |                        |
>>>         | (register size)|                        |                        |
>>>---------+----------------+------------------------+------------------------+
>>>(u)short | 1/2            | 16                     | 32                     |
>>>(u)int   | 1              | 32                     | 64                     |
>>>(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
>>
>>This is exactly one of the things D was designed to avoid.
>
>And why is that? (don't really know, is it in D presentation or docs?)
>

From experience. It's best that Mr Bright stay away from implementation-specific types. I believe it's better to know absolutely what a given type is. You are programming in D, not x86.


November 21, 2005
In article <dltfj7$i8l$1@digitaldaemon.com>, Munchgreeble@bigfoot.com says...
>
>In article <dlt9n6$dr6$1@digitaldaemon.com>, Jari-Matti Mäkelä says...
>>It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but it would still make porting D programs harder.
>
>Actually, the whole point of such types is to increase ease of portability. It's so that you can say "int32" and know that your code is going to work on any platform, instead of discovering that "int" on your new platform is a different length, and oh my gosh, that data I'm sending across the network is no longer the same format that it used to be, so the different versions of the code running on different machines can no longer talk to each other... and that interface spec I just sent out is now completely worthless - nobody else can talk to my application anymore either.
>
>It's not impossible to write your own types - but I've lost count of the number of times I've written a types.h file. You lose the "builtin type" highlighting in your favourite editor, and everybody has slightly different naming conventions (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys you when you come in half way through a project where they do it differently to you. It would be nice (and I'm assuming, easy) to have this in D. It won't really matter if it's missing, but it's polish - another fix for another niggle that's been there since the year dot. And of course you don't have to use those types if you only ever write single-platform, pure-software applications with no networking capability.
>
>Just my tuppence
>
>Munch
>
>


But that's the whole point with D. All the integral types are completely unambiguous. I'm not sure what you're asking for. You want the unambiguous integral types renamed, and the regular ones to become ambiguous? That sounds a lot like what we are trying to fix in the first place.


November 21, 2005
Jari-Matti Mäkelä wrote:
> Jari-Matti Mäkelä wrote:
> 
>> Oskar Linde wrote:
>>
>>> In article <dlt3c9$87f$1@digitaldaemon.com>, Tomás Rossi says...
>>>
>>>> In article <dlsuq9$3d9$1@digitaldaemon.com>, Jarrett Billingsley says...
>>>>
>>>>> "pragma" <pragma_member@pathlink.com> wrote in message news:dlstrd$2i4$1@digitaldaemon.com...
>>>>>
>>>>>> What is wrong with the documented conventions laid out for the byte sizes of the
>>>>>> current values?
>>>>>
>>>>>
>>>>>
>>>>> Because although they're documented and strictly defined, they don't make much sense.  For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size.  So "long" would be the "normal" size.
>>>>
>>>>
>>>>
>>>> Maybe if D bit-length specifications were relative (don't know the downsides of
>>>> this approach but I'm all ears).
>>>> For example:
>>>> ____________________________________________________________________________,
>>>>  TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
>>>>          | (relative to 1 | (in bits)              | (in bits)              |
>>>>          | CPU word)      |                        |                        |
>>>>          | (register size)|                        |                        |
>>>> ---------+----------------+------------------------+------------------------+
>>>> (u)short | 1/2            | 16                     | 32                     |
>>>> (u)int   | 1              | 32                     | 64                     |
>>>> (u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
>>>
>>>
>>>
>>>
>>> This is exactly one of the things D was designed to avoid.
>>> But it would be nice to have an official alias for the system's native
>>> register-sized type.
>>
>>
>>
>> I don't believe it would be nice. The language already has the most needed data types. It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but it would still make porting D programs harder. Of course you might say that you don't have to use this type, but I have a feeling that not all people will get it right.
> 
> 
> Sorry, I was wrong. This is a big performance issue. Currently "int" is probably the fastest integer type for most x86 users, but one must use compile-time constructs (version/static if/etc.) to ensure that the chosen data type will be fast in other environments too.

Exactly.

Fixing type sizes becomes a problem when you're switching platforms and want to retain efficiency.  You want your ints to be fast.

However, varying type sizes become an even bigger problem when you're trying to send out network data or store to binary files.

The only real solution is to use two (or more?) different sets of types which guarantee different things.  We need a set of fast types and we need a set of fixed-size types.

Do you not agree?  I've already posted the gist of this idea in a lower thread.
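Something like this shows the fixed-size half of the idea (a sketch; the struct and its field names are made up):

  // A wire-format header: each field is the same size on every platform,
  // because D's integer sizes are fixed. (Alignment/padding is a separate
  // concern, glossed over here.)
  struct PacketHeader
  {
      ubyte  kind;       // 1 byte everywhere
      ushort length;     // 2 bytes everywhere
      uint   sequence;   // 4 bytes everywhere
  }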
November 21, 2005
In article <dltije$kju$1@digitaldaemon.com>, MK says...
>
>But that's the whole point with D. All the integral types are completely unambiguous. I'm not sure what you're asking for. You want the unambiguous integral types renamed, and the regular ones to become ambiguous? That sounds a lot like what we are trying to fix in the first place.
>

OK - my mistake. Sorry. I should have known better really - most everything else in this language has already been fixed, what made me think this might not have been? D'oh ;-)

Thanks for correcting me

Munch


November 21, 2005
John Smith wrote:
> Why not also include these variable types in D?
> int1 - 1 byte
> int2 - 2 bytes
> int4 - 4 bytes
> intN - N bytes (experimental)
> 
> It must also be guaranteed that these types will always, on every machine, have
> the same size.
> 
> 

std.stdint contains aliases for those
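For example (assuming the C99-style names that std.stdint provides):

  import std.stdint;

  int8_t  a;   // 1 byte
  int16_t b;   // 2 bytes
  int32_t c;   // 4 bytes
  int64_t d;   // 8 bytes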

-- 
Carlos Santander Bernal
November 21, 2005
On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:

> "pragma" <pragma_member@pathlink.com> wrote in message news:dlstrd$2i4$1@digitaldaemon.com...
>> What is wrong with the documented conventions laid out for the byte sizes
>> of the
>> current values?
> 
> Because although they're documented and strictly defined, they don't make much sense.  For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size.  So "long" would be the "normal" size.  Then there's short, which I suppose makes sense on both platforms, and int, but neither gives any indication of the size.  The only type that does is "byte."
> 
> I'd personally like int8, int16, int32, etc.  This also makes it easy to add new, larger types.  What comes after int64?  int128, of course.  But what comes after "long?"  Why, "cent."  What?!  Huh?
> 
> But of course, none of this will ever happen / even be considered, so it's kind of an exercise in futility.

Yes it is. However, my comment is that identifiers that are a mixture of alphas and digits reduce legibility. Also, why use the number of bits? Is it likely we would use a number that is not a power of 2? Could we have an int24? Or an int30? Using the number of bytes seems more useful, because I'm sure that all such integers would be on byte boundaries.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 10:24:41 AM
November 21, 2005
On Tue, 22 Nov 2005 00:23:27 +0200, Jari-Matti Mäkelä wrote:


[snip]

> True, but I'm still saying that D already has these types. If you don't like the current naming convention, you can always
> 
>    alias byte int8;
>    alias short int16;
>    alias int int32;
>    alias long int64;
> 
> and so on...
> 
> In C you would have to use implementation-specific sizeof logic.
> 
> I think Walter has chosen these keywords because they are widely used in other languages. They're also closer to natural language.

Another small point would be the increased number of tests when using typeof, typeid, etc...


-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 10:37:05 AM