Thread overview
Renaming of integer data types needed
Brian Bober (Sep 22, 2004)
Regan Heath (Sep 22, 2004)
Regan Heath (Sep 22, 2004)
Regan Heath (Sep 23, 2004)
Brian Bober (Sep 23, 2004)
September 22, 2004
I believe integer data types (http://www.digitalmars.com/d/), such as long,
should be reconsidered. I brought this up a couple of years ago and discussed
it with Pavel Minayev, though I proposed a different solution then. Here it is
again with a new suggestion:

bit -- 1 bit
byte -- 8 bits signed
ubyte -- 8 bits unsigned
int16 -- 16 bits signed
uint16 -- 16 bits unsigned
int32, uint32, int64, uint64, etc.

You could provide a temporary standard header that supplies the names currently listed on http://www.digitalmars.com/d/type.html as aliases until people have migrated to the new system suggested here.
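
To illustrate (the module name and the exact aliases below are just my sketch, nothing that exists today), the proposed names can already be expressed on top of the current types with plain aliases; the temporary header would simply do the same thing in the opposite direction once the built-in names change:

module sizednames;    // hypothetical module name, for illustration only

// Fixed-width names expressed as aliases of today's built-in types.
alias byte   int8;    // 8 bits signed
alias ubyte  uint8;   // 8 bits unsigned
alias short  int16;   // 16 bits signed
alias ushort uint16;  // 16 bits unsigned
alias int    int32;   // 32 bits signed
alias uint   uint32;  // 32 bits unsigned
alias long   int64;   // 64 bits signed
alias ulong  uint64;  // 64 bits unsigned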

Reasons for my suggestion:
1) It is more logical, because the name shows the width of the type.

2) It won't run into problems when more integer types are added, such as 256-bit and 512-bit integers. This is exactly what caused the whole mess with int and long in C/C++, so you are just dooming us to the same mess. Pavel Minayev said that the number of bits in processors increases logarithmically. We cannot assume this will always be the case, especially if processors ever develop serial internal buses to space components out more. If that ever happens, and it likely will some day, then the size of integers won't really be tied to the size of the processor. Scientific computers will likely support much larger integers.

3) It will cause less confusion with C and C++ and with what people
remember from various systems (for example, that long is 32 bits on i386).

4) It will cause fewer issues for automatic conversions, especially when using tools like sed or awk.

5) It will cause people to think more about what names they use when converting.

6) This method has proved effective for cross-platform work in Mozilla's PRInt32 and related types.

7) It will be easier for embedded-system designers who want to add, say,
24-bit integers to do so in a meaningful manner.

8) cent and ucent are not good names for a 128-bit type. First of all,
they might too easily be mixed up with a simple structure for
representing currency. Second of all, 128 is not 100. In fact, a 128-bit
integer simply backs up what I said in point 2. Naming data types this way
is getting ridiculous. What is longer than long? I guess it could be
'extended' or 'stretch', but seriously... Let's make things a bit less
complicated.
September 22, 2004
On Wed, 22 Sep 2004 17:09:37 -0400, Brian Bober <netdemonz@yahoo.com> wrote:
> I believe integer data types (http://www.digitalmars.com/d/), such as long,
> should be reconsidered.
> [...]
>
> bit -- 1 bit
> byte -- 8 bits signed
> ubyte -- 8 bits unsigned
> int16 -- 16 bits signed
> uint16 -- 16 bits unsigned
> int32, uint32, int64, uint64, etc.
>
> [...]

What no go all the way...

"byte"   -> "int8"
"float"  -> "float32"
"double" -> "float64"
"real"   -> "float80" (intel only?)
"char"   -> "char8"
"wchar"  -> "char16"
"dchar"  -> "char32"

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 22, 2004
Cross-posted from the old group 'D' to 'digitalmars.D'.

On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath <regan@netwin.co.nz> wrote:
> On Wed, 22 Sep 2004 17:09:37 -0400, Brian Bober <netdemonz@yahoo.com> wrote:
>> I believe integer data types (http://www.digitalmars.com/d/), such as long,
>> should be reconsidered.
>> [...]
>
> What no go all the way...
>
> "byte"   -> "int8"
> "float"  -> "float32"
> "double" -> "float64"
> "real"   -> "float80" (intel only?)
> "char"   -> "char8"
> "wchar"  -> "char16"
> "dchar"  -> "char32"
>
> Regan
>



-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 23, 2004
(no offence intended to anyone except myself)
My england very bad in this post :(


On Thu, 23 Sep 2004 11:36:26 +1200, Regan Heath <regan@netwin.co.nz> wrote:
> Cross-posted from the old group 'D' to 'digitalmars.D'.
>
> On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath <regan@netwin.co.nz> wrote:
>> [...]
>> What no go all the way...
>> [...]



-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
September 23, 2004
Yeah, sorry about the confusion of there being two threads on this... I accidentally submitted the article by pressing the wrong buttons (CTRL+RETURN, thank you Pan!) before I had finished writing it, thought I had lost it, and then submitted a better rewritten version to the wrong group :-/ (I guess today is not my day). I have a quoted response for you in the thread titled "Integer names should be renamed", sent on 9/22/2004.

Let's move the discussion to the thread that was posted first in this group, "Integer names should be renamed", so that there is only one thread.