September 23, 2004
In article <ciu8rr$sik$1@digitaldaemon.com>, Sjoerd van Leent says...

>> I thought about byte being int8, but thought that people would like byte better since bit and byte are standard on all platforms

Nah - if "byte" were well-defined, nobody would ever have needed to invent the word "octet". The definition of "byte" at http://compnetworking.about.com/cs/basicnetworking/g/bldef_byte.htm says:

>In all modern network protocols, a byte contains eight bits. A few (generally obsolete) computers may use bytes of different sizes for other purposes.

So, strictly, it would have to be int8 and uint8 - which I would be quite happy with.

Arcane Jill


September 23, 2004
In article <ciuh8l$17j3$1@digitaldaemon.com>, Arcane Jill says...
>Nah - if "byte" were well-defined, nobody would ever have needed to invent the word "octet". The definition of "byte" at http://compnetworking.about.com/cs/basicnetworking/g/bldef_byte.htm says:
>
>>In all modern network protocols, a byte contains eight bits. A few (generally obsolete) computers may use bytes of different sizes for other purposes.
>
>So, strictly, it would have to be int8 and uint8 - which I would be quite happy with.
>

Also, Wikipedia has the details on this topic: http://en.wikipedia.org/wiki/Byte

"C, for example, defines byte as a storage unit capable of at least being large enough to hold any character of the execution environment (clause 3.5 of the C standard)."

So even though I doubt anyone will be backporting D to legacy systems with non-8-bit bytes, it makes sense to define byte as uint8 or something similar.
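[This is where D sidesteps C's ambiguity: the language spec fixes its integer widths outright. A minimal sketch, as compile-time checks only, assuming any conforming D compiler:]

```d
// Unlike C, where only minimum sizes are guaranteed, D pins these
// widths down in the spec, so the asserts hold on every platform.
static assert(byte.sizeof  == 1);  // always 8 bits
static assert(short.sizeof == 2);  // always 16 bits
static assert(int.sizeof   == 4);  // always 32 bits
static assert(long.sizeof  == 8);  // always 64 bits
```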

pragma(EricAnderton,"at","yahoo");
September 23, 2004
Arcane Jill wrote:
> In article <cit6j3$4sp$1@digitaldaemon.com>, Brian Bober says...
> 
>>On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath wrote:
>>
>>
>>>Why not go all the way...
>>>
>>>"char"   -> "char8"
>>>"wchar"  -> "char16"
>>>"dchar"  -> "char32"
>>>
>>>Regan
> 
> 
> What the hell. Let's go /all/ the way...
> 
> char  -> utf8
> wchar -> utf16
> dchar -> utf32
> 
> Then we'll have none of that nonsense of people confusing D's chars with C's
> chars, which should /in fact/ be mapped to int8 and uint8.
> 
> Jill
> 
> PS. I also suggest float32, float64, float80, ifloat32, ifloat64, ifloat80,
> cfloat64, cfloat128 and cfloat160 for the float types.
> 
> 
> 
If 'twere done, 'twere best done quickly!

(I see a lot to like about those names, but perhaps the current names should be kept, for a while, as aliases?  And, of course, deprecated.)
September 23, 2004
Pragma wrote:
> In article <ciuh8l$17j3$1@digitaldaemon.com>, Arcane Jill says...
> 
>>...
> ...
> Also, Wikipedia has the details on this topic: http://en.wikipedia.org/wiki/Byte
> 
> "C, for example, defines byte as a storage unit capable of at least being large
> enough to hold any character of the execution environment (clause 3.5 of the C
> standard)."
> 
> So even though I doubt anyone will be backporting D to legacy systems with
> non-8-bit-bytes, it makes sense to define byte as uint8 or something similar.
> 
> pragma(EricAnderton,"at","yahoo");

So if the current environment includes utf32 characters, then...
September 23, 2004
In article <cive9e$2401$2@digitaldaemon.com>, Charles Hixson says...
>
>Pragma wrote:
>> In article <ciuh8l$17j3$1@digitaldaemon.com>, Arcane Jill says...
>> 
>>>...
>> ...
>> Also, Wikipedia has the details on this topic: http://en.wikipedia.org/wiki/Byte
>> 
>> "C, for example, defines byte as a storage unit capable of at least being large enough to hold any character of the execution environment (clause 3.5 of the C standard)."
>> 
>> So even though I doubt anyone will be backporting D to legacy systems with non-8-bit-bytes, it makes sense to define byte as uint8 or something similar.
>> 
>> pragma(EricAnderton,"at","yahoo");
>
>So if the current environment includes utf32 characters, then...

Yeah, the wording could probably be refined a bit.  I'm pretty sure the current expectation is for char/uchar to always occupy one byte; otherwise they'd have to go and define a new type name.  Also, the wording is such that uchar is basically a building block for everything, so it almost has to be one byte in size.


Sean


September 26, 2004

Brian Bober wrote:
> 
> I think it's time to re-hash an old discussion from a couple years ago. I propose replacing type names for integers (http://www.digitalmars.com/d/type.html) with the following:
> 
> bit
> byte
> ubyte
> int16
> uint16
> int32
> uint32
> int64
> uint64
> int128
> uint128

As no-one else contradicts, I do.

The problem is that this change would break all existing D code. Developers world-wide would have to put thousands of hours into updating their code. This is unfun. There are surely other issues we could put this time into.

All programming languages require some effort to learn their basic data types. It's so simple and so fundamental that I think there is no need to make it simpler.

It would also increase the distance to languages like C and Java and add work to translations.

This is not a logical argument, it's pragmatic. The suggestion
is valid and should maybe be used for another language designed from
scratch. But not at this stage of D development where we are
longing for a 1.00 release. This would put us back 3-4 months.

-- 
Helmut Leitner    leitner@hls.via.at
Graz, Austria   www.hls-software.com
September 26, 2004
"Helmut Leitner" <helmut.leitner@wikiservice.at> wrote in message news:41568D76.A0536642@wikiservice.at...
>
>
> Brian Bober wrote:
> >
> > I think it's time to re-hash an old discussion from a couple years ago. I propose replacing type names for integers (http://www.digitalmars.com/d/type.html) with the following:
> >
> > bit
> > byte
> > ubyte
> > int16
> > uint16
> > int32
> > uint32
> > int64
> > uint64
> > int128
> > uint128
>
> As no-one else contradicts, I do.
>
> The problem is that this change would break all existing D code.

Why should it break all code if we keep aliases to the current types? Like: alias int int32; ...

> Developers world-wide would have to put thousands of hours into updating their code. This is unfun. There are surely other issues we could put this time into.
>
> All programming languages require some effort to learn their basic data types. It's so simple and so fundamental that I think there is no need to make it simpler.
>
> It would also increase the distance to languages like C and Java and add work to translations.
>
> This is not a logical argument, it's pragmatic. The suggestion
> is valid and should maybe be used for another language designed from
> scratch. But not at this stage of D development where we are
> longing for a 1.00 release. This would put us back 3-4 months.
>
> --
> Helmut Leitner    leitner@hls.via.at
> Graz, Austria   www.hls-software.com


September 26, 2004

Ivan Senji wrote:
> 
> "Helmut Leitner" <helmut.leitner@wikiservice.at> wrote in message news:41568D76.A0536642@wikiservice.at...
> >
> >
> > Brian Bober wrote:
> > >
> > > I think it's time to re-hash an old discussion from a couple years ago. I propose replacing type names for integers (http://www.digitalmars.com/d/type.html) with the following:
> > >
> > > bit
> > > byte
> > > ubyte
> > > int16
> > > uint16
> > > int32
> > > uint32
> > > int64
> > > uint64
> > > int128
> > > uint128
> >
> > As no-one else contradicts, I do.
> >
> > The problem is that this change would break all existing D code.
> 
> Why should it break all code if we keep aliases to the current types? Like: alias int int32; ...

Then it is not a rename.
Then create aliases for the suggested types.

-- 
Helmut Leitner    leitner@hls.via.at
Graz, Austria   www.hls-software.com
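[Helmut's alias suggestion is cheap to prototype. As a hypothetical sketch (the module name is illustrative, not part of Phobos), the sized names could be layered over the existing types so both spellings denote the same type and no existing code breaks:]

```d
// sizednames.d - hypothetical module; D alias syntax of the time is
// "alias ExistingType NewName;", so each line adds a second spelling
// for a type that already exists.
alias byte   int8;
alias ubyte  uint8;
alias short  int16;
alias ushort uint16;
alias int    int32;
alias uint   uint32;
alias long   int64;
alias ulong  uint64;
```

With this, `int32 x = 42;` and `int x = 42;` declare exactly the same type, so the change is purely additive rather than a rename.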
September 26, 2004
Helmut Leitner wrote:
> 
> Brian Bober wrote:
> 
>>I think it's time to re-hash an old discussion from a couple years ago.
>>I propose replacing type names for integers
>>(http://www.digitalmars.com/d/type.html) with the following:
>>
>>bit
>>byte
>>ubyte
>>int16
>>uint16
>>int32
>>uint32
>>int64
>>uint64
>>int128
>>uint128
> 
> 
> As no-one else contradicts, I do.
> 
> The problem is that this change would break all existing D code.
> Developers world-wide would have to put thousands of hours into updating their code. This is unfun. There are surely other issues we could put this time into.

Also, I think it should be mentioned that similar suggestions have been made numerous times in the past:

17 Aug 2001: http://www.digitalmars.com/d/archives/200.html
 2 May 2002: http://www.digitalmars.com/d/archives/4842.html
28 Aug 2002: http://www.digitalmars.com/d/archives/7996.html
22 Jan 2003: http://www.digitalmars.com/drn-bin/wwwnews?D/10321
14 Jan 2004: http://www.digitalmars.com/d/archives/9954.html
 3 Mar 2004: http://www.digitalmars.com/d/archives/25095.html


I think if Walter were interested in doing this, he would have done it last year or the year before rather than now (when so much has stabilized). It doesn't make much sense to me to /change/ the names now, though I can understand the argument to /add/ the new names. You might be able to convince Walter to alias the new names in something like object.d, but I'm pretty sure the "old" names are here to stay.

> 
> All programming languages require some effort to learn their
> basic data types. It's so simple and so fundamental that I think
> there is no need to make it simpler. 
> 
> It would also increase the distance to languages like C and Java
> and add work to translations. 
> 
> This is not a logical argument, it's pragmatic. The suggestion
> is valid and should maybe be used for another language designed from
> scratch. But not at this stage of D development where we are longing for a 1.00 release. This would put us back 3-4 months.
> 

-- 
Justin (a/k/a jcc7)
http://jcc_7.tripod.com/d/
September 26, 2004
Perhaps this is the essence of it:

http://www.digitalmars.com/drn-bin/wwwnews?D/8100