September 14, 2004
I believe the naming of the integer types should be reconsidered. I know this
has been mentioned before by various people. Two issues are worth considering:
1) The naming convention has the same problems that C/C++ has.
2) cent and ucent are not very good names for 128-bit variables.

1) Instead of using names like long, short, and int, it would be better to
use names that show the number of bits each type has and whether it is
unsigned. This is the convention used in the Mozilla project, and it works
very well. It also has the advantage of making people more careful when
porting C/C++ applications to D, and it means people migrating to D won't
be caught up in the old definition of long, which differs between Alpha and
PC systems. It also avoids a proliferation of new type names when 128- and
256-bit systems come along; otherwise things will get too complicated.
Finally, it makes life easier for designers of unusual systems who want,
say, 24-bit integers, as might happen on embedded systems. They could just
add an int24 and uint24, and no one would be confused.

A temporary standard header could supply the old names listed on
http://www.digitalmars.com/d/type.html until people have migrated to the
new system suggested here.

bit -- 1 bit
byte -- 8 bits signed
ubyte -- 8 bits unsigned
int16 -- 16 bits signed
uint16 -- 16 bits unsigned

... etc
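
As a rough sketch of such a transition header (the alias declarations below
are only illustrative, not part of any actual proposal), the proposed names
could be approximated today with an ordinary D module:

// Illustrative transition module: maps the proposed fixed-width
// names onto the current D types (which are exact on 32-bit targets).
alias byte   int8;
alias ubyte  uint8;
alias short  int16;
alias ushort uint16;
alias int    int32;
alias uint   uint32;
alias long   int64;
alias ulong  uint64;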

This method is a lot more logical in my opinion, and I'm sure a lot of people will agree.

2) cent and ucent are not good names for a 128-bit type. First of all, cent
could too easily be mixed up with a simple structure for representing
currency. Second, 128 is not 100. In fact, the 128-bit integer simply backs
up what I said in point 1: naming data types this way is getting ridiculous.
What comes after long? I suppose it could be 'extended' or 'stretch', but
seriously... Let's make things a bit less complicated.
November 01, 2004
Brian Bober wrote:
> This method is a lot more logical in my opinion, and I'm sure a lot of
> people will agree.

We have been through such discussions two and three years ago, and enough people disagreed. Besides, if you read the documentation, the sizes are not specified exactly. They are specified as minimums so that they can scale up on future architectures, though they are exact on 32-bit machines.
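
If you want to see what you actually get on your platform, a quick check
like this will do (just a sketch using C's printf):

import std.c.stdio;

void main()
{
    // On today's 32-bit targets this prints 2, 4 and 8.
    printf("short: %d bytes\n", cast(int) short.sizeof);
    printf("int:   %d bytes\n", cast(int) int.sizeof);
    printf("long:  %d bytes\n", cast(int) long.sizeof);
}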

If someone needs precise integer sizes, he can always import their definitions from std.stdint. See whether anyone actually uses it; that will tell you how much support your suggestion has.
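
For example (assuming the C99-style names that module mirrors):

import std.stdint;

int16_t  sample;  // exactly 16 bits, signed
uint32_t crc;     // exactly 32 bits, unsigned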

-eye