May 02, 2002
Basic Integral Data Types flawed?
I think using the Java standard sizes for integral sizes is a mistake since
D does not need a VM :) and since "D is designed to fit comfortably with a C
compiler for the target system".

I think the D language spec is overly targeted to the IA32/x86 architecture.
Generally, the C programmer uses "int" as the most efficient type for the
CPU (it has been a while but I think "int" on the DEC Alpha is 64 bits). Of
course, there are still plenty of 16 bit CPUs and odd-ball DSPs which could
possibly use D.

The following D types should be modified to match the underlying C types for
the specific target (since you have a single target currently that wouldn't
break much code), then interfacing to existing C code would be very
straightforward.

short
ushort
int
uint
long
ulong

D should introduce the following types (similar to C99 with the "_t"
removed) for those times when you need an exact bit length data type.
 int8 - signed 8 bits
 uint8 - unsigned 8 bits
 int16 - signed 16 bits
 uint16 - unsigned 16 bits
 int32 - signed 32 bits
 uint32 - unsigned 32 bits
 etc

I do embedded programming and use the exact size C99 types quite often.

Mark
May 02, 2002
Re: Basic Integral Data Types flawed?
"Mark T" <mt@nospam.com> wrote in message
news:aar9td$3gu$1@digitaldaemon.com...

> I think using the Java standard sizes for integral sizes is a mistake since
> D does not need a VM :) and since "D is designed to fit comfortably with a C
> compiler for the target system".
>
> I think the D language spec is overly targeted to the IA32/x86 architecture.
> Generally, the C programmer uses "int" as the most efficient type for the
> CPU (it has been a while but I think "int" on the DEC Alpha is 64 bits). Of
> course, there are still plenty of 16 bit CPUs and odd-ball DSPs which could
> possibly use D.

D is not 16-bit.

For 64-bit computers, I think 32-bit int is not any slower than 64-bit,
or am I wrong?

Non-fixed size of C data types was (and is) a constant source of bugs and
troubles. Just look at the typical "platform.h" of any multi-platform
library - you'll see a lot of #defines and typedefs there, just to provide
some workaround.
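A hypothetical sketch of the kind of "platform.h" workaround being described — probing the limits macros to hunt down a type of each width, because the C standard only guarantees minimums (the typedef names here are illustrative, not from any real library):

```c
#include <limits.h>

/* Find a 32-bit type: int on most platforms, long on some older ones. */
#if INT_MAX == 2147483647
typedef int            i32;
typedef unsigned int   u32;
#elif LONG_MAX == 2147483647
typedef long           i32;
typedef unsigned long  u32;
#else
#error "no 32-bit integer type found"
#endif

/* Find a 16-bit type. */
#if SHRT_MAX == 32767
typedef short          i16;
typedef unsigned short u16;
#else
#error "no 16-bit integer type found"
#endif
```

With fixed type sizes in the language, this entire detection dance disappears.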

I vote for fixed type sizes.
May 02, 2002
Re: Basic Integral Data Types flawed?
"Mark T" <mt@nospam.com> wrote in message
news:aar9td$3gu$1@digitaldaemon.com...
> I think using the Java standard sizes for integral sizes is a mistake since
> D does not need a VM :) and since "D is designed to fit comfortably with a C
> compiler for the target system".
>
> I think the D language spec is overly targeted to the IA32/x86 architecture.
> Generally, the C programmer uses "int" as the most efficient type for the
> CPU (it has been a while but I think "int" on the DEC Alpha is 64 bits). Of
> course, there are still plenty of 16 bit CPUs and odd-ball DSPs which could
> possibly use D.

What kind of target system are you thinking of?  D is not intended to be
compatible with every target that C is; rather, D will be compatible with
the target's C environment where D supports that environment.  There may be
some environments that, consequently, D won't be able to support.  Watch:
This is me, not caring.

(IMHO I think ANSI went way too far in trying to make a standard for C that
can support every weirdo legacy platform ever made.  I'm sorry, but I'm not
going to worry about 12 bit, one's complement, descriptor-based machines in
which calloc *doesn't* set pointers to NULL and doubles to 0.0.)

> The following D types should be modified to match the underlying C types for
> the specific target (since you have a single target currently that wouldn't
> break much code), then interfacing to existing C code would be very
> straightforward.
>
> short
> ushort
> int
> uint
> long
> ulong

They *do* match the C types.  They just don't have the same name ("int" =
"long"; "ulong" = "unsigned long long").

> D should introduce the following types (similar to C99 with the "_t"
> removed) for those times when you need an exact bit length data type.
>   int8 - signed 8 bits
>   uint8 - unsigned 8 bits
>   int16 - signed 16 bits
>   uint16 - unsigned 16 bits
>   int32 - signed 32 bits
>   uint32 - unsigned 32 bits
>   etc
>
> I do embedded programming and use the exact size C99 types quite often.

D has exact-sized types.  They just have different names.

(Types with exact-sized type names in D don't solve the problem that, on
some platforms, C's "int" is D's "int", but on others, C's "int" may be D's
"short".)

Richard Krehbiel, Arlington, VA, USA
rich@kastle.com (work) or krehbiel3@comcast.net  (personal)
May 02, 2002
Re: Basic Integral Data Types flawed?
"Richard Krehbiel" <rich@kastle.com> wrote in message
news:aarpct$1o03$1@digitaldaemon.com...
> (IMHO I think ANSI went way too far in trying to make a standard for C that
> can support every weirdo legacy platform ever made.  I'm sorry, but I'm not
> going to worry about 12 bit, one's complement, descriptor-based machines in
> which calloc *doesn't* set pointers to NULL and doubles to 0.0.)

You can see that in some of the postings to the C newsgroups. For instance,
look at the bending over backwards to support CPUs with no stack.
Apparently, some ancient IBM computer has no stack. I don't see much point
in making things more difficult for 99.9999% of the machines out there to
accommodate .00001% of them. I myself have programmed machines with 10 bit
bytes and with 18 bit words. But those machines are LONG obsolete, and for
good reason.

I once annoyed a number of C purists by suggesting that, for 8 bit
architectures, it made sense to make a non-compliant C variant that was
adapted to the particular characteristics of, say, the 6502. Their position was
that if it was possible to make a compliant C implementation for it, that
should be used for all applications. Never mind the horrific inefficiency of
it. I'm much more pragmatic about bending the language to suit the need, not
the other way around <g>.

For another example, it is just a reality that to write professional C/C++
apps on DOS, you need to use near and far. Yes, that made it non-ANSI.
That's life.
May 03, 2002
Re: Basic Integral Data Types flawed?
"Walter" <walter@digitalmars.com> wrote in message
news:aas2e5$2lmm$1@digitaldaemon.com...
>
> "Richard Krehbiel" <rich@kastle.com> wrote in message
> news:aarpct$1o03$1@digitaldaemon.com...
> > (IMHO I think ANSI went way too far in trying to make a standard for C that
> > can support every weirdo legacy platform ever made.  I'm sorry, but I'm not
> > going to worry about 12 bit, one's complement, descriptor-based machines in
> > which calloc *doesn't* set pointers to NULL and doubles to 0.0.)
>
> You can see that in some of the postings to the C newsgroups. For instance,
> look at the bending over backwards to support CPUs with no stack.
> Apparently, some ancient IBM computer has no stack.

The ancient, obsolete processor you're thinking of may well be the PowerPC!
Subroutine calls place the return address in a link register, which, by
*convention* *only*, the called function "pushes" onto a software-managed
stack referred to by R1.

(I coded IBM 370 mainframe machine code in a former life, and it also has no
stack.  This machine architecture lives on in the current IBM mainframe
lineup.)

--
Richard Krehbiel, Arlington, VA, USA
rich@kastle.com (work) or krehbiel3@comcast.net  (personal)
May 03, 2002
Re: Basic Integral Data Types flawed?
Mark T wrote:
> The following D types should be modified to match the underlying C types for
> the specific target

Hold it right there -- some hardware platforms currently
support different C implementations with different sizes
for the same underlying C type.

Example: some C compilers for 68000 Macs think an int is
16-bit ("most efficient" in terms of the 16-bit bus of
older Macs) and some think an int is 32-bit ("most
efficient" in that it's the biggest thing the CPU can
eat).

There are also lots of C compilers where a command-line
option or pragma or incompatible hack selects the int
size. What's the "underlying" size of an int there?

-Russell B
May 03, 2002
Re: Basic Integral Data Types flawed?
Richard Krehbiel wrote:
> "Walter" <walter@digitalmars.com> wrote in message
> news:aas2e5$2lmm$1@digitaldaemon.com...
>>look at the bending over backwards to support CPUs with no stack.
>>Apparently, some ancient IBM computer has no stack.
> 
> 
> The ancient, obsolete processor you're thinking of may well be the PowerPC!
> Subroutine calls place the return address in a link register, which, by
> *convention* *only*, the called function "pushes" onto a software-managed
> stack referred to by R1.

That's not all that unusual. I believe the
Hitachi SH architecture does the same thing.

I'm not sure what it means to "have no stack" --
all you need is a chunk of memory and equivalent
functionality to an address register with inc/dec.
I suppose if you have no address registers, or
none that are preserved across function calls
by convention, then you could be said to have no
stack, but you could just reserve a word of memory
to hold a stack pointer. Push and pop or call and
return just become macro sequences in these
cases.

Besides PPC, there are architectures which have
indirect-with-predecrement or -with-postincrement
which have no dedicated stack pointer -- just
conventions.[1]

-Russell B

[1] I may be completely misremembering,
but some even use a general register as the
Program Counter/Instruction Pointer, meaning that
the same circuitry that does "*p++" is doing
instruction reads, and the same addressing modes
available with address registers are available
in PC-relative form.
May 03, 2002
Re: Basic Integral Data Types flawed?
> For 64-bit computers, I think 32-bit int is not any slower than 64-bit,
> or am I wrong?


No, you are not.


> Non-fixed size of C data types was (and is) a constant source of bugs and
> troubles. Just look at the typical "platform.h" of any multi-platform
> library - you'll see a lot of #defines and typedefs there, just to provide
> some workaround.
> 
> I vote for fixed type sizes.


Me too!  This is a major peeve for me.

Another point in the argument for fixed size data types:

+ The programmer really should know the domain of his variable.  For example,
you never would use "unsigned char i;" for a loop that you know is going
to range from 1 to 10,000.  Careful programmers should always consider
the domain (i.e. the range of values) their variable is allowed to take
on.  Types without fixed sizes promote sloppy thinking.

Cheers,
  --jfc
May 03, 2002
Re: Basic Integral Data Types flawed?
"Richard Krehbiel" <rich@kastle.com> wrote in message
news:aatrrm$1j1v$1@digitaldaemon.com...
> The ancient, obsolete processor you're thinking of may well be the PowerPC!
> Subroutine calls place the return address in a link register, which, by
> *convention* *only*, the called function "pushes" onto a software-managed
> stack referred to by R1.

It's still a stack.

> (I coded IBM 370 mainframe machine code in a former life, and it also has no
> stack.  This machine architecture lives on in the current IBM mainframe
> lineup.)

Does it emulate a stack?
May 03, 2002
Re: Basic Integral Data Types flawed?
"Russell Borogove" <kaleja@estarcion.com> wrote in message
news:3CD2C330.7050908@estarcion.com...
> [1] I may be completely misremembering,
> but some even use a general register as the
> Program Counter/Instruction Pointer, meaning that
> the same circuitry that does "*p++" is doing
> instruction reads, and the same addressing modes
> available with address registers are available
> in PC-relative form.

You remember correctly, that was the PDP-11. The 11 was a marvelously
designed 16 bit instruction set, so marvelous that many later CPUs bragged
about being "like" the 11, even though they screwed up their design.