March 23, 2002
"Christophe Bouchon" <cbouchon@hotmail.com> wrote in message news:a7gjt0$63o$1@digitaldaemon.com...
> "Stephen Fuld" Wrote:
>
> >     The answer is 875639241357                        or
> >     The answer is 875,639,241,357
> Agreed, but in French ;-) (and in other Romance languages as well), the
> use of ',' and '.' is reversed:
>         La réponse est 875.639.241.357

Oui!  Je te comprends.  I did say, in part of the post you snipped, that there were issues with internationalization.  I don't know whether Walter has made any decisions here, but since this stuff is in a library, a good solution seems to be to leave the syntax in the format string as I specified it, but interpret it differently in different versions of the library.  I note that the use of the period in the existing C format string seems standard, but do libraries print out floating-point numbers with a comma between the integer and fractional parts?  Or do they ignore the internationalization issues?

> so you have to think twice about internationalisation and possible confusions.

Yes.

> I also like the possibility to insert '_' between digits in integer and
> floating point constants (but only AFTER the first digit or after the digit
> following the '.', else it's ambiguous with identifiers). This way, you can
> use 875_639_241_357 in your source code, and also 0x1234_5678_9ABC.

And some people prefer blanks.  i.e. 875 639 241 357.

As I said, I wasn't proposing the full generality of the COBOL picture clause, but there are many enhancements that could be added.

> > One poster noted that the run time model for D currently prohibits its
> > running on 64 bit systems (more properly, on systems that allow greater
> > than 32 bit pointers).
> Another useful type: an intptr type, an integer guaranteed large enough
> to contain a pointer on the target platform.

Doing integer arithmetic on pointers is an error-prone evil that I would not want to make any easier.  :-(

--
 - Stephen Fuld
   e-mail address disguised to prevent spam


March 23, 2002
Russell Borogove wrote:

> Walter wrote:
> > I suppose I need to point out just why I didn't pick int32. The reason is purely aesthetic: I just don't like the look of declarations with int32, int16, etc. It's a little awkward for me to type, too, as I can touch type the letters but not the numbers.
> >
> > Try a global search/replace on some source code with int->int32, char->int8, etc. Is the result pleasing to the eye?
>
> Perhaps not, but IMO far more expressive. So what are you going to call the 128-bit type?

Hey!  I've got all these nybbles running around on my 4-bit CPU (for which I anxiously await a port of D)!  What do I call them?  :P

And what about the lowly bit?  Aren't bits really integers too?  (Teeny-tiny, itsy-bitsy unsigned ones.  Their signed cousins are used as sign bits, right?)

Let's include everything, and see where it takes us (best viewed with a
fixed-width font):

  C/D       Proposed        Hardware
-------     ------------    ------------
bool        int1            bit
???         int2            ???
????        int4            nybble
char        int8            byte
short       int16           word
int         int32           double-word or dword
long        int64           quad-word or qword
?????       int128          ???
??????      int256          ???


Yes, I added a few just to make the system complete, from a single bit to
int256.  The int2 type (or, more likely, uint2) would be useful for
multiple-valued logic, such as trinary ("true", "false", "indeterminate") or
quaternary (which is often just trinary with a "no value" state).

There actually is a use for int256!  It is the smallest integer size (in our sequence of size doublings) that can represent the size of the universe (1.5e26 m) using units of the Planck length (1.6e-35 m).  That's a range of 61 orders of magnitude, which requires at least 203 bits to represent, which rules out int128.  (And you know, I'd just love to do my cosmological quantum gravity calculations using integer math.)

If I had a vote, I'd vote to make the proposed the default, with whatever imprecise and/or confusing legacy naming schemes you want to use being optional (but possibly included with D for porting purposes).

Much of the code I write already uses the intNN notation, no matter what language I use.  More and more coding standards specify it (for example, check the current DOD standards and the published coding guides from Cisco, Nortel, IBM and HP).

Even worse is the use of "unsigned":  Don't we ALL use a typedef or #define to create the "uint" type?  Unsigned integers should be treated as fundamental types, not as "modified" types.  C has a horrible habit of confusing storage size with type.  Should D propagate it?

After all, int32, uint32 and "float" all occupy 32 bits:  Why don't we have "signed int", "unsigned int" and "floating int" in C (or in D)?  Silly, eh?  A numeric type is a numeric type, and they come in families, where the size in bits is a VERY useful, unambiguous and compact way to denote the capacity of the type.  Storage is irrelevant.

There is nothing that says a D implementation can't store int32 in a 64-bit wide register or memory location, is there?  Or that multiple int2s would be stored packed into a byte.  Storage is an implementation decision, and so long as I can use something like "sizeof()" to get the allocation space used by a data item, I'll get along just fine.  Compilers are supposed to know that kind of stuff, right?  And anyhow, isn't that the "right" way to program?  (Or are we expected to handle the alignment and packing issues surrounding aggregate types some other way?  Of course not.)

For symmetry, I'd also like to see float32, float64 and float80 added to the list of numeric types.  Forget the confusing "float", "double" and "long double" stuff.  Eliminate fundamental type modifiers!  Each fundamental type should have its own name, and only its storage/access capabilities should be modifiable (const, volatile, etc.).

D really should get out in front on this one.

If I had a vote, that is.  If only I had a vote...


-BobC

(Did I remember to ask for D integers that are as smart as D's floats?  I'd like to have an integer "NAN" be available...)


March 23, 2002
"Robert W. Cunningham" <rwc_2001@yahoo.com> wrote in message news:3C9BE9CC.9300AD70@yahoo.com...
> Eliminate fundamental type modifiers!  Each fundamental type should have its own name, and only its storage/access capabilities should be
modifiable
> (const, volatile, etc.).

One language I know of doesn't have unsigned integers.  It has "nat"s (short for naturals, as in natural numbers).  So it has nat4, nat8, etc.

>
> D really should get out in front on this one.

Of course, I agree. :-)

--
 - Stephen Fuld
   e-mail address disguised to prevent spam


March 23, 2002
"Robert W. Cunningham" <rwc_2001@yahoo.com> wrote in message news:3C9BE9CC.9300AD70@yahoo.com...
> Even worse is the use of "unsigned":  Don't we ALL use a typedef or
> #define to create the "uint" type?

No, I used to use a lot of typedefs for basic types, but have tended to move away from it.

>  Unsigned integers should be treated as fundamental
> types, not as "modified" types.  C has a horrible habit of confusing
> storage size with type.  Should D propagate it?

No, D shouldn't (and doesn't). There are no basic types in D composed of
multiple keywords.

> (Did I remember to ask for D integers that are as smart as D's floats?
> I'd like to have an integer "NAN" be available...)

I'd like it too; unfortunately, the hardware is lacking :-(


March 23, 2002
"Russell Borogove" <kaleja@estarcion.com> wrote in message news:3C9B7C54.5050707@estarcion.com...

> Stephen Fuld wrote:
> > 1.    Variable type names.  I know that short, long, etc. are a C
> > heritage, but they have led and will lead to confusion.  There was
> > confusion over how long int was when we went from 16 bit to 32 bit
> > computers.
> > [snip details of int8, int16, int32, int64, uint8...,
> > float32... proposal]
>
> Apart from your heretical use of capital letters in language-defined types, I wholeheartedly support this notion.

I'd say, leave them in, and define the aliases:

    alias byte int8;
    alias short int16;
    alias int int32;
    alias long int64;

The thing is, while all those int* might be easier to understand,
most C programmers will want to see common, expected names. I personally
wouldn't use int16 instead of short in my programs, even if it were
available.


March 23, 2002
"Walter" <walter@digitalmars.com> wrote in message news:a7gl5u$6rm$1@digitaldaemon.com...

> > Perhaps not, but IMO far more expressive. So what are you going to call the 128-bit type?
>
> cent?  (as kilo means 1024, cent should mean 128)
> centint?
> centurion? <g>

damnlongint =)


March 23, 2002
"Robert W. Cunningham" <rwc_2001@yahoo.com> wrote in message news:3C9BE9CC.9300AD70@yahoo.com...
> Russell Borogove wrote:

> Even worse is the use of "unsigned":  Don't we ALL use a typedef or
> #define to create the "uint" type?  Unsigned integers should be treated
> as fundamental types, not as "modified" types.  C has a horrible habit
> of confusing storage size with type.  Should D propagate it?

D doesn't have unsigned. ubyte, ushort, uint, and ulong are all distinct types.




March 23, 2002
"Stephen Fuld" <s.fuld.pleaseremove@att.net> wrote in message news:a7gom1$96n$2@digitaldaemon.com...

> > I can argue that you can create a list of aliases,
> >     alias int int32;
> > etc.
>
> Sure, but then it would be my private practice.  I was arguing for better "hygiene" for all users.  :-)

Well, a module like this could be put into the standard D distribution.

> > Internationalization of currency formatting is a real problem, but one I suggest is suited to a library class. Would you like to write one?
>
> I agree about the problem.  As for me helping to provide a solution, I may have a problem with that. In what language are such libraries written?  My

D, of course!
By the way, Walter, what's your vision of D locales? Have you thought
of the class it might employ, its interfaces, etc?

> The run time model has the pointer at offset 0 and the monitor at offset 4.
> This limits pointers to four bytes.  Other addresses similarly limit you.

That was for the Intel 32-bit architecture. Otherwise, pointer size is undefined, so 64-bit systems will have 64-bit pointers (and the monitor will be at offset 8).



March 23, 2002
"Christophe Bouchon" <cbouchon@hotmail.com> wrote in message news:a7gjt0$63o$1@digitaldaemon.com...

> Another useful type: an intptr type, an integer guaranteed large enough
> to contain a pointer on the target platform.

Why should anybody need it for a _multiplatform_ application?



March 23, 2002
For functions passing/retrieving generic data (either int or pointer). This is not a recommended coding practice, but you have to consider existing code/libraries... Windows uses INT_PTR/UINT_PTR (plus SSIZE_T/SIZE_T, the same types under different names) where an integer large enough to contain a pointer is required. On some platforms, sizeof(void*) != sizeof(int) (not current targets for D; think AS/400, which, if I remember what a colleague told me about a port to that platform, has something like 48-bit pointers). It's always better to have the type in the language than to try to maintain system-specific definitions.

"Pavel Minayev" wrote:
> "Christophe Bouchon" <cbouchon@hotmail.com> wrote in message news:a7gjt0$63o$1@digitaldaemon.com...
>
> > Another useful type: an intptr type, an integer guaranteed large enough
> > to contain a pointer on the target platform.
>
> Why should anybody need it for a _multiplatform_ application?