Thread overview
Integer types
Aug 23, 2001
Peter Curran
Aug 23, 2001
Etienne Lorrain
Aug 23, 2001
Charles Hixson
Aug 23, 2001
Serge K
Aug 24, 2001
nicO
August 23, 2001
I've only skimmed through the spec, but this sounds a lot closer to what I think C++ should have been. I doubt that there is really any chance of a language like this going anywhere right now, but good luck with the effort.

From what I have seen, though, there is, IMHO, a major problem - integer types. The single biggest problem the C standard has had to deal with as it has evolved is its integer types. It is, IMHO, a major error to select a fixed set of integer types, provide fixed names for them, and so on. This is just begging for horrible problems as hardware and applications advance.

IMHO, the basic integer types should be defined by the range of values they support. There should be, in essence, two integer types - signed and unsigned. A declaration should then specify, via a modifier, the range of values the declared object must support. Something like this (syntax not critical):

int<0..23>
unsigned int<0..255>
int<MIN_INT31..MAX_INT31>

I prefer to specify the range explicitly, because it completely defines the range of numbers a value is required to support. However, an alternative is to specify the number of bits:

int<7> - a signed integer, with values of at least -127...+127.
signed int<7> - the same; the preceding form is just shorthand for this
one.
unsigned int<7> - an unsigned integer, with values of at least 0...127.

(I have used the number as the number of value bits, which I think is clearest. A signed integer and an unsigned integer with the same number of value bits support the same range of non-negative values. An alternative is to include the sign bit in the count, which would make my first example int<8>.)

It would, of course, be possible and simple to support both forms - that is perhaps the best answer.

This is the only way, IMHO, to make the language grow cleanly, as hardware grows, supports larger primitive integer sizes, etc. Anyone who watched the debates over "long long" in C will realize the problems a fixed set of integer types, with predefined sizes, can cause.

Note that I defined that a type supports *at least as* many values as specified by the modifier. The compiler is free to use a larger type, if that is more convenient. Thus, for example, a type named "int<5..10>" would typically be implemented as an 8-bit integer on conventional hardware, and the hardware would not be required to detect invalid values (although it could if it chose to do so, or if appropriate options were selected).

I recognize that the well-known type names are useful. I would suggest adding a set of importable definitions, known as (say) the standard types. Thus, for example, adding the line "import stdint;" would make available convenient definitions of "long," "char," etc. I am suggesting that the primitive integer types be defined using modifiers, but that the well-known names be easily available if they are wanted.

To really make this work well, there should be no hard limit on the range of values supported. It should be perfectly legal to write a declaration such as "int<0..1111111111111111111111>" or (if the bit-count form is used) "int<1234>." Building compiler and library support for large integers is quite simple, and doing so right from the beginning would eliminate many of the problems with integer sizes that plague C and similar languages.
August 23, 2001
Peter Curran wrote in message <3B845EA4.999031E2@acm.gov>...
>I've only skimmed through the spec, but this sounds a lot closer to what I think C++ should have been. I doubt that there is really any chance of a language like this going anywhere right now, but good luck with the effort.
>
>From what I have seen, though, there is, IMHO, a major problem - integer types.

  I agree. Moreover, there should be one number type, everything else
 should be specialised from it, and the conversion rules should be simple.

  Something like (whatever the syntax):
typedef  number  :0:8          unsigned_char;
typedef  number  :0:16         unsigned_short_int;
typedef  number  :1:15         signed_short_int;
typedef  number  ::64          unsigned_long_long;
typedef  number  :1:63:1:15    signed_floating_point_signed_16bits_exponent;
typedef  number  :1:31:0:0:1:31:0:0   complex_signed_int;

  You can implicitly convert  :0:16  to  :0:32  by prepending 0s.
  You can implicitly convert  :1:15  to  :1:31  by prepending copies of the sign bit.
  You can implicitly convert  :0:16:8  to  :0:16:16  by prepending 0s to the
exponent.

  Some types may not be compilable; the compiler can just report an "unimplemented" error.
  If you convert :0:16 to :0:8, then the bits which will be removed have to
be zero.
  If you convert :1:15 to :1:7, then the bits which will be removed have to
be all zero or all one, depending on the sign of the converted number.

  Type attributes would then be: unsigned_char.sign == 0, signed_short_int.sign == 1,
  signed_floating_point_signed_16bits_exponent.exp == 16.

  I also like the concept of saturating (not wrap-around) integers, which I
denote using "-":
typedef  number  ::-1   boolean;
  so "boolean bool = 1; bool++"  stays at 1,
  and "boolean bool = 0; bool--"  stays at 0.

  I do not know how to set a minimum and maximum of a number; they should
 probably not be coded inside the type, because they can change. Having a
 limited (bounded) array means accessing the array with a limited number
 type - and the bounded array may be a variable-size array.

  Just my 0.02
  Etienne Lorrain.


August 23, 2001
Etienne Lorrain wrote:
> Peter Curran wrote in message <3B845EA4.999031E2@acm.gov>...
>>From what I have seen, though, there is, IMHO, a major problem - integer
>>types.
> 
>   I agree. Moreover, there should be one number type, everything else
>  should be specialised from it, and the conversion rules should be simple.
> 
>   Something like (whatever the syntax):
> typedef  number  :1:63:1:15  signed_floating_point_signed_16bits_exponent;
<...>

You are building machine dependencies into the language.  The number of bits in a floating-point number's exponent, for example, differs from CPU to CPU.  Don't expect an Alpha to have the same number of bits as an 80386.  (OK, that's a bit extreme.  Nevertheless...)

August 23, 2001
"Charles Hixson" <charleshixsn@earthlink.net> wrote in message news:3B851830.8090201@earthlink.net...

> You are building machine dependencies into the language.  The number of bits in a floating-point number's exponent, for example, differs from CPU to CPU.  Don't expect an Alpha to have the same number of bits as an 80386.  (OK, that's a bit extreme.  Nevertheless...)

Well, actually - that's way too extreme.
You *can* expect exactly the same format for "float" and "double" numbers on
any modern general-purpose processor (since IEEE 754 became an industry
standard).
The Alpha AXP has support for the old VAX floating-point format, but I'm not
aware whether it's exposed by C/C++ compilers (I don't think so).

(I'm not talking about DSPs and the like - where you can find something like
a 40-bit float.)


August 24, 2001
Etienne Lorrain wrote:
> 
> Peter Curran wrote in message <3B845EA4.999031E2@acm.gov>...
> >From what I have seen, though, there is, IMHO, a major problem - integer types.
> 
>   I agree. Moreover, there should be one number type, everything else
>  should be specialised from it, and the conversion rules should be simple.
> 
>   Something like (whatever the syntax):
> typedef  number  :0:8          unsigned_char;
> typedef  number  :1:15         signed_short_int;
> typedef  number  :1:63:1:15    signed_floating_point_signed_16bits_exponent;
<...>
>   Just my 0.02
>   Etienne Lorrain.
Why give it in bits? That's CPU-dependent. We only need to know the range of the data! The remaining problem is precision for floating-point calculation.

nicO