Thread overview
int<bits> and arbitrary size ints
Aug 17, 2001  Russ Lewis
Aug 17, 2001  Sheldon Simms
Aug 17, 2001  Russ Lewis
Aug 17, 2001  Scott Robinson
Aug 17, 2001  Ben Cohen
Aug 17, 2001  Russ Lewis
Oct 25, 2001  Sean L. Palmer
Dec 19, 2001  Walter
Dec 20, 2001  Sean L. Palmer
Dec 20, 2001  Pavel Minayev
Dec 23, 2001  Walter
Dec 23, 2001  Pavel Minayev
Dec 29, 2001  Sean L. Palmer
Dec 30, 2001  Walter
Aug 17, 2001  Sheldon Simms
Aug 17, 2001  Russ Lewis
Sep 19, 2001  Walter
Sep 19, 2001  Ben Cohen
Sep 20, 2001  Walter
Aug 26, 2001  Jeff Frohwein
Aug 27, 2001  Dan Hursh
Aug 27, 2001  Charles Hixson
August 17, 2001: Russ Lewis
First of all, great language!  I've been playing around with a simple language spec of my own...that I also called D...with many of the same features.  Guess I'll just join on with you...

My idea here is similar to that in the "Types and sizes" thread.  I think that it is *very important* that coders can specify a specific bit size for their integers.  I would use a slightly different syntax than the previous post:

unsigned int8 myByte;
unsigned int32 my4Bytes;
int128 my16Bytes;

The bit size specified *must* be a multiple of 8; you could also require, if it made the compiler easier, that they be powers of 2.  If the bit size is smaller than what the architecture naturally supports, then the compiler would have to adjust for that; if it is larger, then the compiler would be required to implement emulation code to handle it.

If the compiler supports large-integer emulation, then there is no reason not to include support for integers of arbitrary size:

intX myUnlimitedInt;

Thoughts?
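For comparison, C99's <stdint.h> already standardizes the fixed-size half of this idea; a minimal sketch of the same declarations in those terms. C99 stops at 64 bits, so anything like int128 would still need the proposed compiler emulation.

    #include <stdint.h>   /* C99 exact-width integer types */

    uint8_t  myByte;      /* unsigned, exactly 8 bits  */
    uint32_t my4Bytes;    /* unsigned, exactly 32 bits */
    int64_t  my8Bytes;    /* signed, exactly 64 bits; there is no standard int128_t */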

August 17, 2001: Sheldon Simms
In article <3B7D3416.BA743773@deming-os.org>, "Russ Lewis" <russ@deming-os.org> wrote:

> First of all, great language!  I've been playing around with a simple language spec of my own...that I also called D...with many of the same features.  Guess I'll just join on with you...
> 
> My idea here is similar to that in the "Types and sizes" thread.  I think that it is *very important* that coders can specify a specific bit size for their integers.

Why?

-- 
Sheldon Simms / sheldon@semanticedge.com
August 17, 2001: Russ Lewis
Sheldon Simms wrote:

> In article <3B7D3416.BA743773@deming-os.org>, "Russ Lewis" <russ@deming-os.org> wrote:
>
> > First of all, great language!  I've been playing around with a simple language spec of my own...that I also called D...with many of the same features.  Guess I'll just join on with you...
> >
> > My idea here is similar to that in the "Types and sizes" thread.  I think that it is *very important* that coders can specify a specific bit size for their integers.
>
> Why?

Different architectures (and different compilers) use different standards for int, long, and short sizes.  If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble.  It's better to be able to specify that this is an "int32" and let the compiler deal with it.

Many APIs implement just this to increase source code portability.
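A minimal sketch of that hazard in C: the same line is correct where int is 32 bits and silently wrong where int is 16 bits.

    #include <stdio.h>

    int main(void)
    {
        int msec = 40000;      /* fine with a 32-bit int (INT_MAX = 2147483647) */
        printf("%d\n", msec);  /* overflows a 16-bit int (INT_MAX = 32767)      */
        return 0;
    }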

August 17, 2001: Scott Robinson
Which still comes back to the problem of portability. A very simple, but popular, example is the UNIX time scenario.

UNIX time is the number of seconds past the UNIX epoch. It is stored in a 32-bit number.

If we were to go with your definitions, there would be hard definitions of "int32" for time-storing variables. The trick, of course, is that if we ever compile in an environment with a different native word size, we're stuck with the old definitions. D doesn't have a preprocessor, probably because, given its strongly typed and easily parsed form, you can use perl or any other text-processing tool to better effect. But because there is no preprocessor, all of our bit management is going to have to be done in-source.

Sorry, I don't buy that. Controlling bit sizes to such a level seems much better tuned for embedded/micro-managing code - two things which D, according to the spec, is not designed for.

I suppose an argument can be made for communication and external protocol support, but wasn't that addressed in the spec in reference to a structure's memory representation? (or lack of such)

Scott.

In article <3B7D35BB.25B70FDB@deming-os.org>, Russ Lewis wrote:
>Sheldon Simms wrote:
>
>> In article <3B7D3416.BA743773@deming-os.org>, "Russ Lewis" <russ@deming-os.org> wrote:
>>
>> > First of all, great language!  I've been playing around with a simple language spec of my own...that I also called D...with many of the same features.  Guess I'll just join on with you...
>> >
>> > My idea here is similar to that in the "Types and sizes" thread.  I think that it is *very important* that coders can specify a specific bit size for their integers.
>>
>> Why?
>
>Different architectures (and different compilers) use different standards for int, long, and short sizes.  If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble.  It's better to be able to specify that this is an "int32" and let the compiler deal with it.
>
>Many APIs implement just this to increase source code portability.
>


-- 
jabber:quad@jabber.org - Universal ID (www.jabber.org)
http://dsn.itgo.com/ - Personal webpage
robhome.dyndns.org - Home firewall

August 17, 2001: Sheldon Simms
In article <3B7D35BB.25B70FDB@deming-os.org>, "Russ Lewis" <russ@deming-os.org> wrote:

> Sheldon Simms wrote:
> 
>> In article <3B7D3416.BA743773@deming-os.org>, "Russ Lewis" <russ@deming-os.org> wrote:
>>
>> > First of all, great language!  I've been playing around with a simple language spec of my own...that I also called D...with many of the same features.  Guess I'll just join on with you...
>> >
>> > My idea here is similar to that in the "Types and sizes" thread.  I think that it is *very important* that coders can specify a specific bit size for their integers.
>>
>> Why?
> 
> Different architectures (and different compilers) use different standards for int, long, and short sizes.  If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble.

If you're talking about C, then you're the one to blame for assuming that int always has at least 32 bits...

But the reason I asked is that it seemed to me you were offering a solution without specifying the problem. Specifying specific sizes for integers might be a great solution for something, but I can't judge whether or not it is without knowing what the problem is in the first place.

As for the problem that (I think) you're talking about, perhaps it would be better to talk about ranges instead of numbers of bits. Given that the language already offers integral types with fixed sizes of 8, 16, 32, and 64 bits (you have read the D doc, haven't you?), I don't see the point of adding more programmer-defined sizes. But being able to specify particular ranges, as in Pascal, might be useful:

int{-10..10} a;     // -10 to 10 inclusive allowed
int{0..} b;         // unsigned "infinite precision"
int{..} c;          // signed "infinite precision"
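Pascal and Ada have such subranges natively. In C the closest emulation is a checked constructor; a sketch with hypothetical names, where the range lives in an assert rather than in the type system:

    #include <assert.h>

    typedef int range_m10_10;             /* intended range: -10..10 inclusive */

    static range_m10_10 make_m10_10(int v)
    {
        assert(v >= -10 && v <= 10);      /* run-time range check */
        return v;
    }

    /* usage: range_m10_10 a = make_m10_10(7); */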

-- 
Sheldon Simms / sheldon@semanticedge.com
August 17, 2001: Ben Cohen
In article <slrn9nqfb6.igf.scott@tara.mvdomain>, "Scott Robinson" <scott@tara.mvdomain> wrote:

> If we were to go with your definitions, there would be hard definitions of "int32" for time storing variables.

What's wrong with:
  typedef  int32 time_t;
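In C99 terms the same idea looks like this, assuming the platform headers own the typedef (the name below is illustrative); widening time later is then one edited line plus a recompile:

    #include <stdint.h>

    /* today: 32-bit seconds, which run out in January 2038 */
    typedef int32_t my_time_t;

    /* tomorrow: change int32_t to int64_t above and recompile;
       code written against my_time_t is untouched */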
August 17, 2001: Russ Lewis
Scott Robinson wrote:

> Which still comes back to the problem of portability. A very simple, but popular, example is the UNIX time scenario.
>
> UNIX time is the number of seconds past the UNIX epoch. It is stored in a 32-bit number.
>
> If we were to go with your definitions, there would be hard definitions of "int32" for time-storing variables. The trick, of course, is that if we ever compile in an environment with a different native word size, we're stuck with the old definitions. D doesn't have a preprocessor, probably because, given its strongly typed and easily parsed form, you can use perl or any other text-processing tool to better effect. But because there is no preprocessor, all of our bit management is going to have to be done in-source.
>
> Sorry, I don't buy that. Controlling bit sizes to such a level seems much better tuned for embedded/micro-managing code - two things which D, according to the spec, is not designed for.
>
> I suppose an argument can be made for communication and external protocol support, but wasn't that addressed in the spec in reference to a structure's memory representation? (or lack of such)

Actually, my first idea was to say that the bit size was only a minimum... that a compiler could substitute an int64 for an int32 if it wanted.  Not sure which is the best balance.

IMHO, things like those time variables should be defined with typedefs anyway. Then you can redefine the type in the header files you have installed on the new architecture.

One of the great things about the strongly typed typedefs used in D is that when you define that times are to be given as "Time" rather than "int32", the user of the API is *strongly* encouraged to use your (forward-compatible) typedef rather than the underlying type.
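For contrast, C's typedef is only an alias, so nothing stops a caller from passing a raw integer where a time is meant. A sketch of the usual C workaround, with illustrative names: a one-field struct that the compiler treats as a distinct type.

    #include <stdint.h>

    typedef int32_t TimeAlias;                 /* weak: interchangeable with int32_t */
    typedef struct { int32_t seconds; } Time;  /* strong: a distinct type */

    static void set_alarm(Time t) { (void)t.seconds; }

    /* set_alarm(42);            -- compile error: int is not a Time           */
    /* set_alarm((Time){ 42 });  -- OK, intent explicit (C99 compound literal) */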

August 17, 2001: Russ Lewis
Sheldon Simms wrote:

> As for the problem that (I think) you're talking about, perhaps it would be better to talk about ranges instead of numbers of bits. Given that the language already offers integral types with fixed sizes of 8, 16, 32, and 64 bits (you have read the D doc, haven't you?), I don't see the point of adding more programmer-defined sizes. But being able to specify particular ranges, as in Pascal, might be useful:

Ack, sorry.  So D does fix the sizes, unlike C.  Well done, Walter, my mistake.

However, I do think that "int32" makes what it represents more obvious than "int" does, particularly for us old C programmers.  And I still think that int1024 and intX are good things that the compiler could emulate.
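The intX case is essentially what bignum libraries already emulate in software. A minimal sketch using GMP (the mpz_* calls are GMP's actual API; the scenario is just an illustration):

    #include <stdio.h>
    #include <gmp.h>

    int main(void)
    {
        mpz_t x;                   /* arbitrary-precision integer */
        mpz_init_set_ui(x, 1);
        mpz_mul_2exp(x, x, 1024);  /* x = 2^1024, far past any fixed width */
        gmp_printf("%Zd\n", x);
        mpz_clear(x);
        return 0;
    }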

August 26, 2001: Jeff Frohwein
 I guess the biggest problem I have with defining an 'int' as 32 bits is: why 32 bits? If there is no specific reason for it, then had this language been designed 10 years ago 'int' might be 16 bits, and were it designed 3 years from now 'int' might be 64 bits. Looking back from the future, will we regret seeing 'int' defined as 32 bits?

 From the D Overview...

       "D is a general purpose systems and applications programming
        language. It is a higher level language than C++, but retains
        the ability to write high performance code and interface
        directly with the operating system APIs and with hardware."

 With this in mind, I agree that we do need some fixed sizes for hardware interfacing, even if for no other reason. This would also allow D to be used for embedded systems. After all, if D does become the next popular language, people will want to port an embedded version as well. So let's not put any more limits on them than are easily justifiable. Likewise, if we wish to interface to hardware registers (as stated in the overview above), we need types that are fixed in size.

 Here is one possible solution:

 Hardware (non-abstract) Types

        u1              unsigned 1 bit
        u8              unsigned 8 bits
        s8              signed 8 bits
        u16             unsigned 16 bits
        s16             signed 16 bits
        u32             unsigned 32 bits
        s32             signed 32 bits
        s64             ...etc...
        ...
        f32             32 bit floating point
        f64             64 bit floating point

 Software (semi-abstract) Types

        bit             single bit or larger
        byte            signed 8 bits or larger
        ubyte           unsigned 8 bits or larger
        short           signed 16 bits or larger
        ushort          unsigned 16 bits or larger
        int             signed 32 bits or larger
        uint            unsigned 32 bits or larger
        long            signed 64 bits or larger
        ulong           unsigned 64 bits or larger
        float           32 bit floating point or larger
        double          64 bit floating point or larger

 Why use the short u8 / s8 types?

      This is a simple and clear format where there is no
      guesswork. To follow the format of the "Software Types"
      you would probably need something like "int8" or "uint8".
      Since the "Hardware Types" are designed to be extremely
      specific and extremely clear, I think this might be one
      valid reason for the departure in format.
      Either format would work though.

 Why do the "Software Types" specify X bits or larger?

      This allows you to promote an 'int' to 64, 128, or however
      many bits at any time in the future with little or no regret.
      Once 256-bit systems come out that have 10 times better
      performance when dealing with 256 bits versus 32 bits,
      wouldn't we all want 'int' to be 256 bits at that time?
      Then we can compile our code that is filled with 'int' to
      run on the new system at max performance. Assuming our
      hardware drivers for our older plug-in expansion cards use
      the u8 / s8 types, as they should, they should keep working
      even though byte, short, and int may have been promoted.

 Why offer two kinds of types, "hardware" and "software"?

      Hardware types allow you to meet the D spec goal of
         "and interface directly with the operating system
          APIs and with hardware."

      It is true that the "hardware types" can be abused by
      people using them where "software types" would have been
      more appropriate, but such is life. Do you ban all
      scissors just because they can harm someone when misused?

*EOF*
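For what it's worth, this split maps almost one-to-one onto C99's <stdint.h>, which offers both an exact-width family and an "at least this wide" family; a sketch of the correspondence (the u/s names are Jeff's, the d_ aliases are illustrative):

    #include <stdint.h>

    /* Hardware (exact) types */
    typedef int8_t    s8;            /* signed, exactly 8 bits    */
    typedef uint16_t  u16;           /* unsigned, exactly 16 bits */

    /* Software ("X bits or larger") types */
    typedef int_least32_t  d_int;    /* signed, 32 bits or larger   */
    typedef uint_least64_t d_ulong;  /* unsigned, 64 bits or larger */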
August 27, 2001: Dan Hursh
	Wow!  That looks like a moment of clarity.  One thing: you may want to specify a specific format.  Aside from endianness, I think integral types are pretty standard in most hardware, but has the world pretty much agreed on a common floating-point format for the various bit sizes?  I know there are standards, but I don't know how many or which are in common use.  This may not be a problem.
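On the floating-point side, IEEE 754 (IEC 60559) single and double precision are the formats in common use, and C99 lets a program ask whether the implementation commits to them; a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
    #ifdef __STDC_IEC_559__   /* defined by C99 implementations conforming to Annex F */
        printf("float/double use IEC 60559 (IEEE 754) formats\n");
    #else
        printf("floating-point format is implementation-defined\n");
    #endif
        return 0;
    }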

Dan

Jeff Frohwein wrote:
> [snip]