August 27, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Jeff Frohwein

Jeff Frohwein wrote:
> ...
> Here is one possible solution:
>
> Hardware (non-abstract) Types
>
>     u1      unsigned 1 bit
>     u8      unsigned 8 bits
>     s8      signed 8 bits
>     u16     unsigned 16 bits
>     s16     signed 16 bits
>     u32     unsigned 32 bits
>     s32     signed 32 bits
>     s64     ...etc...
> ...

I like this scheme, but would propose that it be done a bit differently, in the following way:

    u{i}    unsigned, i bits, where i in [1..64] (optionally, i in [1..128] or larger)
    s{i}    signed, i bits, where i in [1..64] (optionally, i in [1..128] or larger);
            the length i includes the sign bit

These types could only be used in structures, not classes. Their secondary purpose would be to allow the packing of structures to match externally specified data. (Structures can be packed, though classes cannot.)

> f32     32 bit floating point
> f64     64 bit floating point
>
> Software (semi-abstract) Types
>
>     bit     single bit or larger
>     byte    signed 8 bits or larger
>     ubyte   unsigned 8 bits or larger
>     short   signed 16 bits or larger
>     ushort  unsigned 16 bits or larger
>     int     signed 32 bits or larger
>     uint    unsigned 32 bits or larger
>     long    signed 64 bits or larger
>     ulong   unsigned 64 bits or larger
>     float   32 bit floating point or larger
>     double  64 bit floating point or larger
>
> These are defined via typedefs on the underlying sized variables. Only types defined via typedefs are allowed in classes.
>
> Why use the short u8 / s8 types?
>
> ...
> *EOF*
September 19, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Russ Lewis

Remember, in D you can create strong typedefs (rather than the weak type aliases in C). So, if you really want an int32 type, you can typedef it, overload based on it, get strong type checking on it, etc.

Russ Lewis wrote in message <3B7D3FC5.2F9E9AD5@deming-os.org>...
> Sheldon Simms wrote:
>
>> As for the problem that (I think) you're talking about, perhaps it would be better to talk about ranges instead of the number of bits. Given that the language already offers integral types with the fixed sizes of 8, 16, 32, and 64 bits (you have read the D doc, haven't you?), I don't see the point of adding more programmer-defined sizes. But being able to specify particular ranges like in Pascal might be useful.
>
> Ack, sorry. So D does fix the sizes, unlike C. Well done, Walter, my mistake.
>
> However, I do think that int32 makes what it represents more obvious than "int" does, particularly for us old C programmers. And I still think that int1024 and intX are good things that the compiler could emulate.
September 19, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Walter

In article <9o97o5$9bt$1@digitaldaemon.com>, "Walter" <walter@digitalmars.com> wrote:

> Remember, in D you can create strong typedefs (rather than weak type aliases in C). So, if you really want an int32 type, you can typedef it, overload based on it, get strong type checking on it, etc.

The disadvantage of this is that the compiler can't make use of the knowledge that the type is always (say) a 32 bit unsigned integer. If this were a base type, then the compiler could give specific warnings, such as when you try to compare a variable of this type with 2^32; if you use a typedef, I don't think it can catch this problem.
September 20, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Ben Cohen

Ben Cohen wrote in message <9o9ort$l9t$1@digitaldaemon.com>...
> In article <9o97o5$9bt$1@digitaldaemon.com>, "Walter" <walter@digitalmars.com> wrote:
>
>> Remember, in D you can create strong typedefs (rather than weak type aliases in C). So, if you really want an int32 type, you can typedef it, overload based on it, get strong type checking on it, etc.
>
> The disadvantage of this is that the compiler can't make use of the knowledge that the type is always (say) a 32 bit unsigned integer. If this were a base type then the compiler could give specific warnings such as if you try to compare a variable of this type with 2^32; if you use a typedef I don't think it can catch this problem.

You can do things like:

    assert(my32int.size == 4);

to signal if the typedef went wrong.
October 25, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Russ Lewis

If we used ranges, the whole issue could be moot: just define your own scalar type with the range you require, and the compiler has to find a CPU-supported type that can contain it, or generate an error. But the problem is that it may pick one that's too large.

On the other hand, if you *know* a platform supports a type of a given size, it'd be nice to just be able to ask for it explicitly, so you don't have to worry about the compiler maybe using something larger (which breaks struct compatibility across platforms). If you ask for int32 and one doesn't exist, your program is not portable to that platform. So only ask for int32 if you really need exactly 32 bits. The compiler would be free to emulate it so long as it could do so precisely (using masking etc).

Sean

"Russ Lewis" <russ@deming-os.org> wrote in message news:3B7D3F0F.C3945078@deming-os.org...
> Scott Robinson wrote:
>
>> Which still comes back to the problem of portability. A very simple, but popular, example is the UNIX time scenario.
>>
>> UNIX time is the number of seconds past the UNIX epoch. It is stored in a 32-bit number.
>>
>> If we were to go with your definitions, there would be hard definitions of "int32" for time-storing variables. The trick, of course, is that if we ever compile in an environment with a different bit size, we're stuck with the older definitions. D doesn't have a preprocessor, probably because, due to its strongly typed and parsable form, you can use perl or any other text processing tool to better effect. However, because of the lack of a preprocessor, our bit management is all going to have to be done in-source.
>>
>> Sorry, I don't buy that. Controlling bit sizes to such a level seems much better tuned for embedded/micro-managing code - two things which D, according to the spec, is not designed for.
>>
>> I suppose an argument can be made for communication and external protocol support, but wasn't that addressed in the spec in reference to a structure's memory representation? (or lack of such)
>
> Actually, my first idea was to say that the bit size was only a minimum... that a compiler could substitute an int64 for an int32 if it wanted. Not sure which is the best balance.
>
> IMHO, things like those time variables should be defined with typedefs anyway. Then you can redefine the type in the header files you have installed on the new architecture.
>
> One of the great things about the strongly typed typedefs used in D is that when you define that times are to be given as "Time" rather than "int32", the user of the API is *strongly* encouraged to use your (forward-compatible) typedef rather than the underlying type.
December 19, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Sean L. Palmer

In my experience, a maximum size is rarely wished for, just a minimum size.

"Sean L. Palmer" <spalmer@iname.com> wrote in message news:9r9ujl$2ac3$1@digitaldaemon.com...
> If we used ranges the whole issue could be moot... just define your own scalar type with the range you require, and the compiler has to find a CPU-supported type that can contain it, or generate an error. But the problem is that it may pick one that's too large.
>
> On the other hand, if you *know* a platform supports a type of a given size, it'd be nice to just be able to ask for it explicitly, so you don't have to worry about the compiler maybe using something larger (breaks struct compatibility across platforms). If you ask for int32 and one doesn't exist, your program is not portable to that platform. So only ask for int32 if you really need exactly 32 bits. The compiler would be free to emulate so long as it could do so precisely (using masking etc).
>
> Sean
December 20, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Walter

Oftentimes an exact size is, however, wished for. What I'm after is a standardized way of using a type of a known size, as opposed to a type of some size at least as large.

Sure, there aren't many machines these days where a byte is not 8 bits, or a pointer isn't either 32 or 64 bits. You've defined the sizes of the first four types: byte, short, int, and long. What about long long? How long is a long long?

If you don't standardize the naming of exact-sized types, then compilers will provide them anyway, but with incompatible conventions, such as MS's __int64 vs. GNU's long long. I don't want to see that happen to D.

If there are two kinds of machines out there that both have 10 bit bytes, I want to be able to utilize those types in my D program designed to run only on those kinds of machines, without conditional compilation if possible; they both should provide an int10 type alias (and probably the machine's char and byte types would also be 10 bits).

Sean

"Walter" <walter@digitalmars.com> wrote in message news:9vqgve$1v9t$1@digitaldaemon.com...
> In my experience, a maximum size is rarely wished for, just a minimum size.
>
> "Sean L. Palmer" <spalmer@iname.com> wrote in message news:9r9ujl$2ac3$1@digitaldaemon.com...
>> If we used ranges the whole issue could be moot... just define your own scalar type with the range you require, and the compiler has to find a CPU-supported type that can contain it, or generate an error. But the problem is that it may pick one that's too large.
>>
>> On the other hand, if you *know* a platform supports a type of a given size, it'd be nice to just be able to ask for it explicitly, so you don't have to worry about the compiler maybe using something larger (breaks struct compatibility across platforms). If you ask for int32 and one doesn't exist, your program is not portable to that platform. So only ask for int32 if you really need exactly 32 bits. Compiler would be free to emulate so long as it could do so precisely (using masking etc).
>>
>> Sean
December 20, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Sean L. Palmer

"Sean L. Palmer" <spalmer@iname.com> wrote in message news:9vse28$l2k$1@digitaldaemon.com...
> Oftentimes an exact size is, however, wished for.
>
> What I'm after is a standardized way of using a type of a known size, as opposed to a type of some size at least as large.
>
> Sure, there aren't many machines these days where a byte is not 8 bits, or a pointer isn't either 32 or 64 bits. You've defined the sizes of the first 4 types, byte, short, int, and long. What about long long? How long is a long long?

There is no long long in D, AFAIK. And long is an integer of the largest size this architecture can handle. It's not fixed.

> If you don't standardize the naming of exact-sized types, then compilers will provide them anyway but with incompatible conventions, such as MS's __int64 vs. GNU's long long. I don't want to see that happen to D.

Personally, I'd also like to see 64-bit integers strictly defined in D, especially since they are used in WinAPI headers. Maybe just call it "int64" or steal it from Pascal - "comp".
December 23, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Pavel Minayev

"Pavel Minayev" <evilone@omen.ru> wrote in message news:9vsnnp$tm9$1@digitaldaemon.com...
> "Sean L. Palmer" <spalmer@iname.com> wrote in message news:9vse28$l2k$1@digitaldaemon.com...
>> Oftentimes an exact size is, however, wished for.
>>
>> What I'm after is a standardized way of using a type of a known size, as opposed to a type of some size at least as large.
>>
>> Sure, there aren't many machines these days where a byte is not 8 bits, or a pointer isn't either 32 or 64 bits. You've defined the sizes of the first 4 types, byte, short, int, and long. What about long long? How long is a long long?
>
> There is no long long in D, AFAIK. And long is an integer of the largest size this architecture can handle. It's not fixed.
>
>> If you don't standardize the naming of exact-sized types, then compilers will provide them anyway but with incompatible conventions, such as MS's __int64 vs. GNU's long long. I don't want to see that happen to D.
>
> Personally, I'd also like to see 64-bit integers strictly defined in D. Especially since they are used in WinAPI headers. Maybe just call it "int64" or steal it from Pascal - "comp".

If D is ported to a platform where longer than 64 bit ints make sense, I see no problem with defining a new basic type for it. I don't like the C usage of multiple keywords for a type; it'd probably be called "longlong". For those who want exact type sizes, perhaps a standard module can be defined with aliases like int8, int16, etc.
December 23, 2001 Re: int<bits> and arbitrary size ints
Posted in reply to Walter

"Walter" <walter@digitalmars.com> wrote in message news:a0432a$1es4$1@digitaldaemon.com...
> If D is ported to a platform where longer than 64 bit ints make sense, I see no problem with defining a new basic type for it. I don't like the C usage of multiple keywords for a type. It'd probably be called "longlong".

Hmmm... longint?

> For those who want exact type sizes, perhaps a standard module can be defined with aliases like int8, int16, etc.

Great idea!
Copyright © 1999-2021 by the D Language Foundation