January 29, 2012
"Walter Bright" <newshound2@digitalmars.com> wrote in message news:jg2im4$30qi$1@digitalmars.com...
> There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.
>

No rush.  The backend is still a mystery to me.


January 29, 2012
On 1/28/2012 9:20 PM, Daniel Murphy wrote:
> The backend is still a mystery

Better call Nancy Drew!

January 29, 2012
On 01/29/2012 04:56 AM, Jonathan M Davis wrote:
> On Sunday, January 29, 2012 14:38:41 Daniel Murphy wrote:
>> "bearophile"<bearophileHUGS@lycos.com>  wrote in message
>> news:jg2cku$2ljk$1@digitalmars.com...
>>
>>> Integer numbers have some properties that compilers use with built-in
>>> fixed-size numbers to optimize code. I think such optimizations are not
>>> performed on library-defined numbers like a Fixed!128 or BigInt. This
>>> means there are advantages of having cent/ucent/BigInt as built-ins.
>>
>> Yes, but the advantages in implementation ease and portability currently
>> favour a library solution.
>> Do the gcc or llvm backends support 128 bit integers?
>
> gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know
> about llvm, but it's supposed to be gcc-compatible, so I assume that it's the
> same.
>
> - Jonathan M Davis

long long is 64-bit on 64-bit linux.
January 29, 2012
On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
> long long is 64-bit on 64-bit linux.

Are you sure? I'm _certain_ that we looked at this at work when we were sorting out issues with moving some of our products to 64-bit and found that long long was 128 bits. Checking...

Well, you're right. Now I'm seriously confused. Hmmm...

long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which defined integers according to relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining.

In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems.
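
A quick sketch for anyone who wants to verify this locally (assumes GCC 4.6 or later on x86-64 Linux; __SIZEOF_INT128__ is the macro GCC predefines when __int128 is available):

/* sizes.c: print the sizes under discussion.
   Build: gcc -std=gnu99 sizes.c && ./a.out */
#include <stdio.h>

int main(void)
{
    printf("long long:   %zu bytes\n", sizeof(long long));   /* 8 on LP64 */
    printf("long double: %zu bytes\n", sizeof(long double)); /* 16, partly padding */
#ifdef __SIZEOF_INT128__
    printf("__int128:    %zu bytes\n", sizeof(__int128));    /* 16 */
#else
    printf("no __int128 on this target\n");
#endif
    return 0;
}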

- Jonathan M Davis
January 29, 2012
On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote: [...]
> This is one of the many reasons why I think that any language which defined integers according to relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake.

IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type guaranteeing 32 bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits).

There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".
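
In C99 terms, a minimal sketch of those two requests side by side (assumes <stdint.h> and <inttypes.h> are available):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t       exact = 1 << 30; /* exactly 32 bits, or the typedef is absent */
    int_fast32_t  fast  = exact;   /* at least 32 bits, fastest on this machine */
    int_least32_t least = exact;   /* at least 32 bits, smallest such type */

    printf("%" PRId32 " %" PRIdFAST32 " %" PRIdLEAST32 "\n",
           exact, fast, least);
    return 0;
}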


> C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining.
[...]

Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.
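
To make the mess concrete, a sketch of the two usual escape hatches (the cast is the pre-C99 workaround; %zu is the C99 way, when the compiler has it):

#include <stddef.h>
#include <stdio.h>

int main(void)
{
    size_t n = 42;

    /* C99: the z length modifier exists precisely for size_t. */
    printf("%zu\n", n);

    /* Pre-C99 / C++ fallback: cast and hope unsigned long is wide
       enough; guess wrong and you get garbage or a warning. */
    printf("%lu\n", (unsigned long)n);
    return 0;
}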


T

-- 
MSDOS = MicroSoft's Denial Of Service
January 30, 2012
On Sunday, January 29, 2012 15:31:57 H. S. Teoh wrote:
> On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote: [...]
> 
> > This is one of the many reasons why I think that any language which defined integers according to relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake.
> 
> IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type guaranteeing 32 bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits).
> 
> There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".

In an ideal language, I'd probably go with an integer type with an unspecified number of bits which is used when you don't care about the size of the integer. It'll be whatever is fastest for the particular architecture that it's compiled on, and it'll probably be guaranteed to be _at least_ a particular size (probably 32 bits at this point) so that you don't have to worry about average-sized numbers not fitting. Also, you should probably have a type like size_t that deals with the differing sizes of address spaces. But _all_ other types have a fixed size. So, you don't get this nonsense where int is one thing on one machine, long is another, and long long is something else, etc. You use them when you need a variable to be a particular size or when you need a guarantee that a larger value will fit in it. The way that C did it with _everything_ varying is horrific IMHO.

The route of languages such as D, Java, and C# where pretty much _all_ types are fixed in size is _far_ better IMHO. So, if the choice is between the C/C++ route or the D/Java/C# route, I'm with D/Java/C# all the way. But there are definitely arguments for having an integral type which is the most efficient for whatever machine that it's compiled on, and D doesn't really have that. You'd probably have to use something like c_long if you really wanted that.
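
For comparison, a tiny C program makes that variance visible (the expected numbers below assume the usual ILP32 and LP64 data models):

#include <stdio.h>

int main(void)
{
    printf("short=%zu int=%zu long=%zu long long=%zu size_t=%zu\n",
           sizeof(short), sizeof(int), sizeof(long),
           sizeof(long long), sizeof(size_t));
    /* 32-bit Linux (ILP32):   short=2 int=4 long=4 long long=8 size_t=4 */
    /* 64-bit Linux (LP64):    short=2 int=4 long=8 long long=8 size_t=8 */
    /* 64-bit Windows (LLP64): long stays 4 bytes while size_t grows to 8 */
    return 0;
}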

- Jonathan M Davis
January 30, 2012
On 29-01-2012 23:26, Jonathan M Davis wrote:
> On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
>> long long is 64-bit on 64-bit linux.
>
> Are you sure? I'm _certain_ that we looked at this at work when we were
> sorting issue with moving some of our products to 64-bit and found that long
> long was 128 bits. Checking...
>
> Well, you're right. Now I'm seriously confused. Hmmm...
>
> long double is 128-bit. Maybe that's what threw me off. Well, thanks for
> correcting me in either case. I thought that I'd had all of that figured out.
> This is one of the many reasons why I think that any language which defined
> integers according to relative size instead of their _absolute_ size
> (with the possible exception of some types which vary based on the machine so
> that you're using the most efficient integer for that machine or are able to
> index the full memory space) made a huge mistake. C's type scheme is nothing
> but trouble as far as integral sizes go IMHO. printf in particular is one of
> the more annoying things to worry about with cross-platform development thanks
> to varying integer size. Bleh. Enough of my whining.
>
> In any case, gcc _does_ define __int128 (
> http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the
> question goes, gcc _does_ have 128 bit integers, even if long long isn't 128
> bits on 64-bit systems.
>
> - Jonathan M Davis

Well, with LLVM and GCC supporting it, there shouldn't be any problems with implementing it today, I guess.

--
- Alex
January 30, 2012
On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
> long double is 128-bit.

Sort of. It's 80 bits of useful data with 48 bits of unused padding.
January 30, 2012
On 1/29/2012 3:31 PM, H. S. Teoh wrote:
> Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu?
> I think either gcc or C99 actually has a dedicated printf format for
> size_t, except that C++ doesn't include parts of C99, so you end up with
> format string #ifdef nightmare no matter what you do. I'm so glad that
> %s takes care of it all in D. Yet another thing D has done right.

size_t does have a C99 Standard official format, %zu. The trouble is,

1. many compilers *still* don't implement it.

2. that doesn't do you any good for any other typedefs that change size.

printf is the single biggest nuisance in porting code between 32 and 64 bits.

January 30, 2012
On Sunday, January 29, 2012 17:57:39 Walter Bright wrote:
> On 1/29/2012 3:31 PM, H. S. Teoh wrote:
> > Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.
> 
> size_t does have a C99 Standard official format, %zu. The trouble is,
> 
> 1. many compilers *still* don't implement it.
> 
> 2. that doesn't do you any good for any other typedefs that change size.
> 
> printf is the single biggest nuisance in porting code between 32 and 64 bits.

It's even worse with code that you're trying to keep cross-platform between 32-bit and 64-bit. Microsoft added I32 and I64, which helps, but then you still need to add a wrapper to printf for Posix to handle them unless you want to ifdef all of your printf calls. About the only positive thing that I can say about that whole mess is that it taught me that string literals are unaffected by macros in C/C++. The fact that I can just do %s with writefln in D and not worry about it is so fantastic it's not even funny.
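
For reference, the C99 <inttypes.h> route sidesteps the ifdefs by keeping the macro outside the quotes and letting adjacent string literals concatenate (a sketch; MSVC of this era doesn't ship <inttypes.h>, hence the I64 business):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t x = 1234567890123LL;

    /* PRId64 expands to "ld" or "lld" depending on the platform, and
       the compiler concatenates the adjacent string literals.  Macros
       never expand *inside* a string literal, which is why something
       like "%PRId64" within the quotes could never work. */
    printf("x = %" PRId64 "\n", x);
    return 0;
}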

- Jonathan M Davis