Thread overview
Size of the real type
Mar 09, 2006
kinghajj
Mar 09, 2006
kinghajj
Mar 10, 2006
Walter Bright
Mar 10, 2006
Don Clugston
Mar 10, 2006
Walter Bright
March 09, 2006
This is just an FYI, but on my computer, this code:

import std.stdio;

int main(char[][] args)
{
    writefln(real.sizeof * 8);
    return 0;
}

Outputs the size of real as 96 bits, not 80.

I have an Intel Pentium 4 (Prescott) CPU.


March 09, 2006
"kinghajj" <kinghajj_member@pathlink.com> wrote in message news:duo1sh$1go1$1@digitaldaemon.com...
> This is just an FYI, but on my computer, this code:
> Outputs the size of real as 96 bits, not 80.

Odd!  I get 80, as I'd expect.

Are you using DMD or GDC?


March 09, 2006
In article <duoad7$1psv$1@digitaldaemon.com>, Jarrett Billingsley says...
>
>"kinghajj" <kinghajj_member@pathlink.com> wrote in message news:duo1sh$1go1$1@digitaldaemon.com...
>> This is just an FYI, but on my computer, this code:
>> Outputs the size of real as 96 bits, not 80.
>
>Odd!  I get 80, as I'd expect.
>
>Are you using DMD or GDC?
>
>

DMD on Linux. I'll try running it on Windows to see if that makes a difference.


March 09, 2006
It does.  As I recall, Walter has commented in the past that the size of a real on Linux differs from its size on Windows.  I believe this is for library-compatibility reasons.

-[Unknown]


> In article <duoad7$1psv$1@digitaldaemon.com>, Jarrett Billingsley says...
>> "kinghajj" <kinghajj_member@pathlink.com> wrote in message news:duo1sh$1go1$1@digitaldaemon.com...
>>> This is just an FYI, but on my computer, this code:
>>> Outputs the size of real as 96 bits, not 80.
>> Odd!  I get 80, as I'd expect.
>>
>> Are you using DMD or GDC? 
>>
>>
> 
> DMD on Linux. I'll try running it on Windows to see if that makes a difference.
> 
> 
March 09, 2006
kinghajj wrote:

> This is just an FYI, but on my computer, this code:
> 
> import std.stdio;
> 
> int main(char[][] args)
> {
>     writefln(real.sizeof * 8);
>     return 0;
> }

Side note:
Who said the size of a "real" is 80 bits? The size varies.
It's just defined as the "largest hardware implemented FP size".

I get 64, here on PowerPC :-) On a SPARC, you could get 128.

> Outputs the size of real as 96 bits, not 80.
> I have an Intel Pentium 4 (Prescott) CPU.

The difference is due to the alignment of the long double type.

On x86 Linux it is 96 bits; on x86-64 Linux it is 128 bits.
But both still use only 80 bits of data; the rest is padding.

--anders

PS.
http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
"The i386 application binary interface specifies the size to be 96 bits, so -m96bit-long-double is the default in 32 bit mode." [...] "In the x86-64 compiler, -m128bit-long-double is the default choice as its ABI specifies that long double is to be aligned on 16 byte boundary."
March 09, 2006
"Anders F Björklund" <afb@algonet.se> wrote in message news:duol0f$278j$1@digitaldaemon.com...
> Side note:
> Who said the size of a "real" is 80 bits ? The size varies.
> It's just defined as: "largest hardware implemented FP size"

Which should be 80 on x86 processors!

> The difference is due to alignment of the long double type.
>
> In x86 Linux, it is 96 bits. In x64 Linux, it is 128 bits... But they both still only use 80 bits, just add some padding.
>
> --anders
>
> PS.
> http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
> "The i386 application binary interface specifies the size to be 96 bits,
> so -m96bit-long-double is the default in 32 bit mode." [...] "In the
> x86-64 compiler, -m128bit-long-double is the default choice as its ABI
> specifies that long double is to be aligned on 16 byte boundary."

Well if the only difference is in the alignment, why isn't just the real.alignof field affected?  An x86-32 real is 80 bits, period.  Or does it have to do with, say, C function name mangling?  So a C function that takes one real in Windows would be _Name@80 but in Linux it'd be _Name@96 ?


March 10, 2006
"Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:dupgi5$g9f$2@digitaldaemon.com...
> Well if the only difference is in the alignment, why isn't just the real.alignof field affected?  An x86-32 real is 80 bits, period.  Or does it have to do with, say, C function name mangling?  So a C function that takes one real in Windows would be _Name@80 but in Linux it'd be _Name@96 ?

It's 96 bits on Linux because gcc on Linux pretends that 80-bit reals are really 96 bits long. What the alignment is is a separate question again. Name mangling does not drive this; the "Windows" calling convention will produce different names, as you point out, but that doesn't matter here.

The 96-bit convention permeates Linux, and since D must be C-ABI compatible with the host system's default C compiler, 96 bits it is on Linux.

If you're looking for the number of significant mantissa bits, etc., use the various .properties of the floating-point types.


March 10, 2006
Walter Bright wrote:
> "Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:dupgi5$g9f$2@digitaldaemon.com...
>> Well if the only difference is in the alignment, why isn't just the real.alignof field affected?  An x86-32 real is 80 bits, period.  Or does it have to do with, say, C function name mangling?  So a C function that takes one real in Windows would be _Name@80 but in Linux it'd be _Name@96 ?
> 
> It's 96 bits on Linux because gcc on Linux pretends that 80-bit reals are really 96 bits long. What the alignment is is a separate question again. Name mangling does not drive this; the "Windows" calling convention will produce different names, as you point out, but that doesn't matter here.
> 
> 96 bit convention permeates linux, and since D must be C ABI compatible with the host system's default C compiler, 96 bits it is on linux.
> 
> If you're looking for mantissa significant bits, etc., use the various .properties of float types. 

The 128 bit convention makes some kind of sense -- it means an 80-bit real is binary compatible with the proposed IEEE quad type (it just sets the last few mantissa bits to zero).
But the 96 bit case makes no sense to me at all.

pragma's DDL lets you (to some extent) mix Linux and Windows .objs. Eventually, we may need some way to deal with the different padding.
March 10, 2006
"Don Clugston" <dac@nospam.com.au> wrote in message news:durcq4$2u8o$1@digitaldaemon.com...
> But the 96 bit case makes no sense to me at all.

It doesn't matter whether it makes sense or not; we're stuck with it on Linux.

> pragma's DDL lets you (to some extent) mix Linux and Windows .objs. Eventually, we may need some way to deal with the different padding.

I think it's a pipe dream to expect to mix object files between operating systems. The 96-bit thing is far from the only difference.