April 03, 2005
"Bob W" <nospam@aol.com> wrote in message news:d2nd96$1aos$1@digitaldaemon.com...
> By the way, C does it the same way for historic
> reasons. Other languages are more user friendly
> and I am still hoping that D might evolve in this
> direction.

Actually, many languages, mathematical programs, and even C compilers have *dropped* support for 80 bit long doubles. At one point, Microsoft had even made it impossible to execute 80 bit floating point instructions on their upcoming Win64 (I made some frantic phone calls to them and apparently was the only one who ever made a case to them in favor of 80 bit long doubles; they said they'd put the support back in). Intel doesn't support 80 bit reals on any of their new vector floating point instructions. The 64 bit chips only support it in a 'legacy' manner. Java, C#, VC, JavaScript do not support 80 bit reals.

I haven't done a comprehensive survey of computer languages, but as far as I can tell D stands pretty much alone in its support for 80 bits, along with a handful of C/C++ compilers (including DMC).

Because of this shaky operating system and chip support for 80 bits, it would be a mistake to center D's floating point around 80 bits. Some systems may force a reversion to 64 bits. On the other hand, ongoing system support for 64 bit doubles is virtually guaranteed, and D generally follows C's rules with these.

(BTW, this thread is a classic example of "build it, and they will come". D is almost single handedly rescuing 80 bit floating point from oblivion, since it makes such a big deal about it and has wound up interesting a lot of people in it. Before D, as far as I could tell, nobody cared a whit about it. I think it's great that this has struck such a responsive chord.)


April 03, 2005
Walter wrote:

> I haven't done a comprehensive survey of computer languages, but as far as I
> can tell D stands pretty much alone in its support for 80 bits, along with a
> handful of C/C++ compilers (including DMC).

The thing is that the D "real" type does *not* guarantee 80 bits ?
It doesn't even say the minimum size, so one can only assume 64...

I think it would be clearer to say "80 bits minimum", and then
future CPUs/code are still free to use 128-bit extended doubles too ?

(since D allows all FP calculations to be done at a higher precision)
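
(For the curious, a little D snippet shows what "real" actually is
on the host machine; the properties are standard D, the output of
course varies per platform:)

  import std.stdio;

  void main()
  {
      // fixed per platform, known at compile time
      writefln("real.sizeof   = %s bytes", real.sizeof);
      writefln("real.mant_dig = %s mantissa bits", real.mant_dig);
      writefln("real.dig      = %s decimal digits", real.dig);
      // 64 mantissa bits => 80-bit extended, 53 => plain double
  }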


This would be simplified by padding the 80-bit floating point to
a full 16 bytes with zeros (as performance considerations suggest anyway)

And then, with both 128-bit integers and 128-bit floating point,
D would truly be equipped to face both today (64) and tomorrow...

(and with a "real" alias, it's still the "largest hardware implemented")


Just my 2 öre,
--anders
April 03, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d2od1o$25vd$1@digitaldaemon.com...
>
> "Bob W" <nospam@aol.com> wrote in message news:d2nd96$1aos$1@digitaldaemon.com...
>> By the way, C does it the same way for historic
>> reasons. Other languages are more user friendly
>> and I am still hoping that D might evolve in this
>> direction.
>
> Actually, many languages, mathematical programs, and even C compilers
> have *dropped* support for 80 bit long doubles. At one point, Microsoft
> had even made it impossible to execute 80 bit floating point
> instructions on their upcoming Win64 (I made some frantic phone calls
> to them and apparently was the only one who ever made a case to them in
> favor of 80 bit long doubles; they said they'd put the support back
> in). Intel doesn't support 80 bit reals on any of their new vector
> floating point instructions. The 64 bit chips only support it in a
> 'legacy' manner. Java, C#, VC, JavaScript do not support 80 bit reals.
>
> I haven't done a comprehensive survey of computer languages, but as far
> as I can tell D stands pretty much alone in its support for 80 bits,
> along with a handful of C/C++ compilers (including DMC).
>
> Because of this shaky operating system and chip support for 80 bits, it
> would be a mistake to center D's floating point around 80 bits. Some
> systems may force a reversion to 64 bits. On the other hand, ongoing
> system support for 64 bit doubles is virtually guaranteed, and D
> generally follows C's rules with these.
>
> (BTW, this thread is a classic example of "build it, and they will
> come". D is almost single handedly rescuing 80 bit floating point from
> oblivion, since it makes such a big deal about it and has wound up
> interesting a lot of people in it. Before D, as far as I could tell,
> nobody cared a whit about it. I think it's great that this has struck
> such a responsive chord.)
>

I am probably looking like an extended precision
advocate, but I am actually not. The double
format has been good enough for me, even for
statistical evaluation, in almost 100% of cases.
There are admittedly cases which would benefit
from having 80 bit precision available, however.

Therefore, although it would not be devastating
for me should you ever decide to drop support for
the reals, I'd still like to have them available
just in case they are needed. However, if you do
offer 80 bit types, you'll have to initialize
real variables with properly precise real values
whenever the evaluation can be completed at
compile time. Otherwise I suggest that you issue
a warning where accuracy might be impaired. It is
hard to believe that a new millennium programming
language would actually require people to write

  real r = 1.2L;   instead of   real r = 1.2;

in order not to produce an incorrect assignment.
Yes, I know what C programmers would want
to say here, I am one of them.    : )
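
(A quick way to see the damage, assuming the
unsuffixed literal really is parsed as a double
first:)

  import std.stdio;

  void main()
  {
      real a = 1.2;    // parsed as a double, then widened to real
      real b = 1.2L;   // parsed as a real from the start
      writefln("%.20f", a);   // goes wrong around the 17th digit
      writefln("%.20f", b);   // accurate to real precision
  }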

For someone not familiar with C, the number
1.2 is not a real and not a double either,
especially if he is purely mathematically
oriented. It is a decimal floating point value.
He takes it for granted that 1.2 is fine whether
assigned to a float or to a double. But he will
refuse to understand why he has to suffix the
literal for it to become an accurate real value.

Of course you could try to explain to him that
the usual +/- 1/2 LSB error for most fractional
(decimal) values converted to binary would grow
to corrupt roughly the last 11 bits of the real's
64-bit mantissa (a double carries only 53) if he
ever forgot that important "L" suffix. But would
he really want to know?



April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2og5l$27nh$3@digitaldaemon.com...
> Walter wrote:
>
> > I haven't done a comprehensive survey of computer languages, but as
> > far as I can tell D stands pretty much alone in its support for 80
> > bits, along with a handful of C/C++ compilers (including DMC).
> The thing is that the D "real" type does *not* guarantee 80 bits ? It doesn't even say the minimum size, so one can only assume 64...

Yes, it's 64. Guaranteeing 80 bits would require writing an 80 bit software emulator. I've used such emulators before, and they are really, really slow. I don't think it's practical for D floating point to be 100x slower on some machines.

> I think it would be clearer to say "80 bits minimum", and then future CPUs/code are still free to use 128-bit extended doubles too ? (since D allows all FP calculations to be done at a higher precision)

What it's supposed to be is the max precision supported by the hardware the D program is running on.

> This would be simplified by padding the 80-bit floating point to
> a full 16 bytes with zeros (as performance considerations suggest anyway)

C compilers that support 80 bit long doubles will align them on 2 byte boundaries. To conform to the C ABI, D must follow suit.
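
Easy enough to check what a given compiler actually does, e.g.
(assuming the .alignof property is implemented):

  import std.stdio;

  void main()
  {
      writefln("real.sizeof  = %s", real.sizeof);   // e.g. 10 bytes
      writefln("real.alignof = %s", real.alignof);  // 2, if it matches DMC's C ABI
  }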

> And then, with both 128-bit integers and 128-bit floating point, D would truly be equipped to face both today (64) and tomorrow...
>
> (and with a "real" alias, it's still the "largest hardware implemented")
>
>
> Just my 2 öre,
> --anders


April 03, 2005
Walter wrote:

>>The thing is that the D "real" type does *not* guarantee 80 bits ?
>>It doesn't even say the minimum size, so one can only assume 64...
> 
> Yes, it's 64. Guaranteeing 80 bits would require writing an 80 bit software
> emulator. I've used such emulators before, and they are really, really slow.
> I don't think it's practical for D floating point to be 100x slower on some
> machines.

Me neither. Emulating 64-bit integers with two 32-bit registers is OK,
since that is a whole lot easier. (could even be done for 128-bit ints?)
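
(Roughly what a compiler does under the hood; a minimal sketch in D,
with a made-up SoftLong type:)

  import std.stdio;

  // two 32-bit halves standing in for one 64-bit unsigned integer
  struct SoftLong
  {
      uint lo, hi;
  }

  SoftLong add(SoftLong a, SoftLong b)
  {
      SoftLong r;
      r.lo = a.lo + b.lo;
      // if the low word wrapped around, carry one into the high word
      uint carry = (r.lo < a.lo) ? 1 : 0;
      r.hi = a.hi + b.hi + carry;
      return r;
  }

  void main()
  {
      SoftLong x = { 0xFFFFFFFF, 0 };   // 2^32 - 1
      SoftLong y = { 1, 0 };
      SoftLong z = add(x, y);
      writefln("lo = %s, hi = %s", z.lo, z.hi);   // lo = 0, hi = 1
  }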

But emulating 80-bit floating point ? Eww. Emulating a 128-bit double
is better, but the current method is cheating a lot on the IEEE 754 spec...


No, I meant that extended precision should be *unavailable* on some CPU.
But maybe it's better to have it work in D, like long double does in C ?

(i.e. it falls back to using regular doubles, possibly with warnings)

If so, just tell me it's better to have a flexible width language type
than to have some types be unavailable on certain FPU computer hardware?

Since that was the whole idea... (have "extended" map to 80-bit FP type)

> What it's supposed to be is the max precision supported by the hardware the
> D program is running on.

OK, for PPC and PPC64 that is definitely 64 bits. Not sure about SPARC ?
Think I saw that Cray (or so) has 128-bit FP, but haven't got one... :-)

It seems the likely real-life values would be 64, 80, 96 and 128 bits
(PPC/PPC64, X86/X86_64, 68K, and whatever super-computer it was above)

It's possible that a future 128-bit CPU would have a 128-bit FPU too...
But who knows ? (I haven't even seen the slightest hint of such a beast)

>>This would be simplified by padding the 80-bit floating point to
>>a full 16 bytes with zeros (as performance considerations suggest anyway)
> 
> C compilers that support 80 bit long doubles will align them on 2 byte
> boundaries. To conform to the C ABI, D must follow suit.

I thought that was an ABI option, how to align "long double" types ?

It was my understanding that it was aligned to 96 bits on X86,
and to 128 bits on X86_64. But I might very well be wrong there...
(it's just the impression that I got from reading the GCC manual)

i.e. it still uses the regular 80 bit floating point registers,
but pads the values out with zeroes when storing them in memory.

--anders
April 04, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2pdbk$30dj$1@digitaldaemon.com...
> If so, just tell me it's better to have a flexible width language type than to have some types be unavailable on certain FPU computer hardware?

Yes, I believe that is better. Every once in a while, an app *does* care, but they're screwed anyway if the hardware won't support it.


> > What it's supposed to be is the max precision supported by the
> > hardware the D program is running on.
>
> OK, for PPC and PPC64 that is definitely 64 bits. Not sure about SPARC ?
> Think I saw that Cray (or so) has 128-bit FP, but haven't got one... :-)
>
> It seems the likely real-life values would be 64, 80, 96 and 128 bits (PPC/PPC64, X86/X86_64, 68K, and whatever super-computer it was above)
>
> It's possible that a future 128-bit CPU would have a 128-bit FPU too... But who knows ? (I haven't even seen the slightest hint of such a beast)

When I first looked at the AMD64 documentation, I was thrilled to see "m128" for a floating point type. I was crushed when I found it meant "two 64 bit doubles". I'd love to see a big honker 128 bit floating point type in hardware.

> >>This would be simplified by padding the 80-bit floating point to
> >>a full 16 bytes with zeros (as performance considerations suggest anyway)
> >
> > C compilers that support 80 bit long doubles will align them on 2 byte boundaries. To conform to the C ABI, D must follow suit.
> I thought that was an ABI option, how to align "long double" types ?

The only option is to align it to what the corresponding C compiler does.

> It was my understanding that it was aligned to 96 bits on X86,

That's not a power of 2, so won't work as alignment.

> and to 128 bits on X86_64. But I might very well be wrong there... (it's just the impression that I got from reading the GCC manual)
>
> i.e. it still uses the regular 80 bit floating point registers, but pads the values out with zeroes when storing them in memory.
>
> --anders


April 04, 2005
Walter wrote:

>>If so, just tell me it's better to have a flexible width language type
>>than to have some types be unavailable on certain FPU computer hardware?
> 
> Yes, I believe that is better. Every once in a while, an app *does* care,
> but they're screwed anyway if the hardware won't support it.

I just fail to see how real -> double/extended is any different from the int -> short/long that C has gotten so much of a beating for already ?

The suggestion was to have fixed precision types:
- float => IEEE 754 Single precision (32-bit)
- double => IEEE 754 Double precision (64-bit)
- extended => IEEE 754 Double Extended precision (80-bit)
- quadruple => "IEEE 754" Quadruple precision (128-bit)

And then have "real" be an alias to the largest hardware-supported type.
It wouldn't break code more than if it was a variadic size type format ?
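
Something like this, sketched with today's types ('extended' below
is just an alias for the existing x86 "real", standing in for the
proposed fixed 80-bit type, and 'largest' plays the role of "real"):

  import std.stdio;

  version (X86)
  {
      alias real extended;      // x86 hardware has an 80-bit type
      alias extended largest;   // "real" = largest hardware-implemented
  }
  else
  {
      alias double largest;     // e.g. PPC: nothing beyond 64 bits
  }

  void main()
  {
      writefln("largest.sizeof = %s", largest.sizeof);
  }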

> When I first looked at the AMD64 documentation, I was thrilled to see "m128"
> for a floating point type. I was crushed when I found it meant "two 64 bit
> doubles". I'd love to see a big honker 128 bit floating point type in
> hardware.

I had a similar experience, with PPC64 and GCC, a while back...
(-mlong-double-128, referring to the IBM AIX style double-double)

Anyway, double-double has no chance of being full IEEE 754 spec.

>>It was my understanding that it was aligned to 96 bits on X86,
> 
> That's not a power of 2, so won't work as alignment.

You lost me ? (anyway, I suggested 128 - which *is* a power of two)

But it was my understanding that on the X86/X86_64 family of processors
Windows used to use 10-byte long doubles (and then dropped extended?),
and that Linux i386(-i686) uses 12-byte long doubles and Linux X86_64
now uses 16-byte long doubles (via the GCC option -m128bit-long-double)

And that was *not* a suggestion, but how it actually worked... Now ?

--anders
April 04, 2005
Anders F Björklund wrote:
>>> It was my understanding that it was aligned to 96 bits on X86,
>>
>> That's not a power of 2, so won't work as alignment.
> 
> You lost me ? (anyway, I suggested 128 - which *is* a power of two)

Size can be anything divisible by 8 bits, i.e. any number of bytes.

Alignment has to be a power of two, and is about _where_ in memory the thing can or cannot be stored.

Align 4, for example, means that the variable can only be stored at a memory address which, taken as a number, is divisible by 4.

Only something aligned 1 can be stored in any address.
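
For example, in D the padding that alignment forces is easy to observe:

  import std.stdio;

  struct S
  {
      byte b;   // 1 byte, could live at any address
      int  i;   // align 4: the compiler pads 3 bytes after 'b'
  }

  void main()
  {
      writefln("i.offsetof = %s", S.i.offsetof);   // 4, not 1
      writefln("S.sizeof   = %s", S.sizeof);       // 8, not 5
  }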
April 04, 2005
Georg Wrede wrote:

>>>> It was my understanding that it was aligned to 96 bits on X86,
>>>
>>> That's not a power of 2, so won't work as alignment.
>>
>> You lost me ? (anyway, I suggested 128 - which *is* a power of two)
> 
> Size can be anything divisible by 8 bits, i.e. any number of bytes.
> 
> Alignment has to be a power of two, and is about _where_ in memory the thing can or cannot be stored.
> 
> Align 4, for example, means that the variable can only be stored at a memory address which, taken as a number, is divisible by 4.
> 
> Only something aligned 1 can be stored in any address.

OK, seems like my sloppy syntax is hurting me once again... :-P


I meant that the *size* of "long double" on GCC X86 is 96 bits,
so that it can always be *aligned* to 32 bits (which raw 80 bits couldn't?)

Anyway, aligning to 128 bits gives better Pentium performance ?
(or at least, that's what I heard... Only have doubles on PPC)


Thanks for clearing it up, in my head 96 bits counted as "a power of two".
(since a size that is a multiple of a power of two keeps the alignment too)

--anders
April 04, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2qq5u$1aau$1@digitaldaemon.com...
> Walter wrote:
>
>>>If so, just tell me it's better to have a flexible width language type than to have some types be unavailable on certain FPU computer hardware?
>>
>> Yes, I believe that is better. Every once in a while, an app *does* care, but they're screwed anyway if the hardware won't support it.
>
> I just fail to see how real -> double/extended is any different from the int -> short/long that C has gotten so much of a beating for already ?
>
> The suggestion was to have fixed precision types:
> - float => IEEE 754 Single precision (32-bit)
> - double => IEEE 754 Double precision (64-bit)
> - extended => IEEE 754 Double Extended precision (80-bit)
> - quadruple => "IEEE 754" Quadruple precision (128-bit)
>
> And then have "real" be an alias to the largest hardware-supported type. It wouldn't break code more than if it was a variadic size type format ?

What happens when someone declares a variable as quadruple on a platform without hardware support? Does D plug in a software quadruple implementation? That isn't the right thing to do. That's been my whole point in bringing up Java's experience: they tried to foist too much rigor on their floating point model in the name of portability and had to redo it.