March 07, 2007
John Reimer wrote:
> On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:
> 
>> GDC now supports 64-bit targets! A new x86_64 Linux binary is
>> available and the MacOS X binary supports x86_64 and ppc64.
[snip]
> 
> I just realized that gdc still hasn't arrived at 1.0 yet even though the
> stated criteria for a gdc 1.0 was 64-bit support. :)

It was *a* stated criterion, not *the* stated criterion :P.
In http://www.digitalmars.com/webnews/newsgroups.php?art_group=D.gnu&article_id=2324 David stated:
---
I still want 64-bit and workable
cross-compilation for a 1.00 release.
---

So I guess the next question would be "What's the status of cross-compilation?" :).

> I guess gdc will remain pre-1.0 for awhile to see if any 64-bit bugs
> surface?

Always a good idea. No need to rush that sort of stuff.
March 07, 2007
Sean Kelly wrote:

>> The double+double type has caused me no end of trouble, but I think it is important to maintain interoperability with C.  If I make the D 'real' implementation IEEE double, there would be no way to interact with C code that uses 'long double'.  I could add another floating point type for this purpose, but that would diverge from the D spec more than what I have now.
> 
> Yeah that doesn't sound like a very attractive option.  Some of the later replies in the Darwin thread mention a compiler switch:
> 
> http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html
> 
> Is that a possibility?  Or did that switch not make it into an actual release?

There are two switches: -mlong-double-64 and -mlong-double-128,
just that the second one ("double-double") is now the default...

So if you changed the meaning of "long double" back to the old one
(i.e. same as "double"), it wouldn't be compatible with C/C++ ABI ?


This is similar to the -m96bit-long-double and -m128bit-long-double
for Intel, but those just change the padding (not the 80-bit format)

But on the X86_64 architecture, a "long double" is now padded to 16
bytes instead of the previous 12 bytes (the actual data is 10 bytes)
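
To make that visible, here is a small check (a sketch only; the sizes in the
comments are what I would expect from the GCC defaults, so treat them as
assumptions rather than facts):

import std.stdio;

void main()
{
    // x86 typically gives 12 (or 16 with -m128bit-long-double),
    // x86_64 gives 16, and the actual x87 data is 10 bytes either way.
    // PPC with the double-double default also gives 16.
    writefln("real.sizeof   = ", real.sizeof);
    writefln("real.mant_dig = ", real.mant_dig); // 64 for x87, 106 for double-double
}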


These were all known problems with adding "real" as a built-in, though.
In all the D specs I've seen, it's pretty much #defined to long double.

Such as http://www.digitalmars.com/d/htod.html
http://www.digitalmars.com/d/interfaceToC.html

Might as well keep the real <-> long double one-to-one mapping, and
recommend *not* using real/ireal/creal types for any portable code ?
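
For what it's worth, the interop case then stays trivial: you just declare
the C prototype with real, assuming the real <-> long double mapping holds
on the target (C99's cosl is only used as an example here):

// C prototype: long double cosl(long double);
extern (C) real cosl(real x);

void main()
{
    real r = cosl(0.5L);
    // If real did not match the C long double ABI on this target,
    // the argument and return value would use the wrong representation.
}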

--anders
March 07, 2007
David Friedman wrote:
> GDC now supports 64-bit targets! A new x86_64 Linux binary is
> available and the MacOS X binary supports x86_64 and ppc64.
> 
> http://sourceforge.net/project/showfiles.php?group_id=154306
> 
> Changes:
>   * Added support for 64-bit targets
>   * Added multilib support
>   * Updated to DMD 1.007
>   * Fixed Bugzilla 984, 1013

While everyone is talking about 64-bit support, I haven't seen anyone make any mention of multilib support. So I thought I'd ask: what does that mean?

Great work, by the way.
March 07, 2007
Anders F Björklund wrote:
> Sean Kelly wrote:
> 
>>> The double+double type has caused me no end of trouble, but I think it is important to maintain interoperability with C.  If I make the D 'real' implementation IEEE double, there would be no way to interact with C code that uses 'long double'.  I could add another floating point type for this purpose, but that would diverge from the D spec more than what I have now.
>>
>> Yeah that doesn't sound like a very attractive option.  Some of the later replies in the Darwin thread mention a compiler switch:
>>
>> http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html
>>
>> Is that a possibility?  Or did that switch not make it into an actual release?
> 
> There are two switches: -mlong-double-64 and -mlong-double-128,
> just that the second one ("double-double") is now the default...
> 
> So if you changed the meaning of "long double" back to the old one
> (i.e. same as "double"), it wouldn't be compatible with C/C++ ABI ?
> 
> 
> This is similar to the -m96bit-long-double and -m128bit-long-double
> for Intel, but those just change the padding (not the 80-bit format)
> 
> But on the X86_64 architecture, a "long double" is now padded to 16
> bytes instead of the previous 12 bytes (the actual data is 10 bytes)
> 
> 
> These were all known problems with adding "real" as a built-in, though.
> In all the D specs I've seen, it's pretty much #defined to long double.
> 
> Such as http://www.digitalmars.com/d/htod.html
> http://www.digitalmars.com/d/interfaceToC.html
> 
> Might as well keep the real <-> long double one-to-one mapping, and
> recommend *not* using real/ireal/creal types for any portable code ?

No, that does not work. double is *not* portable!
I'll say it again, because it's such a widespread myth: **double is not portable**. Only about 20% of computers world-wide have native support for calculations at 64-bit precision! More than 90% have native support for 80-bit precision.
(The most common with 64-bit precision are PPC and Pentium4. Earlier Intel/AMD CPUs do not support it).

Suppose you have the code

double a;

a = expr1 + expr2;

where expr1 and expr2 are expressions.

Then you want to split this expression in two:
b = expr1;
a = b + expr2;

Q. What type should 'b' be, so that the value of 'a' is unchanged?
A. For x87, it should be an 80-bit fp number. For PPC, it should be a 64-bit fp number. Using 'double' on x87 for intermediate results causes roundoff to occur twice. That's what 'real' is for -- it prevents weird things happening behind your back.
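
A concrete (if contrived) way to see it on x87 hardware -- the constants are
only illustrative, and constant folding or optimization can hide the effect:

import std.stdio;

void main()
{
    real one  = 1.0L;
    real tiny = 0x1p-60L;  // 1 + tiny is exact in real's 64-bit mantissa, not in double's 53 bits

    // Kept at real precision throughout: the result is 2^-60.
    double a1 = (one + tiny) - one;

    // Split through a double temporary: 1 + 2^-60 rounds to 1.0 when
    // stored in b, so the tiny part is lost and a2 comes out as 0.0.
    double b  = one + tiny;
    double a2 = b - one;

    writefln("a1 = ", a1, "   a2 = ", a2);
}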

There is no choice -- intermediate calculations are done at 'real' precision, and the precision of 'real' is not constant across platforms.
In adding 'real' to D, Walter hasn't just provided the possibility to use 80-bit floating point numbers -- that's actually a minor issue. 'real' reflects the underlying reality of the hardware.




March 07, 2007
David Friedman wrote:
> Sean Kelly wrote:
>> Anders F Björklund wrote:
>>
>>>>> GDC now supports 64-bit targets! A new x86_64 Linux binary is
>>>>> available and the MacOS X binary supports x86_64 and ppc64.
>>>>
>>>>
>>>> Excellent news! I'll try it on ppc64 Linux too (Fedora Core)
>>>
>>>
>>> Except for some strange (temporary?) build error with soft-float,
>>> it built just fine for powerpc64-unknown-linux-gnu (with FC5/PPC*)
>>
>>
>> That reminds me.  Is it really a good idea to map the GCC/PPC "long double" to "real" in D?  I know this has come up before:
>>
>> http://www.digitalmars.com/d/archives/digitalmars/D/20790.html
>>
>> and the data type seems like an aberration.  Here is some more info:
>>
>> http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00499.html
>>
>> And from the ELF ABI:
>>
>>     This "Extended precision" differs from the IEEE 754 Standard
>>     in the following ways:
>>
>>     * The software support is restricted to round-to-nearest
>>       mode. Programs that use extended precision must ensure
>>       that this rounding mode is in effect when
>>       extended-precision calculations are performed.
>>     * Does not fully support the IEEE special numbers NaN and
>>       INF. These values are encoded in the high-order double
>>       value only. The low-order value is not significant.
>>     * Does not support the IEEE status flags for overflow,
>>       underflow, and other conditions. These flags have no
>>       meaning in this format.
>>
>> I can't claim to have the maths background of some folks here, but this suggests to me that this 128-bit representation isn't truly IEEE-754 compliant and therefore probably shouldn't be a default data type in D?
>>
>>
>> Sean
> 
> The double+double type has caused me no end of trouble, but I think it is important to maintain interoperability with C.

Agreed, but we need to be able to do it without wrecking interoperability with D!

>  If I make the D 'real' implementation IEEE double, there would be no way to interact with C code that uses 'long double'.  I could add another floating point type for this purpose, but that would diverge from the D spec more than what I have now.

I disagree -- superficially, a new type looks like a bigger divergence, but when it actually comes to writing FP code, it's much less of a divergence, because it wouldn't mess with the semantics of existing floating point types.
In fact, since all CPUs are capable of using double+double, I would like to see it become a regular part of the language at some point -- it's a portable data type.
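
For reference, "double+double" just means storing a value as the unevaluated
sum of two doubles. A rough sketch of the idea (illustrative only, not GDC's
actual implementation -- and the error-free addition below assumes the
compiler really evaluates it in plain IEEE double, without extended
intermediates):

import std.stdio;

struct DoubleDouble
{
    double hi;   // leading component
    double lo;   // trailing component, at most half an ulp of hi
}

// Knuth's two-sum: represents a + b exactly as a hi/lo pair.
DoubleDouble twoSum(double a, double b)
{
    DoubleDouble r;
    r.hi = a + b;
    double bTail = r.hi - a;
    r.lo = (a - (r.hi - bTail)) + (b - bTail);
    return r;
}

void main()
{
    DoubleDouble s = twoSum(1.0, 0x1p-60);
    writefln("hi = ", s.hi, "  lo = ", s.lo);  // lo keeps the part a lone double would drop
}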

March 07, 2007
Don Clugston wrote:

>> Might as well keep the real <-> long double one-to-one mapping, and
>> recommend *not* using real/ireal/creal types for any portable code ?
> 
> No, that does not work. double is *not* portable!
> I'll say it again, because it's such a widespread myth: **double is not portable**. Only about 20% of computers world-wide have native support for calculations at 64-bit precision! More than 90% have native support for 80-bit precision.
> (The most common with 64-bit precision are PPC and Pentium4. Earlier Intel/AMD CPUs do not support it).

The actual suggestion made was to make "real" into an *alias* instead.

That is, you would have one type "extended" that would be 80-bit
(and not available* on PowerPC/SPARC except with software emulation),
one type "double" that would be 64-bit, and one type "float" at 32-bit...

Then "quad" could be reserved as a future keyword for IEEE 128-bit,
just as "cent" is reserved for 128-bit integers. (it's not important)
Using extended precision floats on Intel is not a bad thing at all.

I think we agree that using double on X86 (or long double on others)
isn't optimal, because of the round-off (or even missing exceptions).
So you probably will need different types on different architectures.

But as it is now, "real" in D is the same as "long double" in C/C++.
So you would have to make a new alias for the D floating point type,
and then alias it over to real on X86 and to double on the others ?
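
Something along these lines, I suppose (the alias name and the version
identifiers are just placeholders for the sake of the example):

// pick "the largest hardware implemented floating point type" per target
version (X86)
    alias real floatmax;      // 80-bit x87 extended
else version (X86_64)
    alias real floatmax;      // also 80-bit x87 extended
else
    alias double floatmax;    // PPC/SPARC: 64-bit is the widest in hardware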

Or perhaps use "float" instead, for vectorization opportunities ? :-)

--anders

* not available, or only available as double-double -- it doesn't matter much:
  software-emulated or not fully IEEE, either way it doesn't qualify as "real"
  (i.e. the "largest hardware implemented floating point size")
March 07, 2007
Anders F Björklund wrote:
> Don Clugston wrote:
> 
>>> Might as well keep the real <-> long double one-to-one mapping, and
>>> recommend *not* using real/ireal/creal types for any portable code ?
>>
>> No, that does not work. double is *not* portable!
>> I'll say it again, because it's such a widespread myth: **double is not portable**. Only about 20% of computers world-wide have native support for calculations at 64-bit precision! More than 90% have native support for 80-bit precision.
>> (The most common with 64-bit precision are PPC and Pentium4. Earlier Intel/AMD CPUs do not support it).
> 
> The actual suggestion made was to make "real" into an *alias* instead.

OK. In many ways that would be better; in reality, when writing a math library, you always have to know what precision you're using.

> That is, you would have one type "extended" that would be 80-bit
> (and not available* on PowerPC/SPARC except with software emulation),
> one type "double" that would be 64-bit, and one type "float" at 32-bit...
> 
> Then "quad" could be reserved as a future keyword for IEEE 128-bit,
> just as "cent" is reserved for 128-bit integers. (it's not important)
> Using extended precision floats on Intel is not a bad thing at all.
> 
> I think we agree that using double on X86 (or long double on others)
> isn't optimal, because of the round-off (or even missing exceptions).
> So you probably will need different types on different architectures.
> 
> But as it is now, "real" in D is the same as "long double" in C/C++.
>
> So you would have to make a new alias for the D floating point type,
> and then alias it over to real on X86 and to double on the others ?

I think that could work. Although for the others, it might need to be a typedef rather than an alias, so that you can overload real + double without causing compilation problems? (I'm not sure about this.) And you want 'real' to appear in error messages.
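
Roughly the difference in question, if I understand the concern right
(current D semantics, with illustrative names -- not a worked-out proposal):

alias   double realAlias;    // just another name for double
typedef double realTypedef;  // a distinct type derived from double

void f(double x)      {}
void f(realTypedef x) {}     // fine: realTypedef is its own type
// void f(realAlias x) {}    // error: identical signature to f(double)

void main()
{
    f(1.0);                   // calls f(double)
    f(cast(realTypedef) 1.0); // calls the typedef overload
}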

March 07, 2007
Frits van Bommel wrote:
> David Friedman wrote:
>> GDC now supports 64-bit targets! A new x86_64 Linux binary is
>> available and the MacOS X binary supports x86_64 and ppc64.
>>
>> http://sourceforge.net/project/showfiles.php?group_id=154306
>>
>> Changes:
>>   * Added support for 64-bit targets
>>   * Added multilib support
>>   * Updated to DMD 1.007
>>   * Fixed Bugzilla 984, 1013
> 
> While everyone is talking about 64-bit support, I haven't seen anyone make any mention of multilib support. So I thought I'd ask: what does that mean?
> 
> Great work, by the way.

Multilib refers to multiple architecture variants in a single GCC deployment. Often this is 32/64-bit, but it would already have been an issue before 0.23 for targets like ARM, which have both ARM and Thumb code generation.
March 07, 2007
Anders F Björklund wrote:
> Sean Kelly wrote:
> 
>>> The double+double type has caused me no end of trouble, but I think it is important to maintain interoperability with C.  If I make the D 'real' implementation IEEE double, there would be no way to interact with C code that uses 'long double'.  I could add another floating point type for this purpose, but that would diverge from the D spec more than what I have now.
>>
>> Yeah that doesn't sound like a very attractive option.  Some of the later replies in the Darwin thread mention a compiler switch:
>>
>> http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html
>>
>> Is that a possibility?  Or did that switch not make it into an actual release?
> 
> There are two switches: -mlong-double-64 and -mlong-double-128,
> just that the second one ("double-double") is now the default...

Oh I see.  That thread above suggested the opposite.  Could GDC simply key the size of real off this switch as well then?  If the point is for real to map to double-double, then it must be aware of it, correct?  I know it's not ideal to have the size of any variable change dynamically, but this seems like a case where doing so may actually be desirable.


Sean
March 07, 2007
Sean Kelly wrote:

>>> Yeah that doesn't sound like a very attractive option.  Some of the later replies in the Darwin thread mention a compiler switch:
>>>
>>> http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html 
>>>
>>> Is that a possibility?  Or did that switch not make it into an actual release?
>>
>> There are two switches: -mlong-double-64 and -mlong-double-128,
>> just that the second one ("double-double") is now the default...
> 
> Oh I see.  That thread above suggested the opposite.  Could GDC simply key the size of real off this switch as well then?  If the point is for real to map to double-double, then it must be aware of it, correct?  I know it's not ideal to have the size of any variable change dynamically, but this seems like a case where doing so may actually be desirable.

The thread was old, and things change -- especially from GCC 3.3 to GCC 4.0:
http://developer.apple.com/releasenotes/DeveloperTools/RN-GCC4/index.html

"In previous releases of GCC, the long double type was just a synonym for double. GCC 4.0 now supports true long double. In GCC 4.0 long double is made up of two double parts, arranged so that the number of bits of precision is approximately twice that of double."

(this was for Apple GCC, but Linux PPC went through a similar change)


Older versions of PPC operating systems used 64-bit for "long double",
while newer versions use 128-bit. Both are still in use, so you never
know which one you will get.

And since the D "real" type simply maps over to C/C++ "long double",
it means that it will be either 64-bit, 80-bit or 128-bit. Varying.

--anders