July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright
On 3 Jul 2014 01:50, "Walter Bright via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
> On 7/2/2014 2:28 PM, Iain Buclaw via Digitalmars-d wrote:
>> On 2 July 2014 19:58, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>> I don't really understand the reasoning here. Is D Intel x86 specific?
>>
>> Yes it is, more than you might realise. I've been spending the last 4
>> years breaking it to be platform agnostic. :o)
>
> I think you're conflating dmd with D.

I suppose I am just a bit. At the time I was thinking about the spec on _argptr (which has been fixed), __simd and intrinsics.
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Sean Kelly
On 3 Jul 2014 04:50, "Sean Kelly via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
> On Thursday, 3 July 2014 at 01:13:13 UTC, H. S. Teoh via Digitalmars-d wrote:
>> I'm not sure I understand how removing support for 80-bit floats hurts interoperability with C? I thought none of the standard C float types map to the x87 80-bit float?
>
> The C spec never says anything specific about representation because it's meant to target anything, but the x86 and AMD64 ABIs basically say that "long double" is 80-bit for calculations. The x86 spec says that the type is stored in 96 bits though, and the AMD64 spec says they're stored in 128 bits for alignment purposes.
>
> I'm still unclear whether we're aiming for C interoperability or hardware support though, based on Walter's remark about SPARC and PPC. There, 'long double' is represented differently but is not backed by specialized hardware, so I'm guessing D would make 'real' 64-bits on these platforms and break compatibility with C. So... I guess we really do need a special alias for C compatibility, and this can map to whatever intrinsic type the applicable compiler supports for that platform.

I take the spec to mean that I can map float, double, and real to the native C types for float, double, and long double - or in jargon terms, the native single, double, and extended (or quad if the user or backend requests it) types. Certainly I have implemented my bits of std.math with real == double and real == quad in mind. PPC will have to wait, but it is easy enough to identify double-double reals in CTFE.

Iain.
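A minimal sketch (not part of the original post) of what identifying the real format at compile time can look like: real.mant_dig is a compile-time constant, and its value alone distinguishes the common formats (53 for IEEE double, 64 for x87 extended, 106 for IBM double-double, 113 for IEEE quadruple). The names and strings below are illustrative only.

    import std.stdio;

    // Tell the common `real` formats apart by mantissa width alone.
    static if (real.mant_dig == 53)
        enum realFormat = "IEEE double (64-bit)";
    else static if (real.mant_dig == 64)
        enum realFormat = "x87 extended (80-bit)";
    else static if (real.mant_dig == 106)
        enum realFormat = "IBM double-double (128-bit)";
    else static if (real.mant_dig == 113)
        enum realFormat = "IEEE quadruple (128-bit)";
    else
        enum realFormat = "unrecognised format";

    void main()
    {
        writeln("real is ", realFormat);
    }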
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Ola Fosheim Grøstad

On Wednesday, 2 July 2014 at 12:24:51 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 2 July 2014 at 12:16:18 UTC, Wanderer wrote:
>
> D is not even production ready, so why should there be? Who in their right mind would use a language in limbo for building a serious operating system or do embedded work? You need language stability and compiler maturity first.
That's plain wrong: here at SR Labs we are using it in production, and making money with it.
---
Paolo
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Iain Buclaw

On 7/2/2014 11:38 PM, Iain Buclaw via Digitalmars-d wrote:
> I suppose I am just a bit. At the time I was thinking about the spec on _argptr
> (which has been fixed), __simd and intrinsics.
You do have a good point with those aspects.
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to H. S. Teoh

On 7/2/2014 6:11 PM, H. S. Teoh via Digitalmars-d wrote:
> What should we do in the case of hardware that offers strange hardware
> types, like a hypothetical 48-bit floating point type? Should D offer a
> built-in type for that purpose, even if it only exists on a single chip
> that's used by 0.01% of the community?

I'd leave that decision up to the guy implementing the D compiler for that chip.

>> Not only that, a marquee feature of D is interoperability with C. We'd
>> need an AWFULLY good reason to throw that under the bus.
>
> I'm not sure I understand how removing support for 80-bit floats hurts
> interoperability with C? I thought none of the standard C float types
> map to the x87 80-bit float?
>
> (I'm not opposed to keeping real as 80-bit on x87, but I just don't
> understand what this has to do with C interoperability.)

For the 4th time in this thread, the C ABI for x86 32 and 64 bit OSX, Linux, and FreeBSD maps "long double" to 80 bit reals. How can you call a C function:

int foo(long double r);

??
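A hedged sketch (not from the post) of what that call looks like from the D side on the platforms Walter lists, where real and C's long double share the same 80-bit representation and calling convention; foo here is a stand-in for any such C function, compiled separately with the platform's C compiler.

    // C side (separate translation unit):
    //     int foo(long double r) { return r > 1.0L; }

    // D side: on x86/x86_64 OSX, Linux and FreeBSD, real matches C's
    // long double, so the prototype maps across directly.
    extern(C) int foo(real r);

    void main()
    {
        int result = foo(3.14159L);
    }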
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Sean Kelly

On 7/2/2014 8:48 PM, Sean Kelly wrote:
> I'm still unclear whether we're aiming for C interoperability or hardware
> support though, based on Walter's remark about SPARC and PPC. There, 'long
> double' is represented differently but is not backed by specialized hardware, so
> I'm guessing D would make 'real' 64-bits on these platforms and break
> compatibility with C. So... I guess we really do need a special alias for C
> compatibility, and this can map to whatever intrinsic type the applicable
> compiler supports for that platform.
What is unclear about being able to call a C function declared as:
int foo(long double r);
from D?
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright

On Thursday, 3 July 2014 at 00:49:33 UTC, Walter Bright wrote:
> On 7/2/2014 2:28 PM, Iain Buclaw via Digitalmars-d wrote:
>> On 2 July 2014 19:58, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>> I don't really understand the reasoning here. Is D Intel x86 specific?
>> Yes it is, more than you might realise. I've been spending the last 4
>> years breaking it to be platform agnostic. :o)
>
> I think you're conflating dmd with D.
>
> And IEEE 754 is a standard.
I understand what you're saying here, which is that any conflation of D with x86 is a fault in the implementation rather than the spec, but at the end of the day, D lives by its implementation.
It's not just about what the dmd backend supports per se, but about what assumptions that leads people to make when writing code for the frontend, runtime and standard library. Iain has done some heroic work in the last year going through compiler frontend, runtime and Phobos and correcting code with faulty assumptions such as "real == 80 bit floating point" (which IIRC was often made as a general assumption even though it's x86-specific).
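As a hypothetical illustration (not taken from the thread) of that kind of assumption: the first function below bakes the x87 bit layout into the code and silently breaks on targets where real is not the 80-bit format, while the second asks the language and stays portable.

    import std.math : signbit;

    // Hard-wired x87 assumption: sign and exponent live in bytes 8-9 of a
    // little-endian 80-bit real. Only valid where real is the x87 format.
    bool isNegativeX87(real x)
    {
        ushort signExp = *cast(ushort*)(cast(ubyte*)&x + 8);
        return (signExp & 0x8000) != 0;
    }

    // Portable: no layout assumption at all.
    bool isNegativePortable(real x)
    {
        return signbit(x) != 0;
    }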
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright

On Wed, 02 Jul 2014 23:56:21 -0700
Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 7/2/2014 8:48 PM, Sean Kelly wrote:
> > I'm still unclear whether we're aiming for C interoperability or hardware support though, based on Walter's remark about SPARC and PPC. There, 'long double' is represented differently but is not backed by specialized hardware, so I'm guessing D would make 'real' 64-bits on these platforms and break compatibility with C. So... I guess we really do need a special alias for C compatibility, and this can map to whatever intrinsic type the applicable compiler supports for that platform.
>
> What is unclear about being able to call a C function declared as:
>
> int foo(long double r);
>
> from D?
I don't think that there's anything unclear about that. The problem is that if real is supposed to be the largest hardware supported floating point type, then that doesn't necessarily match long double. It happens to on x86 and x86_64, but it doesn't on other architectures.
So, is real the same as C's long double, or is it the same as the largest floating point type supported by the hardware? We have erroneously treated them as the same thing, because they happen to be the same thing on x86 hardware. But D needs to work on more than just x86 and x86_64, even if dmd doesn't.
We already have aliases such as c_long to deal with C types. I don't think that it would be all that big a deal to make it so that real was specifically the largest supported hardware type and then have c_long_double for interacting with C.
- Jonathan M Davis
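A sketch of what such an alias could look like, in the spirit of c_long in core.stdc.config; the module name and the per-platform choices are illustrative assumptions, not an actual druntime definition.

    module c_compat;

    // Map a D type onto whatever the target C ABI uses for long double.
    version (X86)
        alias c_long_double = real;   // 80-bit x87 extended on the System V ABIs
    else version (X86_64)
        alias c_long_double = real;
    else version (ARM)
        alias c_long_double = double; // 32-bit AAPCS: long double is plain double
    else
        alias c_long_double = real;   // fallback: assume real already matches

    // A C prototype, long double strtold(const char*, char**),
    // bound without caring what real means on the target:
    extern(C) c_long_double strtold(const(char)* nptr, char** endptr);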
July 03, 2014 Re: std.math performance (SSE vs. real)
On 3 July 2014 11:49, Jonathan M Davis via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Wed, 02 Jul 2014 23:56:21 -0700
> Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
>> On 7/2/2014 8:48 PM, Sean Kelly wrote:
>> > I'm still unclear whether we're aiming for C interoperability or hardware support though, based on Walter's remark about SPARC and PPC. There, 'long double' is represented differently but is not backed by specialized hardware, so I'm guessing D would make 'real' 64-bits on these platforms and break compatibility with C. So... I guess we really do need a special alias for C compatibility, and this can map to whatever intrinsic type the applicable compiler supports for that platform.
>>
>> What is unclear about being able to call a C function declared as:
>>
>> int foo(long double r);
>>
>> from D?
>
> I don't think that there's anything unclear about that. The problem is that if real is supposed to be the largest hardware supported floating point type, then that doesn't necessarily match long double. It happens to on x86 and x86_64, but it doesn't on other architectures.
>
> So, is real the same as C's long double, or is it the same as the largest floating point type supported by the hardware? We have erroneously treated them as the same thing, because they happen to be the same thing on x86 hardware. But D needs to work on more than just x86 and x86_64, even if dmd doesn't.
>
The spec should be clearer on that. The language should respect the long double ABI of the platform it is targeting - so if the compiler is targeting a system whose long double is 96-bit, but the maximum supported on the chip is 128-bit, the compiler should *still* map real to the 96-bit long double, unless explicitly told otherwise on the command line.
The same goes for other ABI aspects of real: for instance, if you are targeting an ABI where scalar 64-bit operations are done in NEON, then the compiler adheres to that (though I'd hope that people stick to the defaults and not do this, because the cost of moving data from core registers to NEON is high).
Regards
Iain
July 03, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Iain Buclaw

On Thursday, 3 July 2014 at 11:21:34 UTC, Iain Buclaw via Digitalmars-d wrote:
> The spec should be clearer on that. The language should respect the long double ABI of the platform it is targeting
> - so if the compiler is targeting a real=96bit system, but
> the max supported on the chip is 128bit, the compiler should
> *still* map real to the 96bit long doubles, unless explicitly
> told otherwise on the command-line.
This would be a change in the standard, no? "The long double ABI of the target platform" is not necessarily the same as the current definition of real as the largest hardware-supported floating-point type.
I can't help but feel that this is another case where the definition of real in the D spec, and its practical use in the implementation, have wound up in conflict because of assumptions made relative to x86, where it's simply a nice coincidence that the largest hardware-supported FP type and the long double type happen to be the same.