April 20, 2011 Re: Floating Point + Threads?
Posted in reply to JimBob
On Apr 20, 2011, at 10:46 AM, JimBob wrote:
>
> "Sean Kelly" <sean@invisibleduck.org> wrote in message news:mailman.3597.1303316625.4748.digitalmars-d@puremagic.com...
> On Apr 20, 2011, at 5:06 AM, Don wrote:
>
>> Sean Kelly wrote:
>>> On Apr 16, 2011, at 1:02 PM, Robert Jacques wrote:
>>>> On Sat, 16 Apr 2011 15:32:12 -0400, Walter Bright <newshound2@digitalmars.com> wrote:
>>>>>
>>>>> The dmd startup code (actually the C startup code) does an fninit. I never thought about new thread starts. So, yeah, druntime should do an fninit on thread creation.
>>>> The documentation I've found on fninit seems to indicate it defaults to 64-bit precision, which means that by default we aren't seeing the benefit of D's reals. I'd much prefer 80-bit precision by default.
>>> There is no option to set "80-bit precision" via the FPU control word.
>>
>> ??? Yes there is.
>>
>> enum PrecisionControl : short {
>> PRECISION80 = 0x300,
>> PRECISION64 = 0x200,
>> PRECISION32 = 0x000
>> };
>>
>> So has Intel deprecated 80-bit FPU support? Why do the docs for this say
>> that 64-bit
>> is the highest precision? And more importantly, does this mean that we
>> should be setting
>> the PC field explicitly instead of relying on fninit? The docs say that
>> fninit initializes to
>> 64-bit precision. Or is that inaccurate as well?
>
> You misread the docs. It's talking about precision, which is just the size of the mantissa, not the full size of the floating point data. I.e.:
>
> 80 float = 64 bit precision
> 64 float = 53 bit precision
> 32 float = 24 bit precision
Oops, you're right. So to summarize: fninit does what we want because it sets 64-bit precision, which is effectively 80-bit mode. Is this correct?
April 20, 2011 Re: Floating Point + Threads?
Posted in reply to Sean Kelly
Sean Kelly wrote:
> On Apr 20, 2011, at 10:46 AM, JimBob wrote:
>> "Sean Kelly" <sean@invisibleduck.org> wrote in message news:mailman.3597.1303316625.4748.digitalmars-d@puremagic.com...
>> On Apr 20, 2011, at 5:06 AM, Don wrote:
>>
>>> Sean Kelly wrote:
>>>> On Apr 16, 2011, at 1:02 PM, Robert Jacques wrote:
>>>>> On Sat, 16 Apr 2011 15:32:12 -0400, Walter Bright <newshound2@digitalmars.com> wrote:
>>>>>> The dmd startup code (actually the C startup code) does an fninit. I never thought about new thread starts. So, yeah, druntime should do an fninit on thread creation.
>>>>> The documentation I've found on fninit seems to indicate it defaults to 64-bit precision, which means that by default we aren't seeing the benefit of D's reals. I'd much prefer 80-bit precision by default.
>>>> There is no option to set "80-bit precision" via the FPU control word.
>>> ??? Yes there is.
>>>
>>> enum PrecisionControl : short {
>>> PRECISION80 = 0x300,
>>> PRECISION64 = 0x200,
>>> PRECISION32 = 0x000
>>> };
>>>
>>> So has Intel deprecated 80-bit FPU support? Why do the docs for this say that 64-bit
>>> is the highest precision? And more importantly, does this mean that we should be setting
>>> the PC field explicitly instead of relying on fninit? The docs say that fninit initializes to
>>> 64-bit precision. Or is that inaccurate as well?
>> You misread the docs. It's talking about precision, which is just the size of the mantissa, not the full size of the floating point data. I.e.:
>>
>> 80 float = 64 bit precision
>> 64 float = 53 bit precision
>> 32 float = 24 bit precision
>
> Oops, you're right. So to summarize: fninit does what we want because it sets 64-bit precision, which is effectively 80-bit mode. Is this correct?
Yes.
Copyright © 1999-2021 by the D Language Foundation