Phobos unit testing uncovers a CPU bug
November 26, 2010 (Don)
The code below compiles to a single machine instruction, yet the results are CPU manufacturer-dependent.
----
import std.math;

void main()
{
    assert(yl2x(0x1.0076fc5cc7933866p+40L, LN2)
           == 0x1.bba4a9f774f49d0ap+4L); // Passes on Intel, fails on AMD
}
----
The results for yl2x(0x1.0076fc5cc7933866p+40L, LN2) are:

Intel:  0x1.bba4a9f774f49d0ap+4L
AMD:    0x1.bba4a9f774f49d0cp+4L

The least significant bit is different. This is a discrepancy of only a fraction of a bit, so it's hardly important for accuracy (for comparison, sin and cos on x86 lose nearly sixty bits of accuracy in some cases!). Its importance is only that it is an undocumented difference between manufacturers.
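
To quantify "a fraction of a bit" in D's own terms, std.math.feqrel counts the significand bits on which two values agree; a quick illustrative check (not part of the original post):

----
import std.math : feqrel;
import std.stdio : writeln;

void main()
{
    // The two results above, as hex float literals.
    real intel = 0x1.bba4a9f774f49d0ap+4L;
    real amd   = 0x1.bba4a9f774f49d0cp+4L;

    // All but the last of the 64 significand bits agree.
    writeln(feqrel(intel, amd)); // prints 63 on 80-bit x87 reals
}
----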

The difference was discovered through the unit tests for the mathematical Special Functions which will be included in the next compiler release. Discovery of the discrepancy happened only because of several features of D:

- built-in unit tests (encourages tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply because I was trying to increase the code coverage to high values)

- D supports the hex format for floats. Without this feature, the discrepancy would have been blamed on differences in the floating-point conversion functions in the C standard library.
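
Taken together, those features look something like this (a minimal sketch; mySpecialFunc and its expected value are invented for illustration):

----
// Stand-in for one of the special functions under test.
real mySpecialFunc(real x) { return x * x; }

unittest
{
    // Hex float literals pin down exact bit patterns, so even a
    // one-ulp discrepancy between CPUs fails the test instead of
    // being masked by decimal-to-binary conversion.
    assert(mySpecialFunc(0x1.8p+1L) == 0x1.2p+3L); // 3.0 squared is exactly 9.0
}
----

Compiling with "dmd -unittest -cov" runs the tests before main() and writes a per-line coverage listing, which is what drove the extreme test cases here.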

This experience reinforces my belief that D is an excellent language for scientific computing.

Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.
November 26, 2010 (%u)
== Quote from Don (nospam@nospam.com)'s article
> The code below compiles to a single machine instruction, yet the results are CPU manufacturer-dependent.
> [...]
> Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.

Must have made you smile ;)

Slightly related, do you have some code to convert a hex float string to float? I think the hex format is a nice compromise between size and readability.

Regarding unit tests, I should really use them :(
I use std2 in my D1 project, and a few of std2's unit tests fail, so I run my
tests() manually...
November 26, 2010 (Simen kjaeraas)
Don <nospam@nospam.com> wrote:

> The difference was discovered through the unit tests for the mathematical Special Functions which will be included in the next compiler release. [...]
>
> This experience reinforces my belief that D is an excellent language for scientific computing.

This sounds like a great sales argument. Gives us some bragging rights. :p


> Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.

Great job!

Now, which of the results is correct, and have AMD and Intel been informed?

-- 
Simen
November 26, 2010 (bearophile)
%u:

> Slightly related, do you have some code to convert a hex float string to float?

This doesn't work, but it's supposed to work. Add this to bugzilla if it's not already present:

import std.conv: to;
void main() {
    // Each of these fails: std.conv can't parse hex float strings.
    auto r = to!real("0x1.0076fc5cc7933866p+40L");
    auto d = to!double("0x1.0076fc5cc7933866p+40L");
    auto f = to!float("0x1.0076fc5cc7933866p+40L");
}
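
The reverse direction (real to hex string) does work, since std.format supports the C99-style %a specifier; a minimal sketch:

----
import std.stdio : writefln;

void main()
{
    real x = 0x1.0076fc5cc7933866p+40L;
    writefln("%a", x); // prints x as a hex float string
}
----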


> Regarding unit tests, I should really use them :(

Yep, and DbC (Design by Contract) too, and compile your D code with -w.

Bye,
bearophile
November 26, 2010 (bearophile)
> This doesn't work, but it's supposed to work. Add this to bugzilla if it's not already present:

http://d.puremagic.com/issues/show_bug.cgi?id=5280

Bye,
bearophile

November 27, 2010 (Walter Bright)
Don wrote:
> The code below compiles to a single machine instruction, yet the results are CPU manufacturer-dependent.

This is awesome work, Don. Kudos to you, David and Dmitry.

BTW, I've read that fine-grained CPU detection can be done, beyond what CPUID gives, by examining slight differences in FPU results. I expect that *, +, -, / should all give exactly the same answers. But the transcendentals, and obviously yl2x, vary.
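
Using the yl2x discrepancy at the top of this thread, such a fingerprint might look like the following sketch (purely illustrative; it is only meaningful on x87 FPUs that reproduce those exact results):

----
import std.math : yl2x, LN2;

// Guess the FPU vendor from the one-ulp yl2x discrepancy reported
// in this thread; anything else falls through to "unknown".
string fpuVendorGuess()
{
    real r = yl2x(0x1.0076fc5cc7933866p+40L, LN2);
    if (r == 0x1.bba4a9f774f49d0ap+4L) return "Intel-like";
    if (r == 0x1.bba4a9f774f49d0cp+4L) return "AMD-like";
    return "unknown";
}
----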
November 27, 2010 (Walter Bright)
%u wrote:
> Slightly related, do you have some code to convert a hex float string to float?

Hex float literals are supported by D.
November 27, 2010 (bearophile)
Walter:

> %u wrote:
> > Slightly related, do you have some code to convert a hex float string to float?
> 
> Hex float literals are supported by D.

"hex float string" != "Hex float literal".

Bye,
bearophile
November 27, 2010 (Don)
Walter Bright wrote:
> Don wrote:
>> The code below compiles to a single machine instruction, yet the results are CPU manufacturer-dependent.
> 
> This is awesome work, Don. Kudos to you, David and Dmitry.
> 
> BTW, I've read that fine-grained CPU detection can be done, beyond what CPUID gives, by examining slight differences in FPU results. I expect that *, +, -, / should all give exactly the same answers. But the transcendentals, and obviously yl2x, vary.

I believe that would once have been possible; I doubt it's true any more.
Basic arithmetic and sqrt all give correctly rounded results, so they're identical on all processors. The 387 gave greatly improved accuracy compared to the 287, but AFAIK there have not been intentional changes since then.

The great tragedy was that an early AMD processor gave much more accurate sin and cos than the 387. But people complained that it was different from Intel! So their next processor duplicated Intel's hopelessly wrong trig functions.
I haven't seen any examples of values which are calculated differently between the processors. I only found one vague reference in a paper from CERN.
November 27, 2010 (KennyTM~)
On Nov 27, 10 05:25, Simen kjaeraas wrote:
> [...]
>
> Now, which of the results is correct, and have AMD and Intel been informed?

Intel is correct.

  yl2x(0x1.0076fc5cc7933866p+40L, LN2)
   == log(9240117798188457011/8388608)
   == 0x1.bba4a9f774f49d0a64ac5666c969fd8ca8e...p+4
                         ^

(yl2x(x, y) computes y * log2(x), so yl2x(x, LN2) is the natural log of x; the argument is exactly the 64-bit integer significand 9240117798188457011 divided by 2^23 = 8388608. The marked 'a' is the last digit that fits in an 80-bit real's 64-bit significand, and the digits that follow, 64ac..., are below the halfway point, so the correctly rounded result is Intel's ...d0a; AMD's ...d0c is one unit in the last place too high.)

