[phobos] phobos commit, revision 2186
phobos commit, revision 2186


user: dsimcha

msg:
Revert the last changeset.

http://www.dsource.org/projects/phobos/changeset/2186

November 20, 2010
Yeah, I checked this in even though it wasn't working on my computer.  I was getting a couple of specific failures that weren't happening to anyone else even before this revision (I'm still trying to figure out why).  I figured if these changes broke anything I'd just revert them immediately.

Given that some tests fail on my computer and nowhere else, that the changes I made seemed innocuous, and that they broke something anyway, I'm thinking maybe the unit tests are too strict. For example, my changes might have changed how constant folding is done, resulting in the least significant few bits being screwed up or something.  Also, maybe subtleties of different hardware and/or different operating systems and C runtimes have an effect.

On 11/20/2010 5:48 PM, dsource.org wrote:
> phobos commit, revision 2186
>
>
> user: dsimcha
>
> msg:
> Revert the last changeset.
>
> http://www.dsource.org/projects/phobos/changeset/2186
>
> _______________________________________________
> phobos mailing list
> phobos at puremagic.com
> http://lists.puremagic.com/mailman/listinfo/phobos
>

November 20, 2010
On Saturday 20 November 2010 14:52:37 David Simcha wrote:
> Yeah, I checked this in even though it wasn't working on my computer.  I was getting a couple specific failures that weren't happening to anyone else even before this revision (still trying to figure out why).  I figured if these changes broke anything I'd just revert them immediately.
> 
> Given that some tests fail on my computer and nowhere else, that the changes I made seemed innocuous, and that they broke something anyway, I'm thinking maybe the unit tests are too strict. For example, my changes might have changed how constant folding is done, resulting in the least significant few bits being screwed up or something.  Also, maybe subtleties of different hardware and/or different operating systems and C runtimes have an effect.

Didn't you say that you were running 64-bit Windows whereas Don said that he was running 32-bit Windows? Could that be having any effect?

- Jonathan M Davis
November 20, 2010
Yes, I'm running Windows 7 64-bit.  In addition to running 32-bit Windows, Don is running Windows XP IIRC.  It's probably related somehow to different C runtimes.  At any rate, the unit tests need to be made lenient enough that such minor details as constant folding and minor differences in C runtimes don't make the difference between passing and failing.  I don't understand the code well enough to do this myself, but I'm willing to help by testing whatever Don asks me to test.  Also, after this issue is cleared up, I'll probably try re-committing the dynamic-to-static changeset, as I'm pretty sure the problem here is with the unit tests, not the changes.

On 11/20/2010 8:10 PM, Jonathan M Davis wrote:
> Didn't you say that you were running 64 bit windows whereas Don said that he was running 32 bit windows? Could that be having any effect?

November 20, 2010
The auto tester is on win7 32 bit.

On 11/20/2010 5:43 PM, David Simcha wrote:
> Yes, I'm running Windows 7 64-bit.  In addition to running 32-bit Windows, Don is running Windows XP IIRC.  It's probably related somehow to different C runtimes.  At any rate, the unit tests need to be made lenient enough that such minor details as constant folding and minor differences in C runtimes don't make the difference between passing and failing.  I don't understand the code well enough to do this myself, but I'm willing to help by testing whatever Don asks me to test.  Also, after this issue is cleared up, I'll probably try re-committing the dynamic-to-static changeset, as I'm pretty sure the problem here is with the unit tests, not the changes.
> 
> On 11/20/2010 8:10 PM, Jonathan M Davis wrote:
>> Didn't you say that you were running 64 bit windows whereas Don said that he was running 32 bit windows? Could that be having any effect?
> 

November 20, 2010
I just tried using my WinXP32 virtual machine to run the tests on the exact same folder, and it fails in the same way.  It must just be something weird about my environment, though I have no idea what it might be.

On 11/20/2010 9:25 PM, Brad Roberts wrote:
> The auto tester is on win7 32 bit.
>
> On 11/20/2010 5:43 PM, David Simcha wrote:
>> Yes, I'm running Windows 7 64-bit.  In addition to running 32-bit Windows, Don is running Windows XP IIRC.  It's probably related somehow to different C runtimes.  At any rate, the unit tests need to be made lenient enough that such minor details as constant folding and minor differences in C runtimes don't make the difference between passing and failing.  I don't understand the code well enough to do this myself, but I'm willing to help by testing whatever Don asks me to test.  Also, after this issue is cleared up, I'll probably try re-committing the dynamic-to-static changeset, as I'm pretty sure the problem here is with the unit tests, not the changes.
>>
>> On 11/20/2010 8:10 PM, Jonathan M Davis wrote:
>>> Didn't you say that you were running 64 bit windows whereas Don said that he was running 32 bit windows? Could that be having any effect?

November 21, 2010
On 21 November 2010 05:48, David Simcha <dsimcha at gmail.com> wrote:
> More research into this issue: I compiled the unittest.exe executable on my
> main (desktop) computer, ran it under my primary OS (Win7 64) and it
> failed. I then ran the exact same executable (no recompile) on my Linux
> partition (Ubuntu 10.10 64) using Wine and it failed.
>
> I then ran the exact same executable on my laptop on my primary OS (also
> Win7 64) and it passed. I ran it on my laptop's Linux partition under Wine
> (Ubuntu 10.10 32) and it passed.
>
> The only difference between the two systems that might account for this is that the laptop has an Intel Penryn CPU, whereas the desktop has an AMD Brisbane CPU. Does anyone know whether different x86 CPUs can produce subtly different floating-point results when executing the exact same code? Alternatively, is it possible that some processor-specific optimizations to some function getting called by Don's code could be causing slightly different results?

That's _very_ interesting. The code in question doesn't use the C runtime at all. If it's the same exe, then the difference can only lie in the CPU or in the environment — e.g., if it starts with 80-bit floats disabled. But the fact that every other test passes on your system makes that seem unlikely. Does the failing system have execution protection enabled?

The only documented floating-point difference between AMD and Intel that I know of is that AMD raises the invalid exception when loading an 80-bit NaN, but Intel doesn't. BTW I found that difference myself, and added it to Wikipedia. That difference is not relevant here.

If the CPU itself is responsible for the difference, that's a CPU bug. BTW this test was present in Tango for years, and nobody ever reported this issue before.
November 21, 2010
Can others on this mailing list please submit info about their CPUs (manufacturer and core type) and whether the unit tests pass?  My working hypothesis (mostly because I can't think of anything else that's at all plausible) is that this discrepancy is somehow hardware-related. I'll start and give an example of what I'm looking for:

Intel Penryn:  Pass
AMD Brisbane:  Fail

On 11/21/2010 1:19 AM, Don Clugston wrote:
> On 21 November 2010 05:48, David Simcha<dsimcha at gmail.com>  wrote:
>> More research into this issue:  I compiled the unittest.exe executable on my
>> main (desktop) computer, ran it under my primary OS (Win7 64) and it
>> failed.  I then ran the exact same executable (no recompile) on my Linux
>> Partition (Ubuntu 10.10 64) using Wine and it failed.
>>
>> I then ran the exact same executable on my laptop on my primary OS (also
>> Win7 64) and it passed.  I ran it on my laptop's Linux partition under Wine
>> (Ubuntu 10.10 32) and it passed.
>>
>> The only difference between the two systems that might account for this is that the laptop has an Intel Penryn CPU, whereas the desktop has an AMD Brisbane CPU.  Does anyone know whether different x86 CPUs can produce subtly different floating point results when executing the exact same code? Alternatively, is it possible that some processor-specific optimizations to some function getting called by Don's code could be causing slightly different results?
> That's _very_ interesting. The code in question doesn't use the C
> runtime at all.
> If it's the same exe, then the difference can only lie in the CPU or
> in the environment.
> Eg, if it starts with 80-bit floats disabled.
> But the fact that every other test passes on your system, makes that
> seem unlikely.
> Does the failing system have execution protection enabled?
>
> The only documented floating point difference between AMD and Intel
> that I know of, is that AMD
> raises the invalid exception when loading an 80-bit NaN, but Intel
> doesn't. BTW I found that difference
> myself, and added it to Wikipedia. That difference is not relevant here.
>
> If the CPU itself is responsible for the difference, that's a CPU bug.
> BTW this test was present in Tango for years, and nobody ever reported
> this issue before.

November 21, 2010
Forgot to mention:  I use the default settings for data execution prevention (if that's what you were referring to):  "Turn on DEP for essential Windows programs and services only"

On 11/21/2010 1:19 AM, Don Clugston wrote:
> Does the failing system have execution protection enabled?
