January 26, 2004
> There's something else going on in the benchmark code, then. Something
> else, like perhaps file I/O, that is taking so much time it is swamping
> the result.
Yes, there are exactly 30 fwrite calls, each writing a buffer of 100000 long
double values (note that the buffer for gcc is 1.2 times larger than the DMC
one, and therefore so is the resultant file).
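The 1.2 factor presumably comes from sizeof(long double): MinGW gcc pads the
80-bit value to 12 bytes for alignment, while DMC stores the bare 10 bytes.
A minimal sketch of a write of that shape (the buffer contents, file name and
the single call shown are illustrative, not the benchmark code):

#include <stdio.h>
#include <stdlib.h>

#define BUF_COUNT 100000   /* 100000 long doubles per fwrite, as in the benchmark */

int main(void)
{
    FILE *f;
    long double *buf = calloc(BUF_COUNT, sizeof(long double));

    if (buf == NULL)
        return 1;

    /* gcc: sizeof(long double) == 12 (10 data bytes + 2 padding bytes),
       DMC:  sizeof(long double) == 10, hence the 1.2 ratio in buffer and
       file size for the same element count. */
    printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));

    f = fopen("out.bin", "wb");   /* placeholder file name */
    if (f == NULL)
        return 1;

    fwrite(buf, sizeof(long double), BUF_COUNT, f);   /* the benchmark does 30 such writes */

    fclose(f);
    free(buf);
    return 0;
}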

> Or perhaps MinGW happens to determine that your benchmark computation is
> all dead code and deletes it entirely.
No, I compared the output data visually from the same source code compiled
with DMC and with MinGW gcc. There are some zones which differ significantly,
but there are also zones which don't differ. According to my empirical
observations this is due to the different accumulated errors (which in some
zones probably exhaust the long double precision) in the computational space.
In addition, compiling with Lcc-win32 produces exactly the same resultant file
as compiling with gcc -O2, but runs about 11 times slower.
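In case it is useful, a sketch of an element-by-element comparison of the two
result files that allows for the size difference; the file names are
placeholders and the tool is assumed to be built with MinGW gcc
(sizeof(long double) == 12), so it is not something taken from the benchmark:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fdmc = fopen("result_dmc.bin", "rb");   /* 10 bytes per value */
    FILE *fgcc = fopen("result_gcc.bin", "rb");   /* 12 bytes per value */
    unsigned char raw[10];
    long double a, b;
    long i = 0, diffs = 0;

    if (fdmc == NULL || fgcc == NULL)
        return 1;

    for (;;) {
        if (fread(raw, 1, 10, fdmc) != 10)        /* one DMC element */
            break;
        if (fread(&b, sizeof b, 1, fgcc) != 1)    /* one gcc element */
            break;
        a = 0.0L;
        memcpy(&a, raw, 10);   /* low 10 bytes hold the x87 value; padding is ignored */
        if (a != b) {
            diffs++;
            if (diffs <= 10)   /* print only the first few differing values */
                printf("index %ld: %.21Lg vs %.21Lg\n", i, a, b);
        }
        i++;
    }
    printf("%ld of %ld values differ\n", diffs, i);
    fclose(fdmc);
    fclose(fgcc);
    return 0;
}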

I'll send you the source code within an hour.
Is there a precision difference between the 80-bit long double and the 96-bit
one on the x86 platform? (LDBL_EPSILON, LDBL_MIN, LDBL_MAX & LDBL_DIG are the
same in DMC, Lcc-Win32 & MinGW gcc with the -m96bit-long-double option.)
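For reference, a minimal sketch of the kind of check behind the parenthesis
above (the set of macros printed and the formatting are mine, not from the
benchmark); compiling it with all three compilers shows whether anything
besides sizeof changes:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* If the 96-bit layout is only the 80-bit x87 format padded to 12 bytes
       for alignment, everything below except sizeof should be identical. */
    printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
    printf("LDBL_DIG      = %d\n", LDBL_DIG);
    printf("LDBL_EPSILON  = %Lg\n", LDBL_EPSILON);
    printf("LDBL_MIN      = %Lg\n", LDBL_MIN);
    printf("LDBL_MAX      = %Lg\n", LDBL_MAX);
    return 0;
}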

Thank you,
Ronald

