Thread overview
80 Bit Challenge
    Apr 02, 2005  Bob W
    Apr 02, 2005  Ben Hinkle
    Apr 02, 2005  Walter
    Apr 03, 2005  Ben Hinkle
    Apr 03, 2005  Walter
    Apr 03, 2005  Walter
    Apr 03, 2005  Walter
    Apr 03, 2005  Ben Hinkle
    Apr 03, 2005  Ben Hinkle
    Apr 02, 2005  Sean Kelly
    Apr 02, 2005  Walter
    Apr 03, 2005  Bob W
    Apr 03, 2005  Georg Wrede
    Apr 03, 2005  Bob W
    Apr 03, 2005  Georg Wrede
    Apr 04, 2005  Georg Wrede
    Apr 04, 2005  Georg Wrede
    Apr 04, 2005  Georg Wrede
    Apr 04, 2005  Bob W
    Apr 05, 2005  Walter
    Apr 05, 2005  Georg Wrede
    Apr 05, 2005  Walter
    Apr 06, 2005  Bob W
    Apr 05, 2005  Bob W
    Apr 05, 2005  Bob W
    Apr 05, 2005  Walter
    Apr 03, 2005  Bob W
    Apr 03, 2005  Ben Hinkle
    Apr 03, 2005  Bob W
    Apr 04, 2005  Georg Wrede
    Apr 04, 2005  Georg Wrede
    Apr 04, 2005  Bob W
    Apr 04, 2005  Georg Wrede
Re: 80 Bit Challenge (Apple)
April 02, 2005
Thread "Exotic floor() function - D is different" went
into general discussion about the internal FP format.
So I have moved this over to a new thread:


"Walter" <newshound@digitalmars.com> wrote in message news:d2l1ds$2479$1@digitaldaemon.com...
>
> ...... The x86 FPU *wants* to evaluate things to 80 bits.
>
> The D compiler's internal paths fully support 80 bit arithmetic, that means
> there are no surprising "choke points" where it gets truncated to 64 bits.
> If the type of a literal is specified to be 'double', which is the case for
> no suffix, then you get 64 bits of precision. I hope you'll agree that that
> is the least surprising thing to do.


I would have agreed a couple of days ago. But after carefully thinking it over, I've come to the following conclusions:

- It was a good move to open up (almost) everything
  in D to handle the 80-bit FP format (real format).

- I'm also with you on this point you made:
  "... intermediate values generated are allowed to be
  evaluated to the largest precision available...."

- But - by default they are not allowed to accept the
  largest precision available, because the default format
  for literals w/o suffix is double.

- In my opinion there is no single reason why literals
  w/o suffix have to be treated as doubles (except maybe
  for C legacy).

- It has never harmed floats to be fed with doubles;
  consequently it will not harm doubles to accept reals.
  The FPU will gladly take care of this.

- Forget C for a moment: Doesn't it look strange to parse
  unsuffixed literals as doubles, convert and evaluate them
  internally as reals (in the FPU), and eventually pass the
  precision-impaired results to a real?

- This is why I'd like to see default (unsuffixed) literals
  to be parsed and evaluated in "the highest precision
  available" (whatever this will be in future, real for now).

- Since everything else in D is prepared for 80 bits,
  a cast and/or a double suffix would be the logical
  way to go in the rare cases when double generation
  has to be enforced.

- Experience shows that there will be a loss of precision
  in the final result whenever double values are converted
  to real and evaluated further. But this is of no concern
  when reals are truncated to doubles. (A small sketch
  follows right after this list.)
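
A small sketch of that last point (assuming a current DMD and Phobos'
writefln; any print routine would do):

    import std.stdio;

    void main()
    {
        real viaDouble = 0.1;   // unsuffixed: parsed as a 64-bit double, then widened
        real viaReal   = 0.1L;  // 'L' suffix: parsed as an 80-bit real from the start

        // 0.1 has no exact binary representation; the unsuffixed literal has
        // already been rounded to 53 mantissa bits before it ever reaches the
        // real, so the two values differ.
        writefln("via double: %.20g", viaDouble);
        writefln("via real:   %.20g", viaReal);
        writefln("difference: %g", viaDouble - viaReal);
    }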

Finally, I'd like to say that although I am convinced the above would be worthwhile to implement, I am actually more concerned about the 32 bit integers.

The 64 bit CPUs are coming and they'll change our way
of thinking just the way the 32 bit engines have done.
Internal int format 32 bits? Suffixes for 64 bit int's?
For now it is maybe still a yes, in the not so distant
future maybe not. I just hope D can cope and will still
be "young of age" when this happens.

I like the slogan "D fully supports 80 bit reals", but
a marketing guy would probably suggest to change this to
"D fully supports 64 bit CPUs".



April 02, 2005
> - In my opinion there is no single reason why literals
>  w/o suffix have to be treated as doubles (except maybe
>  for C legacy).
> - This is why I'd like to see default (unsuffixed) literals
>  to be parsed and evaluated in "the highest precision
>  available" (whatever this will be in future, real for now).

and human legacy. Personally I'm used to .3 being a double. If I had three
overloaded functions func(float), func(double) and func(real) and I wrote
func(.3) I'd be surprised if it chose the real one, just because I'm used to
literals being doubles.
But that's my only complaint about your proposal. Since D doesn't have to
worry about legacy code we can make .3 parse as whatever we want
technically.
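
Just to make the overload case concrete, a minimal sketch (empty bodies,
purely illustrative):

    void func(float x)  {}
    void func(double x) {}
    void func(real x)   {}

    void main()
    {
        func(.3);   // today .3 is a double, so this resolves to func(double);
                    // if .3 were parsed as a real, it would pick func(real)
    }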

> The 64 bit CPUs are coming and they'll change our way
> of thinking just the way the 32 bit engines have done.
> Internal int format 32 bits? Suffixes for 64 bit int's?
> For now it is maybe still a yes, in the not so distant
> future maybe not. I just hope D can cope and will still
> be "young of age" when this happens.

I'm sure people would get thrown for a loop if given a choice between
func(int) and func(long) the code func(1) called func(long). Even on a 64
bit platform. If one really didn't care which was chosen then
  import std.stdint;
  ...
  func(cast(int_fast32_t)1);
would be a platform-independent way of choosing the "natural" size for 1
(assuming "fast32" would be 64 bits on 64 bit platforms). And more explicit,
too.
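
Spelled out as a complete sketch (take() is just a hypothetical stand-in for
whatever overloaded function is being called):

    import std.stdint;   // int_fast32_t and friends
    import std.stdio;

    void take(int x)  { writefln("int overload"); }
    void take(long x) { writefln("long overload"); }

    void main()
    {
        // int_fast32_t is whatever the platform considers its fast >=32-bit
        // integer, so the cast states "the natural size" without hard-coding it.
        take(cast(int_fast32_t) 1);
    }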


April 02, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d2m8c1$9jj$1@digitaldaemon.com...
> > - In my opinion there is no single reason why literals
> >  w/o suffix have to be treated as doubles (except maybe
> >  for C legacy).
> > - This is why I'd like to see default (unsuffixed) literals
> >  to be parsed and evaluated in "the highest precision
> >  available" (whatever this will be in future, real for now).
>
> > and human legacy. Personally I'm used to .3 being a double. If I had three
> > overloaded functions func(float), func(double) and func(real) and I wrote
> > func(.3) I'd be surprised if it chose the real one, just because I'm used to
> > literals being doubles.
> But that's my only complaint about your proposal. Since D doesn't have to
> worry about legacy code we can make .3 parse as whatever we want
> technically.

I've been thinking about this. The real issue is not the precision of .3,
but its type. Suppose it were kept internally with full 80 bit precision,
participated in constant folding as a full 80 bit type, and was only
converted to 64 bits when a double literal actually needed to be inserted
into the .obj file? This would tend to mimic the runtime behavior of
intermediate value evaluation, and it would be numerically superior. An
explicit cast would still be honored:
    cast(double).3
will actually truncate the bits in the internal representation.
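
In code terms, that would look like this (a sketch of the proposed semantics,
not of the compiler internals):

    void main()
    {
        real   folded = .1 + .2;         // folded at full 80 bit precision
        double stored = .1 + .2;         // narrowed to 64 bits only when a double is emitted
        real   forced = cast(double) .3; // the explicit cast still truncates the literal
    }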

> > The 64 bit CPUs are coming and they'll change our way
> > of thinking just the way the 32 bit engines have done.
> > Internal int format 32 bits? Suffixes for 64 bit int's?
> > For now it is maybe still a yes, in the not so distant
> > future maybe not. I just hope D can cope and will still
> > be "young of age" when this happens.
>
> I'm sure people would get thrown for a loop if given a choice between
> func(int) and func(long) the code func(1) called func(long). Even on a 64
> bit platform.

I agree. The 'fuzzy' nature of C's int size has caused endless grief, porting bugs, and misguided coding styles over the last 20 years. Portability is significantly enhanced by giving a reliable, predictable size to it.

> If one really didn't care which was chosen then
>   import std.stdint;
>   ...
>   func(cast(int_fast32_t)1);
> would be a platform-independent way of choosing the "natural" size for 1
> (assuming "fast32" would be 64 bits on 64 bit platforms). And more
explicit,
> too.

Interestingly, current C compilers for AMD and Intel's 64 bit CPUs still put int's at 32 bits. I think I was proven right <g>.


April 02, 2005
Walter wrote:

> I agree. The 'fuzzy' nature of C's int size has caused endless grief,
> porting bugs, and misguided coding styles over the last 20 years.
> Portability is significantly enhanced by giving a reliable, predictable size
> to it.

Which is why it's so strange to have a "real" FP type built into D,
one that does not have a fixed size but is instead highly CPU dependent ?

My suggestion was to name the 80-bit (or more: 128) type "extended",
and to make "real" into an alias - just like for instance size_t is ?

As a side effect, it would also fix the "ireal" and "creal" types...
(as it would mean that the "extended" types only exist on X86 CPUs)

But currently, a "real" *could* be exactly the same as a "double"...
(i.e. how it works in DMD 0.11x and GDC 0.10, using C's long double)

1)

Revert the type names back to the old ones:
>     * real -> extended
>     * ireal -> iextended
>     * creal -> cextended

2)

// GCC: LONG_DOUBLE_TYPE_SIZE
version (GNU_BitsPerReal80) // DMD: "all"
{
    alias  extended real;
    alias iextended imaginary;
    alias cextended complex;
}
else version (GNU_BitsPerReal64) // DMD: "none"
{
    alias  double real;
    alias idouble imaginary;
    alias cdouble complex;
}
else static assert(0);

3) for reference, these already exist:

// GCC: POINTER_SIZE
version (GNU_BitsPerPointer64) // DMD: "AMD64"
{
    alias ulong size_t;
    alias long ptrdiff_t;
}
else version (GNU_BitsPerPointer32) // DMD: "X86"
{
    alias uint size_t;
    alias int ptrdiff_t;
}
else static assert(0);


And we would still know how to say "80-bit extended precision" ?
(and avoid the "imaginary real" and "complex real" embarrassment)

--anders
April 02, 2005
Bob W wrote:

> The 64 bit CPUs are coming and they'll change our way
> of thinking just the way the 32 bit engines have done.

The 64 bit CPUs are already here, and supported by Linux...
Mainstream OS support, i.e. Win XP and Mac OS X, is now GM:

http://www.theinquirer.net/?article=22246
http://www.appleinsider.com/article.php?id=976

> Internal int format 32 bits? Suffixes for 64 bit int's?

I think the preferred int format is still 32 bits, even if the
CPU can now handle 64 bit ints as well (but I only know PPC64)

However, all pointers and indexes will *need* to be 64-bit...
(means use "size_t" instead of int, and not cast void[]->long)

> I like the slogan "D fully supports 80 bit reals", but
> a marketing guy would probably suggest to change this to
> "D fully supports 64 bit CPUs".

I think the D spec and compilers are more or less 64-bit now ?
Phobos, on the other hand, still has a *lot* of 32/64 bugs...

See the D.gnu newsgroup for a listing of some of them - in GDC ?
Finally, it's perfectly fine to run a 32-bit OS on a 64-bit CPU.

I know I do. ;-)
--anders
April 02, 2005
In article <d2mn51$nju$1@digitaldaemon.com>, Walter says...
>
>I agree. The 'fuzzy' nature of C's int size has caused endless grief, porting bugs, and misguided coding styles over the last 20 years. Portability is significantly enhanced by giving a reliable, predictable size to it.

I agree, though it's worth noting that a few architectures that C has been ported to don't have 8 bit bytes.  On such machines, I imagine it may be difficult to conform to standard size requirements.

>Interestingly, current C compilers for AMD and Intel's 64 bit CPUs still put int's at 32 bits. I think I was proven right <g>.

At the very least, I think it's likely that future architectures will be more consistent and the odd byte size problem will go away, if it ever really existed in the first place.  I find it very useful to have standard size requirements for primitives, as it reduces a degree of unpredictability (or the need for preprocessor code) in cross-platform code.


Sean


April 02, 2005
"Sean Kelly" <sean@f4.ca> wrote in message news:d2n5ok$14si$1@digitaldaemon.com...
> In article <d2mn51$nju$1@digitaldaemon.com>, Walter says...
> >
> > I agree. The 'fuzzy' nature of C's int size has caused endless grief,
> > porting bugs, and misguided coding styles over the last 20 years.
> > Portability is significantly enhanced by giving a reliable, predictable
> > size to it.
>
> I agree, though it's worth noting that a few architectures that C has been
> ported to don't have 8 bit bytes. On such machines, I imagine it may be
> difficult to conform to standard size requirements.

I've worked on such a machine, the PDP-10, with 36 bit sized 'ints'. They were beautiful machines for their day, the 1970's, but they went obsolete 25+ years ago. But I don't think anyone is going to make an odd bit size machine anymore - just try running Java on it.


April 03, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d2m8c1$9jj$1@digitaldaemon.com...
>> - In my opinion there is no single reason why literals
>>  w/o suffix have to be treated as doubles (except maybe
>>  for C legacy).
>> - This is why I'd like to see default (unsuffixed) literals
>>  to be parsed and evaluated in "the highest precision
>>  available" (whatever this will be in future, real for now).
>
> and human legacy. Personally I'm used to .3 being a double. If I had three overloaded functions func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised if it chose the real one, just because I'm used to literals being doubles.

Would you even notice in most cases? The FPU
will happily accept your real and do with it
whatever it is instructed to do.

On the other hand, if your .3 defaults to a
double, a rounding error should not surprise
you.

If I were convinced that overloading is found
more often than literals in mainstream
(= moderately sophisticated) programs, then
I'd give it more thought.



> But that's my only complaint about your proposal. Since D doesn't have to worry about legacy code we can make .3 parse as whatever we want technically.

Exactly. I'd also be concerned about how to
explain the following to someone interested
in D, supposedly a much more modern language
than C:

The compiler offers an 80 bit type,
the FPU calculates only in the 80 bit format,
but default literals are, for some illogical
reason, parsed to double precision values.

That would not really impress me.



>
>> The 64 bit CPUs are coming and they'll change our way
>> of thinking just the way the 32 bit engines have done.
>> Internal int format 32 bits? Suffixes for 64 bit int's?
>> For now it is maybe still a yes, in the not so distant
>> future maybe not. I just hope D can cope and will still
>> be "young of age" when this happens.
>
> I'm sure people would get thrown for a loop if given a choice between
> func(int) and func(long) the code func(1) called func(long). Even on a 64
> bit platform. If one really didn't care which was chosen then
>  import std.stdint;
>  ...
>  func(cast(int_fast32_t)1);
> would be a platform-independent way of choosing the "natural" size for 1
> (assuming "fast32" would be 64 bits on 64 bit platforms). And more
> explicit, too.
>

Not too long from now we'll be averaging 16GB of
main memory, 32 bit computers will be gone, and I
bet the average programmer will not bother using
anything other than 64 bits for his integer of
choice.

I doubt that there are many people left who are
still trying to use 16 bit variables for integer
calculations, even if they'd fit their requirements.
The same thing will happen to 32 bit formats in
PC-like equipment, I'm sure. (I am not talking
about UTF-32 formats here.)

The main reason is that for the first time
ever the integer range will be big enough for
almost anything, no overflow at 128 nor 32768
nor 2+ billion. What a relief!



April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2mt6p$sq6$1@digitaldaemon.com...
> Walter wrote:
>
>> I agree. The 'fuzzy' nature of C's int size has caused endless grief,
>> porting bugs, and misguided coding styles over the last 20 years.
>> Portability is significantly enhanced by giving a reliable, predictable
>> size
>> to it.
>
> Which is why it's so strange to have a "real" FP type built into D, one that does not have a fixed size but is instead highly CPU dependent ?

Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!).

Of course, if you actually have to worry about a few bits of precision, my own personal philosophy is to use GMP and double the precision until roundoff is no longer even close to a problem. For those who haven't been following at home, I'll plug my D wrapper for GMP (the GNU multi-precision library): http://home.comcast.net/~benhinkle/gmp-d/


April 03, 2005
Ben Hinkle wrote:

>>Which is why it's so strange to have a "real" FP type built into D,
>>one that does not have a fixed size but is instead highly CPU dependent ?
> 
> Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!).

You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be
64-bit on some platforms and 80-bit on others ? And rename it...

(again, "real" is not the name problem here - "ireal" and "creal" are)

I still think the main reason why Java does not have 80-bit floating point is that the SPARC chip doesn't have it, so Sun didn't bother ? :-)
And the PowerPC 128-bit "long double" is not fully IEEE-compliant...

(and for portability, it would be nice if D's extended.sizeof was 16 ?)

--anders

PS. GCC says:

> -m96bit-long-double, -m128bit-long-double
> 
> These switches control the size of long double type. The i386
> application binary interface specifies the size to be 96 bits, so
> -m96bit-long-double is the default in 32 bit mode.
> 
> Modern architectures (Pentium and newer) would prefer long double to be
> aligned to an 8 or 16 byte boundary. In arrays or structures conforming
> to the ABI, this would not be possible. So specifying a
> -m128bit-long-double will align long double to a 16 byte boundary by
> padding the long double with an additional 32 bit zero.
> 
> In the x86-64 compiler, -m128bit-long-double is the default choice as
> its ABI specifies that long double is to be aligned on 16 byte boundary.
> 
> Notice that neither of these options enable any extra precision over the
> x87 standard of 80 bits for a long double.

http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/i386-and-x86_002d64-Options.html

Note that D uses an "80bit-long-double" by default (i.e. REALSIZE is 10)
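
A quick way to check what a given compiler actually does (a sketch; on
DMD/x86 this should print 8 and 10, while GDC follows the C long double
ABI and may report 12 or 16 for real.sizeof, per the GCC options above):

    import std.stdio;

    void main()
    {
        writefln("double.sizeof = %d", double.sizeof);
        writefln("real.sizeof   = %d", real.sizeof);
    }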