April 03, 2005
Bob W wrote:
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message 
>>But that's my only complaint about your proposal. Since D doesn't have
>>to worry about legacy code we can make .3 parse as whatever we want technically.
> 
> Exactly. I'd also be concerned how to explain
> to someone interested in D, supposedly a
> much more modern language than C,
> the following:
> 
> The compiler offers an 80 bit type,
> the FPU calculates only in 80 bit format,
> but default literals are parsed for some
> illogical reason to double precision values.
> 
> That would not really impress me.

Well put. It's plain embarrassing. Makes D look home-made.

Ever since I started using D, it never crossed my mind to suspect that floating point literals would be anything other than 80 bit.

Luckily, most of my programs use integers, but had I unexpectedly stumbled upon this... It's like you're on a sunny picnic with your family, and around comes this 4-year-old. Suddenly he's got a semiautomatic shotgun, and he empties it in your stomach. You'd die with a face expressing utter disbelief.

>>>The 64 bit CPUs are coming and they'll change our way
>>>of thinking just the way the 32 bit engines have done.
>>>Internal int format 32 bits? Suffixes for 64 bit int's?
>>>For now it is maybe still a yes, in the not so distant
>>>future maybe not. I just hope D can cope and will still
>>>be "young of age" when this happens.

>>I'm sure people would get thrown for a loop if, given a choice between func(int) and func(long), the code func(1) called func(long). Even on a 64 bit platform. If one really didn't care which was chosen then
>> import std.stdint;
>> ...
>> func(cast(int_fast32_t)1);

I'd've said "if one really _did_ care". :-)

>>would be a platform-independent way of choosing the "natural" size for 1 (assuming "fast32" would be 64 bits on 64 bit platforms). And more explicit, too.

Actually, I wish there were a pragma to force the default -- one that would force all the internal precision too. I hate it when somebody else presumes to know better what I want to calculate with. And I hate not being able to trust what is secretly cast to what, and when.

What if some day I'm using DMD on a Sparc and have to read through the asm listings of my own binaries, just because my client needs to know for sure?
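
Spelled out, Ben's example above would look something like this (only a sketch, the overloads are made up; std.stdint is the module he names):

    // A sketch of the overload situation Ben describes.
    import std.stdint;               // declares int_fast32_t and friends

    void func(int x)  { }            // 32 bit version
    void func(long x) { }            // 64 bit version

    void main()
    {
        func(1);                     // the literal 1 is an int, so func(int) wins
        func(cast(int_fast32_t) 1);  // picks whatever width is "natural" here
    }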

> Not too long from now we'll be averaging 16GB of
> main memory, 32 bit computers will be gone and I
> bet the average programmer will not be bothered
> using anything else than 64 bits for his integer
> of choice.

This'll happen too fast. When M$ gets into 64 bits on the desktop, no self-respecting suit, office clerk, or other jerk wants to be even seen with a 32 bit computer. Need 'em or not.

> I doubt that there are many people left who are
> still trying to use 16 bit variables for integer
> calculations, even if they'd fit their requirements.
> The same thing will happen to 32 bit formats in
> PC-like equipment, I'm sure. (I am not talking
> about UTF-32 formats here.)

At first it took some getting used to when writing in D:

for (int i=0; i<8; i++) {.....}

knowing I'm "wasting" bits, but now it seems natural. Things change.

> The main reason is that for the first time
> ever the integer range will be big enough for
> almost anything, no overflow at 128 nor 32768
> nor 2+ billion. What a relief!

That will change some things for good. For example, it will then be perfectly reasonable to do all money calculations with int64: there's enough room for the World Budget, counted in cents. That'll make it so much easier to write serious and fast bean-counting software.
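
Something like this, counted in cents (only a sketch, the names are invented):

    // Keep every amount as a 64 bit count of cents: exact, with huge headroom.
    alias long Cents;

    Cents fromEuros(long euros, long cents)
    {
        return euros * 100 + cents;
    }

    void main()
    {
        Cents price = fromEuros(19, 99);    // 19.99
        Cents total = price * 1_000_000;    // a million of them, still exact
        Cents vat   = total * 22 / 100;     // plain integer arithmetic
        // long.max is about 9.2e18 cents -- room enough for the World Budget
    }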
April 03, 2005
Georg Wrede wrote:

>> The compiler offers an 80 bit type,

(some compilers)

>> the FPU calculates only in 80 bit format,

(some FPUs)

>> but default literals are parsed for some
>> illogical reason to double precision values.

The default precision is double, f is for single and L is for extended.
I'm not sure it makes sense to have the default be a non-portable type ?
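
In code (just to spell out the suffixes; the variable names are mine):

    float  f = 1.2f;   // 'f' suffix: single precision
    double d = 1.2;    // no suffix: double, the default
    real   r = 1.2L;   // 'L' suffix: extended precision (80 bit on X86)
    real   s = 1.2;    // compiles, but the literal is parsed as a double
                       // first -- which is exactly the point being debated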

>> That would not really impress me.
> 
> Well put. It's plain embarrassing. Makes D look home-made.

I don't see how picking a certain default (which happens to be
the same default as in most other C-like languages) is "home-made" ?

> This'll happen too fast. When M$ gets into 64 bits on the desktop, no self-respecting suit, office clerk, or other jerk wants to be even seen with a 32 bit computer. Need 'em or not.

Currently there is a real shortage of 64-bit Windows drivers, though...
However, nobody wants to be seen with a *16-bit* computer for sure :-)

--anders
April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2o7iv$21pg$1@digitaldaemon.com...
> You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be 64-bit on some platforms and 80-bit on some platforms ? And rename it...

Unfortunately, that just isn't practical. In order to implement D efficiently, the floating point size must map onto what the native hardware supports. We can get away with specifying the size of ints, longs, floats, and doubles, but not of the extended floating point type.


April 03, 2005
Walter wrote:

>>You misunderstood. I think that having an 80-bit floating point type is
>>a *good* thing. I just think it should be *fixed* at 80-bit, and not be
>>64-bit on some platforms and 80-bit on some platforms ? And rename it...
> 
> Unfortunately, that just isn't practical. In order to implement D
> efficiently, the floating point size must map onto what the native hardware
> supports. We can get away with specifying the size of ints, longs, floats,
> and doubles, but not of the extended floating point type.

I understand this; my "solution" there was to use an alias instead...

e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC
     (in reality, it does this already in GDC. Just implicitly)
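
Roughly like this (only a sketch -- the version identifiers and the alias name are just for illustration):

    // One name for "the largest hardware float", fixed per platform,
    // instead of a single type that silently changes size.
    version (X86)
    {
        alias real   hwreal;    // 80 bit extended on X86
    }
    else version (PPC)
    {
        alias double hwreal;    // no 80 bit format on PowerPC
    }
    else
    {
        alias double hwreal;    // conservative fallback
    }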

--anders
April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2oas2$245e$1@digitaldaemon.com...
> Georg Wrede wrote:
>
>>> The compiler offers an 80 bit type,
>
> (some compilers)
>
>>> the FPU calculates only in 80 bit format,
>
> (some FPUs)
>

I would never advocate real for internal
calculation on compilers or target systems
without an 80 bit FPU (although it would not
harm to use 80 bit emulation during compile
time, except for a slight compiler performance
degradation). I am just certain that you need
to use the highest precision available on the
(compiler) system to represent the maximum
number precision correctly.

Remember: you will not introduce errors
in your double values by using real for
evaluation, but you'll definitely have
inaccuracies in most fractional real values
if they are derived from a double.

And yes, the compiler would be able to
evaluate the expression
    double d=1e999/1e988;
correctly, because the result is a valid
double value. Currently it doesn't
unless you override your defaults.
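
By "override your defaults" I mean adding the L suffix by hand, e.g. (just a sketch):

    // With the L suffix both literals are parsed as reals, the division is
    // done in extended precision, and the in-range result fits a double.
    double d = 1e999L / 1e988L;    // 1e11, a perfectly valid double value

    // Without the suffixes both literals are typed as doubles, and 1e999
    // is already out of double range -- which is the problem stated above.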



>>> but default literals are parsed for some
>>> illogical reason to double precision values.
>
> The default precision is double, f is for single and L is for extended. I'm not sure it makes sense to have the default be a non-portable type ?

Again, there are no portability issues,
unless you want to introduce inaccuracies
as a compiler feature. A fractional value
like 1.2 rounds to the same double (and
float) value even if it is derived from a
real. But you will be way off in precision,
compared to 1.2L, if you intentionally or
unintentionally create a real from a double.

"real r=1.2" simply does not work properly
and should be flagged by the compiler with
at least a warning, if you guys for some
reason have to mimic C in this respect.
(Any Delphi programmers out there to
comment?)
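
A quick way to see the difference I mean (just a sketch; the exact digits depend on the platform):

    import std.stdio;

    void main()
    {
        real viaDouble = 1.2;    // literal parsed as a double, then widened
        real direct    = 1.2L;   // literal kept in extended precision

        // If the behaviour I describe above holds, these are not identical:
        writefln("equal?      %s", viaDouble == direct);
        writefln("difference: %s", direct - viaDouble);
    }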



>
>>> That would not really impress me.
>>
>> Well put. It's plain embarrassing. Makes D look home-made.
>
> I don't see how picking a certain default (which happens to be
> the same default as in most other C-like languages) is "home-made" ?

C looks "home-made" at times, but you'd
have to expect that from a language which
is several decades old. Why would D want
to start out like that in the first place?



>
>> This'll happen too fast. When M$ gets into 64 bits on the desktop, no self-respecting suit, office clerk, or other jerk wants to be even seen with a 32 bit computer. Need 'em or not.
>
> Currently there is a real shortage of 64-bit Windows drivers, though... However, nobody wants to be seen with a *16-bit* computer for sure :-)
>
> --anders

That's what I told my parents back in my
student days, but they never bought me
that Ferrari ....



April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2o7iv$21pg$1@digitaldaemon.com...
> Ben Hinkle wrote:
>
>>>Which is why it's so strange to have a "real" FP type built-in to D, that does not have a fixed size but is instead highly CPU dependent ?
>>
>> Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!).
>
> You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be 64-bit on some platforms and 80-bit on some platforms ?

Let me rephrase my point. Fixing the precision on a platform that doesn't support that precision is a bad idea. What if the hardware supports 96 bit extended precision and D fixes the value at 80? We should just do what the hardware will support since it varies so much between platforms.

>And rename it...
> (again, "real" is not the name problem here - "ireal" and "creal" are)

That's another thread :-)

> I still think the main reason why Java does not have 80-bit floating point
> is that the SPARC chip doesn't have it, so Sun didn't bother ? :-)
> And the PowerPC 128-bit "long double" is not fully IEEE-compliant...

could be.

> (and for portability, it would be nice if D's extended.sizeof was 16 ?)
>
> --anders
>
> PS. GCC says:
>
>> -m96bit-long-double, -m128bit-long-double
>>
>> These switches control the size of long double type. The i386 application binary interface specifies the size to be 96 bits, so -m96bit-long-double is the default in 32 bit mode.
>>
>> Modern architectures (Pentium and newer) would prefer long double to be aligned to an 8 or 16 byte boundary. In arrays or structures conforming to the ABI, this would not be possible. So specifying a -m128bit-long-double will align long double to a 16 byte boundary by padding the long double with an additional 32 bit zero.
>>
>> In the x86-64 compiler, -m128bit-long-double is the default choice as its ABI specifies that long double is to be aligned on 16 byte boundary.
>>
>> Notice that neither of these options enable any extra precision over the x87 standard of 80 bits for a long double.
>
> http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/i386-and-x86_002d64-Options.html
>
> Note that D uses a "80bit-long-double", by default (i.e. REALSIZE is 10)

This section looks like a quote of a previous post since it is indented using > but I think you are quoting another source. I've noticed you use > to quote replies, which is the character I use, too. Please use > only for quoting replies since it is misleading to indent other content using >. It looks like you are putting words into other people's posts.


April 03, 2005
"Bob W" <nospam@aol.com> wrote in message news:d2nn80$1jsd$1@digitaldaemon.com...
>
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d2m8c1$9jj$1@digitaldaemon.com...
>>> - In my opinion there is no single reason why literals
>>>  w/o suffix have to be treated as doubles (except maybe
>>>  for C legacy).
>>> - This is why I'd like to see default (unsuffixed) literals
>>>  to be parsed and evaluated in "the highest precision
>>>  available" (whatever this will be in future, real for now).
>>
>> and human legacy. Personally I'm used to .3 being a double. If I had three overloaded function func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised it chose the real one just because I'm used to literals being doubles.
>
> Would you even notice in most cases? The FPU
> will happily accept your real and do with it
> whatever it is instructed to do.
>
> On the other hand, if your .3 defaults to a
> double, a rounding error should not surprise
> you.
>
> If I was convinced that overloading is more
> often found than literals in mainstream
> (=moderately sophisticated) programs, then
> I'd give it more of a thought.

Double is the standard in many languages. Libraries expect doubles. People expect doubles. No-one was ever fired for choosing Double (to mangle an old IBM saying).

>> But that's my only complaint about your proposal. Since D doesn't have to worry about legacy code we can make .3 parse as whatever we want technically.
>
> Exactly. I'd also be concerned how to explain
> to someone interested in D, supposedly a
> much more modern language than C,
> the following:
>
> The compiler offers an 80 bit type,
> the FPU calculates only in 80 bit format,
> but default literals are parsed for some
> illogical reason to double precision values.
>
> That would not really impress me.

heh - I sense a slight bias creeping in "for some illogical reason". D's model is like C#'s model and what Java's model has changed to be (except that C# and Java don't have the extended precision type).

>>> The 64 bit CPUs are coming and they'll change our way
>>> of thinking just the way the 32 bit engines have done.
>>> Internal int format 32 bits? Suffixes for 64 bit int's?
>>> For now it is maybe still a yes, in the not so distant
>>> future maybe not. I just hope D can cope and will still
>>> be "young of age" when this happens.
>>
>> I'm sure people would get thrown for a loop if, given a choice between
>> func(int) and func(long), the code func(1) called func(long). Even on a 64
>> bit platform. If one really didn't care which was chosen then
>>  import std.stdint;
>>  ...
>>  func(cast(int_fast32_t)1);
>> would be a platform-independent way of choosing the "natural" size for 1
>> (assuming "fast32" would be 64 bits on 64 bit platforms). And more
>> explicit, too.
>>
>
> Not too long from now we'll be averaging 16GB of
> main memory, 32 bit computers will be gone and I
> bet the average programmer will not be bothered
> using anything else than 64 bits for his integer
> of choice.

The Itanium was before its time, I guess.

> I doubt that there are many people left who are
> still trying to use 16 bit variables for integer
> calculations, even if they'd fit their requirements.
> The same thing will happen to 32 bit formats in
> PC-like equipment, I'm sure. (I am not talking
> about UTF-32 formats here.)

Could be. I can't see the future that clearly.

> The main reason is that for the first time
> ever the integer range will be big enough for
> almost anything, no overflow at 128 nor 32768
> nor 2+ billion. What a relief!


April 03, 2005
Ben Hinkle wrote:

> Let me rephrase my point. Fixing the precision on a platform that doesn't support that precision is a bad idea. What if the hardware supports 96 bit extended precision and D fixes the value at 80? We should just do what the hardware will support since it varies so much between platforms.

All floating point precisions in D are minimums, so 96 bit would be fine,
as would 128 bit. Again, making them all use 128 bits for storage
would simplify things - even if only 80 are used ?

But allowing 64 bit too for extended, like now, is somewhat confusing...
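
A quick way to check what one's own compiler actually uses (the sizes quoted in this thread are 10 for DMD, and 12 or 16 with GCC's switches):

    import std.stdio;

    void main()
    {
        // Storage size, in bytes, of the extended type on this compiler/platform.
        writefln("real.sizeof   = %s", real.sizeof);
        writefln("double.sizeof = %s", double.sizeof);
    }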

> This section looks like a quote of a previous post since it is indented using > but I think you are quoting another source. I've noticed you use > to quote replies, which is the character I use, too. Please use > only for quoting replies since it is misleading to indent other content using >. It looks like you are putting words into other people's posts. 

I use the '>' character (actually I just use Quote / Paste as Quotation), but I also quote several sources - with attributions:

A wrote:
> foo

B wrote:
> bar

C wrote:
> baz

Sorry if you find this confusing, and I'll try to make it clearer...
(if there's no such attribution, it's quoted from the previous post)

--anders

April 03, 2005
> I use the '>' character (actually I just use Quote / Paste as Quotation), but I also quote several sources - with attributions:
>
> A wrote:
>> foo
>
> B wrote:
>> bar
>
> C wrote:
>> baz
>
> Sorry if you find this confusing, and I'll try to make it clearer... (if there's no such attribution, it's quoted from the previous post)

thanks - what newsreader do you use by the way?


April 03, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d2ookl$2eqm$1@digitaldaemon.com...
>
------------------------------
>>
>> If I was convinced that overloading is more
>> often found than literals in mainstream
>> (=moderately sophisticated) programs, then
>> I'd give it more of a thought.
>
> Double is the standard in many languages. Libraries expect doubles. People expect doubles. No-one was ever fired for choosing Double (to mangle an old IBM saying).

(I quite like that saying.)

If libraries want doubles, no problem: they'll
get doubles freshly produced by the FPU from
an internal real. The libraries won't even
know that a real was involved and will be
happy. People don't expect doubles (C
programmers do); they expect results that are
as accurate as possible, without suffixes,
headaches, etc.
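
For instance (only a sketch, the "library" routine is made up):

    // A made-up library routine that, like most libraries, expects doubles.
    double libraryArea(double radius)
    {
        return 3.141592653589793 * radius * radius;
    }

    void main()
    {
        real r = 1.2L;                // the caller works in extended precision
        double a = libraryArea(r);    // the real narrows to a double right here,
                                      // so the library never knows it existed
    }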



>> The compiler offers an 80 bit type,
>> the FPU calculates only in 80 bit format,
>> but default literals are parsed for some
>> illogical reason to double precision values.
>>
>> That would not really impress me.
>
> heh - I sense a slight bias creeping in "for some illogical reason". D's model is like C#'s model and what Java's model has changed to be (except that C# and Java don't have the extended precision type).

I hereby officially withdraw the "illogical
reason" statement. But let's theoretically
introduce a new extended precision type into
either Java or C#.

Do you really think that they would dare to
require us to use a suffix for a simple
assignment like "hyperprecision x=1.2" ?
I bet not.



>>
>> Not too long from now we'll be averaging 16GB of
>> main memory, 32 bit computers will be gone and I
>> bet the average programmer will not be bothered
>> using anything else than 64 bits for his integer
>> of choice.
>
> The Itanium was before it's time, I guess.

The Itanium never existed. Just ask any
mechanic, housewife, lawyer or his secretary.
It's either "Pentium inside" or some "..on"
from the other company. The other company
made a 64-bit chip to make Chipzilla suffer,
so Chipzilla will have "64 bit inside" for
the rest of us.