April 03, 2005
Ben Hinkle wrote:

> thanks - what newsreader do you use by the way? 

I am using Thunderbird:
  User-Agent: Mozilla Thunderbird 1.0.2 (Macintosh/20050317)

And I see you use LookOut:
  X-Newsreader: Microsoft Outlook Express 6.00.2900.2180

I can recommend both Thunderbird and Firefox, from Mozilla.

http://www.mozilla.org/products/

--anders
April 03, 2005
Anders F Björklund wrote:
> Georg Wrede wrote:
> The default precision is double, f is for single and l is for extended.
> I'm not sure it makes sense to have the default be a non-portable type ?

My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default.

That'd make me euphorious. (Pardon the pun.)

So, I'd like a "default floating type", that is automatically aliased to be the smartest choice on the current platform. AND that all internal confusion would be removed.

As a user, I want to write 2.3 and _know_ that the system understands that it means "whatever precision we happen to use on this particular platform for floating operations anyway". If I had another opinion of my own, I'd damn well tell the compiler.

If I need to work with 64 bit floats on an 80 bit machine, then I'll specify that (pragma, decorated literal, whatever), and trust that "everything" then happens with 64 bits.
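
To sketch it (hypothetical names; the version identifiers and the alias are just illustration, not a proposal for actual syntax):

    // pick the "natural" FPU precision at compile time
    version (X86)
        alias real fpdefault;   // 80-bit extended on Intel FPUs
    else
        alias double fpdefault; // 64-bit on, say, Sparc or PPC

    fpdefault x = 2.3; // and 2.3 would carry full fpdefault precision

Of course, with today's compiler the 2.3 above is still parsed as a double first - which is exactly the confusion I want removed.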

There's too much C legacy clouding this room.  ;-)

I mean, when's the last time anyone did half their float math with 64 bits and half with 80, in the same program?
April 03, 2005
Georg Wrede wrote:

> My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default.

You seem to be ignoring 32-bit floating point types ?

That would be a mistake; they are very useful for
sound and image processing, for instance ? Using
64 or more bits per channel would be overkill...

Also, with vector units one can process like 4 floats
at a time. That is not too bad, either... (speedwise)
Unfortunately, D does not support SSE/AltiVec (yet ?)

Or maybe it's just a little side-effect of your dislike
of having to type an extra 'L' to get "extended" constants ?
(as has been pointed out, 1.0 is universal "C" code for "double")

> So, I'd like a "default floating type", that is automatically aliased to be the smartest choice on the current platform. AND that all internal confusion would be removed.

From what I have seen, D is not about "automatically
choosing the smartest type". It's about letting the
programmer choose which type is the smartest to use ?

Even if that means that one has to pick from like
5 integer types, 4 floating point types, 3 string
types and even 3 boolean types... (choices, choices)

And sometimes you have to cast those literals. Like for
instance: -1U, or cast(wchar[]) "string". Kinda annoying,
but not very complex - and avoids complicating the compiler ?
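
For instance (just a quick sketch of what I mean):

  void main()
  {
      uint    u = -1U;                     // suffix picks the unsigned type
      long    l = 1L;                      // 'L' makes the literal a long
      wchar[] w = cast(wchar[]) "string";  // choose among the 3 string types
  }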

D can be a pretty darn low-level language at times, IMHO...

--anders
April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2obfj$24ee$1@digitaldaemon.com...
> Walter wrote:
>
> >>You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be 64-bit on some platforms and 80-bit on some platforms ? And rename it...
> >
> > Unfortunately, that just isn't practical. In order to implement D
> > efficiently, the floating point size must map onto what the native
> > hardware supports. We can get away with specifying the size of ints,
> > longs, floats, and doubles, but not of the extended floating point type.
>
> I understand this, my "solution" there was to use an alias instead...
>
> e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC
>       (in reality, it does this already in GDC. Just implicitly)

Why does GDC do this, since gcc on linux supports 80 bit long doubles?


April 03, 2005
Walter wrote:

>>I understand this, my "solution" there was to use an alias instead...
>>
>>e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC
>>      (in reality, it does this already in GDC. Just implicitly)
> 
> Why does GDC do this, since gcc on linux supports 80 bit long doubles?

Just me being vague again... Make that "GDC on PowerPC"
(and probably other CPU families too, like SPARC or so)

GDC just falls back on what GCC reports for long double
support, so you can compile with AIX long-double-128 if
you like (unfortunately those are pretty darn buggy in
GCC, and not IEEE-754 compliant even in IBM AIX either)

But for GDC with X87 hardware, everything should be normal.
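
(If in doubt, the properties of "real" tell you what you got,
e.g. with this little Phobos program - the numbers in the
comments are what I'd expect, not verified on every platform:

  import std.stdio;

  void main()
  {
      writefln("real.sizeof = ", real.sizeof); // 10/12 on X87, 8 on PPC
      writefln("real.dig    = ", real.dig);    // 18 on X87, 15 on PPC
  }
)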

--anders
April 03, 2005
"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42500AC4.8000703@nospam.org...
> Anders F Björklund wrote:
>> Georg Wrede wrote:
>> The default precision is double, f is for single and l is for extended.
>> I'm not sure it makes sense to have the default be a non-portable type ?
>
> My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default.
>
> That'd make me euphorious. (Pardon the pun.)

I bet at least 90% of D users are with you.


>
> So, I'd like a "default floating type", that is automatically aliased to be the smartest choice on the current platform. AND that all internal confusion would be removed.
>
> As a user, I want to write 2.3 and _know_ that the system understands that it means "whatever precision we happen to use on this particular platform for floating operations anyway". If I had another opinion of my own, I'd damn well tell the compiler.

Fair enough. Your 2.3 will be as close as it can get
for doubles and floats alike. But reals will have to
be treated differently unless you are prepared to
accept an 11-bit precision deficiency. This compiler
behaviour is unnecessary and nobody should blame you
for complaining about it.



>
> If I need to work with 64 bit floats on an 80 bit machine, then I'll specify that (pragma, decorated literal, whatever), and trust that "everything" then happens with 64 bits.
>
> There's too much C legacy clouding this room.  ;-)

I am too frightened to comment.


>
> I mean, when's the last time anyone did half their float math with 64 bits and half with 80, in the same program?


I am frequently mixing floats and doubles. If literals
are used to assign values to them, they are all doubles
by default; I'd never even think of suffixing any of
the values meant for float variables, because it is
simply unnecessary. One can always assign a higher
precision FP value to a lower precision one without
trouble. If the literals were parsed as reals instead
of doubles, my float variables would still be the same
and no legacy compatibility paranoia whatsoever would
come true.

It just does not work in the other direction, because
the D compiler on 80 bit FPU systems is currently
instructed to set to zero the 11 extra precision bits
which are required to form a proper real value - and
it does this without a warning.

So you can handle doubles and floats the usual way,
but do not even think about using a real if you
are not absolutely sure that you'll always remember
that dreaded "L" suffix. Your 2.3 is a double, it
is a float, and it is a crippled real - by design.
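
You can watch the damage happen (a sketch; I am assuming
an 80 bit FPU, e.g. x86, and the current DMD behaviour):

  import std.stdio;

  void main()
  {
      real a = 2.3;   // parsed as double, then widened: 11 bits zeroed
      real b = 2.3L;  // parsed as real: full 64 bit mantissa

      if (a != b)
          writefln("2.3 and 2.3L differ on this machine");
  }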


April 03, 2005
"Anders F Björklund" <afb@algonet.se> wrote in message news:d2pbds$2v38$1@digitaldaemon.com...
> GDC just falls back on what GCC reports for long double
> support, so you can compile with AIX long-double-128 if
> you like (unfortunately those are pretty darn buggy in
> GCC, and not IEEE-754 compliant even in IBM AIX either)

That's what I expected it to do. Thanks for clearing that up.


April 04, 2005
Anders F Björklund wrote:
> Georg Wrede wrote:
> 
>> My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default.
> 
> You seem to be ignoring 32-bit floating point types ?

No, I just wanted to keep focused on the 80 vs 64 issue.

> That would be a mistake, they are very useful for
> sound and image processing, for instance ? Using
> 64 or more bits per channel would be overkill...
> 
> Also, with vector units one can process like 4 floats
> at a time. That is not too bad, either... (speedwise)
> Unfortunately, D does not support SSE/AltiVec (yet ?)
> 
> Or maybe it's just a little side-effect of your dislike
> of having to type an extra 'L' to get "extended" constants ?
> (as been pointed out, 1.0 is universal "C" code for "double")

<Sigh.> Dislike indeed.

And I admit, mostly the L or not makes no difference. So one ends up not using L. And then, the one day it does make a difference, one will look everywhere else - one's own bugs, D bugs, hardware bugs - before noticing it was the missing L _this_ time. And then one gets a hammer and bangs one's head real hard.

>> So, I'd like a "default floating type", that is automatically aliased to be the smartest choice on the current platform. AND that all internal confusion would be removed.
> 
> 
> From what I have seen, D is not about "automatically
> choosing the smartest type". It's about letting the
> programmer choose which type is the smartest to use ?
> 
> Even if that means that one has to pick from like
> 5 integer types, 4 floating point types, 3 string
> types and even 3 boolean types... (choices, choices)
> 
> And sometimes you have to cast those literals. Like for
> instance: -1U, or cast(wchar[]) "string". Kinda annoying,
> but not very complex - and avoids complicating the compiler ?
> 
> D can be a pretty darn low-level language at times, IMHO...

:-)

"A practical language for practical programmers."

Maybe it's just me. When I read the specs, it is absolutely clear, and I agree with it. But when I write literals having a decimal point I somehow can't help "feeling" that they're full 80 bit. Even when I know they're not.

Somehow I don't seem to have this same problem in C.

Maybe I should see a doctor. :-(
April 04, 2005
Bob W wrote:


########################################################

> But let's theoretically introduce a new
> extended precision type to either Java or C#.
> 
> Do you really think that they would dare to require
> us to use a suffix for a simple assignment like
> "hyperprecision x=1.2" ? I bet not.

########################################################


THERE! Wish I'd said that myself! :-)
April 04, 2005
Georg Wrede wrote:

> And I admit, mostly the L or not makes no difference. So one ends up not using L. And then, the one day it does make a difference, one will look everywhere else - one's own bugs, D bugs, hardware bugs - before noticing it was the missing L _this_ time. And then one gets a hammer and bangs one's head real hard.

D lives in a world of two schools. The string literals, for instance,
they are untyped and only spring into existence when you actually do
assign them to anything. But the two numeric types are "different"...

To be compatible with C they default to "int" and "double", and
then you have to either cast them or use the 'L' suffixes to make
them use "long" or "extended" instead. Annoying, but same as before ?


BTW; In Java, you get an error when you do "float f = 1.0;"
     I'm not sure that is any better, but it is more "helpful"...
     Would you prefer it if you had to cast your constants ?

     Maybe one of those new D warnings would be in order here ?
     "warning - implicit conversion of expression 1.0 of
      type double to type extended can cause loss of data"

--anders