April 05, 2005
Walter wrote:
> "Anders F Björklund" <afb@algonet.se> wrote in message
> news:d2og5l$27nh$3@digitaldaemon.com...
> 
>>Walter wrote:
>>
>>
>>>I haven't done a comprehensive survey of computer languages, but as far
>>>as I can tell D stands pretty much alone in its support for 80 bits,
>>>along with a handful of C/C++ compilers (including DMC).
>>
>>The thing is that the D "real" type does *not* guarantee 80 bits ?
>>It doesn't even say the minimum size, so one can only assume 64...
> 
> 
> Yes, it's 64. Guaranteeing 80 bits would require writing an 80 bit software
> emulator. I've used such emulators before, and they are really, really slow.
> I don't think it's practical for D floating point to be 100x slower on some
> machines.
> ...
> 
Would implementing fixed-point arithmetic improve that?  Even with a 128-bit integer as the underlying type, I think it would have operational limitations, but it should be a lot faster than "100 times as slow as hardware".  (OTOH, there are lots of reasons why it isn't a normal feature of languages.  Apple on the 68000 series is the only computer I know of using it, and then only for specialized applications.)
April 05, 2005
Ben Hinkle wrote:
> "Anders F Björklund" <afb@algonet.se> wrote in message news:d2qq5u$1aau$1@digitaldaemon.com...
> 
>>Walter wrote:
>>
>>
>>>>If so, just tell me it's better to have a flexible width language type,
>>>>than to have some types be unavailable on certain FPU computer hardware?
>>>
>>>Yes, I believe that is better. Every once in a while, an app *does* care,
>>>but they're screwed anyway if the hardware won't support it.
>>
>>I just fail to see how real -> double/extended, is any different from the int -> short/long that C has gotten so much beating for already ?
>>
>>The suggestion was to have fixed precision types:
>>- float => IEEE 754 Single precision (32-bit)
>>- double => IEEE 754 Double precision (64-bit)
>>- extended => IEEE 754 Double Extended precision (80-bit)
>>- quadruple => "IEEE 754" Quadruple precision (128-bit)
>>
>>And then have "real" be an alias to the largest hardware-supported type.
>>It wouldn't break code more than if it was a variadic size type format ?
> 
> 
> What happens when someone declares a variable as quadruple on a platform without hardware support? Does D plug in a software quadruple implementation? That isn't the right thing to do. That's been my whole point of bringing up Java's experience. They tried to foist too much rigor on their floating point model in the name of portability and had to redo it. 
> 
> 
Perhaps Ada has the right idea here.  Have a system default that depends on the available hardware, but also allow the user to define what size/precision is needed in any particular case.  It may slow things down a lot if you demand 17 places of accuracy, but if you really need exactly 17, you should be able to specify it.  (OTOH, Ada had the govt. paying for its development, and it still ended up as a language people didn't want to use.)
April 05, 2005
"Charles Hixson" <charleshixsn@earthlink.net> wrote in message news:d2unfm$2n6s$1@digitaldaemon.com...
> Would implementing fixed-point arithmetic improve that?  Even with a 128-bit integer as the underlying type, I think it would have operational limitations, but it should be a lot faster than "100 times as slow as hardware".  (OTOH, there are lots of reasons why it isn't a normal feature of languages.  Apple on the 68000 series is the only computer I know of using it, and then only for specialized applications.)

If using a 128 bit fixed point would work, then one can use integer arithmetic on it. But that isn't floating point, which is a fundamentally different animal.

