April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Anders F Björklund | "Anders F Björklund" <afb@algonet.se> wrote in message news:d2sit7$fac$1@digitaldaemon.com...

> "Suppose it was kept internally with full 80 bit precision,
> participated in constant folding as a full 80 bit type, and was only
> converted to 64 bits when a double literal needed to be actually
> inserted into the .obj file?"

A Sun3 compiler was even more extreme if I remember correctly: it used some extended precision for evaluations at compile time without even offering that format to the programmer. As you have already mentioned - they prefer to give us no more than doubles.

> This would make the floating point work the same as the various
> strings do now, without adding more suffixes or changing defaults.

No objections.

> And that I have no problem with, I was just pre-occupied with all that
> other talk about the non-portable 80-bit float stuff... :-)

I cannot imagine too many people complaining if constant folding produces the intended double precision 2.0 instead of 1.999999....

> Although, one might want to consider keeping L"str" and 1.0f around,
> simply because it is less typing than cast(wchar[]) or cast(float) ?

Of course I would like to keep all of that too. I just don't like 'stupid' compilers, i.e. I don't want to be required to explicitly tell them what to do if my code is sufficient to imply the action required.

> It would also be a nice future addition, for when we have 128-bit
> floating point? Then those literals would still be kept at the max...

Sure it would. But then we might have to decide between radix 2 and radix 10 formats if IEEE gets it into the new 754 revision. That will be a tough choice, especially if the performance of the latter comes close to the binary version. Ten years ago this would have been unthinkable, but now the silicon could be ready for it.
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Anders F Björklund | "Anders F Björklund" <afb@algonet.se> wrote in message news:d2shm6$e0c$1@digitaldaemon.com...

> All I know is a lot of D features is because "C does it" ?

A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Bob W | Bob W wrote:
>>It would also be a nice future addition, for when we have 128-bit
>>floating point? Then those literals would still be kept at the max...
>
> Sure it would. But then we might have to decide
> between radix 2 and radix 10 formats if IEEE gets
> it into the new 754 revision. That will be a tough
> choice, especially if the performance of the latter
> comes close to the binary version. Ten years
> ago this would have been unthinkable, but now
> the silicon could be ready for it.
Call me old-fashioned, but I prefer binary...
Of course, sometimes BCD and Decimal are useful
like when adding up money and things like that.
Or when talking to those puny non-hex humans. :-)
--anders
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Walter | Walter wrote:

> A big problem with subtly changing C semantics is that many
> programmers that D appeals to are longtime C programmers, and the
> semantics of C are burned into their brain. It would cause a lot of
> grief to change them. It's ok to change things in an obvious way,
> like how casts are done, but changing the subtle behaviors needs to
> be approached with a lot of caution.

As long as it doesn't stifle innovation, that approach sounds sound to me. But keep in mind that a lot of people have not used C at all, but are starting with Java, or even D, as their first compiled language... ?

After all: (http://www.digitalmars.com/d/overview.html)
"Extensions to C that maintain source compatibility have already been done (C++ and Objective-C). Further work in this area is hampered by so much legacy code it is unlikely that significant improvements can be made."

The same thing applies to a lot of the C semantics, perhaps now "old" ?

So far I think D has maintained a balance between "same yet different", but it could still have a few remaining rough edges filed off... (IMHO)

And just changing floating literals shouldn't be *that* bad, should it ?

--anders
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Walter | Walter wrote:
> "Anders F Björklund" <afb@algonet.se> wrote in message news:d2shm6$e0c$1@digitaldaemon.com...
>
>> All I know is a lot of D features is because "C does it" ?
>
> A big problem with subtly changing C semantics is that many
> programmers that D appeals to are longtime C programmers, and the
> semantics of C are burned into their brain. It would cause a lot of
> grief to change them. It's ok to change things in an obvious way,
> like how casts are done, but changing the subtle behaviors needs to
> be approached with a lot of caution.
Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Anders F Björklund | "Anders F Björklund" <afb@algonet.se> wrote in message news:d2teag$19gm$1@digitaldaemon.com...

> Bob W wrote:
>
>>> It would also be a nice future addition, for when we have 128-bit
>>> floating point? Then those literals would still be kept at the max...
>>
>> Sure it would. But then we might have to decide
>> between radix 2 and radix 10 formats if IEEE gets
>> it into the new 754 revision. That will be a tough
>> choice, especially if the performance of the latter
>> comes close to the binary version. Ten years
>> ago this would have been unthinkable, but now
>> the silicon could be ready for it.
>
> Call me old-fashioned, but I prefer binary...
>
> Of course, sometimes BCD and Decimal are useful
> like when adding up money and things like that.
> Or when talking to those puny non-hex humans. :-)
>
> --anders

It will be application and performance dependent. My optimism about the silicon which could get radix 10 computation close to binary is most likely unfounded. So scientific work and high performance computing will have to be done in binary.

But there is a huge demand for radix 10 computation for casual and financial use. Just imagine: most radix 10 fractions cannot be represented properly in a radix 2 format (except 0.75, 0.5, 0.25 etc.). You'll eliminate a great deal of rounding errors by using radix 10 formats for decimal in - decimal out apps.

BCD is not an issue here because you'll be unable to pack the same amount of data in there as compared to binary. IEEE makes sure that radix 10 and radix 2 formats will be comparable in precision and range. So they will use declets to encode 3 digits each (10 bits holding 000..999), thus sacrificing only a fraction of what BCD is wasting.

In general the formats look like an implementer's nightmare, especially if someone wanted to emulate the DecimalXX's in software. But I can almost smell that there is something in the FPU pipelines of several companies. They just have to wait until the 754 and 854 groups are nearing conclusion.
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Bob W | Bob W wrote:
> In general the formats look like an implementer's
> nightmare, especially if someone wanted to
> emulate the DecimalXX's in software. But I can
> almost smell that there is something in the
> FPU pipelines of several companies. They just
> have to wait until the 754 and 854 groups are
> nearing conclusion.
Great, I just love a committee designing something...
<sniff> Smells like C++ ;-)
Think I'll just continue to use the time-honored
workaround to count the money in "cents" instead...
(and no, Walter, I don't mean the 128-bit kind :-) )
But 128-bit binary floats and integer would be nice.
---anders
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Georg Wrede | "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42526EB2.2050304@nospam.org...

> Walter wrote:
>> "Anders F Björklund" <afb@algonet.se> wrote in message news:d2shm6$e0c$1@digitaldaemon.com...
>>
>>> All I know is a lot of D features is because "C does it" ?
>>
>> A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
>
> Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?

I do know of several programs that are designed to "explore" the limits and characteristics of the floating point implementation that will produce incorrect results. I don't think it would be a problem if those programs broke.

(*) C programs that provide a "back end" or VM to languages that require 64 bit floats, no more, no less, could break when ported to D.

Another problem is that the program can produce different results when optimized - because optimization produces more opportunities for constant folding. This can already happen, though, because of the way the FPU handles intermediate results, and the only problem I know of that has caused is (*).

And lastly there's the potential problem of using the D front end with a C optimizer/code generator that would be very difficult to upgrade to this new behavior with floating point constants. I know the DMD back end has this problem. I don't know if GDC does. Requiring this new behavior can retard the development of D compilers.
April 05, 2005 Re: 80 Bit Challenge
Posted in reply to Anders F Björklund | "Anders F Björklund" <afb@algonet.se> wrote in message news:d2tubo$1nso$1@digitaldaemon.com...

> Think I'll just continue to use the time-honored
> workaround to count the money in "cents" instead...

I agree. I don't see any advantage BCD has over that.
April 06, 2005 Re: 80 Bit Challenge
Posted in reply to Walter | "Walter" <newshound@digitalmars.com> wrote in message news:d2uh5j$2fab$1@digitaldaemon.com...

> "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:42526EB2.2050304@nospam.org...
>> Walter wrote:
>>> "Anders F Björklund" <afb@algonet.se> wrote in message news:d2shm6$e0c$1@digitaldaemon.com...
>>>
>>>> All I know is a lot of D features is because "C does it" ?
>>>
>>> A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
>>
>> Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?
>
> I do know of several programs that are designed to "explore" the limits and characteristics of the floating point implementation that will produce incorrect results. I don't think it would be a problem if those programs broke.

I also don't think that D has to make sure that these 0.01% of all applications will run properly.

> (*) C programs that provide a "back end" or VM to languages that require 64 bit floats, no more, no less, could break when ported to D.

If you require "no more, no less" you cannot use the IA32 architecture the conventional way. You'd have to make sure that each and every intermediate result is converted back to 64 bits. This was already done in several portability-paranoid designs by storing intermediates to memory and reloading them.

> Another problem is that the program can produce different results when optimized - because optimization produces more opportunities for constant folding. This can already happen, though, because of the way the FPU handles intermediate results, and the only problem I know of that has caused is (*).

That problem is not just limited to optimisation and constant folding. It can strike one's program even at runtime if internal FPU precision is higher than the target precision. I just fail to see why this is a problem, because there is just too much ported software running happily on IA32 architecture. Yes, I am cruel enough not to care at all about these remaining 0.01%, because I think that sophisticated portable programs need sophisticated programmers. That is not exactly the group which needs all the handholding they can get from the compiler.

> And lastly there's the potential problem of using the D front end with a C optimizer/code generator that would be very difficult to upgrade to this new behavior with floating point constants. I know the DMD back end has this problem. I don't know if GDC does. Requiring this new behavior can retard the development of D compilers.

Now we are talking! To my own surprise I have not the slightest idea how NOT to agree with that.
Copyright © 1999-2021 by the D Language Foundation