October 30, 2019 Re: Accuracy of floating point calculations
Posted in reply to kinke

On Tuesday, 29 October 2019 at 20:15:13 UTC, kinke wrote:
> Note that there's at least one bugzilla for these float/double math overloads already. For a start, one could simply wrap the corresponding C functions.

I guess that this issue: https://issues.dlang.org/show_bug.cgi?id=20206 boils down to the same problem. I already found out that it is highly likely that the bug is not inside std.complex but probably in the log function.
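One quick way to probe that suspicion is to look at which overload of std.math.log actually runs. A minimal sketch, assuming 2019-era Phobos where log only has a real overload (the input value is arbitrary):

import std.math : log;
import std.stdio : writefln, writeln;

void main()
{
    double x = 2.0;

    // On 2019-era Phobos this prints "real": the double argument is
    // promoted, so the computation runs at 80-bit precision on x86.
    writeln(typeof(log(x)).stringof);

    writefln("%.20f", log(x));               // evaluated at real precision
    writefln("%.20f", cast(double) log(x));  // rounded back to 64 bits
}

Seeing where the printed digits diverge shows how much of the result depends on the promotion to real rather than on std.complex itself.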
October 30, 2019 Re: Accuracy of floating point calculations
Posted in reply to Robert M. Münch

On Wed, Oct 30, 2019 at 09:03:49AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2019-10-29 17:43:47 +0000, H. S. Teoh said:
> > On Tue, Oct 29, 2019 at 04:54:23PM +0000, ixid via Digitalmars-d-learn wrote:
> > > On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > > > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak <kozzi11@gmail.com> wrote:
> > > > AFAIK dmd uses real for floating point operations instead of double.
> > >
> > > Given that x87 is deprecated and has been recommended against since 2003 at the latest, it's hard to understand why this could be seen as a good idea.
> >
> > Walter talked about this recently as one of the "misses" in D (one of the things he predicted wrongly when he designed it).
>
> Why should the real type be a wrong decision? Maybe the code generation should be optimized to avoid x87 when all terms are double, but overall, more precision is better for some use cases.

It wasn't a wrong *decision* per se, but a wrong *prediction* of where the industry would be headed. Walter was expecting that people would move towards higher precision, but what with SSE2 and other such trends, and the general neglect of x87 in hardware development, it appears that people have been moving towards 64-bit doubles rather than 80-bit extended.

Though TBH, my opinion is that it's not so much a neglect of higher precision as a general sentiment of recent years towards standardization, i.e., being IEEE-compliant (64-bit floating point) rather than working with a non-standard format (80-bit x87 reals). I also would prefer to have higher precision, but it would be nicer if that higher precision were a standard format with guaranteed semantics that isn't dependent upon a single vendor or implementation.

> I'm very happy it exists, and x87 too, because our app really needs this extended precision. I'm not sure what we would do if we only had doubles.
>
> I'm not aware of any 128-bit real implementations done using SIMD instructions which get good speed. Anyone?
[...]

Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit doubles), or do you mean actual IEEE 128-bit reals? 'cos the two are different, semantically.

I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format) to show up in x86, but I'm not holding my breath. In the meantime, I've been looking into arbitrary-precision float libraries like libgmp instead. It's software-simulated, and therefore slower, but for certain applications where I want very high precision, it's currently the only option.

T

--
If Java had true garbage collection, most programs would delete themselves upon execution. -- Robert Sewell
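To make the double-versus-real gap concrete, here is a small sketch (the summand and iteration count are arbitrary choices, not from the thread): the same naive summation drifts much further from the exact result in 64-bit double than in 80-bit real, because real carries 11 extra bits of mantissa.

import std.stdio : writefln;

void main()
{
    double sd = 0.0;
    real   sr = 0.0L;

    // 0.1 is not exactly representable in binary, so every addition
    // rounds; the error compounds differently at the two precisions.
    foreach (i; 0 .. 10_000_000)
    {
        sd += 0.1;
        sr += 0.1L;
    }

    writefln("double: %.10f", sd);  // visibly off from 1000000
    writefln("real:   %.10f", sr);  // much closer, thanks to the wider mantissa
}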
October 31, 2019 Re: Accuracy of floating point calculations
Posted in reply to H. S. Teoh

On 2019-10-30 15:12:29 +0000, H. S. Teoh said:
> It wasn't a wrong *decision* per se, but a wrong *prediction* of where the industry would be headed.

Fair point...

> Walter was expecting that people would move towards higher precision, but what with SSE2 and other such trends, and the general neglect of x87 in hardware development, it appears that people have been moving towards 64-bit doubles rather than 80-bit extended.

Yes, which puzzles me as well... but all the AI stuff seems to dominate the game, and following the hype is still a frequently used management strategy.

> Though TBH, my opinion is that it's not so much a neglect of higher precision as a general sentiment of recent years towards standardization, i.e., being IEEE-compliant (64-bit floating point) rather than working with a non-standard format (80-bit x87 reals).

I see it more as "let's sell what people want". The CPU vendors don't seem able to market higher precision; better to implement a highly specific, ever-growing instruction set instead...

> Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit doubles), or do you mean actual IEEE 128-bit reals?

Simulated, because HW support is lacking on x86. And PPC is not that mainstream. I expect Apple to move to ARM, but I have never heard about 128-bit support for ARM.

> I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format) to show up in x86, but I'm not holding my breath.

Me too.

> In the meantime, I've been looking into arbitrary-precision float libraries like libgmp instead. It's software-simulated, and therefore slower, but for certain applications where I want very high precision, it's currently the only option.

Yes, but it's way too slow for our product. Maybe one day we will need to deliver an FPGA-based co-processor PCI card that can run 128-bit calculations... but that will be a pretty hard way to go.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
October 31, 2019 Re: Accuracy of floating point calculations
Posted in reply to Robert M. Münch

On Thu, Oct 31, 2019 at 09:52:08AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2019-10-30 15:12:29 +0000, H. S. Teoh said:
[...]
> > Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit doubles), or do you mean actual IEEE 128-bit reals?
>
> Simulated, because HW support is lacking on x86. And PPC is not that mainstream. I expect Apple to move to ARM, but I have never heard about 128-bit support for ARM.

Maybe you might be interested in this:

https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

It's mostly talking about simulating 64-bit floats where the hardware only supports 32-bit floats, but the same principles apply for simulating 128-bit floats on 64-bit hardware.

[...]
> > In the meantime, I've been looking into arbitrary-precision float libraries like libgmp instead. It's software-simulated, and therefore slower, but for certain applications where I want very high precision, it's currently the only option.
>
> Yes, but it's way too slow for our product.

Fair point. In my case I'm mainly working with batch-oriented processing, so a slight slowdown is an acceptable tradeoff for higher accuracy.

> Maybe one day we will need to deliver an FPGA-based co-processor PCI card that can run 128-bit calculations... but that will be a pretty hard way to go.
[...]

Maybe switch to PPC? :-D

T

--
If you want to solve a problem, you need to address its root cause, not just its symptoms. Otherwise it's like treating cancer with Tylenol...
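The core trick from that link carries over to D directly. Below is a minimal double-double sketch built on Knuth's TwoSum error-free transformation; the struct and function names are illustrative, not a library API. One caveat that ties back to this very thread: the trick assumes each operation rounds to a true 64-bit double, so x87's 80-bit intermediate precision can silently break it unless intermediates are rounded.

import std.stdio : writefln;

struct DD { double hi; double lo; }  // value represented as hi + lo

// Knuth's TwoSum: s + e == a + b exactly, where s = fl(a + b).
DD twoSum(double a, double b)
{
    immutable s = a + b;
    immutable v = s - a;
    immutable e = (a - (s - v)) + (b - v);
    return DD(s, e);
}

// Double-double addition (the simple "sloppy" variant from the literature).
DD add(DD a, DD b)
{
    auto s = twoSum(a.hi, b.hi);
    immutable lo = s.lo + a.lo + b.lo;  // fold both low parts into the error term
    immutable hi = s.hi + lo;           // renormalize
    return DD(hi, lo - (hi - s.hi));
}

void main()
{
    // 1 + 1e-20 is not representable in a single double,
    // but the pair keeps the tiny term in the low component.
    auto x = add(DD(1.0, 0.0), DD(1e-20, 0.0));
    writefln("hi = %.17g  lo = %.17g", x.hi, x.lo);
}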
October 31, 2019 Re: Accuracy of floating point calculations
Posted in reply to H. S. Teoh

On 2019-10-31 16:07:07 +0000, H. S. Teoh said:
> Maybe you might be interested in this:
>
> https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

Thanks, I already know the second paper mentioned there.

> Maybe switch to PPC? :-D

Well, our customers don't use PPC laptops ;-) otherwise that would be cool.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster