June 28, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright

On 6/28/14, 3:42 AM, Walter Bright wrote:
> Inverting matrices is commonplace for solving N equations with N
> unknowns.
Actually nobody does that.
Also, one consideration is that the focus of numeric work changes with time; nowadays it's all about machine learning, a field that virtually didn't exist 20 years ago. In machine learning precision does make a difference sometimes, but the key to good ML work is to run many iterations over large data sets - i.e., speed.
I have an alarm go off when someone proffers a very strong conviction. Very strong convictions mean there is no listening to any argument right off the bat, which locks out any reasonable discussion before it even begins.
For better or worse modern computing units have focused on 32- and 64-bit float, leaving 80-bit floats neglected. I think it's time to accept that simple fact and act on it, instead of claiming we're the best in the world at FP math while everybody else speeds by.
Andrei

June 28, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Andrei Alexandrescu

On Saturday, 28 June 2014 at 14:01:13 UTC, Andrei Alexandrescu wrote:
> On 6/28/14, 3:42 AM, Walter Bright wrote:
>> Inverting matrices is commonplace for solving N equations with N
>> unknowns.
>
> Actually nobody does that.
>
> Also, one consideration is that the focus of numeric work changes with time; nowadays it's all about machine learning

It's the most actively publicised frontier, perhaps, but there's a huge amount of solid work happening elsewhere. People still need better fluid dynamics and molecular dynamics simulations, numerical PDE solvers, finite element modelling and so on. There's a whole world out there :)

That doesn't diminish your main point though.

> For better or worse modern computing units have focused on 32- and 64-bit float, leaving 80-bit floats neglected. I think it's time to accept that simple fact and act on it, instead of claiming we're the best in the world at FP math while everybody else speeds by.
>
> Andrei

+1

June 28, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Walter Bright

On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:
> It happens with both numerical integration and inverting matrices. Inverting matrices is commonplace for solving N equations with N unknowns.
>
> Errors accumulate very rapidly and easily overwhelm the significance of the answer.
If one wants better precision when solving linear equations, one should at least use a QR decomposition.
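
A minimal sketch of that approach in D, assuming a square, full-rank system (written for this note, not taken from any library): a QR factorization via modified Gram-Schmidt, used to solve A*x = b without ever forming an explicit inverse.

```d
import std.math : sqrt;
import std.stdio;

// Solve A*x = b via QR: factor A = Q*R with Q orthonormal and R upper
// triangular, then x = R^-1 * (Q^T * b). No explicit inverse is formed.
double[] qrSolve(double[][] A, double[] b)
{
    immutable n = b.length;
    auto Q = new double[][](n, n);
    auto R = new double[][](n, n);
    foreach (i; 0 .. n)
        R[i][] = 0.0;

    // Modified Gram-Schmidt: orthogonalize the columns of A one by one.
    foreach (j; 0 .. n)
    {
        foreach (i; 0 .. n)
            Q[i][j] = A[i][j];
        foreach (k; 0 .. j)
        {
            double dot = 0;
            foreach (i; 0 .. n)
                dot += Q[i][k] * Q[i][j];
            R[k][j] = dot;
            foreach (i; 0 .. n)
                Q[i][j] -= dot * Q[i][k];
        }
        double norm = 0;
        foreach (i; 0 .. n)
            norm += Q[i][j] * Q[i][j];
        R[j][j] = sqrt(norm);
        foreach (i; 0 .. n)
            Q[i][j] /= R[j][j];
    }

    // y = Q^T * b, then back-substitute R*x = y.
    auto x = new double[](n);
    foreach (k; 0 .. n)
    {
        x[k] = 0;
        foreach (i; 0 .. n)
            x[k] += Q[i][k] * b[i];
    }
    foreach_reverse (k; 0 .. n)
    {
        foreach (j; k + 1 .. n)
            x[k] -= R[k][j] * x[j];
        x[k] /= R[k][k];
    }
    return x;
}

void main()
{
    auto A = [[2.0, 1.0], [1.0, 3.0]];
    writeln(qrSolve(A, [3.0, 5.0])); // approximately [0.8, 1.4]
}
```

(Most libraries actually use LU with partial pivoting for general square systems; QR trades a constant factor of speed for better numerical behaviour, which is presumably the poster's point.)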

June 28, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Alex_Dovhal

On Sat, Jun 28, 2014 at 03:31:36PM +0000, Alex_Dovhal via Digitalmars-d wrote:
> On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:
>> It happens with both numerical integration and inverting matrices. Inverting matrices is commonplace for solving N equations with N unknowns.
>>
>> Errors accumulate very rapidly and easily overwhelm the significance of the answer.
>
> If one wants better precision when solving linear equations, one should at least use a QR decomposition.

Yeah, inverting matrices is generally not the preferred method for solving linear equations, precisely because of accumulated roundoff errors. Usually one would use a linear algebra library which has dedicated algorithms for solving linear systems, which extracts the solution(s) using more numerically-stable methods than brute-force matrix inversion. They are also more efficient than inverting the matrix and then doing a matrix multiplication to get the solution vector. Mathematically, they are equivalent to matrix inversion, but numerically they are more stable and not as prone to precision loss issues.

Having said that, though, added precision is always welcome, particularly when studying mathematical objects (as opposed to more practical applications like engineering, where 6-8 digits of precision in the result is generally more than good enough). Of course, the most ideal implementation would be to use algebraic representations that can represent quantities exactly, but exact representations are not always practical (they are too slow for very large inputs, or existing libraries only support hardware floating-point types, or existing code requires a lot of effort to support software arbitrary-precision floats). In such cases, squeezing as much precision out of your hardware as possible is a good first step towards a solution.

T

--
Time flies like an arrow. Fruit flies like a banana.

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to John Colvin

On Saturday, 28 June 2014 at 09:07:17 UTC, John Colvin wrote:
> On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:
>> On 6/27/2014 10:18 PM, Walter Bright wrote:
>>> On 6/27/2014 4:10 AM, John Colvin wrote:
>>>> *The number of algorithms that are both numerically stable/correct and benefit
>>>> significantly from > 64bit doubles is very small.
>>>
>>> To be blunt, baloney. I ran into these problems ALL THE TIME when doing
>>> professional numerical work.
>>>
>>
>> Sorry for being so abrupt. FP is important to me - it's not just about performance, it's also about accuracy.
>
> I still maintain that the need for the precision of 80bit reals is a niche demand. It's a very important niche, but it doesn't justify having its relatively extreme requirements be the default. Someone writing a matrix inversion has only themselves to blame if they don't know plenty of numerical analysis and look very carefully at the specifications of all operations they are using.
>
> Paying the cost of moving to/from the fpu, missing out on increasingly large SIMD units, these make everyone pay the price.
>
> Inclusion of the 'real' type in D was a great idea, but std.math should be overloaded for float/double/real so people have the choice where they stand on the performance/precision front.
Would it make sense to have std.math and std.fastmath, or something along these lines?
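
For illustration, a rough sketch of that overload idea (the function is made up, not how std.math is actually structured): one template constrained to floating-point types, so each caller computes in its own precision rather than being silently promoted to real.

```d
import std.stdio;
import std.traits : isFloatingPoint;

// Hypothetical math routine: every intermediate stays in the caller's
// precision T (given SSE-style codegen) instead of widening to real.
T mySqrt(T)(T x) if (isFloatingPoint!T)
{
    T guess = x / 2;
    foreach (_; 0 .. 30)                  // Newton's method, in precision T
        guess = (guess + x / guess) / 2;
    return guess;
}

void main()
{
    writefln("float:  %.20f", mySqrt(2.0f)); // ~7 correct digits
    writefln("double: %.20f", mySqrt(2.0));  // ~16 correct digits
    writefln("real:   %.20f", mySqrt(2.0L)); // ~19 where 80-bit is available
}
```

A std.fastmath along the lines suggested above would be another way to slice it: the same functions, with the precision/speed trade-off chosen by module import instead of by argument type.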

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Russel Winder

On 6/28/2014 3:57 AM, Russel Winder via Digitalmars-d wrote:
> I wonder if programmers should only be allowed to use floating point
> numbers in their code if they have studied numerical analysis?

Be that as it may, why should a programming language make it harder than necessary for them to get it right?

The first rule in doing numerical calculations, hammered into me at Caltech, is to use the max precision available at every step. Rounding error is a major problem, and is very underappreciated by engineers until they have a big screwup. The idea that "64 fp bits ought to be enough for anybody" is a pernicious disaster, to put it mildly.

> Or indeed when calculating anything to do with money.

You're better off using 64-bit longs counting cents to represent money than using floating point. But yeah, counting money has its own special problems.
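
A toy illustration of the cents-counting point: ten cents has no exact binary floating-point representation, so repeated addition drifts, while integer cents stay exact.

```d
import std.stdio;

void main()
{
    double dollars = 0.0;
    long cents = 0;
    foreach (_; 0 .. 1000)
    {
        dollars += 0.10; // rounds: 0.10 is not representable in binary
        cents += 10;     // exact integer arithmetic
    }
    writefln("double: %.17f", dollars); // slightly off from 100
    writefln("long:   %d.%02d dollars", cents / 100, cents % 100); // exactly 100.00
}
```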

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to francesco cattoglio

On 6/28/2014 4:27 AM, francesco cattoglio wrote:
> We are talking about paying a price when you don't need it.

More than that - the suggestion has come up here (and comes up repeatedly) to completely remove support for 80 bits. Heck, Microsoft has done so with VC++ and even once attempted to remove it entirely from 64-bit Windows (I talked them out of it, you can thank me!).

> With the correct approach, solving numerical problems with double precision floats yields
> perfectly fine results. And it is, in fact, commonplace.

Presuming your average mechanical engineer is well versed in how to do matrix inversion while accounting for precision problems is an absurd pipe dream. Most engineers only know their math-book algorithms, not comp-sci best practices. Heck, few CS graduates know how to do it.

> Again, I've not yet read a research paper in which it was clearly stated that
> 64-bit floats were not good enough for solving a whole class of PDE problems. I'm
> not saying that real is useless, quite the opposite: I love the idea of having
> an extra tool when the need arises. I think the focus should be on not paying
> a price for what you don't use.

I used to work doing numerical analysis on airplane parts. I didn't need a research paper to discover how much precision matters and when my results fell apart.

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Andrei Alexandrescu

On 6/28/2014 7:01 AM, Andrei Alexandrescu wrote:
> On 6/28/14, 3:42 AM, Walter Bright wrote:
>> Inverting matrices is commonplace for solving N equations with N
>> unknowns.
>
> Actually nobody does that.

I did that at Boeing when doing analysis of the movement of the control linkages. The traditional way it had been done before was with paper and pencil and drafting tools - I showed how it could be done with matrix math.

> I have an alarm go off when someone proffers a very strong conviction. Very
> strong convictions mean there is no listening to any argument right off the
> bat, which locks out any reasonable discussion before it even begins.

So far, everyone here has dismissed my experience out of hand. You too, with "nobody does that". I don't know how anyone here can make such a statement. How many of us have worked in non-programming engineering shops, besides me?

> For better or worse modern computing units have focused on 32- and 64-bit float,
> leaving 80-bit floats neglected.

Yep, for the game/graphics industry. Modern computing has also produced crappy trig functions in popular C compilers, because nobody using C cares about accurate answers (or, even worse, they just assume what they're getting is correct).

> I think it's time to accept that simple fact
> and act on it, instead of claiming we're the best in the world at FP math while
> everybody else speeds by.

Leaving us with a market opportunity for precision FP. I note that even the title of this thread says nothing about accuracy, nor did the benchmark attempt to assess whether there was a difference in results.

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to H. S. Teoh

On 6/28/2014 11:16 AM, H. S. Teoh via Digitalmars-d wrote:
> (as opposed to more
> practical applications like engineering, where 6-8 digits of precision
> in the result is generally more than good enough).
Of the final result, sure, but NOT for the intermediate results. It is an utter fallacy to conflate the required precision of the final result with the precision needed for the intermediate results.
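
A small example of that fallacy (constructed for this note, not from the thread): the textbook one-pass variance formula needs only a digit or two of precision in its final answer, yet float intermediates wipe it out entirely.

```d
import std.stdio;

// Naive one-pass variance: subtracts two huge, nearly equal
// intermediate sums, so it cancels catastrophically in low precision.
F naiveVariance(F)(F[] xs)
{
    F sum = 0, sumSq = 0;
    foreach (x; xs)
    {
        sum += x;
        sumSq += x * x;
    }
    auto n = cast(F) xs.length;
    return (sumSq - sum * sum / n) / n;
}

void main()
{
    auto f = [10000.0f, 10000.5f, 9999.5f, 10000.0f]; // float intermediates
    auto d = [10000.0, 10000.5, 9999.5, 10000.0];     // double intermediates
    writeln("float:  ", naiveVariance(f)); // 0     -- the answer is gone
    writeln("double: ", naiveVariance(d)); // 0.125 -- correct
}
```

(Wider intermediates, or a two-pass/Welford formulation, rescue the float case; the point is that it is the precision of the steps, not of the final answer, that fails.)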

June 29, 2014 Re: std.math performance (SSE vs. real)
Posted in reply to Russel Winder

On 6/28/2014 3:33 AM, Russel Winder via Digitalmars-d wrote:
> By being focused on Intel chips, D has failed to get floating point
> correct in a very analogous way to C failing to get floating point types
> right by focusing on the PDP.

Sorry, I do not follow the reasoning here.

> Yes using 80-bit on Intel is good, but no-one
> else has this. Floating point sizes should be 32-, 64-, 128-, 256-bit,
> etc. D needs to be able to handle this. So does C, C++, Java, etc. Go
> will be able to handle it when it is ported to appropriate hardware, as
> they use float32, float64, etc. as their types. None of this float,
> double, long double, double double rubbish.
>
> So D should perhaps make a breaking change and have types int32, int64,
> float32, float64, float80, and get away from the vagaries of bizarre
> type relationships with hardware?

D's spec says that the 'real' type is the max size supported by the FP hardware. How is this wrong?
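
That definition is easy to check from the language itself; this little probe prints what 'real' actually is on the current target (64 mantissa bits with the x87 80-bit format on x86, usually the same as double elsewhere):

```d
import std.stdio;

void main()
{
    // .mant_dig is the mantissa width; .dig the decimal digits it implies.
    writefln("float:  %2d mantissa bits, %2d decimal digits", float.mant_dig, float.dig);
    writefln("double: %2d mantissa bits, %2d decimal digits", double.mant_dig, double.dig);
    writefln("real:   %2d mantissa bits, %2d decimal digits", real.mant_dig, real.dig);
}
```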