May 17, 2016
On 09.05.2016 22:20, Walter Bright wrote:
> On 5/9/2016 12:39 PM, tsbockman wrote:
>> Educating programmers who've never studied how to write correct FP
>> code is too
>> complex of a task to implement via compiler warnings. The warnings
>> should be
>> limited to cases that are either obviously wrong, or where the warning
>> is likely
>> to be a net positive even for FP experts.
>
> I've seen a lot of proposals which try to hide the reality of how FP
> works. The cure is worse than the disease. The same goes for hiding
> signed/unsigned, and the autodecode mistake of pretending that code
> units aren't there.

I feel the same way about automated enhancement of precision for intermediate computations behind the back of the programmer. In the best case, you are pretending that the algorithm has better numerical properties than it actually has; in the worst case, you are destroying the accuracy of the result.
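
A minimal illustration of the point in D; which answer you get depends on whether the implementation keeps the intermediate sum at extended precision (e.g. in an x87 register, or during CTFE done at real precision):

    import std.stdio;

    void main()
    {
        double x = 1e16;
        double y = (x + 1.0) - x;
        // Strict double arithmetic: 1e16 + 1 rounds back to 1e16, so y == 0.
        // With an 80-bit intermediate, 1e16 + 1 is exact, so y == 1.
        writeln(y);
    }

Under extended precision the code looks like it never loses the 1.0; that property silently disappears as soon as the intermediate is spilled to memory or the code runs on SSE.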
May 17, 2016
On 05/17/2016 02:13 PM, Timon Gehr wrote:
> On 09.05.2016 22:20, Walter Bright wrote:
>> On 5/9/2016 12:39 PM, tsbockman wrote:
>>> Educating programmers who've never studied how to write correct FP
>>> code is too
>>> complex of a task to implement via compiler warnings. The warnings
>>> should be
>>> limited to cases that are either obviously wrong, or where the warning
>>> is likely
>>> to be a net positive even for FP experts.
>>
>> I've seen a lot of proposals which try to hide the reality of how FP
>> works. The cure is worse than the disease. The same goes for hiding
>> signed/unsigned, and the autodecode mistake of pretending that code
>> units aren't there.
>
> I feel the same way about automated enhancement of precision for
> intermediate computations behind the back of the programmer. In the best
> case, you are pretending that the algorithm has better numerical
> properties than it actually has; in the worst case, you are destroying
> the accuracy of the result.

That's an interesting assessment, thanks. -- Andrei
May 17, 2016
On Tuesday, 17 May 2016 at 18:08:47 UTC, Timon Gehr wrote:
> Right. Hence, the 80-bit CTFE results have to be converted to the final precision at some point in order to commence the runtime computation. This means that additional rounding happens, which was not present in the original program. The additional total roundoff error this introduces can exceed the roundoff error you would have suffered by using the lower precision in the first place, sometimes completely defeating precision-enhancing improvements to an algorithm.
>

WAT? Is that really possible?
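
It is. A minimal example, assuming an implementation that rounds the intermediate sum to 80-bit extended precision before it is stored (x87 code, or CTFE done at real precision):

    double a = 1.0;
    double b = 0x1p-53 + 0x1p-105; // exactly representable as a double
    double s = a + b;
    // Rounded once, straight to double:           s == 1.0 + 0x1p-52
    // Rounded to 80-bit, then to double on store: s == 1.0

The exact sum is 1 + 2^-53 + 2^-105. Rounded directly to double, the 2^-105 bit breaks the tie upward. Rounded to extended precision first, that bit is discarded; the second rounding then sees an exact tie and rounds to even, dropping the 2^-53 as well. The doubly rounded result is the farther of the two representable neighbours.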

May 17, 2016
On 5/16/2016 7:47 AM, Max Samukha wrote:
> On Monday, 16 May 2016 at 14:21:34 UTC, Ola Fosheim Grøstad wrote:
>
>> C++17 is getting hex literals for floating point for a reason: accurate bit
>> level representation.
>
> D has had hex FP literals for ages.


Since the first version, if I recall correctly. Of course, C++ had the idea first!

(Actually, the idea came from the NCEG (Numerical C Extensions Group) work in the early 90's, which was largely abandoned. C++ has been very slow to adapt to the needs of numerics programmers.)
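
For reference, the point of such literals is that the significand is spelled out digit for digit, so no decimal-to-binary rounding happens between the source and the stored value. A small sketch:

    // Hex FP literals pin down the exact bit pattern.
    double third = 0x1.5555555555555p-2; // the nearest double to 1/3
    double ulp1  = 0x1p-52;              // the ulp of 1.0 in double precision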
May 17, 2016
On 5/16/2016 8:15 PM, Era Scarecrow wrote:
>  Speed in theory shouldn't be that big of a problem. As I recall the FPU *was* a
> separate processor; Sending the instructions took like 3 cycles. Following that
> you could do other stuff before returning for the result(s), but that assumes
> you aren't *only* doing FPU work. The issue would then come up when you are
> waiting for the result after the fact (and that's only for really slow
> operations, most operations are very fast, but my knowledge/experience is
> probably more than a decade out of date).

With some clever programming, you could execute other instructions in parallel with the x87.

May 17, 2016
On 5/17/2016 11:08 AM, Timon Gehr wrote:
> Right. Hence, the 80-bit CTFE results have to be converted to the final
> precision at some point in order to commence the runtime computation. This means
> that additional rounding happens, which was not present in the original program.
> The additional total roundoff error this introduces can exceed the roundoff
> error you would have suffered by using the lower precision in the first place,
> sometimes completely defeating precision-enhancing improvements to an algorithm.

I'd like to see an example of double rounding "completely defeating" an algorithm, and why an unusual case of producing a slightly worse answer trumps the usual case of producing better answers.
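
One concrete candidate is Dekker's Fast2Sum, the building block of compensated summation; a sketch (the correctness of err assumes every operation is rounded to double, which is what Dekker's proof requires):

    // Fast2Sum: for |a| >= |b|, err is exactly the roundoff of a + b,
    // so s + err recovers the mathematical sum, provided each operation
    // is rounded to double.
    void fast2sum(double a, double b, out double s, out double err)
    {
        s = a + b;
        double bVirtual = s - a;
        err = b - bVirtual;
    }

With a = 1.0 and b = 0x1p-53, strict double arithmetic gives s == 1.0 and err == 0x1p-53; nothing is lost. If the three operations are instead carried out at 80-bit precision and s is only rounded to double at the end, s is still 1.0 but err == 0: the compensation term the algorithm exists to capture has silently vanished.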


> There are other reasons why I think that this kind of implementation-defined
> behaviour is a terribly bad idea, eg.:
>
> - it breaks common assumptions about code, especially how it behaves under
> seemingly innocuous refactorings, or with a different set of compiler flags.

As pointed out, this already happens with just about every language. It happens with all C/C++ compilers I'm aware of. It happens as the default behavior of the x86. And as pointed out, refactoring (x+y)+z to x+(y+z) often produces different results, and surprises a lot of people.
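
The standard two-liner, reproducible on any IEEE double implementation:

    import std.stdio;

    void main()
    {
        writefln("%.17g", (0.1 + 0.2) + 0.3); // 0.60000000000000009
        writefln("%.17g", 0.1 + (0.2 + 0.3)); // 0.59999999999999998
    }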


> - it breaks reproducibility, which is sometimes more important than being close
> to the infinite precision result (which you cannot guarantee with any finite
> floating point type anyway).
>   (E.g. in a game, it is enough if the result seems plausible, but it should be
> the same for everyone. For some scientific experiments, the ideal case is to
> have 100% reproducibility of the computation, even if it is horribly wrong, such
> that other scientists can easily uncover and diagnose the problem, for example.)

Nobody is proposing a D feature that does not produce reproducible results with the same program on the same inputs. This complaint is a strawman, as I've pointed out multiple times.

In fact, the results would be MORE portable than with C/C++, where the FP behavior is completely implementation defined, and compilers take advantage of that.

May 17, 2016
On 5/16/2016 7:00 PM, Manu via Digitalmars-d wrote:
> If Ethan and Remedy want to expand their use of D, the compiler CAN
> NOT emit x87 code. It's just a matter of time before a loop is in a
> hot path.

dmd no longer emits x87 code for float/double on 64-bit models, and hasn't for years.

May 17, 2016
On 5/16/2016 1:43 PM, Guillaume Piolat wrote:
> So how about using SSE like in 64-bit code?

It does use SSE for 32-bit code on OSX, because SSE is guaranteed to be available on Macs. That guarantee does not hold for x86 in general, so it does not for other 32-bit x86 systems.

May 17, 2016
On 5/17/2016 7:08 AM, Wyatt wrote:
> On Monday, 16 May 2016 at 12:37:58 UTC, Walter Bright wrote:
>>
>> 7. 80 bit reals are there and they work. The support is mature, and is rarely
>> worked on, i.e. it does not consume resources.
>>
> This may not be true for too much longer-- both Intel and AMD are slowly phasing
> the x86 FPU out.  I think Intel already announced a server chip that omits it
> entirely, though I can't find the corroborating link.


Consider that clang and g++ support 80-bit long doubles. D is hardly unique. It's VC++ that does not.
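
A quick way to check what a given compiler and target actually provide (the exact numbers below are an assumption on my part and vary by implementation and ABI):

    import std.stdio;

    void main()
    {
        // 64 significand bits where real is the 80-bit x87 format;
        // 53 where real is mapped to plain double.
        writeln(real.mant_dig);
        writeln(real.sizeof); // 80-bit format padded to 10/12/16 bytes by ABI
    }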
May 17, 2016
On Tue, May 17, 2016 at 02:07:21PM -0700, Walter Bright via Digitalmars-d wrote:
> On 5/17/2016 11:08 AM, Timon Gehr wrote:
[...]
> >- it breaks reproducibility, which is sometimes more important than being close to the infinite precision result (which you cannot guarantee with any finite floating point type anyway).  (E.g. in a game, it is enough if the result seems plausible, but it should be the same for everyone. For some scientific experiments, the ideal case is to have 100% reproducibility of the computation, even if it is horribly wrong, such that other scientists can easily uncover and diagnose the problem, for example.)
> 
> Nobody is proposing a D feature that does not produce reproducible results with the same program on the same inputs. This complaint is a strawman, as I've pointed out multiple times.

Wasn't Manu's original complaint that, given a particular piece of FP code that uses floats, evaluating that code at compile-time may produce different results than evaluating it at runtime, because (as you're proposing) the compiler will use higher precision than specified for intermediate results?  Of course, the compile-time answer is arguably "more correct" because it has less roundoff error, but the point here is not how accurate that answer is, but that it *doesn't match the runtime results*. This mismatch, from what I understand, is what causes the graphical glitches that Manu was referring to.

According to your prescription, then, the runtime code should be "fixed" to use higher precision, so that it will also produce the same, "more correct" answer.  But unfortunately, that's not workable because of the performance implications. At the end of the day, nobody cares whether a game draws a polygon with the most precise coordinates; what people *do* care about is that there's a mismatch between the "more correct" and "less correct" rendering of the polygon (produced, respectively, from CTFE and from runtime) that causes a visually noticeable glitch. It *looks* wrong, no matter how much you may argue that it's "more correct". You are probably right scientifically, but in a game, people are concerned about what they see, not whether polygon coordinates have less roundoff error at CTFE vs. at runtime.
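
A sketch of that mismatch in D; whether the assert actually fires is implementation-specific, hinging on whether CTFE rounds each float operation to float or keeps extended intermediates:

    float accumulate(int n)
    {
        float s = 0;
        foreach (i; 1 .. n + 1)
            s += 1.0f / i; // roundoff accumulates differently per precision
        return s;
    }

    enum float atCompileTime = accumulate(100); // CTFE result, baked in
    void main()
    {
        float atRunTime = accumulate(100);  // genuine float arithmetic
        assert(atCompileTime == atRunTime); // may fail with extended CTFE
    }

Both results are individually "plausible"; the glitch comes from the two not agreeing.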


T

-- 
We are in class, we are supposed to be learning, we have a teacher... Is it too much that I expect him to teach me??? -- RL