September 17, 2015
On Wednesday, 16 September 2015 at 23:28:23 UTC, H. S. Teoh wrote:
> I'm not so sure how well this will work in practice, though, unless we have a working prototype that proves the benefits.  What if you have a 10*10 unum matrix, and during some operation the size of the unums in the matrix changes?  Assuming the worst case, you could have started out with 10*10 unums with small exponent/mantissa, maybe fitting in 2-3 cache lines, but after the operation most of the entries expand to 7-bit exponent and 31-bit mantissa, so now your matrix doesn't fit into the allocated memory anymore.  So now your hardware has to talk to druntime to have it allocate new memory for storing the resulting unum matrix?

Let's not make it so complicated. The internal CPU format could just be 32 and 64 bit. The key concept is recording closed/open intervals and precision. If you spend 16 cores of a 256-core tiled coprocessor on I/O, you still have 240 cores left.

For the external format, it depends on your algorithm. If you are using map reduce you load/unload working sets, let the coprocessor do most of the work and combine the results. Like an actor based pipeline.

The problem is more that average programmers will have real trouble making good use of it, since the know-how isn't there.

> The author proposed GC, but I have a hard time imagining a GC implemented in *CPU*, no less, colliding with
> the rest of the world where it's the *software* that controls DRAM allocation.  (GC too slow for your application? Too bad, gotta upgrade your CPU...)

That's a bit into the future, isn't it? But local memory is probably less than 256K and designed for the core, so… who knows what extras you could build in? If you did it, the effect would be local, but it sounds too complicated to be worth it.

But avoid thinking that the programmer addresses memory directly. CPU + compiler is one package. Your interface is the compiler, not the CPU as such.

> The way I see it from reading the PDF slides, is that what the author is proposing would work well as a *software* library, perhaps backed up by hardware support for some of the lower-level primitives.  I'm a bit skeptical of the claims of

First you would need to establish that there are numerical advantages that scientists require in some specific fields.

Then you need to build it into scientific software and accelerate it. For desktop CPUs, nah... most people don't care about accuracy that much.

Standards like IEEE 1788 might also make adoption of unum less likely.

September 17, 2015
On Tuesday, 15 September 2015 at 05:16:53 UTC, deadalnix wrote:
> On Saturday, 11 July 2015 at 03:02:24 UTC, Nick B wrote:
>> On Thursday, 20 February 2014 at 10:10:13 UTC, Nick B wrote:
...
>>
>> If you are at all interested in computer arithmetic or numerical methods, read this book. It is destined to be a classic.
>
> To be honest, that sound like snake oil salesman speech to me rather than science. It's all hand waving and nothing concrete is provided, the whole thing wrapped in way too much superlatives.
>
> The guy seems to have good credential. Why should I read that book ?

I read the whole book and did not regret it at all, but I was already looking for good interval arithmetic implementations. I found that the techniques are not too different (though improved in important ways) from what is mainstream in verified computing. I found signs that techniques like this are standard in beam physics. (Caveat, I am not a beam physicist, but my friend at CERN is.) And the bibliography for MSU's COSY INFINITY, a verified computing tool from the beam physics community, provided a lot of interesting background information: http://www.bt.pa.msu.edu/pub/

What is perhaps most surprising is not that the techniques work well but that they can be implemented efficiently in a sensible amount of hardware, little more expensive than bare floating point hardware. Even the perhaps most surprising results, on search-based global optimization with unums, have precedent; see e.g. "Rigorous Global Search using Taylor Models," M. Berz, K. Makino, Symbolic Numeric Computation 2009, (2009) 11-19, http://bt.pa.msu.edu/cgi-bin/display.pl?name=GOSNC09 (Taylor model techniques are similar to direct computation with intervals but add a bit more sophistication.)

So, from my perspective, I think a unum library would at least be an interesting and useful library, and, roughly in the style of the Mathematica / Python libraries, it could reduce unum interval computations to floating point computations with a modest amount of overhead. There is a sense in which we might expect the small overhead up front to be well worth it in the overall system: less haste, more speed. Hacks that try to compensate for incorrect behavior after the fact may end up being more costly overall, certainly to the programmer but perhaps also to the machine. One example is the common need to put a loop over an entire vector or matrix into the inner loop of an iterative method, just to renormalize and keep floating point rounding errors from accumulating.
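The renormalization point is easy to demonstrate with a small illustrative sketch (mine, not from the post): repeatedly applying a mathematically norm-preserving rotation to a unit vector lets rounding error accumulate in its norm, which is why iterative codes renormalize inside the loop.

```python
import math

# Rotating a unit vector should preserve its norm exactly, but each
# floating point rotation leaves a tiny rounding error behind, and the
# errors accumulate over many iterations.
theta = 0.1
c, s = math.cos(theta), math.sin(theta)
x, y = 1.0, 0.0
for _ in range(1_000_000):
    x, y = c * x - s * y, s * x + c * y
drift = abs(math.hypot(x, y) - 1.0)
print(drift)  # tiny but nonzero; periodic renormalization keeps it bounded
```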

Whether this library should be part of the standard library, I don't know. It would seem to depend on how much people want the standard library to support verified numerical computing. If it is clear that verified numerical computing needs good support in the standard library, something like unums should be there, maybe even with some other techniques built on top of them (Taylor model or Levi-Civita for example).

Anthony
September 18, 2015
On Thursday, 17 September 2015 at 23:53:30 UTC, Anthony Di Franco wrote:

> I read the whole book and did not regret it at all, but I was already looking for good interval arithmetic implementations. I found that the techniques are not too different (though improved in important ways) from what is mainstream in verified computing.
>
> It would seem to depend on how much people want the standard library to support verified numerical computing.
>
> Anthony

Good to know that you enjoyed reading the book.

Can you describe what YOU mean by 'verified numerical computing', as I could not find a good description of it, and why it is important to have it?

Nick
September 18, 2015
On Friday, 18 September 2015 at 03:19:26 UTC, Nick B wrote:
> Can you describe what YOU mean by 'verified numerical computing', as I could not find a good description of it, and why is it important to have it.

Verified numerical computations produce results with a guaranteed error bound: typically an enclosure, such as an interval, that is guaranteed to contain the exact answer, rather than a point value silently contaminated by roundoff. A bit misleading as a term, perhaps.
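As a minimal illustration (a sketch of the general idea, not any particular library's API): interval arithmetic with outward rounding keeps a rigorous enclosure of the exact answer even when the individual operations round.

```python
import math

class Interval:
    """Closed interval [lo, hi] intended to always contain the exact result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round each endpoint outward (math.nextafter, Python 3.9+) so
        # rounding in the endpoint additions cannot lose the true value.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

x = Interval(0.1, 0.1)            # 0.1 and 0.2 are not exactly representable,
y = Interval(0.2, 0.2)            # but the enclosure still brackets their sum
s = x + y
print(s.lo <= 0.1 + 0.2 <= s.hi)  # True
```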



September 18, 2015
On Thursday, 17 September 2015 at 23:53:30 UTC, Anthony Di Franco wrote:
> Whether this library should be part of the standard library, I don't know. It would seem to depend on how much people want the standard library to support verified numerical computing. If it is clear that verified numerical computing needs good support in the standard library, something like unums should be there, maybe even with some other techniques built on top of them (Taylor model or Levi-Civita for example).

I don't think you should expect D to support verifiable programming. The only person here who has pushed for it consistently is Bearophile, but he is not a dev (and where is he?).

Andrei has previously voiced the opinion that interval arithmetic as defined is ad hoc and that D should do it differently:

http://forum.dlang.org/post/l8su3p$g4o$1@digitalmars.com

Walter, Andrei and many others have previously argued that you can turn asserts into assumes (basically assuming that they hold) without wreaking havoc on the correctness of the program after optimization.

It has also been argued that signalling NaNs are useless and that reproducible floating point math (IEEE 754-2008) is not going in, based on some pragmatic assumptions that I never quite understood. The current definition of D floats is fundamentally incompatible with IEEE 754-2008. So I am not even sure you can implement IEEE 1788 (interval arithmetic) as a plain D library.

D also only has modular integer math, so you cannot detect overflow by adding a compiler switch, since libraries may depend on modular arithmetic behaviour.
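To illustrate the point (a Python sketch of D/C-style 32-bit wraparound, since Python's own integers are arbitrary precision): with modular semantics guaranteed by the language, overflow produces a perfectly legal wrapped value, so there is nothing for a compiler switch to trap on without breaking code that relies on wrapping.

```python
# Simulate D/C-style 32-bit modular (wraparound) signed addition.
def add_i32(a: int, b: int) -> int:
    r = (a + b) & 0xFFFFFFFF                       # keep the low 32 bits
    return r - (1 << 32) if r >= (1 << 31) else r  # reinterpret as signed

INT32_MAX = (1 << 31) - 1
print(add_i32(INT32_MAX, 1))   # -2147483648: wraps silently, no error raised
print(add_i32(-1, 1))          # 0: ordinary arithmetic is unchanged
```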

D is created by hackers who enjoy hacking. They don't have the focus on correctness that verifiable-anything requires. So if you enjoy hacking you'll have fun. If you are into reliability, stability and correctness you'll get frustrated. I'm not even sure you can have it both ways (a hacker mindset and a correctness mindset in the same design process).

September 18, 2015
On Friday, 18 September 2015 at 09:25:00 UTC, Ola Fosheim Grøstad wrote:
> D is created by hackers who enjoy hacking. They don't have the focus on correctness that verifiable-anything requires. So if you enjoy hacking you'll have fun. If you are into reliability, stability and correctness you'll get frustrated. I'm not even sure you can have it both ways (a hacker mindset and a correctness mindset in the same design process).

You forgot to mention that D is quite attractive for people who just want to complain on forums.
September 18, 2015
On Friday, 18 September 2015 at 13:39:24 UTC, skoppe wrote:
> You forgot to mention that D is quite attractive for people who just want to complain on forums.

Yes, but that does not define the language.

That's just a consequence of people having expectations and caring about where it is heading. If you want to avoid that you have to be upfront about where it is at and where it is going.

If people didn't care about D and where it is heading, then they would not complain.

September 18, 2015
Also keep in mind that people who care about the language complain only in the forums.

People who no longer care about the language and are upset because they had too high expectations complain not on the forums, but on reddit, slashdot and blogs...

So setting expectations where they belong pays off. D really needs to improve on that aspect. It basically just involves a focus on honest and objective communication throughout.

November 08, 2015
On Friday, 18 September 2015 at 03:19:26 UTC, Nick B wrote:
> On Thursday, 17 September 2015 at 23:53:30 UTC, Anthony Di Franco wrote:
>
>>
>> I read the whole book and did not regret it at all, but I was already looking for good interval arithmetic implementations. I found that the techniques are not too different (though improved in important ways) from what is mainstream in verified computing.
>

Hi,

I haven't finished the book but have read over half of it and browsed the rest. I wanted to add that an implementation of unums would have advantages beyond verifiable computing. Some examples that spring to mind are:

Using low precision (8-bit) unums to determine whether an answer exists before using a higher precision representation to do the calculation (an example briefly discussed in the book is ray tracing).

More generally, unums can self-tune their precision which may be generally useful in getting high precision answers efficiently.

It is possible for the programmer to specify the level of accuracy so that unums don't waste time calculating bits that have no meaning.

Parallelisation - floating point ops are not associative but unum ops are.

Tighter bounds on results than interval arithmetic or significance arithmetic.
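The associativity point is easy to check for ordinary doubles (a quick illustration of mine, not from the book):

```python
# IEEE 754 addition rounds after every operation, so grouping matters.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.1 + 0.2 rounds to 0.30000000000000004 first
right = a + (b + c)   # 0.2 + 0.3 happens to round to exactly 0.5
print(left == right)  # False
print(left, right)
```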

These are just a few areas where a software implementation could be useful. If you've ever had any issues with floating point, I'd recommend reading the book, not just because of the approach it proposes to solve these but also because it's very clearly written and quite entertaining (given the subject matter).

Richard

November 15, 2015
On 09/11/15 04:38, Richard Davies wrote:
> On Friday, 18 September 2015 at 03:19:26 UTC, Nick B wrote:
>> On Thursday, 17 September 2015 at 23:53:30 UTC, Anthony Di Franco wrote:
>> [...]
>
> I haven't finished the book but have read over half of it and browsed the rest. I wanted to add that an implementation of unums would have advantages beyond verifiable computing. [...]
>
> These are just a few areas where a software implementation could be useful. [...]
>
> Richard

Yeah, I got curious too. I spent some time on it yesterday and had a stab at writing it in D. I was playing with the idea of using native floating point types to store unums. This would not give the full benefit of dynamically sized unums, but would allow for the accuracy benefits.
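As a rough sketch of that idea (a hypothetical illustration only, not the code in the repository): pair a native double with a unum-style "ubit" that records whether the stored value is exact or merely brackets the true result, and propagate it through arithmetic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UFloat:
    # Hypothetical sketch: a double plus a unum-style "ubit".
    # ubit=False means val is exact; ubit=True means the true value
    # lies strictly between val and the next representable double.
    val: float
    ubit: bool

def uadd(a: UFloat, b: UFloat) -> UFloat:
    s = a.val + b.val
    # Knuth's TwoSum recovers the exact rounding error of the addition,
    # telling us whether the hardware sum was exact.
    bb = s - a.val
    err = (a.val - (s - bb)) + (b.val - bb)
    return UFloat(s, a.ubit or b.ubit or err != 0.0)

print(uadd(UFloat(0.5, False), UFloat(0.25, False)))  # exact sum: ubit=False
print(uadd(UFloat(0.1, False), UFloat(0.2, False)))   # rounded sum: ubit=True
```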

Still need to implement the basic arithmetic (interval) stuff:
https://github.com/lionello/unumd

L.