February 26, 2018
On Monday, February 26, 2018 17:49:21 H. S. Teoh via Digitalmars-d-learn wrote:
> On Tue, Feb 27, 2018 at 12:26:56AM +0000, psychoticRabbit via Digitalmars-d-learn wrote:
> > On Tuesday, 27 February 2018 at 00:04:59 UTC, H. S. Teoh wrote:
> > > A 64-bit double can only hold about 14-15 decimal digits of precision.  Anything past that, and there's a chance your "different" numbers are represented by exactly the same bits and the computer can't tell the difference.
> > >
> > > T
> >
> > I really miss having a (C#-like) decimal type.
>
> Didn't somebody write a decimal library type recently?  Try searching on code.dlang.org, you can probably find it there.

http://code.dlang.org/packages/decimal

It got a lot of positive feedback.

https://forum.dlang.org/post/mutegviphsjwqzqfouhs@forum.dlang.org
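
For the curious, usage looks roughly like this (an untested sketch; the module name, the decimal128 type, and its string constructor are assumptions on my part, so check the package docs for the actual API):

import std.stdio;
import decimal; // module name assumed from the package name

void main()
{
    // 0.1 and 0.2 have no exact binary floating point representation,
    // but they are exact in a decimal type.
    auto a = decimal128("0.1");
    auto b = decimal128("0.2");
    writeln(a + b == decimal128("0.3")); // true; with double, 0.1 + 0.2 != 0.3
}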

- Jonathan M Davis

February 26, 2018
On Mon, Feb 26, 2018 at 05:18:00PM -0700, Jonathan M Davis via Digitalmars-d-learn wrote:
> On Monday, February 26, 2018 16:04:59 H. S. Teoh via Digitalmars-d-learn wrote:
[...]
> > (There *are* exact representations for certain subsets of irrationals that allow fast computation that does not lose precision. But generally, they are only useful for specific applications where you know beforehand what form(s) your numbers will take. For general arithmetic, you have to compromise between speed and accuracy.)
> 
> No. No. No. Floating point values are just insane. ;)

Ah, but I wasn't talking about floating point values. :-D  I was thinking more along the lines of:

	https://github.com/quickfur/qrat
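
The idea in a nutshell: numbers of the form a + b√2 are closed under addition and multiplication, so you can carry them around as exact integer coefficients instead of lossy doubles. A toy sketch of the technique (my illustration, not qrat's actual API; qrat also handles general √n and rational coefficients):

import std.stdio;

// Exact arithmetic on numbers of the form a + b*sqrt(2).
// Illustrative sketch only; plain longs here, where a real
// implementation would use rational coefficients.
struct QSqrt2
{
    long a, b; // represents a + b*sqrt(2)

    QSqrt2 opBinary(string op : "+")(QSqrt2 rhs) const
    {
        return QSqrt2(a + rhs.a, b + rhs.b);
    }

    QSqrt2 opBinary(string op : "*")(QSqrt2 rhs) const
    {
        // (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return QSqrt2(a * rhs.a + 2 * b * rhs.b, a * rhs.b + b * rhs.a);
    }
}

void main()
{
    auto x = QSqrt2(1, 1);           // 1 + √2
    auto y = x * x;                  // (1 + √2)^2 = 3 + 2√2
    writefln("%s + %s√2", y.a, y.b); // 3 + 2√2, with zero rounding error
}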


[...]
> In all seriousness, floating point values do tend to be highly frustrating to deal with, and personally, I've gotten to the point that I generally avoid them unless I need them. And much as that talk is titled "Using Floating Point Without Losing Your Sanity," I came away from it even more convinced that avoiding floating point values is pretty much the only sane solution - though unfortunately, sometimes, you really don't have a choice.
[...]

Well, the way I deal with floating point is, design my code with the assumption that things will be inaccurate, and compensate accordingly. :-P  For the most part, it works reasonably well.

Though, granted, there are still those odd cases where you have total precision loss. So yeah, it does get frustrating to deal with.  But my usual Large Blunt Object approach to it is to just arbitrarily rewrite expressions until they stop exhibiting pathological behaviour.  I suppose ultimately if it really absolutely matters that the code works in all possible cases with the best results, I'd sit down and actually work through how to minimize precision loss. But generally, I just don't have the patience to do that except in the few important places where it matters.
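
To give a concrete example of the kind of rewrite I mean (my illustration, not from any particular program): sqrt(x + 1) - sqrt(x) loses almost all of its digits to cancellation for large x, while the algebraically identical 1 / (sqrt(x + 1) + sqrt(x)) keeps full precision:

import std.math : sqrt;
import std.stdio;

void main()
{
    double x = 1e15;

    // The two square roots agree in their first ~15 digits, so the
    // subtraction cancels nearly everything and leaves mostly noise.
    double naive = sqrt(x + 1) - sqrt(x);

    // Algebraically the same value, but with no cancellation anywhere.
    double stable = 1.0 / (sqrt(x + 1) + sqrt(x));

    writefln("%.17g\n%.17g", naive, stable);
}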

At one point, I even threw in the towel and wrote a hackish "accuratizer" program that matches the first 7-8 digits of each number against a pregenerated list of known values the results can take on, and substitutes the exact value. :-P  This is highly context-sensitive, of course, and depends on outside knowledge, so it only works for specific cases. But it was very satisfying to run the program on data that's only about 6-8 digits accurate, and have it spit out linear combinations of √2 accurate to 14 digits. :-D  (Of course, this only works when the data is known to consist of combinations of √2. Otherwise it will only produce nonsensical output.)
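
For the curious, a toy version of the idea looks something like this (hypothetical code, much simpler than the real program):

import std.math : abs, sqrt;
import std.stdio;

// Snap a noisy value to the nearest a + b*sqrt(2) with small integer
// coefficients. Only meaningful when you already know the data has
// that form; otherwise it happily returns nonsense.
double accuratize(double noisy, long range = 100)
{
    immutable sqrt2 = sqrt(2.0);
    double best = noisy;
    double bestErr = double.infinity;
    foreach (a; -range .. range + 1)
        foreach (b; -range .. range + 1)
        {
            immutable candidate = a + b * sqrt2;
            immutable err = abs(candidate - noisy);
            if (err < bestErr)
            {
                bestErr = err;
                best = candidate;
            }
        }
    return best;
}

void main()
{
    // 3 + 2√2 known only to ~8 digits:
    writefln("%.14f", accuratize(5.8284271)); // 5.82842712474619
}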


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't seem to remove the bugs on my system! -- Mike Dresser
February 26, 2018
On Monday, February 26, 2018 18:33:13 H. S. Teoh via Digitalmars-d-learn wrote:
> Well, the way I deal with floating point is, design my code with the assumption that things will be inaccurate, and compensate accordingly.

The way that I usually deal with it is to simply not use floating point numbers, because for most things that I do, I don't need them. Now, for those where I do need them, I'm forced to do like you're discussing and plan for the fact that FP math is inaccurate, but fortunately, I rarely need floating point values. Obviously, that's not true for everyone.

But it's certainly the case that I advocate avoiding FP when it's not actually necessary, whereas some folks use it frequently when it's not necessary.

One case that I found interesting was that in writing core.time.convClockFreq so that it didn't require floating point values, it not only avoided the inaccuracies caused by using FP, but it even was able to cover a larger range of values than FP because of how it splits out the whole number and fractional portions to do the math.
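
The core of the trick looks something like this (a simplified sketch of the idea; the actual druntime function has additional overflow handling):

// Split the ticks into a whole part and a remainder so that every
// step is exact integer math and the intermediate products stay small.
long convFreq(long ticks, long srcFreq, long dstFreq)
{
    immutable whole = ticks / srcFreq;
    immutable rem   = ticks % srcFreq;
    return whole * dstFreq + rem * dstFreq / srcFreq;
}

With doubles, anything past 2^53 ticks starts rounding; here the only limit is the individual products fitting in a long.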

- Jonathan M Davis

February 26, 2018
On Mon, Feb 26, 2018 at 07:50:10PM -0700, Jonathan M Davis via Digitalmars-d-learn wrote: [...]
> One case that I found interesting was that in writing core.time.convClockFreq so that it didn't require floating point values, it not only avoided the inaccuracies caused by using FP, but it even was able to cover a larger range of values than FP because of how it splits out the whole number and fractional portions to do the math.
[...]

It would be nice to have a standard rational / fractional number library in Phobos.  It's a pretty common need.  IIRC, somebody wrote one some time ago.  I wonder if it's worth pushing it into Phobos.
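
Even a toy version covers a surprising number of use cases (illustrative sketch only; a real library needs overflow handling, comparison operators, and so on):

import std.numeric : gcd;
import std.stdio;

// Minimal rational number, kept in lowest terms with a positive
// denominator. No overflow protection.
struct Rational
{
    long num;
    long den = 1;

    this(long n, long d)
    {
        assert(d != 0, "zero denominator");
        if (d < 0) { n = -n; d = -d; }
        immutable g = gcd(n < 0 ? -n : n, d);
        num = n / g;
        den = d / g;
    }

    Rational opBinary(string op : "+")(Rational rhs) const
    {
        return Rational(num * rhs.den + rhs.num * den, den * rhs.den);
    }

    Rational opBinary(string op : "*")(Rational rhs) const
    {
        return Rational(num * rhs.num, den * rhs.den);
    }
}

void main()
{
    auto r = Rational(1, 3) + Rational(1, 3) + Rational(1, 3);
    writeln(r.num, "/", r.den); // 1/1, exactly; with doubles you'd get 0.9999...
}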


T

-- 
War doesn't prove who's right, just who's left. -- BSD Games' Fortune
February 27, 2018
On 2/26/18 6:34 PM, psychoticRabbit wrote:
> On Sunday, 25 February 2018 at 14:52:19 UTC, Steven Schveighoffer wrote:
>>
>> 1 == 1.0, no?
> 
> no. at least, not when a language forces you to think in terms of types.

But you aren't. You are thinking in terms of text representation of values (which is what a literal is). This works just fine:

double x = 1;
double y = 1.0;
assert(x == y);

The same generated code to store 1 into x is used to store 1.0 into y. There is no difference to the compiler; it's just written differently in the source code.

> I admit I've never printed output without using format specifiers, but still, if I say write(1.0), it should not go off and print what looks, to me, like an int.

If it didn't, I'm sure others would complain about it :)
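
Though if you want the decimal point no matter what, the format specifiers give you full control:

import std.stdio;

void main()
{
    write(1.0);            // prints: 1
    writeln();
    writefln("%f", 1.0);   // prints: 1.000000
    writefln("%.1f", 1.0); // prints: 1.0
}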

> Inheriting crap from C is no excuse ;-)
> 
> and what's going on here btw?
> 
> assert( 1 == 1.000000000000000001 );  // assertion error in DMD but not in LDC
> assert( 1 == 1.0000000000000000001 );  // no assertion error??

Floating point is not exact. In fact, even the one that asserts cannot be represented accurately internally. At some decimal place, it cannot store any more significant digits, so it just approximates.
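
You can see it by printing more digits than a double actually stores. (The DMD/LDC difference, as far as I know, comes from DMD constant-folding floating point at 80-bit real precision, where a difference of 1e-18 is still representable; LDC folds the doubles at 64 bits.)

import std.stdio;

void main()
{
    // 1.000000000000000001 differs from 1 by 1e-18, far below a
    // double's ~2.2e-16 spacing near 1, so both literals round to
    // the exact same 64-bit pattern.
    double a = 1.0;
    double b = 1.000000000000000001;
    writefln("%.20f", b); // 1.00000000000000000000
    writeln(a == b);      // true
}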

You may want to just get used to this; it's the way floating point has always worked.

-Steve