February 21, 2014
On Friday, 21 February 2014 at 05:21:53 UTC, Frustrated wrote:
> I think though adding a "repeating" bit would make it even more
> accurate so that repeating decimals within the bounds of maximum
> bits used could be represented perfectly. e.g., 1/3 = 0.3333...
> could be represented perfectly with such a bit and sliding fp
> type. With proper cpu support one could have 0.3333... * 3 = 1
> exactly.

I believe that the repeating decimals, or rather, repeating binary fractions, will hardly be more useful than a rational representation like p/q.  The reason is, if we take a reciprocal 1/q and represent it as a binary, decimal, or other fixed-base fraction, the representation is surely periodic, but the upper bound on the length of its period is as high as q-1, and it is not unlikely to be the exact bound.

For example, at http://en.wikipedia.org/wiki/Binary_number#Fractions , note that the binary fraction for 1/11 has period 10, and for 1/13 the period is 12.

Thus a fraction p/q will take up to q-1 bits when stored as a repeating decimal, but only log(p)+log(q) bits when stored as a rational number (numerator and denominator).
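Incidentally, the period is just the multiplicative order of the base modulo q, so it is easy to check for any particular q.  A quick sketch in D (the function name is mine, and it assumes q is odd and greater than 1):

import std.stdio;

// Multiplicative order of base modulo q: the length of the period
// of the expansion of 1/q, assuming gcd(base, q) == 1 and q > 1.
ulong periodLength(ulong q, ulong base = 2)
{
    ulong r = base % q;
    ulong len = 1;
    while (r != 1)
    {
        r = (r * base) % q;
        ++len;
    }
    return len;
}

void main()
{
    writeln(periodLength(11)); // 10: period of 1/11 in binary
    writeln(periodLength(13)); // 12: period of 1/13 in binary
    writeln(periodLength(9));  // 6:  1/9 = 0.(000111) in binary
}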

Ivan Kazmenko.
February 21, 2014
On Friday, 21 February 2014 at 09:04:40 UTC, Ivan Kazmenko wrote:
> I believe that the repeating decimals, or rather, repeating binary fractions, will hardly be more useful than a rational representation like p/q.

Yeah, in retrospect I would say that a binary layout like:

numerator length | +- | numerator | denominator

Might be a stronger solution, or at least one which offers different advantages over what we have today. It still wouldn't be able to represent pi exactly, since pi isn't a rational number, but I wouldn't be surprised if there are rational numbers it could represent exactly that cannot be represented in IEEE format.
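To make that concrete, here is a rough sketch in D of packing such a layout into a 64-bit word; the field widths (a 6-bit length field, a sign bit, and 57 bits shared by numerator and denominator) are just one possible choice for illustration, not a worked-out format:

import std.stdio;
import core.bitop : bsr;

// Layout: numerator length (6 bits) | sign (1 bit) | numerator
// (left-justified) | denominator (low bits).
ulong packRational(long num, ulong den)
{
    immutable bool neg = num < 0;
    immutable ulong mag = neg ? cast(ulong)(-num) : cast(ulong) num;
    immutable uint nbits = mag == 0 ? 1 : bsr(mag) + 1;
    assert(nbits <= 57 && (den == 0 || bsr(den) + 1 + nbits <= 57),
           "numerator and denominator must share 57 bits");
    ulong word = cast(ulong) nbits << 58;   // length field
    word |= (neg ? 1UL : 0UL) << 57;        // sign field
    word |= mag << (57 - nbits);            // numerator
    word |= den;                            // denominator
    return word;
}

void main()
{
    writefln("%064b", packRational(-3, 7));
}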
February 21, 2014
On Friday, 21 February 2014 at 07:42:36 UTC, francesco cattoglio
wrote:
> On Friday, 21 February 2014 at 05:21:53 UTC, Frustrated wrote:
>>
>> I think though adding a "repeating" bit would make it even more
>> accurate so that repeating decimals within the bounds of maximum
>> bits used could be represented perfectly. e.g., 1/3 = 0.3333...
>> could be represented perfectly with such a bit and sliding fp
>> type. With proper cpu support one could have 0.3333... * 3 = 1
>> exactly.
>>
>> By having two extra bits one could represent constants to any
>> degree of accuracy. e.g., the last bit says the value represents
>> the ith constant in some list. This would allow very common
>> irrational constants to be used: e, pi, sqrt(2), etc...
> Unfortunately maths (real-world maths) isn't made of "common" constants. Moreover, such a "repeating bit" would have to become a "repeating counter", since you usually get a certain number of repeating digits, not just a single one.
>

I disagree. Many computations done by computers involve constants
in the calculation... pi, e, sqrt(2), sqrt(-1), many physical
constants, etc. The cpu could store these constants at a higher
bit resolution than standard, say 128 bits or whatever.

>> For floating points, the 3rd last bit could represent a repeating
>> decimal or they could be used in the constants for common
>> repeating decimals. (since chances are, repeated calculations
>> would not produce repeating decimals)
> Things like those are cool and might have their applications (I'm thinking mostly of message passing via TCP/IP), but they have no real use in scientific computation. If you want good precision, you are better off with bignum numbers.

Simply not true. Maple, for example, uses constants and can
compute with them. It might speed up calculations significantly
if one could compute with constants at the cpu level. e.g.,
pi*sqrt(2) is just another constant, and so is 2*pi. Obviously
there is a limit to the number of constants one can store, but
many constants can be encoded easily without extra storage
(where the "index" itself is used as the value).
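For illustration, such an index-as-value scheme is trivial to
sketch in D (the pool contents here are arbitrary examples, not
a proposed standard set):

import std.stdio;
import std.math : E, PI, SQRT2;

// A pool of constants kept at full precision; the number that is
// actually stored is just an index into this table.
immutable real[4] constantPool = [PI, E, SQRT2, 2.0L * PI];

real lookupConstant(size_t index)
{
    return constantPool[index];
}

void main()
{
    foreach (i; 0 .. constantPool.length)
        writefln("constant %s = %.20f", i, lookupConstant(i));
}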

Also, the cpu could reduce values to constants... so if you have
a fp value that is 3.14159265... or whatever, the cpu could
convert it to the constant pi, since for all practical purposes
it is pi (even if it is really just an approximation such as
pi - 1/10^10).

Basically the benefits of this (actual and potential) outweigh
the cost of 1 bit out of 64. I doubt anyone would miss it. In
fact, there is probably no real loss: NaN could itself be a
constant, in which case you would have only one NaN encoding
rather than the 2 billion or whatever you have now.

February 21, 2014
On Friday, 21 February 2014 at 19:12:39 UTC, Frustrated wrote:
> Simply not true. Maple, for example, uses constants and can
> compute with them.
You are mixing up symbolic computation and numerical computation. The two are completely unrelated.
> Basically the benefits of this (actual and potential) outweigh
> the cost of 1 bit out of 64.
Unless I'm completely mistaken, what you are proposing is basically a tagged union type. It's pretty much unrelated to the "unum" proposed by the PDF.
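Something along these lines, a rough D sketch (names and field choices are mine, purely illustrative):

import std.math : E, PI;
import std.stdio : writeln;

immutable double[2] pool = [PI, E]; // illustrative constant pool

enum NumberKind : ubyte { ieee, rational, constantIndex }

struct TaggedNumber
{
    NumberKind kind;
    union
    {
        double value;                    // kind == ieee
        struct { long num; ulong den; }  // kind == rational
        size_t index;                    // kind == constantIndex
    }
}

double eval(const TaggedNumber n)
{
    final switch (n.kind)
    {
        case NumberKind.ieee:          return n.value;
        case NumberKind.rational:      return cast(double) n.num / n.den;
        case NumberKind.constantIndex: return pool[n.index];
    }
}

void main()
{
    TaggedNumber t;
    t.kind = NumberKind.rational;
    t.num = 1;
    t.den = 3;
    writeln(eval(t)); // 0.333333...
}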
February 21, 2014
On Friday, 21 February 2014 at 19:59:36 UTC, Francesco Cattoglio
wrote:
> On Friday, 21 February 2014 at 19:12:39 UTC, Frustrated wrote:
>> Simply not true. Maple, for example, uses constants and can
>> compute with them.
> You are mixing up symbolic computation and numerical computation. The two are completely unrelated.
>> Basically the benefits of this (actual and potential) outweigh
>> the cost of 1 bit out of 64.
> Unless I'm completely mistaken, what you are proposing is basically a tagged union type. It's pretty much unrelated to the "unum" proposed by the PDF.

Yes, I mentioned that we could extend floating point to include
other representations that are more efficient and reduce errors.
February 21, 2014
On Friday, 21 February 2014 at 09:04:40 UTC, Ivan Kazmenko wrote:
> On Friday, 21 February 2014 at 05:21:53 UTC, Frustrated wrote:
>> I think though adding a "repeating" bit would make it even more
>> accurate so that repeating decimals within the bounds of maximum
>> bits used could be represented perfectly. e.g., 1/3 = 0.3333...
>> could be represented perfectly with such a bit and sliding fp
>> type. With proper cpu support one could have 0.3333... * 3 = 1
>> exactly.
>
> I believe that the repeating decimals, or rather, repeating binary fractions, will hardly be more useful than a rational representation like p/q.  The reason is, if we take a reciprocal 1/q and represent it as a binary, decimal, or other fixed-base fraction, the representation is surely periodic, but the upper bound on the length of its period is as high as q-1, and it is not unlikely to be the exact bound.
>
> For example, at http://en.wikipedia.org/wiki/Binary_number#Fractions , note that the binary fraction for 1/11 has period 10, and for 1/13 the period is 12.
>
> Thus a fraction p/q will take up to q-1 bits when stored as a repeating decimal, but only log(p)+log(q) bits when stored as a rational number (numerator and denominator).
>

No, you are comparing apples to oranges. The q is not the same in
both equations.

The number of bits for p/q when p and q are stored separately is
log[2](p) + log[2](q). But when p/q is stored as a repeating
decimal (assuming it repeats), the cost is whatever fixed number
of bits we set aside for the representation.

So in some cases the rational expression is cheaper and in other
cases the repeating decimal is cheaper.

e.g., 0.111... (the decimal expansion of 1/9) in theory takes
1 bit of period to represent, while the rational 1/9 takes
1 + 4 = 5 bits.

(The point being that the rational representation sometimes
takes far more bits than the repeating-decimal representation,
and other times it doesn't. Basically, if we use n bits to
represent the repeating part, we can't represent repetitions
that require more than n bits, and in those cases a rational
expression may be cheaper. If we use some type of compression
then rational expressions would, in general, be cheaper the
larger n is.)

e.g.,

1/9 takes 5 bits to represent as a rational fraction (4 if we
require the numerator to be 1).

0.111... takes 1 bit to represent in theory, but with a fixed
field of n bits it always costs those n bits.

e.g.,

1/9 = 111/111111 in binary (i.e., 7/63), giving the repeating
binary fraction 0.000111000111..., which takes 6 bits to
store (000111).
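To compare the two costs concretely, a small D sketch (helper
names are mine; it counts only the period of the binary
expansion against the raw bit lengths of numerator and
denominator, ignoring sign and length fields):

import std.stdio;
import core.bitop : bsr;

ulong bits(ulong x) { return x == 0 ? 1 : bsr(x) + 1; }

// Period of the binary expansion of 1/q: the multiplicative
// order of 2 modulo the odd part of q (0 means it terminates).
ulong periodBits(ulong q)
{
    while (q % 2 == 0) q /= 2; // factors of 2 only shift digits
    if (q == 1) return 0;
    ulong r = 2 % q, len = 1;
    while (r != 1) { r = (r * 2) % q; ++len; }
    return len;
}

void main()
{
    foreach (q; [9UL, 11, 13, 127, 9999])
        writefln("1/%s: rational 1+%s bits, period %s bits",
                 q, bits(q), periodBits(q));
}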

If n were large and fixed then I think statistically it would be
best to use rational expressions rather than repeating decimals.
But rational representations are not unique, and that may cause
some problems... unless the cpu implemented some reduction
algorithm. 1/9 = 10/90, but 10/90 takes far more bits to
represent than 1/9 even though they denote the same repeating
decimal, whose bits are fixed.

In any case, if the fpu had a history of calculations it could
potentially store the calculations in an expression tree and
attempt to reduce them. e.g., p/q*(q/p) = 1. A history could also
be useful for constants in that multiplying several constants
together could produce a new constant which would not introduce
any new accumulation errors.
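The p/q*(q/p) = 1 case doesn't even need a history: exact
rational arithmetic that reduces after every operation gives it
for free. A sketch in D of just that reduction step (the
expression-tree part is beyond a few lines):

import std.numeric : gcd;
import std.stdio : writeln;

// A rational type that reduces after every multiply, so
// (p/q) * (q/p) comes out as exactly 1/1.
struct Rat
{
    long num;
    long den;

    Rat opBinary(string op : "*")(Rat rhs) const
    {
        auto n = num * rhs.num;
        auto d = den * rhs.den;
        immutable g = gcd(n < 0 ? -n : n, d);
        return Rat(n / g, d / g);
    }
}

void main()
{
    auto x = Rat(3, 7) * Rat(7, 3);
    writeln(x.num, "/", x.den); // 1/1
}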

February 22, 2014
On Friday, 21 February 2014 at 20:27:18 UTC, Frustrated wrote:
> On Friday, 21 February 2014 at 09:04:40 UTC, Ivan Kazmenko wrote:
>> Thus a fraction p/q will take up to q-1 bits when stored as a repeating decimal, but only log(p)+log(q) bits when stored as a rational number (numerator and denominator).
>
> No, you are comparing apples to oranges. The q is not the same in both equations.

Huh?  I believe my q is actually the same q in both.

What I am saying is, more formally:

1. Representing p/q as a rational number asymptotically takes on the order of log(p)+log(q) bits.

2. Representing p/q as a repeating binary fraction asymptotically takes on the order of log(p)+q bits in the worst case, and this worst case is common.

Make that a decimal fraction instead of a binary fraction if you wish; the statement remains the same.

> The number of bits for p/q when p and q are stored separately is
> log[2](p) + log[2](q). But when p/q is stored as a repeating
> decimal (assuming it repeats), the cost is whatever fixed number
> of bits we set aside for the representation.

A rational represented as a fixed-base (binary, decimal, etc.) fraction always has a period.  Sometimes the period is degenerate, that is, consists of a single zero: 1/2 = 0.100000... = 0.1 as a binary fraction.  Sometimes there is a non-degenerate pre-period part before the period: 13/10 = 1.0100110011 = 1.0(1001) as a binary fraction, the "1.0" part being the pre-period and the "(1001)" part the period.  The same goes for decimal fractions.
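Detecting the pre-period and the period is a standard long-division exercise: divide, remembering each remainder, and stop when a remainder repeats.  A small sketch in D (the function name is mine):

import std.stdio;

// Binary long division of p/q, recording which digit position
// each remainder produced; the first repeated remainder marks
// the start of the period.
void expand(ulong p, ulong q)
{
    size_t[ulong] seen; // remainder -> digit position
    char[] digits;
    ulong r = p % q;
    while (r != 0 && r !in seen)
    {
        seen[r] = digits.length;
        r *= 2;
        digits ~= cast(char)('0' + r / q);
        r %= q;
    }
    if (r == 0)
        writefln("%s/%s = %s.%s (terminates)", p, q, p / q, digits);
    else
        writefln("%s/%s = %s.%s(%s)", p, q, p / q,
                 digits[0 .. seen[r]], digits[seen[r] .. $]);
}

void main()
{
    expand(1, 9);   // 1/9   = 0.(000111)
    expand(13, 10); // 13/10 = 1.0(1001)
}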

> So in some cases the rational expression is cheaper and in other
> cases the repeating decimal is cheaper.

Sure, it depends on p and q; I was talking about the asymptotic case.  Say, for p and q uniformly distributed in [100..999], I believe the rational representation is much shorter on average.  We can measure that if the practical need arises... :)

In practice, one may argue that large prime denominators don't occur too often; well, for such applications, fine then.

> If n were large and fixed then I think statistically it would be
> best to use rational expressions rather than repeating decimals.

Yeah, that's similar to what I was trying to state.

> But rational representations are not unique, and that may cause
> some problems... unless the cpu implemented some reduction
> algorithm. 1/9 = 10/90, but 10/90 takes far more bits to
> represent than 1/9 even though they denote the same repeating
> decimal, whose bits are fixed.

Given a rational, bringing it to lowest terms is as easy (yet costly) as calculating the GCD.
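In D that is just std.numeric.gcd:

import std.numeric : gcd;
import std.stdio : writeln;

// Lowest terms in one GCD.
void reduce(ref ulong p, ref ulong q)
{
    immutable g = gcd(p, q);
    p /= g;
    q /= g;
}

void main()
{
    ulong p = 10, q = 90;
    reduce(p, q);
    writeln(p, "/", q); // 1/9
}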

> In any case, if the fpu had a history of calculations it could
> potentially store the calculations in an expression tree and
> attempt to reduce them. e.g., p/q*(q/p) = 1. A history could also be useful for constants in that multiplying several constants
> together could produce a new constant which would not introduce
> any new accumulation errors.

As far as I know, most compilers do that for constants known at compile time.  As for the run-time usage of constants, note that the pool of constants will still have to be stored somewhere, and addressed somehow, and this may cancel out the performance gains.
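For the compile-time part, D in particular folds even function calls over constants, via CTFE:

import std.math : PI, sqrt;
import std.stdio : writeln;

// Evaluated entirely by the compiler; no run-time cost.
enum real piTimesSqrt2 = PI * sqrt(2.0L);

void main()
{
    writeln(piTimesSqrt2);
}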

Ivan Kazmenko.
February 22, 2014
On Saturday, 22 February 2014 at 14:17:23 UTC, Ivan Kazmenko wrote:
> Sometimes there is a non-degenerate pre-period part before the period:
> 13/10 = 1.0100110011 = 1.0(1001) as a binary fraction, the "1.0" part being the pre-period and the "(1001)" part the period.  The same goes for decimal fractions.

Sorry, I meant 13/10 = 1.0100110011..., that is, the "1001" part repeating forever - as opposed to just stopping at 10-th bit of the fraction.
February 23, 2014
Hi

I will ask my question again.

Is there any interest in this format within the D community?

Nick
February 23, 2014
On 20 February 2014 22:46, bearophile <bearophileHUGS@lycos.com> wrote:
> Iain Buclaw:
>
>
>> Most encoding formats use a form of discrete cosine transform, which involves floating point maths.
>
>
> I don't believe this much :-(
>


:-(

I looked up vorbis (very, very briefly) by way of example.  Most structs that deal with encoding/decoding use integers, floats and doubles to represent values such as decay, attenuation, thresholds, amplification, etc.