Thread overview
std.math.TAU
Jul 05, 2011  James Fisher
Jul 05, 2011  James Fisher
Jul 05, 2011  James Fisher
Jul 05, 2011  Don
Jul 05, 2011  James Fisher
Jul 05, 2011  Don
Jul 05, 2011  Walter Bright
Jul 06, 2011  KennyTM~
Jul 06, 2011  Walter Bright
Jul 05, 2011  James Fisher
Jul 05, 2011  Nick Sabalausky
July 05, 2011
Hopefully this won't be taken as frivolous.  I (and possibly some of you) have been convinced by the argument at http://tauday.com/.  It's very convincing, and I won't rehash it here.

The use of τ instead of π will only become really convenient when one does not have to preface everything with "let τ = 2π".

For example, in D, in order to think in terms of τ instead of π, one must define `enum real TAU = std.math.PI * 2;`, and possibly also TAU_2, TAU_4, etc.

As well as being a typing inconvenience, I also think things are not that easy due to loss of precision (though I'm far from an expert on intricacies of floating point).

There is an initiative to add TAU to the Python standard library: http://www.python.org/dev/peps/pep-0628/

To this end, I suggest adding the constant TAU to std.math, and possibly also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4.

In any case, I'd like to know what's necessary in order for me to define
these constants without loss of precision.
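
For concreteness, here's a rough sketch of the kind of definitions I have in mind (the TAU* names are just this proposal, not existing std.math symbols):

    import std.math : PI;
    import std.stdio : writefln;

    // Proposed constants (hypothetical names):
    enum real TAU   = 2 * PI;   // one full turn, 2π
    enum real TAU_2 = TAU / 2;  // π
    enum real TAU_4 = TAU / 4;  // π/2
    enum real TAU_8 = TAU / 8;  // π/4

    void main()
    {
        writefln("TAU = %.20f", TAU);
        assert(TAU_2 == PI);  // dividing by a power of two is exact here
    }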


July 05, 2011
On Tue, 05 Jul 2011 04:31:09 -0400, James Fisher <jameshfisher@gmail.com> wrote:

> Hopefully this won't be taken as frivolous.  I (and possibly some of you)
> have been convinced by the argument at http://tauday.com/.  It's very
> convincing, and I won't rehash it here.
>
> The use of τ instead of π will only become really convenient when one does
> not have to preface everything with "let τ = 2π".
>
> For example, in D, in order to think in terms of τ instead of π, one must
> define `enum real TAU = std.math.PI * 2;`, and possibly also TAU_2, TAU_4,
> etc.
>
> As well as being a typing inconvenience, I also think things are not that
> easy due to loss of precision (though I'm far from an expert on intricacies
> of floating point).
>
> There is an initiative to add TAU to the Python standard library:
> http://www.python.org/dev/peps/pep-0628/
>
> To this end, I suggest adding the constant TAU to std.math, and possibly
> also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4.
>
> In any case, I'd like to know what's necessary in order for me to define
> these constants without loss of precision.

I read an article about this recently, it's definitely interesting.  The one place where I haven't seen it mentioned is what happens when you want the area of a circle, since that necessarily involves the radius.  I'd guess you'd have to use τ/2 * r^2, but even then, that's one formula vs. the rest.  It's probably a good tradeoff.  I can definitely see the advantage when using radians.  Never thought I'd have to re-learn trig again...

One thing I like about Pi vs Tau is that Pi cannot be mistaken for a normal character.

I'm not a floating point expert, but I would expect since floating point is stored in binary, dividing or multiplying by 2 loses no precision at all.  But I could be wrong...

-Steve
July 05, 2011
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Tue, 05 Jul 2011 04:31:09 -0400, James Fisher <jameshfisher@gmail.com> wrote:
>
>  Hopefully this won't be taken as frivolous.  I (and possibly some of you)
>> have been convinced by the argument at http://tauday.com/.  It's very convincing, and I won't rehash it here.
>>
>> The use of τ instead of π will only become really convenient when one does not have to preface everything with "let τ = 2π".
>>
>> For example, in D, in order to think in terms of τ instead of π, one must define `enum real TAU = std.math.PI * 2;`, and possibly also TAU_2, TAU_4, etc.
>>
>> As well as being a typing inconvenience, I also think things are not that
>> easy due to loss of precision (though I'm far from an expert on
>> intricacies
>> of floating point).
>>
>> There is an initiative to add TAU to the Python standard library: http://www.python.org/dev/peps/pep-0628/
>>
>> To this end, I suggest adding the constant TAU to std.math, and possibly also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4.
>>
>> In any case, I'd like to know what's necessary in order for me to define
>> these constants without loss of precision.
>>
>
> I read an article about this recently, it's definitely interesting.  The
> one place where I haven't seen it mentioned is what happens when you want
> the area of a circle, since that necessarily involves the radius.  I'd guess
> you'd have to use τ/2 * r^2, but even then, that's one formula vs. the rest.
>  It's probably a good tradeoff.  I can definitely see the advantage when
> using radians.  Never thought I'd have to re-learn trig again...
>
> One thing I like about Pi vs Tau is that it cannot be mistaken for a normal character.
>
> I'm not a floating point expert, but I would expect since floating point is
> stored in binary, dividing or multiplying by 2 loses no precision at all.
>  But I could be wrong...
>

Sorry, I didn't state this very clearly.  Multiplying the approximation of PI in std.math by 2 should yield the exact double of that approximation, as it should just involve increasing the exponent by 1.  However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant].  Does that make sense?


July 05, 2011
On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher@gmail.com> wrote:
>
> Sorry, I didn't state this very clearly.  Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1.  However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant].  Does that make sense?
>

(I think this is why the constants in math.d <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206> are each defined separately rather than in terms of each other.)


July 05, 2011
"James Fisher" <jameshfisher@gmail.com> wrote in message news:mailman.1426.1309854678.14074.digitalmars-d@puremagic.com...
>Hopefully this won't be taken as frivolous.  I (and possibly some of you) have been convinced by the argument at http://tauday.com/.  It's very convincing, and I won't rehash it here.

He had me at "TAU == 2PI"

I'm sold.


July 05, 2011
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
>
> I read an article about this recently, it's definitely interesting.  The
> one place where I haven't seen it mentioned is what happens when you want
> the area of a circle, since that necessarily involves the radius.  I'd guess
> you'd have to use τ/2 * r^2, but even then, that's one formula vs. the rest.
>  It's probably a good tradeoff.  I can definitely see the advantage when
> using radians.  Never thought I'd have to re-learn trig again...
>

It embarrasses me to say that, after many years, working with radians and pi still makes my head hurt.  "So I have to multiply -- no wait, divide -- no wait, multiply that by 2 ..."


July 05, 2011
James Fisher wrote:
> On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher@gmail.com> wrote:
> 
>     Sorry, I didn't state this very clearly.  Multiplying the
>     approximation of PI in std.math should yield the exact double of
>     that approximation, as it should just involve increasing the
>     exponent by 1.  However, [double the approximation of the constant]
>     is not necessarily equal to [the approximation of double the
>     constant].  Does that make sense?

I understand what you're getting at, but actually multiplication by powers of 2 is always exact for binary floating point numbers.
The reason is that the rounding is based on the values after the lowest bit of the _significand_. The exponent plays no role.
Multiplication or division by two doesn't change the significand at all, only the exponent, so if the rounding was correct before, it is still correct after the multiplication.

Or to put it another way: PI in binary is an infinitely long string of ones and zeros. Multiplying it by two only shifts the string left or right; it doesn't change any of the ones to zeros, etc., so the approximation doesn't change either.
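
You can check this directly with std.math.frexp, which splits a value into its significand and exponent; a quick sketch:

    import std.math : PI, frexp;

    void main()
    {
        int expPi, expTau;
        real sigPi  = frexp(PI, expPi);      // significand in [0.5, 1)
        real sigTau = frexp(2 * PI, expTau);

        // Same significand, exponent bumped by one -- no rounding involved.
        assert(sigPi == sigTau);
        assert(expTau == expPi + 1);
    }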


> (I think this is why the constants in math.d <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206> are each defined separately rather than in terms of each other.)

Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in terms of PI. Probably should fix that.
July 05, 2011
On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam@nospam.com> wrote:

> James Fisher wrote:
>
>> On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher@gmail.com> wrote:
>>
>>    Sorry, I didn't state this very clearly.  Multiplying the
>>    approximation of PI in std.math should yield the exact double of
>>    that approximation, as it should just involve increasing the
>>    exponent by 1.  However, [double the approximation of the constant]
>>    is not necessarily equal to [the approximation of double the
>>    constant].  Does that make sense?
>>
>
> I understand what you're getting at, but actually multiplication by powers
> of 2 is always exact for binary floating point numbers.
> The reason is that the rounding is based on the values after the lowest bit
> of the _significand_. The exponent plays no role.
> Multiplication or division by two doesn't change the significand at all,
> only the exponent, so if the rounding was correct before, it is still
> correct after the multiplication.
>
> Or to put it another way: PI in binary is a infinitely long string of 1s and zeros. Multiplying it by two only shifts the string left and right, it doesn't change any of the 1s to 0s, etc, so the approximation doesn't change either.
>

Great explanation, thanks.

>> (I think this is why the constants in math.d <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206> are each defined separately rather than in terms of each other.)
>>
>
> Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in terms of PI. Probably should fix that.
>

Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix?  And is there a significance to the number of decimal/hexadecimal places -- e.g., is this the minimum places required to ensure the closest floating point value for all common hardware accuracies?


July 05, 2011
James Fisher wrote:
> On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam@nospam.com> wrote:
> 
>     James Fisher wrote:
> 
>         On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher@gmail.com> wrote:
> 
>            Sorry, I didn't state this very clearly.  Multiplying the
>            approximation of PI in std.math should yield the exact double of
>            that approximation, as it should just involve increasing the
>            exponent by 1.  However, [double the approximation of the
>         constant]
>            is not necessarily equal to [the approximation of double the
>            constant].  Does that make sense?
> 
> 
>     I understand what you're getting at, but actually multiplication by
>     powers of 2 is always exact for binary floating point numbers.
>     The reason is that the rounding is based on the values after the
>     lowest bit of the _significand_. The exponent plays no role.
>     Multiplication or division by two doesn't change the significand at
>     all, only the exponent, so if the rounding was correct before, it is
>     still correct after the multiplication.
> 
>     Or to put it another way: PI in binary is a infinitely long string
>     of 1s and zeros. Multiplying it by two only shifts the string left
>     and right, it doesn't change any of the 1s to 0s, etc, so the
>     approximation doesn't change either.
> 
> 
> Great explanation, thanks.
> 
>         (I think this is why the constants in math.d
>         <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206>
>         are each defined separately rather than in terms of each other.)
> 
> 
>     Hmm. I'm not sure why PI_2 and PI_4 are there. They should be
>     defined in terms of PI. Probably should fix that.
> 
> 
> Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix?  

The ones defined in decimal are obsolete, they haven't had a conversion to hex yet.

> And is there a significance
> to the number of decimal/hexadecimal places -- e.g., is this the minimum places required to ensure the closest floating point value for all common hardware accuracies?

Yes, it's 80 bit. Currently there's a problem with DMC's floating-point parser, all those numbers should really be 128 bit (we should be ready for 128 bit quads).
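
A quick way to inspect the exact bit pattern a constant ends up with is the hexadecimal float format; a minimal sketch:

    import std.math : PI;
    import std.stdio : writefln;

    void main()
    {
        // %a prints the exact hexadecimal floating-point representation,
        // so it can be compared directly against the literal in std.math.
        writefln("PI     = %a", PI);
        writefln("2 * PI = %a", 2 * PI);
    }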
July 05, 2011
On 7/5/2011 3:45 PM, Don wrote:
>> Another thing -- why are some constants defined in decimal, others in hex, and
>> one (E) with the long 'L' suffix?
>
> The ones defined in decimal are obsolete, they haven't had a conversion to hex yet.

The ones in hex I got out of a book that helpfully printed them as octal values. I wanted exact bit patterns, not decimal conversions that might suffer if there's a flaw in the lexer.

It's hard to come by textbook values for some of these that are high precision.

It's definitely not good enough to just write some simple fp program to generate them.