December 17, 2019
On Tuesday, 17 December 2019 at 18:41:01 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
>> Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is guaranteed to be 1.0.
>
> The limit for 0^0 does not exist, and floating point does not represent exactly zero, only approximately 0.

That's precisely why it is funny that the two cases are handled differently!
December 17, 2019
On Tuesday, 17 December 2019 at 19:41:22 UTC, Timon Gehr wrote:
> That's precisely why it is funny that the two cases are handled differently!

I wish I could see the humour in this. I want to laugh as well... :-/

But all I see there is pragmatism.

Anyway, for numeric programming one should in general stay away from 0.0. Some people add noise to their calculations just to avoid issues that arise close to 0.0.
December 17, 2019
On Tuesday, 17 December 2019 at 18:49:37 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 16:48:42 UTC, Martin Tschierschke wrote:
>> But 0^^0 in general, is very often replaced by lim x-> 0 x^^x
>
> Well, but if you do the lim of x^^y you either get 1 or 0 depending on how you approach it.

No, you can get any real value at all. Anything you want:

For x>1, lim[t→0⁺] (x^(-1/t))^(-t) = x (for 0<x<1, use (x^(1/t))^t instead, so that the base still tends to 0).

lim[t→0⁺] 0^t = 0.

For x<-1, lim[n→∞] (x^(-(2·n+1)))^(-1/(2·n+1)) = x, taking odd roots (for -1<x<0, use (x^(2·n+1))^(1/(2·n+1))).

You can also get infinity or negative infinity. pow for real arguments is maximally discontinuous at (0,0) (and it does not matter at all). The following Wikipedia article, which was helpfully pasted earlier and which you clearly did not read, states this explicitly:
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero

It also says that if you restrict yourself to analytic functions f, g: ℝ_{≥0} → ℝ with f(0)=g(0)=0 and f(x)≠0 for x in some neighbourhood around 0, then we actually do have lim[t→0⁺] f(t)^g(t) = 1. I.e., while other results are possible, in many cases your computation actually does want a result of 1, even if you are using floating-point operations. There are multiple functions defined in the floating-point standard that use different conventions, and 1 is the default, for good reason.
December 17, 2019
On Tuesday, 17 December 2019 at 19:12:00 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 18:41:01 UTC, Ola Fosheim Grøstad wrote:
>> On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
>>> Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is guaranteed to be 1.0.
>
> Besides, that is not what it said on the page.

Yes, this is precisely what it says on the page. It may not be what you read, and that is because you cut corners and didn't read the entire page. I take a lot of care to validate my own statements. Please do the same.

> It said that 0^0 may lead to a domain error. [...]

"If a domain error occurs, an implementation-defined value is returned (NaN where supported)"
December 17, 2019
On Tuesday, 17 December 2019 at 19:53:06 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 19:41:22 UTC, Timon Gehr wrote:
>> That's precisely why it is funny that the two cases are handled differently!
>
> I wish I could see the humour in this. I want to laugh as well... :-/
>
> ...

pow(1-ε,∞) is 0.
pow(1+ε,∞) is ∞.

pow is unstable at ∞ as much as at 0. It's plain weird to think 0.0 is rounded garbage but 1.0 is not, as 1.0+0.0 = 1.0.
December 17, 2019
On Tuesday, 17 December 2019 at 20:35:33 UTC, Timon Gehr wrote:
>> Besides, that is not what it said on the page.
>
> Yes, this is precisely what it says on the page.

Er… no. As I said, it is an ISO standard, and thus exists to codify existing practice. That means that representatives from some countries can block decisions. So first the webpage says that you may get a domain error. Then it refers to an IEC standard from 1989.

The «may» part is usually there so as not to make life difficult for existing implementations. So the foundation is IEC, but to bring everyone on board they probably put in openings that _MAY_ be used.

This is what you get from standardization. The purpose of ISO standardization is not to create something new and pretty, but to reduce tendencies towards diverging ad hoc or proprietary standards. It is basically there to support international markets and fair competition… not to create beautiful objects.

The process isn't really suited for programming language design; I think C++ is an outlier.

December 17, 2019
On Tuesday, 17 December 2019 at 20:43:20 UTC, Timon Gehr wrote:
> pow is unstable at ∞ as much as at 0. It's plain weird to think 0.0 is rounded garbage but 1.0 is not, as 1.0+0.0 = 1.0.

You need to look at this from the standardization POV, for instance: what do existing machine-language instructions produce? There are many angles to this; some implementors will use hardware instructions that trap on low-accuracy results and then switch to a software implementation.

However, in practice infinity is much less of an issue and relatively easy to avoid, while low accuracy around 0.0 that leads to instability is much more frequent.

But there are various tricks that can be used to increase accuracy. For instance, you can convert

a*b*c*…   to log(a)+log(b)+log(c)+…

and so on.
December 17, 2019
On Tuesday, 17 December 2019 at 20:55:07 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 20:35:33 UTC, Timon Gehr wrote:
>>> Besides, that is not what it said on the page.
>>
>> Yes, this is precisely what it says on the page.
>
> Er.. No.

It says implementations that support NaN may choose to return NaN instead of 1. If we agree, as I intended, to not consider implementations with no NaN support, how exactly is this not what it says on the page? (Please no more pointless elaborations on what common terms mean, ideally just tell me what, in your opinion, is a concise description of what results an implementation is _allowed_ to give me for std::pow(0.0,0.0) on x86. For instance, if I added a single special case for std::pow(0.0,0.0) to a standards-compliant C++17 implementation for x86-64 with floating-point support, which values could I return without breaking C++17 standard compliance?)

> As I said, it is an ISO standard, and thus exists to codify existing practice. That means that  some representatives from countries can block decisions.

(I'm aware.)

> So first the webpage says that you may get a domain error. Then it refers to an IEC standard from 1989.
> ...

They don't say that C++ `std::pow` itself is supposed to satisfy the constraints of `pow` in that standard, and as far as I can tell that is either not the case, or that constraint was not in the floating-point standard at the time, as this article states that C++ leaves the result unspecified:

https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero#Programming_languages

If you think this is inaccurate, you should probably take the fight to whoever wrote that article; it is where I double-checked my claim, and it had been linked in this thread before I made that claim. But it seems to be right, as it also says that the C99 standard was explicitly amended to require pow(0.0,0.0)==1.0.

> The «may» part is usually there so as not to make life difficult for existing implementations. So the foundation is IEC, but to bring everyone on board they probably put in openings that _MAY_ be used.
>
> This is what you get from standardization. The purpose of ISO standardization is not to create something new and pretty, but to reduce tendencies towards diverging ad hoc or proprietary standards. It is basically there to support international markets and fair competition… not to create beautiful objects.
> ...

I didn't say that the result was _supposed_ to be beautiful, just that at face value it is ugly and funny. In any case, you will probably agree that it's not a place to draw inspiration from for the subject matter of this thread.

> The process isn't really suited for programming language design, I think C++ is an outlier.

Indeed, however it is still somewhat common for very popular languages:
https://en.wikipedia.org/wiki/Category:Programming_languages_with_an_ISO_standard

December 18, 2019
On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
> what it says on the page? (Please no more pointless elaborations on what common terms mean,

Well, «may» has other connotations in standard texts than in ordinary language, so I read such texts differently than you, obviously.


> on x86. For instance, if I added a single special case for std::pow(0.0,0.0) to a standards-compliant C++17 implementation for x86-64 with floating-point support, which values could I return without breaking C++17 standard compliance?)

Whatever you like. It is implementation-defined. That does not mean it is encouraged to return something random.

According to the standard, x^y is defined as:

exp(y * log(x))


The problem with floating point is that what you want depends on the application. If you want to be (more) certain that you don't return inaccurate calculations then you want NaN or some other "exception" for all inaccurate operations. So that you can switch to a different algorithm. If you do something real time you probably just want something "reasonable".


> Indeed, however it is still somewhat common for very popular languages:

Yes, but some have built up the standard under less demanding regimes like ECMA, then improved on it under ISO.

I am quite impressed that C++ ISO moves anywhere at all (and mostly in the right direction), given how hard it is to reach consensus on anything related to language design and changes! :-)
December 18, 2019
On Wednesday, 18 December 2019 at 00:03:14 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
>> what it says on the page? (Please no more pointless elaborations on what common terms mean,
>
> Well, «may» has other connotations in standard texts than in ordinary language, so I read such texts differently than you, obviously.
>
>
>> on x86. For instance, if I added a single special case for std::pow(0.0,0.0) to a standards-compliant C++17 implementation for x86-64 with floating-point support, which values could I return without breaking C++17 standard compliance?)

Please note that you can test for ISO/IEC/IEEE 60559 conformance at compile time using:

static constexpr bool is_iec559;

«true if and only if the type adheres to ISO/IEC/IEEE 60559. Meaningful for all floating-point types.»

Which gives guarantees:

For the pown function (integral exponents only):
pown(x, 0) is 1 for any x (even a zero, quiet NaN, or infinity)
pown(±0, n) is ±∞ and signals the divideByZero exception for odd integral n<0
pown(±0, n) is +∞ and signals the divideByZero exception for even integral n<0
pown(±0, n) is +0 for even integral n>0
pown(±0, n) is ±0 for odd integral n>0.

For the pow function (integral exponents get special treatment):
pow(x, ±0) is 1 for any x (even a zero, quiet NaN, or infinity)
pow(±0, y) is ±∞ and signals the divideByZero exception for y an odd integer <0
pow(±0, −∞) is +∞ with no exception
pow(±0, +∞) is +0 with no exception
pow(±0, y) is +∞ and signals the divideByZero exception for finite y<0 and not an odd integer
pow(±0, y) is ±0 for finite y>0 an odd integer
pow(±0, y) is +0 for finite y>0 and not an odd integer
pow(−1, ±∞) is 1 with no exception
pow(+1, y) is 1 for any y (even a quiet NaN)
pow(x, y) signals the invalid operation exception for finite x<0 and finite non-integer y.

For the powr function (derived by considering only exp(y×log(x))):
powr(x, ±0) is 1 for finite x>0
powr(±0, y) is +∞ and signals the divideByZero exception for finite y<0
powr(±0, −∞) is +∞
powr(±0, y) is +0 for y>0
powr(+1, y) is 1 for finite y
powr(x, y) signals the invalid operation exception for x<0
powr(±0, ±0) signals the invalid operation exception
powr(+∞, ±0) signals the invalid operation exception
powr(+1, ±∞) signals the invalid operation exception
powr(x, qNaN) is qNaN for x≥0
powr(qNaN, y) is qNaN.