January 03, 2014
On 03/01/14 01:04, Lars T. Kyllingstad wrote:
> Mathematically, the real numbers are a ring, whereas the purely imaginary
> numbers are not. (I've been out of the abstract algebra game for a couple of
> years now, so please arrest me if I've remembered the terminology wrongly.)
> What it boils down to is that they are not closed under multiplication, which
> gives them radically different properties - or lack thereof.

I've been out of the abstract algebra game for rather longer :-) but regardless of terminology, I understand what you mean.

My point was meant to be somewhat simpler: if you have

     x * (a + bi)

(i.e. real * complex in terms of the computer representation), then this will come out with higher precision than

     (x + 0i) * (a + bi)

... because you can avoid unnecessary multiplications by zero and other such things.  You can also avoid some nasty NaNs that may arise in the latter case.
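To make this concrete, here is a small sketch in D (the widened product is written out by hand using the schoolbook formula, to show exactly where the NaN creeps in):

```d
import std.complex : complex;
import std.math : isNaN;

void main()
{
    auto z = complex(double.infinity, 1.0);

    // real * complex just scales each component:
    // (2 * inf) + (2 * 1)i -- no zero terms, no NaN.
    auto direct = 2.0 * z;
    assert(direct.re == double.infinity && direct.im == 2.0);

    // Widening 2.0 to (2 + 0i) first forces the full product
    // (x*a - y*b) + (x*b + y*a)i, and here y*a = 0 * inf = NaN,
    // which poisons the imaginary part.
    auto widened = complex(2.0, 0.0);
    auto naive = complex(widened.re * z.re - widened.im * z.im,
                         widened.re * z.im + widened.im * z.re);
    assert(naive.im.isNaN);
}
```

std.complex.Complex's own real * complex overload does the componentwise scaling, which is the opportunity being described.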

You get to enjoy this extra-precision-and-avoid-nasty-errors because we already have built-in numerical types and std.complex.Complex defines binary operations relative to them as well as to other Complex types.  I'm simply suggesting that the same opportunities to avoid those calculation errors should be available when you're dealing with purely-imaginary types.

It's an implementation issue AFAICS, not a question of mathematical theory, although like you and Don I find the lack of closure for imaginary op imaginary to be very annoying.  (I got round it simply by not defining opOpAssign for operations that could not be assigned back to an Imaginary type.)
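As a hypothetical sketch of that workaround (this Imaginary is illustrative, not an actual Phobos type): since (bi)*(ci) = -bc is purely real, opBinary!"*" can return the component type, while opOpAssign!"*" is simply never defined, so `a *= b` fails to compile rather than silently losing the result's real nature.

```d
import std.traits : isFloatingPoint;

/// Hypothetical minimal purely-imaginary type (illustrative only).
struct Imaginary(T) if (isFloatingPoint!T)
{
    T im;  // the coefficient b in b*i

    // imaginary + imaginary is still imaginary: closed, so += is safe.
    Imaginary opBinary(string op : "+")(Imaginary rhs) const
    {
        return Imaginary(im + rhs.im);
    }

    ref Imaginary opOpAssign(string op : "+")(Imaginary rhs)
    {
        im += rhs.im;
        return this;
    }

    // imaginary * imaginary is REAL: (bi)*(ci) = -b*c.  We define
    // opBinary but deliberately omit opOpAssign!"*", since the result
    // cannot be stored back into an Imaginary.
    T opBinary(string op : "*")(Imaginary rhs) const
    {
        return -im * rhs.im;
    }
}

void main()
{
    auto a = Imaginary!double(2.0);
    auto b = Imaginary!double(3.0);
    a += b;                              // fine: still imaginary
    assert(a.im == 5.0);
    assert(a * b == -15.0);              // result is a plain double
    static assert(!is(typeof(a *= b))); // *= deliberately absent
}
```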
January 03, 2014
On Thursday, 2 January 2014 at 23:43:47 UTC, Lars T. Kyllingstad wrote:
> Not at all. ;)

I am never certain about anything related to FP. It is a very pragmatic hack, which is kind of viral (but fun to talk about ;).

> I just think we should keep in mind why FP semantics are defined the way they are.

Yes, unfortunately they are only kind-of defined. 0.0 could represent anything from the minimum denormal number down to zero (Intel), or from the maximum denormal number down to zero (some other vendors). Then we have all the rounding modes. The problems are worse with single precision than with double. I think the semantics of IEEE favour double over single, since detecting overflow is less important for double: overflow seldom occurs for doubles in practice, so conflating it with 1.0/0.0 matters less for them than for single precision.

> Take 0.0*inf, for example.  As has been mentioned, 0.0 may represent a positive real number arbitrarily close to zero, and inf may represent an arbitrarily large real number. The product of these is ill-defined, and hence represented by a NaN.

Yes, and it is consistent with having 0.0/0.0 evaluate to NaN.
(0.0*(1.0/0.0) ought to give NaN as well.)
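Indeed it does; a quick check in D (nothing here is specific to std.complex, it is plain IEEE 754 behaviour):

```d
import std.math : isNaN;

void main()
{
    assert(isNaN(0.0 / 0.0));                // ill-defined quotient
    assert(isNaN(0.0 * double.infinity));    // 0 * inf is NaN
    assert(isNaN(0.0 * (1.0 / 0.0)));        // 1.0/0.0 == +inf, so same as above
}
```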

> 0.0+1.0i, on the other hand, represents a number which is arbitrarily close to i. Multiplying it with a very large real number gives you a number which has a very large imaginary part, but which is arbitrarily close to the imaginary axis, i.e. 0.0 + inf i. I think this is very consistent with FP semantics, and may be worth making a special case in std.complex.Complex.

I am too tired to figure out whether you are staying within the max-min interval of potential values that could be represented (if you had perfect precision). I think that is the acid test. To reduce the unaccounted-for errors, it is better for each step to have a "wider" interval that covers the inaccuracies, and a bit dangerous if the interval gets "narrower" than it should. I find it useful to think of floating-point numbers as conceptual intervals of potential values (which get conceptually wider the more you compute), with the actual FP value being a "random" sample from that interval.

For all I know, maybe some other implementations already do what you suggest, but my concern was more general than this specific issue. I think it would be a good idea to mirror a reference implementation that is widely used for scientific computation, just to make sure the results are accepted. Imagine a team where the old boys cling to Fortran and the young guy wants D: if he can show the old boys that D produces the same results for what they do, they are more likely to be impressed.

Still, it is in the nature of FP that you should be able to configure and control expressions in order to overcome FP-related shortcomings, like setting the rounding mode. So behaviour like this ought to be configurable in the same way if it isn't standard practice: not only this issue, but also overflow/underflow handling and other "optional" aspects of FP computations.

> I agree, but there is also a lot to be said for not repeating old mistakes, if we deem them to be such.

With templates you can probably find a clean way to throw in a compile-time switch for exception generation and other configurable behaviour.
January 03, 2014
On 03/01/2014 00:03, Joseph Rushton Wakeling wrote:
> On 03/01/14 00:33, Stewart Gordon wrote:
>> Please be specific.
>
> You know, I'm starting to find your tone irritating.  You are the one
> who's asking for functionality that goes beyond any Complex
> implementation that I'm aware of in any other language, and claiming
> that these things would be trivial to implement.

I wasn't asking for it to go beyond the existing complex implementation or any other.  I was proposing that the arbitrary restriction be removed so that the implementation we already have would work on them.  As I said originally:

> I don't understand. At the moment Complex appears to me to be
> type-agnostic - as long as a type supports the standard arithmetic
> operators and assignment of the value 0 to it, it will work. The
> only thing preventing it from working at the moment is this line
>
>     struct Complex(T)  if (isFloatingPoint!T)
>
> So why do you need an appropriate application in order not to have
> this arbitrary restriction? Or have I missed something?

OK, so division has now been mentioned.  And you have now mentioned the use of FPTemporary.  However....

> I would expect a person who claims with confidence that something is
> trivial, to actually know the internals of the code well enough to
> understand what parts of it would need to be modified. On the basis
> of what you've written, I have no reason to believe that you do.

I had read the code before I made that comment, and from reading it I did believe at the time that the implementation was ready for types other than float, real and double.  So the use of FPTemporary is something I missed, but it's taken you this long to point it out to me.  However....

> There are numerous places inside current std.complex.Complex where
> temporary values are used mid-calculation.  Those are all of type
> FPTemporary (which in practice means real).  So, to handle library
> types (whether library floating-point types such as a BigFloat
> implementation, or a Complex!T so as to support hypercomplex numbers)
> you'd either have to special-case those functions or you'd have to
> provide an alternative Temporary template to handle the temporary
> internal values in the different cases.

FPTemporary is a template.  At the moment it's defined only for the built-in floating point types.  So what we really need is to define FPTemporary for other types.  For int and smaller integral types, we can define it as real just as we do for float/double/real.  Whether it's adequate for long would be platform-dependent.  For other types, I suppose we can reference a property of the type, or just use the type itself if such a property isn't present.

One possible idea (untested):
-----
template FPTemporary(F)
        if (is(typeof(-F.init + F.init * F.init - F.init))) {
    static if (isFloatingPoint!F || isIntegral!F) {
        alias real FPTemporary;
    } else static if (is(F.FPTemporary)) {
        alias F.FPTemporary FPTemporary;
    } else {
        alias F FPTemporary;
    }
}
-----

Of course, this isn't a completely general solution, and I can see now that whatever type we use would need to have trigonometric functions in order to support exponentiation.  So unless we conditionalise the inclusion of this operation....
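A sketch of what that conditionalisation might look like (MyComplex and the exact constraint are illustrative, not proposed Phobos code): the ^^ operator is only compiled in when the std.math functions it needs accept T.

```d
import std.math : atan2, cos, exp, hypot, log, sin;

struct MyComplex(T)
{
    T re, im;

    // ... ordinary arithmetic would go here ...

    // Only compile exponentiation when exp/log/sin/cos/atan2/hypot
    // are valid for T; otherwise the operator simply doesn't exist.
    static if (is(typeof((T t) => exp(t) + log(t) + sin(t) + cos(t)
                                  + atan2(t, t) + hypot(t, t))))
    {
        MyComplex opBinary(string op : "^^")(T e) const
        {
            // polar form: (r e^(i*theta))^e = r^e * e^(i*e*theta)
            immutable r     = hypot(re, im);
            immutable theta = atan2(im, re);
            immutable nr    = exp(e * log(r));
            return MyComplex(nr * cos(e * theta), nr * sin(e * theta));
        }
    }
}

void main()
{
    auto i = MyComplex!double(0.0, 1.0);
    auto s = i ^^ 2.0;                  // i^2 == -1
    assert(s.re < -0.999 && s.re > -1.001);
    assert(s.im > -1e-9 && s.im < 1e-9);
}
```

For a T without those functions, the static if branch is simply skipped, so the rest of the type keeps working and only uses of ^^ fail to compile.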

> You'd also need to address questions of closure under operations
> (already an issue for the Imaginary type), and precision-related issues
> -- see e.g. the remarks by Craig Dillabaugh and Ola Fosheim Grøstad
> elsewhere in this thread.

Oh yes, and it would be crazy to try to make it work for unsigned integer types.  But even if we were to resolve the issues with FPTemporary and the rest, it would still fall under my earlier suggestion: make it so that people who want to use Complex on an unsupported type can explicitly suppress the type restriction, on the understanding that it might not work properly.
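For what it's worth, such an explicit opt-out could look something like the following (purely illustrative, using std.typecons.Flag; whether providing it is a good idea is a separate question):

```d
import std.traits : isFloatingPoint;
import std.typecons : Flag, No, Yes;

// Illustrative only: the user must spell out Yes.unchecked to bypass
// the constraint, and accepts that things may not work.
struct MyComplex(T, Flag!"unchecked" unchecked = No.unchecked)
    if (isFloatingPoint!T || unchecked)
{
    T re, im;
}

void main()
{
    MyComplex!double ok;                    // the supported case
    MyComplex!(int, Yes.unchecked) risky;   // explicit suppression
    static assert(!is(MyComplex!int));      // still rejected by default
    ok.re = 1.0;
    risky.re = 1;                           // both instantiate fine
}
```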

<snip>
> I don't personally see any rationale for implementing hypercomplex
> numbers as a specialization of Complex given that they can be just as
> well implemented as a type in their own right.

Really, I was just thinking that somebody who wants a quick-and-dirty hypercomplex number implementation for some app might try to do it that way.

Stewart.
January 03, 2014
On 03/01/14 14:32, Stewart Gordon wrote:
> I wasn't asking for it to go beyond the existing complex implementation or any
> other.  I was proposing that the arbitrary restriction be removed so that the
> implementation we already have would work on them.

Yes, but it isn't an arbitrary restriction.  Template constraints are fundamentally a promise to users about what can be expected to work.  Integral types, or library types, won't work without significant modifications to the internals of the code.  It would be a false promise to relax those constraints.

> FPTemporary is a template.  At the moment it's defined only for the built-in
> floating point types.  So what we really need is to define FPTemporary for other
> types.  For int and smaller integral types, we can define it as real just as we
> do for float/double/real.  Whether it's adequate for long would be
> platform-dependent.  For other types, I suppose we can reference a property of
> the type, or just use the type itself if such a property isn't present.
>
> One possible idea (untested):
> -----
> template FPTemporary(F)
>          if (is(typeof(-F.init + F.init * F.init - F.init))) {
>      static if (isFloatingPoint!F || isIntegral!F) {
>          alias real FPTemporary;
>      } else static if (is(F.FPTemporary)) {
>          alias F.FPTemporary FPTemporary;
>      } else {
>          alias F FPTemporary;
>      }
> }
> -----

Yes, it ought to be possible to redefine FPTemporary (or define an alternative) to determine proper internal temporaries for any "float-esque" case.  I was toying with something along the lines of:

    template FPTemporary(F)
        if (isNumeric!F || isFloatLike!F)
    {
        alias typeof(real.init * F.init) FPTemporary;
    }

... where isFloatLike would test for appropriate floating-point-like properties of F -- although this is probably far too simplistic.  E.g. how do you handle the case of a float-like library type implemented as a class, not a struct?
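For concreteness, one very rough structural cut (isFloatLike is a hypothetical name, and as noted this is probably far too simplistic; in particular it is compile-time only, so a class-based type would pass the test while still needing run-time null handling):

```d
// Hypothetical: accept any type supporting the arithmetic that
// Complex's internals actually perform.  Compile-time check only.
enum isFloatLike(F) = is(typeof({
    F a = F.init;
    a = a + a;  a = a - a;
    a = a * a;  a = a / a;
    a = -a;
    a = 0;               // components must accept a zero assignment
    bool lt = a < a;     // ordering, used by abs/arg-style helpers
}));

static assert(isFloatLike!double);
static assert(isFloatLike!real);
static assert(!isFloatLike!string);

void main() {}
```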

In any case, absent an appropriate test-case in Phobos, it would be premature to generalize the constraints for Complex or the design of FPTemporary.

> Oh yes, and it would be crazy to try and make it work for unsigned integer
> types.  But even if we were to resolve the issues with FPTemporary and that, it
> would still fall under my earlier suggestion of making it so that if people want
> to use Complex on an unsupported type then they can explicitly suppress the type
> restriction, but should understand that it might not work properly.

People who want to use Complex on an unsupported type can quite readily copy-paste the code and remove the constraints, if that's what they want to do.  I think that's better than giving them an option which is essentially an invitation to shoot themselves in the foot, and which has very little chance of actually working.

It doesn't matter if you document it as "This might not work", by providing the option you are still essentially saying, "This is an OK way to use the type."

I think that's essentially an encouragement of bad code and a violation of the design principle that the easy thing to do should be the right thing to do.

> Really, I was just thinking that somebody who wants a quick-and-dirty
> hypercomplex number implementation for some app might try to do it that way.

I understand that, but quick-and-dirty solutions are often bad ones, and in this case, it just wouldn't work given the internals of Complex.

If you would like to revise std.complex to support this approach, I'm sure your pull request will be considered carefully, but personally I don't see it as an effective way to pursue hypercomplex number support when there are other options on the table.
January 03, 2014
On 03/01/2014 17:04, Joseph Rushton Wakeling wrote:
> On 03/01/14 14:32, Stewart Gordon wrote:
>> I wasn't asking for it to go beyond the existing complex
>> implementation or any other. I was proposing that the arbitrary
>> restriction be removed so that the implementation we already have
>> would work on them.
>
> Yes, but it isn't an arbitrary restriction.  Template constraints are
> fundamentally a promise to users about what can be expected to work.
> Integral types, or library types, won't work without significant
> modifications to the internals of the code.  It would be a false promise
> to relax those constraints.

But at the time I made my original point I believed the restriction to be arbitrary; hence the point.

<snip>
> Yes, it ought to be possible to redefine FPTemporary (or define an
> alternative) to determine proper internal temporaries for any
> "float-esque" case.  I was toying with something along the lines of,
>
>      template FPTemporary(F)
>          if (isNumeric!F || isFloatLike!F)
>      {
>          alias typeof(real.init * F.init) FPTemporary;
>      }
>
> ... where isFloatLike would test for appropriate floating-point-like
> properties of F -- although this is probably far too simplistic.

How can isFloatLike be implemented?

And how can we test for bigint or Galois field types?

> E.g. how do you handle the case of a float-like library type
> implemented as a class, not a struct?

What's the difficulty here?

<snip>
> It doesn't matter if you document it as "This might not work", by
> providing the option you are still essentially saying, "This is an OK
> way to use the type."
>
> I think that's essentially an encouragement of bad code and a violation
> of the design principle that the easy thing to do should be the right
> thing to do.

There are already violations of this principle in D, such as being able to cast away const.

>> Really, I was just thinking that somebody who wants a quick-and-dirty
>> hypercomplex number implementation for some app might try to do it
>> that way.
>
> I understand that, but quick-and-dirty solutions are often bad ones, and
> in this case, it just wouldn't work given the internals of Complex.

Addition, subtraction and multiplication would work.  So the programmer could just copy the code and reimplement division and exponentiation so that they work (or just get rid of them if they aren't needed).

Stewart.
January 03, 2014
On 03/01/14 21:21, Stewart Gordon wrote:
> How can isFloatLike be implemented?

I'm not sure.  It's something that needs to be thought about, and of course it also depends on whether you want it to test just for basic properties, or also for support of the mathematical functions (sin, cos, abs, exp, etc.).

I think testing for basic properties should be enough, because if mathematical functions are needed but not supported, there will be a compilation error anyway.

> And how can we test for bigint or Galois field types?

For BigInt, it's conceivable to define an isIntegerLike template (David Simcha did this for his std.rational) that will handle library as well as built-in integral types.
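In that spirit, a rough reconstruction of the isIntegerLike idea (my own sketch, not Simcha's actual code):

```d
import std.traits : isFloatingPoint, isIntegral;

// Reconstruction of the isIntegerLike idea: accept built-in integrals,
// plus library types (e.g. BigInt) that behave structurally like them.
enum isIntegerLike(T) = isIntegral!T ||
    (!isFloatingPoint!T && is(typeof({
        T t = T.init;
        t = t + t;  t = t - t;
        t = t * t;  t = t / t;
        t = t % t;           // integer-style remainder
        t = 1;               // assignable from an integer literal
        bool b = t < t && t == t;
    })));

static assert(isIntegerLike!int);
static assert(isIntegerLike!long);
static assert(!isIntegerLike!double);   // floats excluded explicitly
static assert(!isIntegerLike!string);

void main() {}
```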

For Galois field types, I suggest that there needs to be an implementation before one talks about how to support them in std.complex.

> There are already violations of this principle in D, such as being able to cast
> away const.

Casting away const has valid applications, and is a bit different from allowing the user to manually ignore template constraints on a library type, which will almost always lead to problematic behaviour.  I'm not aware of any library type in Phobos that allows this; if you _really_ want to override the template constraints, you can always copy and modify the code.  Then, if it turns out to work well, you can submit patches to allow that case in Phobos too.

> Addition, subtraction and multiplication would work.

They wouldn't, because when you relaxed the template constraints to allow Complex!(Complex!T), the code would fail to compile.  And I don't think you should correct that by stripping out basic arithmetic operations.

> So the programmer could just copy the code and reimplement division and exponentiation so that they work
> (or just get rid of them if they aren't needed).

As I keep saying, you're asking for extra complication to be added to a type that supports its intended (probably the vast majority of) use cases well, for the sake of a niche use case _that can be implemented without any problem as an entirely independent type_.

What you perceive as conceptual/notational elegance -- Complex!(Complex!T) -- isn't worth it.