Non-commuting product operator
April 21, 2004
Hi there,

the assumption that multiplications are always commutative really restricts the use of the language in a rather serious way.

If I were to design a numerical library for linear algebra, the most natural
thing to do would be to use the multiplication operator for matrix
multiplication, allowing one to write
        Matrix A, B;
        Matrix C = A * B;

In the current definition of the language, there is no way to do such a
thing, forcing library writers to resort to stuff like:
        Matrix X = mult(A,B);
which gets absolutely ugly for large expressions.

In this respect, the minimal simplification for library writers (who can drop one of two opMul definitions in a few cases) actually means a major inconvenience for the library users!

This question does not affect optimizability at all. One could still say in the language definition that *for builtin float/int multiplications* the order of subexpressions is not guaranteed.

Therefore I strongly urge the language designers to reconsider this matter if they have any interest in creating a serious tool for scientific computing.

(The question whether the assumed associativity of multiplications/additions is such a good idea is a completely different matter. I doubt it, but I cannot yet discuss it.)

Ciao,
Nobbi
April 21, 2004
I'm afraid I don't follow your argument. Can you be specific about what is wrong, and about what changes you propose?

:)

"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c656ip$2k8e$1@digitaldaemon.com...
> the assumption that multiplications are always commutative really is restricting the use of the language in a rather serious way.
> [...]


April 21, 2004
The problem is that matrices do not commute. That is
        A*B != B*A

To be more exact: commutativity is a special feature of real and complex numbers, but mathematically there are tons of other objects where it makes sense to define a product, but this product does not commute.

Many of these objects might not be useful for numerics, but matrices and vectors are a basic tool for every engineer and scientist. For me, a powerful and comfortable matrix library is one of the key features of a good language for scientific computing.

My request, to be specific, would simply be:
--> drop the assumption that opMul is commutative in the language definition

Of course, the compiler may still optimize code by commuting products of plain numbers, but not by commuting user-defined objects.

The additional overhead for library authors will only be that they have to define two mostly identical opMul implementations, and only for multiplications of two different types; even then, the second one can simply refer to the first one.

Whether this should be done for additions as well, I do not know, but it might be the safe decision. I know of no practically used mathematical objects where the addition does not commute, but who knows what mathematicians may come up with. Also, non-mathematicians might find it hard to understand why addition and multiplication are handled differently.

Other operators are not affected, since they either are commutative by
mathematical definition (like "==") or have no mathematical meaning beyond
the boolean one ("&").

As I said, the matter of associativity should be considered as well. Most of
the practically used structures in mathematics are associative, but then,
floating point operations are not really associative. (Just try to compare
(1e-30+1e30)-1e30 and 1e-30+(1e30-1e30) and you'll see the problem.)

Anyhow: Fortran, which still is the preferred language of many scientists doing heavy numerics, assumes associativity and demands that programmers explicitly enforce a certain order by splitting expressions where this is necessary.

Ciao,
Nobbi


Matthew wrote:

> I'm afraid I don't follow your argument. Can you be specific about what is wrong, and about what changes you propose?
> 
> :)

April 21, 2004
This is not only true for matrices,
but for all groups and rings that are not commutative.
Matrices are a nice example showing that forcing the operators + and * to be
commutative is an obstacle in the use of the language.

But i guess this is not new to Walter...
-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c65981$2p7m$1@digitaldaemon.com...
> The problem is that matrices do not commute. That is
>         A*B != B*A
> [...]


April 21, 2004
Jan-Eric Duden wrote:

> But i guess this is not new to Walter...

True, but with the discussions happening on a newsgroup without an easily searchable archive, it is hard to avoid bringing up topics again. (And even if the topic has been discussed, I would probably bring it up again...)
April 21, 2004
 :)  I like it if those issues pop up again and again.
Maybe it convinces Walter that there is a good reason to change D in that
aspect...

-- 
Jan-Eric Duden
"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message
news:c65gou$42f$2@digitaldaemon.com...
> Jan-Eric Duden wrote:
>
> > But i guess this is not new to Walter...
>
> True, but with the discussions happening on a newsgroup without an easily searchable archive, it is hard to avoid bringing up topics again. (And even if the topic has been discussed, I would probably bring it up again...)

April 21, 2004
Norbert Nemec wrote:

> The problem is that matrices do not commute. That is
>         A*B != B*A

Dig (undig on my webpage) has a nice example of matrices.  This example is not a problem for D, because you are multiplying a matrix by a matrix.  The problem comes when you multiply a privative by another type.  But I can't think of any non-commutative scalar/object operations, can you?  And if you're desperate you can wrap (box) the privative in a class.

-- 
-Anderson: http://badmama.com.au/~anderson/
April 21, 2004
J Anderson wrote:

> Norbert Nemec wrote:
>
>> The problem is that matrices do not commute. That is
>>        A*B != B*A
>
> Dig (undig on my webpage) has a nice example of matrices.  This example is not a problem for D, because you are multiplying a matrix by a matrix.  The problem comes when you multiply a privative by another type.  But I can't think of any non-commutative scalar/object operations, can you?  And if you're desperate you can wrap (box) the privative in a class.
>
privative = primitive (i.e. int, float, etc.)

-- 
-Anderson: http://badmama.com.au/~anderson/
April 21, 2004
> This example is not a problem for D, because you are multiplying a matrix by a matrix.
No.
think of the following matrix operation:
Translate(-1.0,-1.0,-1.0)*RotateX(30.0)*Translate(1.0,1.0,1.0)
which is different from
Translate(1.0,1.0,1.0)*RotateX(30.0)*Translate(-1.0,-1.0,-1.0)
or
RotateX(30.0)*Translate(1.0,1.0,1.0)*Translate(-1.0,-1.0,-1.0)


-- 
Jan-Eric Duden
"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message
news:c65hqm$5d1$2@digitaldaemon.com...
> Dig (undig on my webpage) has a nice example of matrices.  This example is not a problem for D, because you are multiplying a matrix by a matrix.
> [...]


April 21, 2004
Jan-Eric Duden wrote:

>> This example is not a problem for D, because you are multiplying a matrix by a matrix.
>
> No.
> think of the following matrix operation:
> Translate(-1.0,-1.0,-1.0)*RotateX(30.0)*Translate(1.0,1.0,1.0)
> which is different from
> Translate(1.0,1.0,1.0)*RotateX(30.0)*Translate(-1.0,-1.0,-1.0)
> or
> RotateX(30.0)*Translate(1.0,1.0,1.0)*Translate(-1.0,-1.0,-1.0)

Have you tried dig matrices yet?  They work this way.  I mean, with a matrix class you have access to both sides of the equation.  It's only with privative types that there is this problem.  Non-commutative multiplication is more than possible for matrices.

//From dig:
   /** Multiply matrices. */
   mat3 opMul (mat3 mb)
   {
       mat3 mo;
       float [] a = array ();
       float [] b = mb.array ();
       float [] o = mo.array ();

       for (int i; i < 3; i ++)
       {
           o [i + 0] = a [i] * b [0] + a [i + 3] * b [1] + a [i + 6] * b [2];
           o [i + 3] = a [i] * b [3] + a [i + 3] * b [4] + a [i + 6] * b [5];
           o [i + 6] = a [i] * b [6] + a [i + 3] * b [7] + a [i + 6] * b [8];
       }

       return mo;
   }

I'm afraid you've misunderstood what the documentation means.

Try it.  Send in a dig example that doesn't compute the correct result.

//(Untested)
mat3 n, o;
...
if (n * o == o * n)
{
   printf("multiplication is equal\n");
}

-- 
-Anderson: http://badmama.com.au/~anderson/