April 21, 2004
I figured out why your output is different.
The matrices are upside down in your output...
I typed them in the standard mathematical way...

-- 
Jan-Eric Duden
"Jan-Eric Duden" <jeduden@whisset.com> wrote in message
news:c65nha$fd9$1@digitaldaemon.com...
> Uhm. That depends how you interpret the matrices - as row vectors or as
> column vectors.
> In any case, your program also proves that b*a != a*b!
>
> -- 
> Jan-Eric Duden
> "J Anderson" <REMOVEanderson@badmama.com.au> wrote in message
> news:c65mvr$e8h$1@digitaldaemon.com...
> > Jan-Eric Duden wrote:
> >
> > >for example: :)
> > >a:
> > >[ 3    3    1]
> > >[-3    2    2]
> > >[ 5    2    3]
> > >b:
> > >[1    0    4]
> > >[0    1    0]
> > >[0    0    1]
> > >
> > >a*b
> > >[ 3    3     13]
> > >[-3    2    -10]
> > >[ 5    2     23]
> > >
> > >b*a
> > >[23    11    13]
> > >[-3     2     2]
> > >[ 5     2     3]
> > >
> > >
> > >
> > >
> > I don't know why I waste my time proving that what you say is nonsense <g>
> >
> > import std.c.stdio;
> > import net.BurtonRadons.dig.common.math;
> > import std.process;
> >
> > void main()
> > {
> >     mat3 a= mat3.create(3, 3, 1, -3, 2, 2, 5, 2, 3);
> >     mat3 b = mat3.create(1, 0,  4,  0, 1, 0, 0, 0, 1);
> >
> >     printf("a\n");
> >     a.print();
> >     printf("b\n");
> >     b.print();
> >
> >     printf("a * b\n");
> >     mat3 c = a * b;
> >     c.print();
> >
> >     printf("b * a\n");
> >     mat3 d = b * a;
> >     d.print();
> >
> >    std.process.system("pause");
> > }
> >
> > output:
> > a
> > [  3 3 1 ]
> > [ -3 2 2 ]
> > [  5 2 3 ]
> >
> > b
> > [ 1 0 4 ]
> > [ 0 1 0 ]
> > [ 0 0 1 ]
> >
> > a * b
> > [  3 3  13 ]
> > [ -3 2 -10 ]
> > [  5 2  23 ]
> >
> > b * a
> > [ 23 11 13 ]
> > [ -3  2  2 ]
> > [  5  2  3 ]
> >
> >
> > -- 
> > -Anderson: http://badmama.com.au/~anderson/
>
>


April 21, 2004
Jan-Eric Duden wrote:

> Uhm. That depends how you interpret the matrices - as row vectors or as
> column vectors.
> In any case, your program also proves that b*a != a*b!

I guess we can end this thread. Obviously, everybody knows that matrices do not commute in general. Obviously, the program by J. Anderson works correctly. My question was only whether it is guaranteed to work correctly by the language definition, or whether it just happens to work with the current implementation.

Now, I found out that I first misunderstood the language definition and that it actually guarantees correctness (as long as you don't leave out any operand definition for non-commuting objects).

Hope everyone is satisfied and nobody is mad at me for starting this pointless thread...

Ciao,
Nobbi
April 21, 2004
I don't think so.
opMul is supposed to behave commutatively according to the D specification.

As far as I understand the docs, this is not correct:
opAdd and opMul are supposed to be commutative. There is no opAdd_r or
opMul_r.
See http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
Binary Operators


-- 
Jan-Eric Duden
"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message
news:c65o7t$gmn$1@digitaldaemon.com...
> Jan-Eric Duden wrote:
>
> > Uhm. That depends how you interpret the matrices - as row vectors or as
> > column vectors.
> > In any case, your program also proves that b*a != a*b!
>
> I guess we can end this thread. Obviously, everybody knows that matrices do not commute in general. Obviously, the program by J. Anderson works correctly. My question was only whether it is guaranteed to work correctly by the language definition, or whether it just happens to work with the current implementation.
>
> Now, I found out that I first misunderstood the language definition and that it actually guarantees correctness (as long as you don't leave out any operand definition for non-commuting objects).
>
> Hope everyone is satisfied and nobody is mad at me for starting this pointless thread...
>
> Ciao,
> Nobbi


April 21, 2004
Jan-Eric Duden wrote:

>As far as I understand the docs, this is not correct:
>opAdd and opMul are supposed to be commutative. There is no opAdd_r or
>opMul_r.
>see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
>Binary Operators
>
It's a misunderstanding.  Sure, if the other class doesn't define the opposite overload, then D will swap the operands around to make it work (this is a feature rather than a con). But relying on that for a non-commutative operation is just a semantic bug on the programmer's part.

class A { A opMul(B b) { return this; } }

class B
{
   B opMul(A a) { return this; } // If this is omitted then D will make things commutative (b * a gets rewritten as a.opMul(b)). With it defined, you essentially have non-commutative behaviour.
}
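
For completeness, here's a tiny compilable version of the same idea (the class names and return values are made up purely to show which overload ends up being called):

import std.c.stdio;

class A
{
    int opMul(B b) { return 1; }   // handles a * b
}

class B
{
    // Leave this out and the compiler rewrites b * a as a.opMul(b), so both
    // orders give 1 - that's the "commutative" behaviour the spec describes.
    // With it defined, b * a really calls B's opMul and the two orders differ.
    int opMul(A a) { return 2; }   // handles b * a
}

void main()
{
    A a = new A;
    B b = new B;
    printf("a * b = %d\n", a * b);   // prints: a * b = 1
    printf("b * a = %d\n", b * a);   // prints: b * a = 2
}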

Therefore opMul_r isn't necessary.  This question has been asked a couple of times in the group.  So far I've seen no one show a coded example (or a mathematical type) that defeats this strategy.  And matrices definitely aren't one.

Now one problem I do see is if the programmer doesn't have access to types A and B to add their own operations (of course they can overload), but that's a different issue.  I guess it could be solved using delegates, though.

-- 
-Anderson: http://badmama.com.au/~anderson/
April 21, 2004
Jan-Eric Duden wrote:

> As far as I understand the docs, this is not correct:
> opAdd and opMul are supposed to be commutative. There is no opAdd_r or
> opMul_r.
> see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
> Binary Operators
> 

True!! Ouch!! This actually is a detail that should be fixed!

Looking at the section:
-------------
 The following sequence of rules is applied, in order, to determine which
form is used:
(...)
2. If b is a struct or class object reference that contains a member named
opfunc_r and the operator op is not commutative, the expression is
rewritten as:

        b.opfunc_r(a)
------------- 
the phrase "and the operator op is not commutative" should be dropped.
b.opMul_r(a) should be checked in any case before b.opMul(a)

If the factors actually do commute, opMul_r need not be defined, but it should still be possible to define it for cases like Matrix arithmetic.
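
To make that concrete, suppose Vector comes from a library whose source you cannot touch, so the Matrix side is the only place where the code can live. Something like the following would be the natural way to write it (a rough, hypothetical sketch - under the current spec the opMul_r below is never considered for v * M):

struct Vector { double[3] v; }   // imagine this one lives in a closed library

struct Matrix
{
    double[3][3] m;

    // M * v : matrix times column vector - already expressible today
    Vector opMul(Vector x)
    {
        Vector r;
        for (int i = 0; i < 3; i++)
        {
            r.v[i] = 0;
            for (int j = 0; j < 3; j++)
                r.v[i] += m[i][j] * x.v[j];
        }
        return r;
    }

    // v * M : row vector times matrix - this is what opMul_r would buy us,
    // without ever having to modify the Vector type itself
    Vector opMul_r(Vector x)
    {
        Vector r;
        for (int j = 0; j < 3; j++)
        {
            r.v[j] = 0;
            for (int i = 0; i < 3; i++)
                r.v[j] += x.v[i] * m[i][j];
        }
        return r;
    }
}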

I guess this request is small enough to convince Walter? The only question now is, how to get him to read any mail about this subject at all...

B.t.w.: would it be possible to define b.opfunc_r(a) as "illegal" in some
way to create a compiler error when a special combination (like
columnvector*matrix) is used?
April 21, 2004
Jan-Eric Duden wrote:

>Uhm. That depends how you interpret the matrices - as row vectors or as
>column vectors.
>  
>
You just make a choice and stick with it (row/col is pretty standard, i.e. x/y).  It all depends on how you see the array.  You'll have that *problem* in any language.
a * vec3
vec3 * a

>In any case, your program also proves that b*a != a*b!
>  
>
And so it should.  Sorry, I'm not quite sure whether or not you understand yet that D works for matrices.

Really, download undig from my website and look at the math.d file.   It's really quite complete.

-- 
-Anderson: http://badmama.com.au/~anderson/
April 21, 2004
Jan-Eric Duden wrote:

>I don't think so.
>opMul is supposed to behave commutatively according to the D specification.
>
>As far as I understand the docs, this is not correct:
>opAdd and opMul are supposed to be commutative. There is no opAdd_r or
>opMul_r.
>see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
>Binary Operators
>  
>
It's commutative only if you don't define the other operator in the other class. Get it?

-- 
-Anderson: http://badmama.com.au/~anderson/
April 21, 2004
I think this should be done with opAdd too...
-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c65opb$hj6$1@digitaldaemon.com...
> Jan-Eric Duden wrote:
>
> > As far as I understand the docs, this is not correct:
> > opAdd and opMul are supposed to be commutative. There is no opAdd_r or
> > opMul_r.
> > see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
> > Binary Operators
> >
>
> True!! Ouch!! This actually is a detail that should be fixed!
>
> Looking at the section:
> -------------
>  The following sequence of rules is applied, in order, to determine which
> form is used:
> (...)
> 2. If b is a struct or class object reference that contains a member named
> opfunc_r and the operator op is not commutative, the expression is
> rewritten as:
>
>         b.opfunc_r(a)
> ------------- 
> the phrase "and the operator op is not commutative" should be dropped.
> b.opMul_r(a) should be checked in any case before b.opMul(a)
>
> If the factors actually do commute, opMul_r need not be defined, but it should still be possible to define it for cases like Matrix arithmetic.
>
> I guess this request is small enough to convince Walter? The only question now is, how to get him to read any mail about this subject at all...
>
> B.t.w.: would it be possible to define b.opfunc_r(a) as "illegal" in some
> way to create a compiler error when a special combination (like
> columnvector*matrix) is used?


April 21, 2004
Norbert Nemec wrote:

> Hi there,
> 
> the assumption that multiplications are always commutative really is restricting the use of the language in a rather serious way.
> 
> If I were to design a numerical library for linear algebra, the most
> natural thing to do would be to use the multiplication operator for matrix
> multiplications, allowing one to write
>         Matrix A, B;
>         Matrix C = A * B;
> 
> In the current definition of the language, there would be no way to do
> such a thing, forcing library writers to resort to stuff like:
>         Matrix X = mult(A,B);
> which gets absolutely ugly for large expressions.

Eventually array arithmetic will act element-wise, so I would design Matrix so that * means element-wise multiply. That would work fine with commutativity. I suspect another operator will eventually be added to D to mean "non-commutative multiply", something like **, so that matrix multiplication in the regular sense would use **.

For example, the language R uses %*% for matrix multiplication, %o% for the outer product, and %x% for the Kronecker product.
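
Just to make the distinction concrete, here is a rough sketch of the two operations (Mat2 and the method names are made up, not taken from any existing library):

struct Mat2
{
    double[2][2] m;

    // element-wise product: commutes, so it fits the current opMul rules
    Mat2 elementMul(Mat2 b)
    {
        Mat2 r;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                r.m[i][j] = m[i][j] * b.m[i][j];
        return r;
    }

    // ordinary matrix product: does not commute in general, so it is the
    // one that would want a separate operator such as the suggested **
    Mat2 matMul(Mat2 b)
    {
        Mat2 r;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
            {
                r.m[i][j] = 0;
                for (int k = 0; k < 2; k++)
                    r.m[i][j] += m[i][k] * b.m[k][j];
            }
        return r;
    }
}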

-Ben
April 21, 2004
OK, now, this actually boils the problem down even further. It only leaves the cases where one of the two operand types is not accessible to add the multiplication. I can think of a couple of possible reasons for that:

* one of the types is a primitive. (I don't know of any mathematical object that does not commute with scalars, but who knows? For the plain scalar case the existing commutative rewrite works out fine - see the sketch right after this list.)

* you cannot or do not want to touch the source code of one of the types - here I don't have enough insight into whether delegates might solve this, and whether that solution would be elegant and efficient enough to justify leaving out opMul_r
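
A rough sketch of that scalar case (the Matrix type here is made up, not from any real library) - the current commutative rewrite handles it exactly right:

import std.c.stdio;

struct Matrix
{
    double[2][2] m;

    // Matrix * scalar
    Matrix opMul(double s)
    {
        Matrix r;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                r.m[i][j] = m[i][j] * s;
        return r;
    }
}

void main()
{
    Matrix a;
    a.m[0][0] = 1; a.m[0][1] = 2;
    a.m[1][0] = 3; a.m[1][1] = 4;

    Matrix x = a * 2.0;   // calls a.opMul(2.0) directly
    Matrix y = 2.0 * a;   // double has no opMul, so the commutative rewrite
                          // turns this into a.opMul(2.0) as well
    printf("%g %g\n", x.m[0][0], y.m[0][0]);   // prints: 2 2
}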

Allowing opMul_r would not cost anything beyond a minor change in the language definition and the implementation. It would break no existing code at all and would put an end to this kind of discussion.



J Anderson wrote:
> It's a misunderstanding.  Sure, if the other class doesn't define the opposite overload, then D will swap the operands around to make it work (this is a feature rather than a con). But relying on that for a non-commutative operation is just a semantic bug on the programmer's part.
> 
> class A { A opMul(B b) { return this; } }
> 
> class B
> {
>    B opMul(A a) { return this; } // If this is omitted then D will make things commutative (b * a gets rewritten as a.opMul(b)). With it defined, you essentially have non-commutative behaviour.
> }
> 
> Therefore opMul_r isn't necessary.  This question has been asked a couple of times in the group.  So far I've seen no one show a coded example (or a mathematical type) that defeats this strategy.  And matrices definitely aren't one.
> 
> Now one problem I do see is if the programmer doesn't have access to types A and B to add their own operations (of course they can overload), but that's a different issue.  I guess it could be solved using delegates, though.