August 27, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Eric Gerlach | Eric Gerlach wrote:
> ...
> Unicode! D supports Unicode! Again excited that I had found something completely unoriginal, I rushed to my computer, and went to www.unicode.org. I'm not an expert on Unicode, but I know what it does. I thought to myself: "Unicode is perfect! It must have an entire section devoted to mathematical symbols!" And lo and behold! It does! For the interested, you can look here:
>
> http://www.unicode.org/charts/PDF/U2200.pdf (PDF)
> or here:
> http://charts.unicode.org/Web/U2200.html (GIFs, but I couldn't connect)
>
> If you haven't seen the vast extent of possibilities with this block, go check out that PDF file. Neato.
>
> So, the end result of my proposition is this: Allow the definition of new infix and unary operators from the Unicode block 2200 to 22FF, mathematical operators. Also, symbols from this block are *reserved* for new operators. (The allowance of colon-delimited operators was something I hadn't thought of... but could be another allowed grammar for new ops)
> ...
>
I prefer symbols that are easier to type on a standard keyboard, but I do find your suggestion acceptable. It would have the side effect (benefit?) of making the symbols harder to enter, and thus less commonly used. But I feel that it might be carrying things a bit to extremes, unless there is some obviously easy way to remember how to enter them.
|
August 28, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Russell Bornschlegel | > For the record, I'm in favor of D incorporating C++'s operator overloading capability (though not necessarily using the same definition syntax) _and_ in extending this to arbitrary single-unicode-above-ASCII-character operators.

I'm not too picky about operator overloading being in or not (I was just proposing a solution), but arbitrary Unicode operators are a no-no. A big no-no. Have you seen how many characters there are in Unicode? It's crazy to allow all of those as operators. Go look here:

http://www.unicode.org/charts/PDF/U4E00.pdf

There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of Unicode alone. I don't want a single one of them being an operator! The main reason is that some people read those as words, and would be very confused. If that isn't enough, there are dozens of other scripts that could be redefined as operators under a generic Unicode solution. If it exists, it should be relegated to the block of symbols *called* Mathematical Operators.
|
August 28, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Charles Hixson | > I prefer symbols that are easier to type on a standard keyboard, but I do find your suggestion acceptable. It would have the side effect (benefit?) of making the symbols harder to enter, and thus less commonly used. But I feel that it might be carrying things a bit to extremes, unless there is some obviously easy way to remember how to enter them.
Yes, that was the downside to the whole idea... not many people have unicode-enabled text editors... but maybe this would encourage proliferation of them! :)
|
August 28, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Walter | > Thanks for the entertaining and excellent summary. BTW, what is "meta-programming" as opposed to generics?

It sounds like you are entreating me to treat you with another treatise. Hopefully I'll be able to keep the length of this down... though on a topic so near and dear to my heart, I sincerely doubt it. :)

Meta-programming... meta-programming... where to start. Well, in a nutshell it's a way to write a program that is run at compile time by the compiler, the output of which is code, which is then compiled. Hmmm.... that probably makes little sense. Allow me to illustrate with an example in C++: the factorial.

    template <int N>
    class Factorial {
      public:
        enum { value = N * Factorial<N-1>::value };
    };

    template <>
    class Factorial<1> {
      public:
        enum { value = 1 };
    };

    int main()
    {
        int i = Factorial<5>::value;
    }

So what is going on here? Well, when 'Factorial<5>::value' is compiled, the compiler tries to resolve it. But it can't without computing Factorial<4>::value, and so on down the recursion until it reaches the Factorial<1> specialization. Then the compiler computes the result of the expression (it has to be constant, because it's an enum), and uses it *as a constant* in the program. You get the time savings of not having to compute it at runtime.

Now, I'll admit that was a pretty useless example. It makes for somewhat more readable code, but otherwise isn't great. Where template metaprogramming can really be useful is in things like matrices. Now hold on, and I'll see if I can get this right first shot.

    template <int N> class Matrix; // forward declaration

    template <int I, int J, int N>
    struct MatrixComp {
        static void add(const Matrix<N> & m1, const Matrix<N> & m2, Matrix<N> & result)
        {
            result.m[I][J] = m1.m[I][J] + m2.m[I][J];
            MatrixComp<I-1,J,N>::add(m1, m2, result);
        }
    };

    template <int J, int N>
    struct MatrixComp<0,J,N> {
        static void add(const Matrix<N> & m1, const Matrix<N> & m2, Matrix<N> & result)
        {
            result.m[0][J] = m1.m[0][J] + m2.m[0][J];
            MatrixComp<N-1,J-1,N>::add(m1, m2, result);
        }
    };

    template <int N>
    struct MatrixComp<0,0,N> {
        static void add(const Matrix<N> & m1, const Matrix<N> & m2, Matrix<N> & result)
        {
            result.m[0][0] = m1.m[0][0] + m2.m[0][0];
        }
    };

    template <int N>
    class Matrix {
      public:
        float m[N][N];
        void add(const Matrix<N> & other)
        {
            Matrix<N> temp;
            MatrixComp<N-1,N-1,N>::add(*this, other, temp);
            this->m = temp.m; // assuming this was legal :)
        }
    };

    int main()
    {
        Matrix<3> m1, m2;
        /* fill matrices */
        m1.add(m2);
    }

So, what does all that jibber-jabber do for you? Well, the add() call in main gets expanded at compile time into this:

    add(m2)
    {
        Matrix<3> temp;
        temp.m[2][2] = (*this).m[2][2] + m2.m[2][2];
        temp.m[2][1] = (*this).m[2][1] + m2.m[2][1];
        temp.m[2][0] = (*this).m[2][0] + m2.m[2][0];
        temp.m[1][2] = (*this).m[1][2] + m2.m[1][2];
        temp.m[1][1] = (*this).m[1][1] + m2.m[1][1];
        temp.m[1][0] = (*this).m[1][0] + m2.m[1][0];
        temp.m[0][2] = (*this).m[0][2] + m2.m[0][2];
        temp.m[0][1] = (*this).m[0][1] + m2.m[0][1];
        temp.m[0][0] = (*this).m[0][0] + m2.m[0][0];
        this->m = temp.m;
    }

And voila! All this done *before* the code is even run! Incredible! Now, I can hear some people saying: "But the optimiser will handle all that for you!" But that isn't completely true. First of all, you've just given the compiler a head start. It might be able to squeeze out additional optimisations because you've already unraveled the loops.

Second, consider matrix multiplication, which requires several nested loops. Written properly using template meta-programming, *all* of those loops are unraveled at compile time, and you get a straight function such as the one above, consisting solely of multiplications and additions. No loops, no counters, just wonderfully optimised code.

Of course, the more complex the procedure, the greater the runtime savings. However, this comes at increased compile-time cost. Template meta-programming takes a lot of time to compile, as the compiler is acting as an interpreter... in fact, it's acting very much like LISP. It also generates much larger code files. But if you need the speed it can be well worth the trouble.

I think that's a decent intro to the topic. I only found out about it recently myself, but it was love at first sight. Truly, properly done metaprogramming is a thing of beauty. My advantage was that I learned it from someone who learned it from the guy who wrote the book on template metaprogramming, Todd Veldhuizen. You can find his page on the subject here:

http://www.extreme.indiana.edu/~tveldhui/papers/Template-Metaprograms/meta-art.html

If you want to see a good metaprogramming library, check out blitz++. I've never used it myself, but apparently it's quite good at what it does.

Anywho, that's it for me for now. I don't know if stuff like this could ever make it into D (and in fact Walter has stated D isn't for real-time, so this doesn't fit the schema anyways), but it's neat stuff anyways.

Eric

P.S. An interesting tidbit: I wrote a template-metaprogrammed matrix inversion routine. It's got 3-5 levels of recursion... I'd have to count them again. GCC, in attempting to compile and optimise a 6x6 matrix with this inversion routine, ate up >600MB of RAM. It crashed before it finished. Fortunately, we only have to use 4x4 matrices in our code.
|
August 30, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Eric Gerlach | Ok, I see. Thanks for the explanation. I'm not sure I'm ready to try it yet, though!
|
September 06, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Eric Gerlach | Eric Gerlach wrote:
> There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of unicode alone. I don't want a single one of them being an operator! The main reason behind that is some people read those as words, and would be very confused.
But as I point out, that arbitrary restriction doesn't make it particularly hard to confuse people. The ability to overload the mathematical-operator-asterisk to mean something different from the ASCII asterisk is far more likely to be abused than, say, something in Arabic script.
|
September 06, 2001 Re: Operator overloading... an idea | ||||
---|---|---|---|---|
| ||||
Posted in reply to Russell Borogove | Russell Borogove wrote:
> Eric Gerlach wrote:
>
>>There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of
>>unicode alone. I don't want a single one of them being an operator!
>>The main reason behind that is some people read those as words, and
>>would be very confused.
>>
>
> But as I point out, that arbitrary restriction doesn't make it particularly hard to confuse people. The ability to overload the mathematical-operator-asterisk to mean something different from the ASCII asterisk is far more likely to be abused than, say, something in arabic script.
>
You can also confuse people by naming a routine "add" when it actually takes the RMS of two numbers. The idea is not to confuse them by accident. The possibility of obfuscation is always with us.
|
September 09, 2001 Re: Operator overloading. | ||||
---|---|---|---|---|
| ||||
Posted in reply to Brendan McCane | My 2c:
Operator overloading is purely syntactic sugar; it adds no new functionality to a language. In many cases it makes the code harder to read and to debug. It is aimed at the problem domain of science/math modelling, which makes up a rather small part of computing today.
> > quaternions. For this overloading of operators such as +, -, +=, etc means that top level code can be easily written and readable.
> >
The difference between a += b; and a.append(b); is minor, and really not worth adding big features to the language to support.
|
September 10, 2001 Re: Operator overloading. | ||||
---|---|---|---|---|
| ||||
Posted in reply to Anthony Steele | Anthony Steele wrote:
> My 2c:
>
> Operator overloading is purely syntactic sugar, it adds no new functionality to a language. In many cases it makes the code harder to read and to debug. It is aimed at the problem domain of science/math modelling, which makes up a rather small part of computing today.
>
>>>quaternions. For this overloading of operators such as +, -, +=, etc means that top level code can be easily written and readable.
>
> the difference between a += b; and a.append(b); is minor and really not worth adding big features to the language to support.

a) It's not a big feature if you already have overloaded functions. When done properly it's rather simple to replace operator calls with a rewrite into functional notation. It's just harder for people to get correct.

b) You can obfuscate with nearly any language feature. Just because some people have gone hog-wild is no reason to denigrate the whole concept. OTOH, I do support a clear distinction between user-defined operators and system-defined operators. But I also know of languages that don't support that distinction, and which don't suffer excessively because of it.

c) The math folks have had longer to develop their operators. String processing might be expected to develop standard operators over time. Presumably other domains would also be appropriate. Certainly sets have readily definable operators, and it would be desirable for them to be usable. And one would, e.g., want to use the same operators for sets composed of lists and for sets generated by rules, though of course the implementations would need to be quite different. This extends into SQL processing (not such a small field anymore, perhaps?), etc. And that's just what occurs off the top of my head. I'm sure that most people have some domain that they would use operators with, if they were possible. And the domains are probably not identical.

If you are going to do a small isolated operation, then you are correct: the amount gained by overloaded operators is small. But if one is composing operations on data structures, then the operation will not necessarily be small. To take a minor and simple example:

strVal = salute :+: firstName :+: midInit :+: lastName :+: perhapsComma :+: jr_etc :+: titlePunct :+: title

can probably be understood without any explanation. Replacing it with the re-written code:

strVal = salute.append(firstName.append(midInit.append(lastName.append(perhapsComma.append(jr_etc.append(titlePunct.append(title...

is not only more difficult to read... but I didn't finish it because I didn't want to count how many parentheses to put at the end. (And on looking it over, I corrected one typo of a missing open paren... nothing similar happened with the first version.)
|
September 10, 2001 Re: Operator overloading. | ||||
---|---|---|---|---|
| ||||
Posted in reply to Charles Hixson | > strVal = salute :+: firstName :+: midInit :+: lastName :+: perhapsComma :+: jr_etc :+: titlePunct :+: title
Well, = will most likely also be an overloaded assignment operator, so:
strVal :=: salute :+: firstName :+: midInit :+: lastName :+: perhapsComma :+: jr_etc :+: titlePunct :+: title
I can already predict that people will not be pleased to realise that a = b :+: c;
will do something fundamentally different from
a :=: b :+: c;
This makes for something error-prone :/
|
Copyright © 1999-2021 by the D Language Foundation