December 30, 2010
On 12/30/10 9:00 AM, Steven Schveighoffer wrote:
> On Wed, 29 Dec 2010 16:14:11 -0500, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> wrote:
>
>> On 12/29/10 2:58 PM, Steven Schveighoffer wrote:
>>> On Wed, 29 Dec 2010 15:38:27 -0500, Andrei Alexandrescu
>>> <SeeWebsiteForEmail@erdani.org> wrote:
>>>
>>>> On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
>>>>> On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
>>>>> <SeeWebsiteForEmail@erdani.org> wrote:
>>>>>
>>>>>> On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
>>>>>>> On 12/27/10 12:35 PM, bearophile wrote:
>>>>>>>> Through Reddit I have found a link to some information about the
>>>>>>>> Clay
>>>>>>>> language, it wants to be (or it will be) a C++-class language, but
>>>>>>>> it's not tied to C syntax. It shares several semantic similarities
>>>>>>>> with D too. It looks like a cute language:
>>>>>>>> https://github.com/jckarter/clay/wiki/
>>>>>>> [snip]
>>>>>>>
>>>>>>> FWIW I just posted a response to a question asking for a comparison
>>>>>>> between Clay and D2.
>>>>>>>
>>>>>>> http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> That thread is shaping up more and more interesting because it's
>>>>>> turning into a discussion of generic programming at large.
>>>>>
>>>>> I wanted to address your post in the reddit discussion regarding the
>>>>> issue of operator overloads not being virtual:
>>>>>
>>>>> "This non-issue has been discussed in the D newsgroup. You can
>>>>> implement
>>>>> virtuals on top of non-virtuals efficiently, but not vice versa."
>>>>>
>>>>> I've found some very real problems with that, when implementing
>>>>> operator
>>>>> overloads in dcollections. It's forced me to use the (yet to be
>>>>> deprecated) opXXX forms. Specifically, you cannot use covariance with
>>>>> templated functions without repeating the entire implementation in the
>>>>> derived class.
>>>>
>>>> Glad you're bringing that up. Could you please post an example that
>>>> summarizes the issue?
>>>
>>> With D1:
>>>
>>> interface List
>>> {
>>>     List opCat(List other);
>>> }
>>>
>>> class LinkList : List
>>> {
>>>     LinkList opCat(List other) { ... }
>>> }
>>>
>>> With D2:
>>>
>>> interface List
>>> {
>>>     List doCat(List other); // implement this in derived class
>>>     List opBinary(string op)(List other) if (op == "~")
>>>     { return doCat(other); }
>>> }
>>>
>>> class LinkList : List
>>> {
>>>     LinkList doCat(List other) { ... }
>>> }
>>>
>>> // usage:
>>>
>>> LinkList ll = new LinkList(1, 2, 3);
>>> ll = ll ~ ll; // works with D1, fails on D2: "can't assign List to LinkList"
>>>
>>> The solution is to restate opBinary in all derived classes with the *exact
>>> same code* but a different return type. I find this solution unacceptable.
>>
>> I understand, thanks for taking the time to share. The solution to
>> this matter as I see it is integrated with another topic - usually you
>> want to define groups of operators, which means you'd want to define
>> an entire translation layer from static operators to overridable ones.
>>
>> Here's the code I suggest along those lines. I used a named function
>> instead of a template to avoid bug 4174:
>>
>> template translateOperators()
>> {
>>     auto ohPeeCat(List other) { return doCat(other); }
>> }
>>
>> interface List
>> {
>>     List doCat(List other); // implement this in derived class
>> }
>>
>> class LinkList : List
>> {
>>     LinkList doCat(List other) { return this; }
>>     mixin translateOperators!();
>> }
>>
>> void main(string[] args)
>> {
>>     LinkList ll = new LinkList;
>>     ll = ll.ohPeeCat(ll);
>> }
>>
>> The translateOperators template would generally define a battery of
>> operators depending on e.g. whether appropriate implementations are
>> found in the host class (in this case LinkList).
>
> I'm assuming you meant this (once the bug is fixed):
>
> template translateOperators()
> {
>     auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
> }
>
> and adding this mixin to the interface?

In fact if the type doesn't define doCat the operator shouldn't be generated.

  auto opBinary(string op, T)(T other)
    if (op == "~" && is(typeof(doCat(T.init))))
  { return doCat(other); }

The other thing that I didn't mention, and that I think would save you some grief, is that this is meant to be a once-for-all library solution, not code that needs to be written by the user. In fact I'm thinking the mixin should translate from the new scheme to the old one. So for people who want to use operator overloading with inheritance we can say: just import std.typecons and mixin(translateOperators()) in your class definition. I think this is entirely reasonable.
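
Roughly along these lines (an untested sketch reusing the List/LinkList example above; a real library version would cover the whole operator set and check that the named methods actually exist before generating anything):

  template translateOperators()
  {
      // forward the new-style operator to the old-style named method; since
      // the template is instantiated inside each class that mixes it in,
      // `auto` picks up that class's covariant doCat return type
      auto opBinary(string op, T)(T other) if (op == "~")
      {
          return this.doCat(other);
      }
  }

  class LinkList : List
  {
      LinkList doCat(List other) { return this; }
      mixin translateOperators!();
  }

  // LinkList ll = new LinkList;
  // ll = ll ~ ll; // opBinary is instantiated in LinkList, so it returns LinkList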

> I find this solution extremely convoluted, not to mention bloated, and
> how do the docs work? It's like we're going back to C macros! This
> operator overloading scheme is way more trouble than the original.

How do you mean bloated? For documentation you specify in the documentation of the type what operators it supports, or for each named method you specify that operator xxx forwards to it.

> The thing I find ironic is that with the original operator overloading
> scheme, the issue was that types that define multiple operator
> overloads in a similar fashion force you to repeat boilerplate code.
> The solution to it was a mixin similar to what you are suggesting.
> Except now, even mundane and common operator overloads require verbose
> template definitions (possibly with mixins), and it's the uncommon case
> that benefits.

Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example up into an epic disaster.

> So really, we haven't made any progress (mixins are still
> required, except now they will be more common). I think this is one area
> where D has gotten decidedly worse. I mean, just look at the difference
> above between defining the opcat operator in D1 and your mixin solution!

I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages. We shouldn't discount all of its advantages and focus exclusively on covariance, which is a rather obscure facility.

Using operator overloading in conjunction with class inheritance is rare. Rare as it is, we need to allow it and make it convenient. I believe this is eminently possible along the lines discussed in this thread.

> As a compromise, can we work on a way to forward covariance, or to have
> the compiler reevaluate the template in more derived types?

I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change.


Andrei
December 30, 2010
On 12/30/10 9:22 AM, Michel Fortin wrote:
> On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
> <schveiguy@yahoo.com> said:
>
>> The thing I find ironic is that with the original operator overloading
>> scheme, the issue was that types that define multiple operator
>> overloads in a similar fashion force you to repeat boilerplate
>> code. The solution to it was a mixin similar to what you are
>> suggesting. Except now, even mundane and common operator overloads
>> require verbose template definitions (possibly with mixins), and it's
>> the uncommon case that benefits. So really, we haven't made any
>> progress (mixins are still required, except now they will be more
>> common). I think this is one area where D has gotten decidedly worse.
>> I mean, just look at the difference above between defining the opcat
>> operator in D1 and your mixin solution!
>
> I'm with you, I preferred the old design.

This is water under the bridge now, but I am definitely interested. What are the reasons for which you find the old design better?

>> As a compromise, can we work on a way to forward covariance, or to
>> have the compiler reevaluate the template in more derived types?
>
> I stumbled upon this yesterday:
>
> Template This Parameters
>
> TemplateThisParameters are used in member function templates to pick up
> the type of the this reference.
> import std.stdio;
>
> struct S
> {
>     const void foo(this T)(int i)
>     {
>         writeln(typeid(T));
>     }
> }
>
> <http://www.digitalmars.com/d/2.0/template.html>
>
> Looks like you could return the type of this this way...

typeof(this) works too.


Andrei
December 30, 2010
On Thu, 30 Dec 2010 11:02:43 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 12/30/10 9:22 AM, Michel Fortin wrote:
>> I stumbled upon this yesterday:
>>
>> Template This Parameters
>>
>> TemplateThisParameters are used in member function templates to pick up
>> the type of the this reference.
>> import std.stdio;
>>
>> struct S
>> {
>>     const void foo(this T)(int i)
>>     {
>>         writeln(typeid(T));
>>     }
>> }
>>
>> <http://www.digitalmars.com/d/2.0/template.html>
>>
>> Looks like you could return the type of this this way...
>
> typeof(this) works too.

Nope.  The template this parameter assumes the type of 'this' at the call site, not at declaration.  typeof(this) means the type at declaration time.
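
A quick illustration of the difference (untested sketch, made-up Base/Derived names):

  class Base
  {
      // typeof(this) is resolved at declaration time: always Base here,
      // even when called through a Derived reference
      typeof(this) viaTypeofThis() { return this; }

      // a template this parameter is deduced at the call site, so T is
      // the static type of the reference the call is made through
      T viaTemplateThis(this T)() { return cast(T) this; }
  }

  class Derived : Base { }

  void main()
  {
      auto d = new Derived;
      static assert(is(typeof(d.viaTypeofThis()) == Base));
      static assert(is(typeof(d.viaTemplateThis()) == Derived));
  }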

-Steve
December 30, 2010
On 12/30/10 10:10 AM, Steven Schveighoffer wrote:
> On Thu, 30 Dec 2010 11:02:43 -0500, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> wrote:
>
>> On 12/30/10 9:22 AM, Michel Fortin wrote:
>>> I stumbled upon this yesterday:
>>>
>>> Template This Parameters
>>>
>>> TemplateThisParameters are used in member function templates to pick up
>>> the type of the this reference.
>>> import std.stdio;
>>>
>>> struct S
>>> {
>>>     const void foo(this T)(int i)
>>>     {
>>>         writeln(typeid(T));
>>>     }
>>> }
>>>
>>> <http://www.digitalmars.com/d/2.0/template.html>
>>>
>>> Looks like you could return the type of this this way...
>>
>> typeof(this) works too.
>
> Nope. The template this parameter assumes the type of 'this' at the call
> site, not at declaration. typeof(this) means the type at declaration time.

Got it. Now I'm just waiting for your next post's ire :o).

Andrei
December 30, 2010
On Thu, 30 Dec 2010 11:00:20 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 12/30/10 9:00 AM, Steven Schveighoffer wrote:
>>
>> I'm assuming you meant this (once the bug is fixed):
>>
>> template translateOperators()
>> {
>>     auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
>> }
>>
>> and adding this mixin to the interface?
>
> In fact if the type doesn't define doCat the operator shouldn't be generated.
>
>    auto opBinary(string op, T)(T other)
>      if (op == "~" && is(typeof(doCat(T.init))))
>    { return doCat(other); }
>
> The other thing that I didn't mention, and that I think would save you some grief, is that this is meant to be a once-for-all library solution, not code that needs to be written by the user. In fact I'm thinking the mixin should translate from the new scheme to the old one. So for people who want to use operator overloading with inheritance we can say: just import std.typecons and mixin(translateOperators()) in your class definition. I think this is entirely reasonable.

I'd have to see how it works.  I also thought the new operator overloading scheme was reasonable -- until I tried to use it.

Note this is even more bloated because you generate one function per pair of types used in concatenation, vs. one function per class defined.

>> I find this solution extremely convoluted, not to mention bloated, and
>> how do the docs work? It's like we're going back to C macros! This
>> operator overloading scheme is way more trouble than the original.
>
> How do you mean bloated? For documentation you specify in the documentation of the type what operators it supports, or for each named method you specify that operator xxx forwards to it.

I mean bloated because you are generating template functions that just forward to other functions.  Those functions are compiled in and take up space, even if they are inlined out.

Let's also realize that the mixin is going to be required *per interface* and *per class*, meaning even more bloat.

I agree if there is a "standard" way of forwarding with a library mixin, the documentation will be reasonable, since readers should be able to get used to looking for the 'alternative' operators.

>> The thing I find ironic is that with the original operator overloading
>> scheme, the issue was that types that define multiple operator
>> overloads in a similar fashion force you to repeat boilerplate code.
>> The solution to it was a mixin similar to what you are suggesting.
>> Except now, even mundane and common operator overloads require verbose
>> template definitions (possibly with mixins), and it's the uncommon case
>> that benefits.
>
> Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example into an epic disaster.

The case for overloading a single operator is shorter and simpler with the old method:

auto opAdd(Foo other)

vs.

auto opBinary(string op)(Foo other) if (op == "+")

Where the new scheme wins in brevity (for written code at least, and certainly not simpler to understand) is cases where:

1. inheritance is not used
2. you can consolidate many overloads into one function.

So the question is, how many times does one define operator overloading on a multitude of operators *with the same code* vs. how many times does one define a few operators or define the operators with different code?

In my experience, I have not yet defined a type that uses a multitude of operators with the same code.  In fact, I have only defined the "~=" and "~" operators for the most part.
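
For the record, here's roughly what that looks like under the new scheme (minimal sketch, made-up Bag type):

  struct Bag
  {
      int[] items;

      Bag opBinary(string op)(Bag other) if (op == "~")
      {
          return Bag(items ~ other.items);
      }

      ref Bag opOpAssign(string op)(Bag other) if (op == "~")
      {
          items ~= other.items;
          return this;
      }
  }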

So I'd say, while my example is not proof that this is a disaster, I think it shows the change in operator overloading cannot yet be declared a success.  One good example does not prove anything just like one bad example does not prove anything.

>> So really, we haven't made any progress (mixins are still
>> required, except now they will be more common). I think this is one area
>> where D has gotten decidedly worse. I mean, just look at the difference
>> above between defining the opcat operator in D1 and your mixin solution!
>
> I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages.

I haven't had that experience.  This is just me talking.  Maybe others believe it is good.

I agree that the flexibility is good; I really think it should have that kind of flexibility.  Especially when we start talking about the whole opAddAssign mess that was in D1.  It also allows making wrapper types easier.

The problem with flexibility is that it comes with complexity.  Most programmers looking to understand how to overload operators in D are going to be daunted by having to use both templates and template constraints, and possibly mixins.

There once was a discussion on how to improve operators on the phobos mailing list (don't have the history, because I think it was on erdani.com).  Essentially, the two things were:

1) let's make it possible to easily specify template constraints for typed parameters (such as string) like this:

auto opBinary("+")(Foo other)

which would look far less complex and verbose than the current incarnation.  And simple to define when all you need is one or two operators.

2) make template instantiations that provably evaluate to a single instance virtual.  Or have a way to designate they should be virtual.  e.g. the above operator syntax can only have one instantiation.

> We shouldn't discount all of its advantages and focus exclusively on covariance, which is a rather obscure facility.

I respectfully disagree.  Covariance is very important when using class hierarchies, because to have something that returns itself degrade into a basic interface is very cumbersome.  I'd say dcollections would be quite clunky if it weren't for covariance (not just for operator overloads).  It feels along the same lines as inout -- where inout allows you to continue using your same type with the same constancy, covariance allows you to continue to use the most derived type that you have.

> Using operator overloading in conjunction with class inheritance is rare.

I don't use operator overloads and class inheritance, but I do use operator overloads with interfaces.  I think rare is not the right term, it's somewhat infrequent, but chances are if you do a lot of interfaces, you will encounter it at least once.  It certainly doesn't dominate the API being defined.

> Rare as it is, we need to allow it and make it convenient. I believe this is eminently possible along the lines discussed in this thread.

Convenience is good.  I hope we can do it at a lower exe footprint cost than what you have proposed.

>> As a compromise, can we work on a way to forward covariance, or to have
>> the compiler reevaluate the template in more derived types?
>
> I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change.

Actually, the functionality almost exists in template this parameters.  At least, the reevaluation part is working.  However, you still must incur a performance penalty to cast to the derived type, plus the template nature of it adds unnecessary bloat.
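
Something along these lines (untested sketch), which is where the cast comes in:

  interface List
  {
      List doCat(List other);

      // never virtual (it's a template); T is deduced at the call site
      // from the static type of the receiver
      T opBinary(string op, this T)(List other) if (op == "~")
      {
          return cast(T) doCat(other); // the downcast is the runtime penalty
      }
  }

  class LinkList : List
  {
      LinkList doCat(List other) { return this; }
  }

  // LinkList ll = new LinkList;
  // ll = ll ~ ll; // T is deduced as LinkList, so the assignment compiles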

-Steve
December 30, 2010
> In my experience, I have not yet defined a type that uses a multitude of operators with the same code.  In fact, I have only defined the "~=" and "~" operators for the most part.
>
> So I'd say, while my example is not proof that this is a disaster, I think it shows the change in operator overloading cannot yet be declared a success.  One good example does not prove anything just like one bad example does not prove anything.

Operator overloading shines in numeric code, which I guess is the targeted audience for this feature.
In that case, you mostly change a single character: the operator.
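
E.g. a toy sketch (made-up Vec2 type):

  struct Vec2
  {
      double x, y;

      Vec2 opBinary(string op)(Vec2 rhs) if (op == "+" || op == "-")
      {
          // the operator token is spliced straight into the expression
          return Vec2(mixin("x " ~ op ~ " rhs.x"), mixin("y " ~ op ~ " rhs.y"));
      }

      Vec2 opBinary(string op)(double s) if (op == "*" || op == "/")
      {
          return Vec2(mixin("x " ~ op ~ " s"), mixin("y " ~ op ~ " s"));
      }
  }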

> I haven't had that experience.  This is just me talking.  Maybe others believe it is good.

This new scheme is just pure win, again for numeric coding.

>> Using operator overloading in conjunction with class inheritance is rare.

So rare that if you see operator overloading together with virtual inheritance, you'd better make sure nothing fishy is going on.

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
December 30, 2010
On 2010-12-30 11:02:43 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> said:

> On 12/30/10 9:22 AM, Michel Fortin wrote:
>> On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
>> <schveiguy@yahoo.com> said:
>> 
>>> The thing I find ironic is that with the original operator overloading
>>> scheme, the issue was that types that define multiple operator
>>> overloads in a similar fashion force you to repeat boilerplate
>>> code. The solution to it was a mixin similar to what you are
>>> suggesting. Except now, even mundane and common operator overloads
>>> require verbose template definitions (possibly with mixins), and it's
>>> the uncommon case that benefits. So really, we haven't made any
>>> progress (mixins are still required, except now they will be more
>>> common). I think this is one area where D has gotten decidedly worse.
>>> I mean, just look at the difference above between defining the opcat
>>> operator in D1 and your mixin solution!
>> 
>> I'm with you, I preferred the old design.
> 
> This is water under the bridge now, but I am definitely interested. What are the reasons for which you find the old design better?

First it was simpler to understand. Second it worked well with inheritance.

The current design requires that you know of templates and template constraints, and it requires complicated workarounds if you're dealing with inheritance (as illustrated by this thread). Basically, we've made a simple, easy to understand feature into an expert-only one.

And for what? Sure, the new design has the advantage that you can define multiple operators in one go. But for all the cases where you don't define operators to be the same variation on a theme, and even more for those involving inheritance, it's more complicated now. And defining multiple operators in one go wouldn't have been so hard with the older regime either. All you needed was a mixin to automatically generate properly named functions for each operator the opBinary template can instantiate.
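
Something like this would have done it (rough, untested sketch; Wrapper, binaryImpl and namedBinaryOps are made-up names):

  // a library mixin generating the old-style named methods from one
  // consolidated implementation
  template namedBinaryOps()
  {
      auto opAdd(typeof(this) other) { return binaryImpl!"+"(other); }
      auto opSub(typeof(this) other) { return binaryImpl!"-"(other); }
      auto opMul(typeof(this) other) { return binaryImpl!"*"(other); }
      auto opDiv(typeof(this) other) { return binaryImpl!"/"(other); }
  }

  struct Wrapper
  {
      int value;

      // the user writes the variation-on-a-theme code once...
      auto binaryImpl(string op)(Wrapper other)
      {
          return Wrapper(mixin("value " ~ op ~ " other.value"));
      }

      // ...and the mixin expands it into opAdd, opSub, opMul and opDiv
      mixin namedBinaryOps!();
  }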

I was always skeptical of this new syntax, and this hasn't changed.


-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

December 30, 2010
> First it was simpler to understand. Second it worked well with inheritance.
>
> The current design requires that you know of templates and template constraints, and it requires complicated workarounds if you're dealing with inheritance (as illustrated by this thread). Basically, we've made a simple, easy to understand feature into an expert-only one.
>
> And for what? Sure, the new design has the advantage that you can define multiple operators in one go. But for all the cases where you don't define operators to be the same variation on a theme, and even more for those involving inheritance, it's more complicated now. And defining multiple operators in one go wouldn't have been so hard with the older regime either. All you needed was a mixin to automatically generate properly named functions for each operator the opBinary template can instantiate.
>
> I was always skeptical of this new syntax, and this hasn't changed.

The old style was nothing more than C++ with named operators.
Its shortcomings were obvious, and I had always been thinking of a solution exactly like this one.
Now it is quite template friendly, as it should be.

For inheritance, I am unable to find a use case that makes sense.

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
December 30, 2010
On 12/30/10 11:08 AM, Steven Schveighoffer wrote:
> I'd have to see how it works. I also thought the new operator
> overloading scheme was reasonable -- until I tried to use it.

You mean until you tried to use it /once/.

> Note this is even more bloated because you generate one function per
> pair of types used in concatenation, vs. one function per class defined.

That function is inlined and vanishes out of existence. I wish one day we'd characterize this bloating issue more precisely. Right now anything generic has the "bloated!!" alarm stuck to it indiscriminately.

>> How do you mean bloated? For documentation you specify in the
>> documentation of the type what operators it supports, or for each
>> named method you specify that operator xxx forwards to it.
>
> I mean bloated because you are generating template functions that just
> forward to other functions. Those functions are compiled in and take up
> space, even if they are inlined out.

I think we can safely leave this matter to compiler technology.

> Let's also realize that the mixin is going to be required *per
> interface* and *per class*, meaning even more bloat.

The bloating argument is a complete red herring in this case. I do agree that generally it could be a concern and I also agree that the compiler needs to be improved in that regard. But by and large I think we can calmly and safely assume that a simple, short function is not a source of worry.

> I agree if there is a "standard" way of forwarding with a library mixin,
> the documentation will be reasonable, since readers should be able to
> get used to looking for the 'alternative' operators.

Whew :o).

>>> The thing I find ironic is that with the original operator overloading
>>> scheme, the issue was that types that define multiple operator
>>> overloads in a similar fashion force you to repeat boilerplate code.
>>> The solution to it was a mixin similar to what you are suggesting.
>>> Except now, even mundane and common operator overloads require verbose
>>> template definitions (possibly with mixins), and it's the uncommon case
>>> that benefits.
>>
>> Not at all. The common case is shorter and simpler. I wrote the
>> chapter on operator overloading twice, once for the old scheme and
>> once for the new one. It uses commonly-encountered designs for its
>> code samples. The chapter and its code samples got considerably
>> shorter in the second version. You can't blow your one example up into an
>> epic disaster.
>
> The case for overloading a single operator is shorter and simpler with
> the old method:
>
> auto opAdd(Foo other)
>
> vs.
>
> auto opBinary(string op)(Foo other) if (op == "+")
>
> Where the new scheme wins in brevity (for written code at least, and
> certainly not simpler to understand) is cases where:
>
> 1. inheritance is not used
> 2. you can consolidate many overloads into one function.
>
> So the question is, how many times does one define operator overloading
> on a multitude of operators *with the same code* vs. how many times does
> one define a few operators or define the operators with different code?
>
> In my experience, I have not yet defined a type that uses a multitude of
> operators with the same code. In fact, I have only defined the "~=" and
> "~" operators for the most part.

Based on extensive experience with operator overloading in C++ and on having read related code in other languages, I can firmly say both of (1) and (2) are the overwhelmingly common case.

> So I'd say, while my example is not proof that this is a disaster, I
> think it shows the change in operator overloading cannot yet be declared
> a success. One good example does not prove anything just like one bad
> example does not prove anything.

Many good examples do prove a ton though. Just off the top of my head:

- complex numbers
- checked integers
- checked floating point numbers
- ranged/constrained numbers
- big int
- big float
- matrices and vectors
- dimensional analysis (SI units)
- rational numbers
- fixed-point numbers

If I agree with something, it is that opCat is an oddity here, as it doesn't usually group with others. Probably it would have helped if opCat had been left named (just like opEquals or opCmp), but then uniformity has its advantages too. I don't think it's a disaster one way or another, but I do understand how opCat in particular is annoying in your case.

>>> So really, we haven't made any progress (mixins are still
>>> required, except now they will be more common). I think this is one area
>>> where D has gotten decidedly worse. I mean, just look at the difference
>>> above between defining the opcat operator in D1 and your mixin solution!
>>
>> I very strongly believe the new operator overloading is a vast
>> improvement over the existing one and over most of today's languages.
>
> I haven't had that experience. This is just me talking. Maybe others
> believe it is good.
>
> I agree that the flexibility is good; I really think it should have that
> kind of flexibility. Especially when we start talking about the whole
> opAddAssign mess that was in D1. It also allows making wrapper types
> easier.
>
> The problem with flexibility is that it comes with complexity. Most
> programmers looking to understand how to overload operators in D are
> going to be daunted by having to use both templates and template
> constraints, and possibly mixins.

Most programmers looking to understand how to overload operators in D will need to bundle them (see the common case argument above) and will go with the TDPL examples, which are clear, short, simple, and useful.

> There once was a discussion on how to improve operators on the phobos
> mailing list (don't have the history, because I think it was on
> erdani.com). Essentially, the two things were:
>
> 1) let's make it possible to easily specify template constraints for
> typed parameters (such as string) like this:
>
> auto opBinary("+")(Foo other)
>
> which would look far less complex and verbose than the current
> incarnation. And simple to define when all you need is one or two
> operators.

I don't see this slight syntactic special case as a net improvement over what we have.

> 2) make template instantiations that provably evaluate to a single
> instance virtual. Or have a way to designate they should be virtual.
> e.g. the above operator syntax can only have one instantiation.

This may be worth exploring, but since template constraints are arbitrary expressions I fear it will become a mess of special cases designed to avoid the Turing tarpit.

>> We shouldn't discount all of its advantages and focus exclusively on
>> covariance, which is a rather obscure facility.
>
> I respectfully disagree. Covariance is very important when using class
> hierarchies, because to have something that returns itself degrade into
> a basic interface is very cumbersome. I'd say dcollections would be
> quite clunky if it weren't for covariance (not just for operator
> overloads). It feels along the same lines as inout -- where inout allows
> you to continue using your same type with the same constancy, covariance
> allows you to continue to use the most derived type that you have.

Okay, I understand.

>> Using operator overloading in conjunction with class inheritance is rare.
>
> I don't use operator overloads and class inheritance, but I do use
> operator overloads with interfaces. I think rare is not the right term,
> it's somewhat infrequent, but chances are if you do a lot of interfaces,
> you will encounter it at least once. It certainly doesn't dominate the
> API being defined.

Maybe a more appropriate characterization is that you use catenation with interfaces.

>> Rare as it is, we need to allow it and make it convenient. I believe
>> this is eminently possible along the lines discussed in this thread.
>
> Convenience is good. I hope we can do it at a lower exe footprint cost
> than what you have proposed.

We need to destroy Walter over that code bloating thing :o).

>>> As a compromise, can we work on a way to forward covariance, or to have
>>> the compiler reevaluate the template in more derived types?
>>
>> I understand. I've had this lure a few times, too. The concern there
>> is that this is a potentially surprising change.
>
> Actually, the functionality almost exists in template this parameters.
> At least, the reevaluation part is working. However, you still must
> incur a performance penalty to cast to the derived type, plus the
> template nature of it adds unnecessary bloat.

Saw that. I have a suspicion that we'll see a solid solution from you soon!


Andrei
December 30, 2010
On 12/30/10 11:37 AM, Michel Fortin wrote:
> On 2010-12-30 11:02:43 -0500, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> said:
>
>> On 12/30/10 9:22 AM, Michel Fortin wrote:
>>> On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
>>> <schveiguy@yahoo.com> said:
>>>
>>>> The thing I find ironic is that with the original operator overloading
>>>> scheme, the issue was that types that define multiple operator
>>>> overloads in a similar fashion force you to repeat boilerplate
>>>> code. The solution to it was a mixin similar to what you are
>>>> suggesting. Except now, even mundane and common operator overloads
>>>> require verbose template definitions (possibly with mixins), and it's
>>>> the uncommon case that benefits. So really, we haven't made any
>>>> progress (mixins are still required, except now they will be more
>>>> common). I think this is one area where D has gotten decidedly worse.
>>>> I mean, just look at the difference above between defining the opcat
>>>> operator in D1 and your mixin solution!
>>>
>>> I'm with you, I preferred the old design.
>>
>> This is water under the bridge now, but I am definitely interested.
>> What are the reasons for which you find the old design better?
>
> First it was simpler to understand. Second it worked well with inheritance.
>
> The current design requires that you know of templates and template
> constraints, and it requires complicated workarounds if you're dealing
> with inheritance (as illustrated by this thread). Basically, we've made
> a simple, easy to understand feature into an expert-only one.
>
> And for what? Sure, the new design has the advantage that you can
> define multiple operators in one go. But for all the cases where you
> don't define operators to be the same variation on a theme, and even
> more for those involving inheritance, it's more complicated now. And
> defining multiple operators in one go wouldn't have been so hard with
> the older regime either. All you needed was a mixin to automatically
> generate properly named functions for each operator the opBinary
> template can instantiate.
>
> I was always skeptical of this new syntax, and this hasn't changed.

Thanks for the feedback. So let me make sure I understand your arguments. First, you mention that the old design is simpler. Second, you mention that the old design worked better with inheritance and with cases in which each operator needs a separate definition.

I partially (only to a small extent) agree with the first and I disagree with the second. (Overall my opinion that the new design is a vast improvement hasn't changed.) But I didn't ask for your opinion to challenge or debate it - thanks again for taking the time to share.


Andrei