Thread overview: Overloading/Inheritance issue
August 01, 2007
Hi,

I am wondering if the following behavior is intentional and if so, why.  Given the code below:

class X
{
  public int foo()
  {
    return foo(0);
  }

  public int foo(int y)
  {
    return 2;
  }
}

class Y : X
{
  public int foo(int y)
  {
    return 3;
  }
}

int main(char [][] argv)
{
  Y y = new Y;
  y.foo(); //does not compile, says that the argument type doesn't match
  y.foo(1);
  X x = y;
  x.foo();
  return 0;
}


How come the marked line above does not compile?  Clearly there is no ambiguity that I want to call the base's foo, which in turn should call Y's foo(int) with an argument of 0.  It's not that the method is not accessible, because I can clearly access it by casting to an X type (as I have done in the subsequent lines).

If you interpret the code, I'm defining a default behavior for foo() with no arguments.  A derived class which wants to keep the default behavior of foo() as calling foo(0), should only need to override foo(int).  However, the compiler does not allow this.  Why?  Is there a workaround (besides implementing a stub function which calls super.foo())?  Maybe there is a different method of defining in a base class one version of a function in terms of another?

-Steve
August 01, 2007
On Wed, 01 Aug 2007 16:47:12 -0400, Steve Schveighoffer wrote:

> I am wondering if the following behavior is intentional and if so, why.

This is intentional and is due to D's simplistic lookup rules. Basically, D will look for a matching signature only in the class itself and not in any parent classes (it's a little more complex than this, but ...)

To 'fix' this situation you need to explicitly identify methods from parent classes that you wish to access from objects of the child class. This is done using an alias statement.

Add "alias X.foo foo;" to your class definition of Y.

class Y : X
{
  alias X.foo foo; // pulls in class X's foo name into this scope.
  public int foo(int y)
  {
    return 3;
  }
}

Now it will compile and run.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
August 01, 2007
Steve Schveighoffer wrote:
> Hi,
> 
> I am wondering if the following behavior is intentional and if so, why.  Given the code below:
> 
> [code example snipped]
> How come the marked line above does not compile?  Clearly there is no ambiguity that I want to call the base's foo, which in turn should call Y's foo(int) with an argument of 0.  It's not that the method is not accessible, because I can clearly access it by casting to an X type (as I have done in the subsequent lines).
> 
> If you interpret the code, I'm defining a default behavior for foo() with no arguments.  A derived class which wants to keep the default behavior of foo() as calling foo(0), should only need to override foo(int).  However, the compiler does not allow this.  Why?  Is there a workaround (besides implementing a stub function which calls super.foo())?  Maybe there is a different method of defining in a base class one version of a function in terms of another?
> 
> -Steve

I've never really 100% understood why this is the way that it is, but there /does/ exist a simple workaround.  Add this line to Y:
  alias super.foo foo;

Yep, alias the superclass's method into the current class's scope.  The problem has something to do with the lookup rules.  At the very least, I would think that declaring Y.foo with the 'override' attribute might give the wanted behavior, since surely an int(int) couldn't override an int(), yes?  But no, last I checked, it doesn't work either.

-- Chris Nicholson-Sauls
August 01, 2007
Chris Nicholson-Sauls wrote:
> Steve Schveighoffer wrote:
> 
>> I am wondering if the following behavior is intentional and if so, why.  [example snipped]
>>
>> -Steve
> 
> 
> I've never really 100% understood why this is the way that it is, but there /does/ exist a simple workaround.  Add this line to Y:
>   alias super.foo foo;
> 
> Yep, alias the superclass's method into the current class's scope.  The problem has something to do with the lookup rules.  At the very least, I would think that declaring Y.foo with the 'override' attribute might give the wanted behavior, since surely a int(int) couldn't override a int(), yes?  But no, last I checked, it doesn't work either.
> 
> -- Chris Nicholson-Sauls

That's not a workaround, nor is this considered a problem. This is the intended behavior.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
August 01, 2007
Kirk McDonald wrote:
> Chris Nicholson-Sauls wrote:
>> Steve Schveighoffer wrote:
>>
>>> I am wondering if the following behavior is intentional and if so, why.  [example snipped]
>>>
>>> -Steve
>>
>>
>> I've never really 100% understood why this is the way that it is, but there /does/ exist a simple workaround.  Add this line to Y:
>>   alias super.foo foo;
>>
>> Yep, alias the superclass's method into the current class's scope.  The problem has something to do with the lookup rules.  At the very least, I would think that declaring Y.foo with the 'override' attribute might give the wanted behavior, since surely a int(int) couldn't override a int(), yes?  But no, last I checked, it doesn't work either.
>>
>> -- Chris Nicholson-Sauls
> 
> That's not a workaround, nor is this considered a problem. This is the intended behavior.
> 

Intended, yes.  Wanted?  Not by all.  (Which is why it keeps coming up again, and again, and again... ad nauseam.)  I haven't personally scanned the specs to see if it's at least now explicitly shown as the way to do this; if it isn't, it should be.

-- Chris Nicholson-Sauls
August 01, 2007
Derek Parnell Wrote:

> This is intentional and is due to D's simplistic lookup rules. Basically, D will look for a matching signature only in the class it self and not in any parent classes (its a little more complex than this but ...)

OK, I sort of saw this in the language specification, but I wasn't sure if it covered my case.  (In fact I tried an alias and it failed to compile, but I now see that was because of a syntax error in my file; trying it now works.)

So my concern here is that the given behavior does not gain anything, but just serves as an annoyance. Here is a scenario which I think could realistically happen:

Let's use the same examples of X and Y.  Let's say X, along with providing useful implementations of foo() and foo(int), provides another method bar(), which performs some other useful stuff.  A coder thinks of some new way to do bar(), and so derives a class Y from X, which then provides the new implementation of bar(), but leaves X's implementation of foo() and foo(int) to the base class.

People love the new Y class, and so start using it in all their projects.  Later on, the coder who implemented Y finds another way of implementing foo(int), but thinks foo() cannot be improved.  So he releases a new version of Y which overrides only foo(int).  Now all code that calls foo() on a Y object breaks, because the coder added a foo(int) method: everyone who used Y suddenly finds their code doesn't compile.

This is very counterintuitive.  Not only that, but I could easily see it slipping through unit tests.  Who thinks to unit test a class by calling all the base class members they didn't override?

My point basically is why should a coder be forced to declare an alias when realistically there is no reason they would NOT want to declare the alias?

-Steve
August 01, 2007
Chris Nicholson-Sauls Wrote:


> Intended, yes.  Wanted?  Not by all.  (Which is why it keeps coming up again, and again, and again... ad nauseum.)  I haven't personally scanned the specs to see if its at least now explicitly shown as the way to do this; if it isn't, it should be.

It's sort of in the spec, but the example shows the two overloads differing by parameter types int and long.  In that case, at least one can be implicitly cast to the other, but in my example, there is no doubt which method I intended to call.

-Steve
August 02, 2007
Steve Schveighoffer wrote:
> My point basically is why should a coder be forced to declare an alias when realistically there is no reason they would NOT want to declare the alias?

You've got my vote.

> -Steve
August 02, 2007
On Wed, 01 Aug 2007 17:37:58 -0400, Steve Schveighoffer wrote:
> This is very counterintuitive...

> My point basically is why should a coder be forced to declare an alias when realistically there is no reason they would NOT want to declare the alias?

Oh don't get me wrong, I too think that the current rules are totally daft. I mean, why does one go to the trouble of deriving one class from another if it's not to take advantage of members in the parent class?

I vaguely recall Walter saying that he decided to do it this way because it is easy to implement this rule into the compiler and it makes the compiler run faster. (I am paraphrasing from memory, so I could be totally wrong).

If this is so, then it strikes me that the language is this way in order to make life easier for compiler writers than for D coders, which is just not right in my POV.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
2/08/2007 2:14:09 PM
August 02, 2007
Derek Parnell wrote:
> On Wed, 01 Aug 2007 17:37:58 -0400, Steve Schveighoffer wrote:
>> This is very counterintuitive...
> 
>> My point basically is why should a coder be forced to declare
>> an alias when realistically there is no reason they would
>> NOT want to declare the alias?
> 
> Oh don't get me wrong, I too think that the current rules are totally daft.
> I mean, why does one go to the trouble of deriving one class from another
> if its not to take advantage of members in the parent class?
> 
> I vaguely recall Walter saying that he decided to do it this way because it
> is easy to implement this rule into the compiler and it makes the compiler
> run faster. (I am paraphrasing from memory, so I could be totally wrong).
> 
> If this is so, then it strikes me that the language is this way in order to
> make life easier for compiler writers than for D coders, which is just not
> right in my POV.
> 

... and it's not even consistent with one of D's design rules:

"If something looks similar to C++, it should behave similarly to C++."

I think that breaking this rule is the main source of confusion here. In my opinion it would be much better for D to implement inheritance in a way similar to other C-like languages.

Regards
Marcin Kuszczak
Aarti_pl