October 08, 2002
>So was any decision reached about how interfaces
>should work?
Uh, no <g>.  Continuing from my last post...

If Walter implements the "no cover" rule, then overriding an interface method
necessarily requires the derived class to re-specify the interface, thereby
creating a new instance of the interface at that level in the hierarchy.
So the question becomes: does (D)(A)b.foo()==(D)b.foo() when both A and subclass
B specify interface D?

In the current implementation, the answer is "no."  You can only count on "most-derived" interface semantics if you confine yourself to interface references (converted directly from a most derived object reference).  If you want to keep a list of A's (containing both A's and B's), and then you convert to D's on demand, you will only get A's version of D.  I frankly don't find this so bothersome.

What I do care about is that you get a new interface at each level of the hierarchy in which the interface is specified, and that there must be a way to refer to individual interfaces so that they can be passed to external programs. The main advantage of the "interface per level" approach is that one can use interfaces to communicate with external applications (COM) that access different versions of the component (interface) at the same time, while using inheritance for reuse among versions.  This is roughly how interfaces are implemented now. I would hate to lose this in an attempt to switch over to general "most-derived" semantics.

So to some up, either Walter makes interfaces work more intuitively (in a D
sense) and supports interface inheritance _allowing_ overloading without
re-specifying the interface, guaranteeing (A)b.foo()==(D)b.foo(), or Walter
makes covering interfaces _illegal_ when not explicitly specifying the
interface, and keeps everything else as it is--which better supports external
program interfacing.  [Wow, that's quite a sentence =]

I think it really depends on what people want to use interfaces for.  I want to use them to interface to external programs, or as OS calls/callbacks (a la OSKIT) in an embedded system.  This requires multiple, predictable interface definitions at the same time--exactly what we have.

I know! "Let's have a pragma or compiler switch decide between the two". [kidding...]


October 08, 2002
>So to some up...
There's a convincing way to begin your summation. [forgive me the self-reply =]


October 08, 2002
Joe Battelle <Joe_member@pathlink.com> wrote in news:antj7j$que$1@digitaldaemon.com:

>>So was any decision reached about how interfaces
>>should work?
> Uh, no <g>.  Continuing from my last post...
> 
> In the current implementation, the answer is "no."  You can only count on "most-derived" interface semantics if you confine yourself to interface references (converted directly from a most derived object reference).  If you want to keep a list of A's (containing both A's and B's), and then you convert to D's on demand, you will only get A's version of D.  I frankly don't find this so bothersome.

I think I would find it bothersome. <g> For me, and for most of the purposes I would use interfaces for, I would certainly want the most-derived object interface. In my view a B is always a B, even if I cast it to an A, and thus should always present the interfaces implemented on a B class.


> to external programs. The main advantage to the "interface per level" approach is that one can use interfaces to communicate with external applications (COM) that access different versions of the component (interface) at the same time, while using inheritance for reuse among versions.

It seems like this problem could be solved by mapping different internal interfaces to the same external one depending on component version.


> I think it really depends on what people want to use interfaces for.

I guess when I saw that D had interfaces I expected that they would be like Java's interfaces.  The following would be my preferred functionality.

interface D
{
  int foo();
}

class A : D
{
  int foo() { return 1; }
}

class B : A // Note don't need to redeclare D
{
  int foo() { return 2; }
}

A a = new A();
A b = new B();

D da = (D)a;
D db = (D)b;

da.foo(); // returns 1
db.foo(); // returns 2


However I expect that Walter would tell me that he doesn't want to implicitly create a D interface vtable for every class derived from A.

October 08, 2002
First, having one hierarchy declare multiple, yet similar interfaces is a mess. If the public interface hasn't changed, then it's much better if we can continue to declare a single interface implemented differently at different levels in the inheritance [got the spelling right finally =] hierarchy.

>However I expect that Walter would tell me that he doesn't want to implicitly create a D interface vtable for every class derived from A.
I don't think interfaces should be created implicitly.  I think if you want to change the interface, redeclaring it makes perfect sense.  This is analogous to the "inserting a call to super" thread.  I don't think the compiler should be creating new interfaces without the programmer knowing about it.  A seasoned programmer would know in any event with some investigating.  But isn't it just as easy to say to the newbie: "you want to change the interface?  redeclare it for the derived class.  nothing going on behind the scenes.  You want to reference a particular version of the interface you do this: (interface)(class) reference.  You want the most derived interface?  Manipulate interface references directly; don't cast through a base class first."

Having said this, I would like to know from Walter how much work it is to generate "implicit" interfaces every time you overload a base class's declared interface, and how we would refer to these (so as to pass them to external APIs) if they are implicit.


October 08, 2002
In article <antvhr$17e7$1@digitaldaemon.com>, Joe Battelle says...
>
>First, having one hierarchy declare multiple, yet similar interfaces is a mess. If the public interface hasn't changed, then it's much better if we can continue to declare a single interface implemented differently at different levels in the inheritance [got the spelling right finally =] hierarchy.
>
>>However I expect that Walter would tell me that he doesn't want to implicitly create a D interface vtable for every class derived from A.
>I don't think interfaces should be created implicitly.  I think if you want to change the interface, redeclaring it makes perfect sense.  This is analogous to the "inserting a call to super" thread.  I don't think the compiler should be creating new interfaces without the programmer knowing about it.  A seasoned programmer would know in any event with some investigating.  But isn't it just as easy to say to the newbie: "you want to change the interface?  redeclare it for the derived class.  nothing going on behind the scenes.  You want to reference a particular version of the interface you do this: (interface)(class) reference.  You want the most derived interface?  Manipulate interface references directly; don't cast through a base class first."
>
>Having said this, I would like to know from Walter how much work it is to generate "implicit" interfaces every time you overload a base class's declared interface, and how we would refer to these (so as to pass them to external APIs) if they are implicit.

I'm cool with all of that, except that it appears that I have no way to access most-derived interfaces in the current implementation.

I asked point blank about the example in the documentation that states that:

(D)b.foo; // calls A's implementation

That is wrong to me on so many counts that I can't even begin to talk coherently about it.  My major complaint is that A was not mentioned anywhere at all in that code, and B has a valid override of the D interface.  If the above statement is true, it is impossible for me to ever make a generic container via interfaces -- the standard polymorphic example of drawable objects leaps to mind.

Walter stated clearly that the documentation was correct.  I cannot envision how I should understand this system.

Sorry, gotta go to a teleconference...
Mac


October 08, 2002
Joe Battelle <Joe_member@pathlink.com> wrote in news:antvhr$17e7$1@digitaldaemon.com:

> First, having one hierarchy declare multiple, yet similar interfaces is a mess. If the public interface hasn't changed, then it's much better if we can continue to declare a single interface implemented differently at different levels in the inheritance [got the spelling right finally =] hierarchy.

I understand.

> 
>>However I expect that Walter would tell me that he doesn't want to implicitly create a D interface vtable for every class derived from A.
> I don't think interfaces should be created implicitly.  I think if you want to change the interface, redeclaring it makes perfect sense. This is analogous to the "inserting a call to super" thread.  I don't think the compiler should be creating new interfaces without the programmer knowing about it.  A seasoned programmer would know in any event with some investigating.

I don't think this behaviour is as unexpected as the "inserting a call to super" behaviour.  I look at an interface as an inheritable attribute of a class.  If I say class Foo is ISerializable then I expect that Foo and all of its child classes are ISerializable also.  To change the behavior of ISerializable in a child class I override the serialize function of the interface.  I don't think this is unexpected or bad, because I already expect to change the behavior of a child class by overriding functions; does it matter whether they are part of an interface or not?

However, as I already stated, I don't think this is a battle I'm going to win, and I can live with redeclaring the interface in derived classes. I just think it redundant.

> But isn't it just as easy to say to
> the newbie: "you want to change the interface?  redeclare it for the
> derived class.  nothing going on behind the scenes.  You want to
> reference a particular version of the interface you do this:
> (interface)(class) reference.  You want the most derived interface?
> Manipulate interface references directly; don't cast through a base
> class first."

Easier said than done.  Suppose I have multiple hierarchies of classes.  For the purposes of the program I store and manipulate them in containers that hold the base class type.  Now I want to store all of them.  I would like to create an I/O object and pass it to the container's ISerialize interface.  The container in turn will call ISerialize on all its objects.

I think this is a fairly common design pattern but it won't work with interfaces the way they are now.

> 
> Having said this, I would like to know from Walter how much work it is to generate "implicit" interfaces every time you overload a base class's declared interface, and how we would refer to these (so as to pass them to external APIs) if they are implicit.
> 
> 
> 

October 08, 2002
>I'm cool with all of that, except that it appears that I have no way to access most-derived interfaces in the current implementation.
Not true. See below.

>I asked point blank about the example in the documentation that states that:
>
>(D)b.foo; // calls A's implementation
>
>That is wrong to me on so many counts that I can't even begin to talk coherently about it.  My major complaint is that A was not mentioned anywhere at all in that code, and B has a valid override of the D interface.

It does not have a valid override.  That is the whole point!  Walter let overrides cover interface methods without requiring redeclared interfaces, and that causes loads of trouble.  If he had not allowed the override, then B would _necessarily_ have had to redeclare D in order to overload foo, and (D)b.foo would indeed call B's D.  So as long as you manipulate D's without casting through bases, you always get most-derived semantics.


October 08, 2002
>I asked point blank about the example in the documentation that states that:
>
>(D)b.foo; // calls A's implementation
>
>That is wrong to me on so many counts that I can't even begin to talk coherently about it.  My major complaint is that A was not mentioned anywhere at all in that code, and B has a valid override of the D interface.  If the above statement is true, it is impossible for me to ever make a generic container via interfaces -- the standard polymorphic example of drawable objects leaps to mind.
>
>Walter stated clearly that the documentation was correct.  I cannot envision how I should understand this system.
>
>Sorry, gotta go to a teleconference...
>Mac

OK, I'm back from the teleconference.

The only mental model I can come up with is that (D)b tries to "cast" b to a D reference, which it can't do (no implementation for D, since it is just an interface).  So, it casts it to the nearest thing it can find along that chain, which is A.  I can (sort of) buy that from an implementation standpoint. Unfortunately, it completely breaks polymorphism on interfaces, and that raises a consistency issue, because polymorphism works for classes but not for interfaces:

class BASE {}
interface INTERFACE {}
class ONE:BASE, INTERFACE { int foo() {return 1;} }
class TWO:ONE, INTERFACE { int foo() {return 2;} }

BASE b1, b2;
INTERFACE i1, i2;
ONE one;
TWO two;

one.foo(); // returns 1
two.foo(); // returns 2

b1 = (BASE)one;
b2 = (BASE)two;
i1 = (INTERFACE)one;
i2 = (INTERFACE)two;

b1.foo(); // returns 1
b2.foo(); // returns 2, as I would expect

i1.foo(); // returns 1
i2.foo(); // returns 1, because it uses ONE's implementation rather than TWO's.

I'm OK with all of the above except for the last line, which seems completely wrong to me.  But I have been assured that that is what would happen.  It is equivalent to the code from the documentation that does:

(D)b.foo(); // returns 1 from A's implementation

D has been replaced by INTERFACE, and b has been replaced by 'two', to avoid confusion with the BASE class.  If you make those substitutions, the doc code becomes:

(INTERFACE)two.foo();

My code merely uses a storage reference variable, so that what I exactly say is:

INTERFACE i2;
i2 = (INTERFACE)two;
i2.foo();

We can, of course, remove the b1/b2/i1/i2 references, and just do the casts inline, but I expect the problem to arise in real world code when an object reference has been implicitly cast due to a function call and is being stored in a variable reference of a base type (standard collection design).

While I can sort of buy the argument that an explicit cast like:

(ONE)two.foo(); // returns 1, because it has been cast to a ONE

will call the lower interface because of the explicitness of the cast, I don't like that behavior when the cast is implicit.  Nor do I like different behavior for implicit vs. explicit casts.  And I don't really buy it even for explicit casts, because that wasn't my intent.  I wanted to call a function that existed in the ONE class, but I wanted the behavior that was appropriate to the object I actually called it on -- two, of class TWO.  That is what polymorphism is for, and it works for classes.  Having it _not_ work for interfaces seems like a very inconsistent and error prone design.  Realistically, there is no reason for the explicit cast as shown, unless it is meant to be something like C++'s dynamic_cast.

I seem to be wandering from my point, sorry...

I don't understand why you would _want_ to call a base class's implementation of an interface member.  The base class is virtually guaranteed not to properly understand the derived class.  If it did, it seems like you wouldn't have overridden that member.  However, there is a lot of programming out there that I am unfamiliar with, so I won't state that you never need to do this.  I would claim that doing so is an exceptional case, where the programmer is very deliberately choosing a base implementation.  As an exceptional case, I think it deserves the special syntax, and common syntax should support polymorphism properly, like it does for classes.

Four thoughts for how to access base class implementations:
b.A::foo(); // from C++, more or less
b.(A)foo(); // from my fevered imagination, as far as I can tell
b.(A.foo)foo(); // seems redundant...
b.(A.foo)(); // kinda fugly

I have no clue how hard the second option would be to parse, either for compilers or for humans.  I prefer it over (A)b.foo simply because the mechanism that supports (A)b.foo is the standard reference cast mechanism, and I don't want it to do this for all of the previous reasons.  The second option casts the member, not the reference.  Of course, syntactically, you are telling it to cast a function into a class reference, which breaks the syntax==semantics rule, which led me to the third option.  However, the third option is quite verbose, and suggests that it would be legal to do:

b.(A.bar)foo();

which would call A.bar, which raises the question of why foo() is even present -- hence option 4.

Really, the only option of those that I like is the first one, but it does bring back the evil C++ scope resolution operator.

Oooh! oooh! oooh!  (Sorry, just had a potential brainstorm)
How would:

b.A.foo();

be?  Treat each interface as an implicit member.  If you are trying to get to a particular interface on an object, you go to that member.  If you want standard "most derived" semantics, you just call it normally:

b.foo();   // calls B's foo()
b.B.foo(); // also calls B's foo(), should you wish to be verbose
b.A.foo(); // calls A's foo()
b.D.foo(); // illegal - D has no implementation, and thus is not included in
           // the "pseudo-member" set of base classes.

In COM terms, that would mean you got things like:

MySurface.DX7_SURFACE.blit(,,,);

And because it goes through the member system, you can even use 'with' to gather up a collection of calls that all go through a specific base interface:

with MySurface.DX7_SURFACE
{
    blit(,,,);
}

vtables/interfaces would not have to exist as specific entities at every level of the hierarchy, and classes that do not provide an overridden implementation would not be considered pseudo-members:

// forgive the shorthand, but this hopefully will make sense in the context
// of what we have been discussing:
interface I
class A:I
class B:A,I // provides special overrides
class C:B   // nothing added/changed here
class D:C,I // provides special overrides

D d;

d.foo;   // OK, uses D
d.D.foo; // OK, uses D
d.C.foo; // illegal -- C does not provide specialized implementation
d.B.foo; // OK, uses B
d.A.foo; // OK, uses A

Would such a system meet all of the requirements for everybody?
1. Polymorphic, "most derived" semantics for all common syntax, whether classes or interfaces.
2. Access to specific interface implementations when necessary, in a clear, understandable, and convenient way.
3. (Reasonably) easy to implement in a compiler.
4. (Reasonably) easy to explain to new programmers.

It seems like it would to me, but I am biased.
Mac


October 08, 2002
>However, as I already stated, I don't think this is a battle I'm going to win, and I can live with redeclaring the interface in derived classes. I just think it redundant.
Don't give up!  There's hope--for I feel the same way <g>.

>Suppose I have multiple hierarchies of classes.  For the purposes of the
>program I store and manipulate them in containers that hold the base class
>type.  Now I want to store all of them.  I would like to create an I/O
>object and pass it to the container's ISerialize interface.  The container
>in turn will call ISerialize on all its objects.

I actually think a better way of doing it is to have a DataModel object aggregate all objects you want to serialize by holding interface references to all those differently-inherited objects.  To serialize you iterate over this list and you automatically get most-derived semantics.  You get your polymorphism without hassle, albeit (possibly) at the cost of another container.

In my own real-world experience, I usually have an object represent a file, and this object naturally aggregates all objects that I want to serialize.  These interface references are most-derived because they are registered by the constructor of the serializable object.

>I think this is a fairly common design pattern but it won't work with interfaces the way they are now.
Yes it will, but it requires that your collections be based on utility (interface)
rather than kind (base class).  I think this leads to better-encapsulated design,
but YMMV.


October 08, 2002
Sorry - while I was typing up my monster-length followup, Joe Battelle clarified an issue for me (that the only reason (D)b.foo fell back to A.foo was because B did not state that it was overriding the D interface, so B's foo is apparently part of B, but not part of the D interface to B).

Given that, I pitch my vote in on the "you have to state inheritance from the interface to override any of its methods".

Although I still like the suggestion I ended up with at the end of the
mega-post... (using "pseudo-members" for base interfaces, so that you would do
b.A.foo() if you actually wanted access to A's foo())

Isn't asynchronous communication wonderful?
Mac