September 03, 2001
Axel Kittenberger wrote:
> 
> >> > Private Inheritance: You don't get ISA, but you get implementation and
> >> > data
> >> > Only your closest 'friend' know where you got it
> >>
> >> Hmmm, but what is the real difference here to HASA?
> >
> > It says the derived class has all the same HASA relationships as the parent, without manually cutting and pasting code and forwarding method calls.  On the other hand, a pointer to the parent class cannot be assigned a reference to the child class.  It is for code reuse, but not polymorphism.
> 
> If it's private, which calls should be forwarded? Is there any good example where private inheritance really makes sense, as opposed to having a normal private field (HASA)?

	Nothing is coming to mind.  I'm pretty sure I've had good uses for
private inheritance in previous projects, but not many.  It would be a
lot more useful if the methods retained their access classification and
you just didn't get the ISA relationship from the base class.
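
For what it's worth, the mechanics being compared look like this in C++ (a contrived Stack-on-vector sketch, just to show the shape of the feature, not offered as one of those good uses):

#include <vector>

class Stack : private std::vector<int> {    // implementation reuse, no ISA
public:
    using std::vector<int>::empty;          // selectively re-expose a base member
    void push(int v) { push_back(v); }      // base members are usable inside
    int  pop()       { int v = back(); pop_back(); return v; }
};

// Stack s;
// std::vector<int>* p = &s;   // error: the base is private, so no ISA outside
// s.push_back(42);            // error: not re-exposed, unlike public inheritance

The HASA version would hold the vector as a private member and forward push/pop by hand; private inheritance mostly just saves that forwarding.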

> > Well, I can't say for sure myself.  I don't think it would be expensive though.  The implementation inheritance is just a matter of the compiler doing a cut and paste on the programmer's behalf.  The interface inheritance would be the same as inheriting from a pure virtual base class, even if it isn't pure virtual.  LX was pretty flexible about these things too.  It looked like LX would have performance problems because of some of its features, but this isn't one I expected to cause problems.
> 
> Well, I don't want to be harsh to cristopher, as the same goes for my own project. But how much code is actually written in LX? From my own experience I can tell that a lot of things look good on paper at first, but when you try to work with them, you discover they're 'crippled' or have logical conflicts.

	True, it's a new idea; it just worked better in my head than the styles
of inheritance in C++.  For that matter, I can't remember if LX had
access control on members.

> We'll just have to see how ideas prove themselves in practice.
> It's the old science paradigm that, for example, the Greeks got wrong. They
> only theorized the whole time, without paying attention to the
> experiment. I think basically the same ideology should be maintained in a
> programming environment as well: experiments tell what's true or false, not
> theories.

	True, but it is good that there are still folks that are theorizing.
:-)

> > They simply recognized the C++ MI implementation had
> > problems, so they lopped off a leg to fix a problem in the foot.
> > Now let me get back to those divide by zero problems.  You know a naval
> > ship was dead in the water as a result of a divide by zero.  The
> > division operator is just too dangerous.  Besides, we can bit shift.
> > It's the same as dividing by powers of two without all the dangers.
> 
> Yup, that's a good argument: MI has its dangers, the division operator has its dangers. But both have their advantages; now you can scrap both and work around them, or you can live with the danger. Division by zero is the result of another bug somewhere, not a problem in itself.
> 
> I have a nicer example, the Ariane rocket. It had an integer overflow, and
> suddenly changed its mind from:
> "I want to go up, I want to go up, I want to go up"
> to "I want to turn around and thrust down", until the computer decided to
> explode the rocket.

Geez!  Maybe we should just use variable length text to represent numbers.  Of course that still doesn't fix divide by zero.  I still say we drop it.

Dan
September 07, 2001
Why not use another method for late method binding?

Multiple inheritance in C++ is so clumsy because of its use of virtual tables.  There is another method, used in Objective-C: dispatch tables, also known as selector tables.

Objective-C is a loosely typed language that supports a completely anonymous object type (id).  But this is done just to make it more similar to Smalltalk.  It is possible to make a strictly typed language that uses dispatch tables.  [The dispatch table for a class contains entries only for those methods that are defined in that class.  It also contains a link to the superclass's dispatch table.  If the required method is not found in the class's own dispatch table, it is searched for in the superclass's dispatch table, and so on.]  Objective-C doesn't support multiple inheritance, but its model can easily be extended to support it -- just let the dispatch table contain multiple links to superclasses.
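
As a rough sketch of how such a strictly typed dispatch-table scheme could look (the names below -- Selector, Imp, DispatchTable, send -- are invented for illustration, not the real Objective-C runtime structures):

#include <cstddef>
#include <cstdio>
#include <map>
#include <vector>

typedef int Selector;               // unique id per method name/signature
typedef void (*Imp)(void* self);    // a method implementation

struct DispatchTable {
    std::vector<const DispatchTable*> parents;   // several links = multiple inheritance
    std::map<Selector, Imp> methods;             // only the methods defined in this class

    // Look in this class's table first, then walk the superclass links.
    Imp lookup(Selector sel) const {
        std::map<Selector, Imp>::const_iterator it = methods.find(sel);
        if (it != methods.end())
            return it->second;
        for (std::size_t i = 0; i < parents.size(); ++i)
            if (Imp imp = parents[i]->lookup(sel))
                return imp;
        return 0;                                // "does not respond to selector"
    }
};

struct Object {
    const DispatchTable* isa;   // every object points to its class's table
};

void send(Object* obj, Selector sel) {
    if (Imp imp = obj->isa->lookup(sel))
        imp(obj);
    else
        std::printf("object does not respond to selector %d\n", sel);
}

A real runtime would intern selectors from method names and fill the tables when classes are loaded; the point is only that one table per class, plus links to the parents, is enough even with multiple superclasses.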

Dispatch tables are also convenient for the various GUI frameworks that
define polymorphic methods for event handlers.  The common problem is
that there are too many events to be handled.  If you made a virtual
method for each handler, the vtable of every GUI framework class would
grow enormously.  This is why most frameworks don't use virtual methods
for event handlers (for example, Borland VCL and OWL use dynamic
methods, which use exactly the same kind of dispatch tables, and MFC
uses message maps, which are similar to dispatch tables).
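
A sketch of what such frameworks do instead of virtual functions (the Window class, the EventId type and the handler-table layout here are invented; real MFC message maps and VCL dynamic methods differ in detail):

#include <map>

typedef int EventId;                      // e.g. paint, click, resize, ...

struct Window;
typedef void (Window::*Handler)(void*);   // member function taking event data

struct Window {
    // A small per-class table listing only the events this class actually
    // handles, plus a link to the parent class's table -- instead of one
    // huge vtable slot for every possible event.
    struct HandlerTable {
        const HandlerTable* parent;
        std::map<EventId, Handler> handlers;
    };

    virtual const HandlerTable* handlerTable() const = 0;
    virtual ~Window() {}

    void dispatch(EventId id, void* data) {
        for (const HandlerTable* t = handlerTable(); t; t = t->parent) {
            std::map<EventId, Handler>::const_iterator it = t->handlers.find(id);
            if (it != t->handlers.end()) {
                (this->*(it->second))(data);   // call the handler we found
                return;
            }
        }
        // no handler anywhere in the chain: the event is simply ignored
    }
};

The per-class cost is one virtual function plus a table of the handlers the class actually defines, instead of a vtable slot for every event the framework knows about.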

So, why not use dispatch tables instead of virtual tables?

-- iliyap
September 08, 2001
Iliya Peregoudov wrote in message <3B98B78E.7232@mail.ru>...
>So, why not use dispatch tables instead of virtual tables?


Virtual tables have the advantage of fast execution speed.


September 08, 2001
> So, why not use dispatch tables instead of virtual tables?

Dispatch tables only have an advantage for loosely typed languages, no? As far as I understood, they don't bring any benefits for strongly typed languages, where all type conflicts can be determined at compile time, so the compiler can precalculate all the possible calling situations and thus vtables are more effective. After all, for a vtable call you only need to call a function pointer from an array. I searched a little but found nowhere a nice description of how dispatch tables actually work :/

- Axel

September 09, 2001
Axel Kittenberger wrote:
> 
> > So, why not use dispatch tables instead of virtual tables?
> 
> Dispatch tables only have an advantage for loosely typed languages, no? As far as I understood, they don't bring any benefits for strongly typed languages, where all type conflicts can be determined at compile time, so the compiler can precalculate all the possible calling situations and thus vtables are more effective. After all, for a vtable call you only need to call a function pointer from an array. I searched a little but found nowhere a nice description of how dispatch tables actually work :/

A good description of dispatch tables can be found in "Object-Oriented
Programming and the Objective-C Language", Chapter 2, "The Objective-C
Language", under "How Messaging Works".  The book can be downloaded as a
PDF file from
http://www.gnustep.org/resources/documentation/ObjectivCBook.pdf (480K)

And I can't find a nice description of how virtual tables work when
multiple inheritance comes into the scene :/  But (as I understand it)
they require multiple virtual tables per class.  The advantage of
dispatch tables is that we can always put everything in a single table.

Another advantage shows up when using the language for GUI frameworks.
There is no need to invent other means of event transport -- all events
can be mapped to messages.  No more Object Pascal dynamic methods bound
to cm_XXX constants, MFC event handler tables, or Qt signals -- it is
all done using intrinsic language features.
September 09, 2001
Walter wrote:
> 
> Iliya Peregoudov wrote in message <3B98B78E.7232@mail.ru>...
> >So, why not use dispatch tables instead of virtual tables?
> 
> Virtual tables have the advantage of fast execution speed.

They're not as inefficient as they seem.  The Objective-C run-time
system, for example, maintains a method cache for each class.  When
a method is called on an object, it is cached in the object's
class's method cache.  When the method is called again, it is looked up
in the cache.  The method cache is a flat table, so cache lookups are
only a little slower than vtable lookups.
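
As a sketch of the caching idea (the MethodCache name and the slowLookup hook are invented; the real runtime also flushes caches when methods are added or overridden):

#include <map>

typedef int Selector;                    // unique id per method name/signature
typedef void (*Imp)(void* self);         // a method implementation

struct MethodCache {
    std::map<Selector, Imp> cache;       // flat per-class table, filled lazily
    Imp (*slowLookup)(Selector);         // the full class + superclass chain search

    Imp lookup(Selector sel) {
        std::map<Selector, Imp>::iterator it = cache.find(sel);
        if (it != cache.end())
            return it->second;           // hit: a single flat-table lookup
        Imp imp = slowLookup(sel);       // miss: do the expensive search once
        if (imp)
            cache[sel] = imp;            // remember it for next time
        return imp;
    }
};

After the first call of a given method on a class, every later call pays only the flat cache lookup, which is what makes the scheme competitive with vtables in practice.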
September 09, 2001
Iliya Peregoudov wrote:

> Walter wrote:
>> 
>> Iliya Peregoudov wrote in message <3B98B78E.7232@mail.ru>...
> >> >So, why not use dispatch tables instead of virtual tables?
>> 
>> Virtual tables have the advantage of fast execution speed.
> 
> They're not as inefficient as they seem.  The Objective-C run-time
> system, for example, maintains a method cache for each class.  When
> a method is called on an object, it is cached in the object's
> class's method cache.  When the method is called again, it is looked up
> in the cache.  The method cache is a flat table, so cache lookups are
> only a little slower than vtable lookups.

That may be, but why use them if they are just a little slower? What advantage do they bring if weak type binding is not supported at all -- just a little slower for what? As far as I understood, the memory occupied by the one super-table is not less than the sum of all the vtables of all the classes. Why do caching at all if the compiler can already write the precalculated value into the assembler code, as with a vtable?

A vtable call consists of vtable[MAGIC_KEY](arguments), so it is composed of: add vtable + MAGIC_KEY, get the contents of that address, call. That's 3 assembler-level commands. All I have read about dispatch tables was pure hyping of how cool they are and how bad vtables are, but with no facts behind it. By how many assembler instructions is a dispatch call shorter than a vtable call? Or maybe a little example of two or three classes where a dispatch table will take less memory than the 3 vtables?
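
For concreteness, this is roughly what a compiler lowers a virtual call to (a hypothetical Obj class and slot number; real ABIs add details such as this-pointer adjustment under multiple inheritance):

struct Obj;                               // some class with virtual functions
typedef void (*DrawFn)(Obj* self, int x);

void call_draw(Obj* obj, int x) {         // stands for:  obj->draw(x);
    DrawFn* vtable = *(DrawFn**)obj;      // 1. load the vptr stored in the object
    DrawFn fn = vtable[3];                // 2. index the table at the slot known at compile time
    fn(obj, x);                           // 3. make the indirect call
}

A dispatch-table call replaces step 2 with a hash (or cached) lookup, which is where the extra cost comes from.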

- Axel
September 09, 2001
Axel Kittenberger wrote:
> 
> Iliya Peregoudov wrote:
> 
> > Walter wrote:
> >>
> >> Iliya Peregoudov wrote in message <3B98B78E.7232@mail.ru>...
> >> >So, why not use dispatch tables instead of virtual tables?
> >>
> >> Virtual tables have the advantage of fast execution speed.
> >
> > They're not as inefficient as they seem.  The Objective-C run-time
> > system, for example, maintains a method cache for each class.  When
> > a method is called on an object, it is cached in the object's
> > class's method cache.  When the method is called again, it is looked up
> > in the cache.  The method cache is a flat table, so cache lookups are
> > only a little slower than vtable lookups.
> 
> That may be, but why use them if they are just a little slower? What advantage do they bring if weak type binding is not supported at all -- just a little slower for what? As far as I understood, the memory occupied by the one super-table is not less than the sum of all the vtables of all the classes. Why do caching at all if the compiler can already write the precalculated value into the assembler code, as with a vtable?
> 
> A vtable call consists of vtable[MAGIC_KEY](arguments), so it is composed of: add vtable + MAGIC_KEY, get the contents of that address, call. That's 3 assembler-level commands. All I have read about dispatch tables was pure hyping of how cool they are and how bad vtables are, but with no facts behind it. By how many assembler instructions is a dispatch call shorter than a vtable call? Or maybe a little example of two or three classes where a dispatch table will take less memory than the 3 vtables?
> 
> - Axel

The MAGIC_KEY mentioned is the method's index in the vtable, counted from (for example) zero.  When a class inherits from two classes (multiple inheritance), the method indexes overlap each other, so you'll need two vtables.  If the class inherits from classes that have already inherited from other classes (and so on), you'll end up with a great number of vtables for the class.
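
A minimal C++ illustration of the overlap (the class names are invented, and the exact layout is ABI-specific):

struct Printer {
    virtual void print();   // index 0 in Printer's vtable
};

struct Saver {
    virtual void save();    // also index 0, but in Saver's vtable
};

// Document needs one vtable for its Printer subobject and another for its
// Saver subobject: print() and save() both claim index 0, so a single
// zero-based table cannot serve both bases.  A call through a Saver*
// also has to adjust the this pointer to the Saver subobject first.
struct Document : public Printer, public Saver {
    virtual void print();
    virtual void save();
};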

The multiplication of vtables could be avoided if each method in the
system took its own unique MAGIC_KEY.  For example, doSomething(int,
char[]) is assigned 1001 (a system can contain hundreds of classes with
thousands of different methods).  So if a class implements the method
doSomething(int, char[]), it already has a vtable at least 1002 items
long.  Most slots of the vtable will be unfilled.

When using dispatch tables, each method takes its own unique MAGIC_KEY,
called a selector.  But the dispatch table is not a linear array; it is
a hash, where the selector is the key and the method address is the
value.  Moreover, this hash contains only those selector:method pairs
that are defined in the class, not those that are inherited.  The cache
contains only those selector:method pairs that are actually used at
runtime for this class, not all selector:method pairs in the system.

Hash lookups are slower than linear array lookups.  But garbage collection is also not as efficient as manual memory management.  So why are garbage-collected programs faster and less error prone than those using manual memory management?
September 09, 2001
> The MAGIC_KEY mentioned is the method's index in the vtable, counted from (for example) zero.  When a class inherits from two classes (multiple inheritance), the method indexes overlap each other, so you'll need two vtables.  If the class inherits from classes that have already inherited from other classes (and so on), you'll end up with a great number of vtables for the class.

True, I hadn't yet thought about vtables in combination with implementing different interfaces at the same time. So the "problem" even exists with Java-like SI inheritance, not only with multiple inheritance.

> The multiplication of vtables could be avoided if each method in the
> system took its own unique MAGIC_KEY.  For example, doSomething(int,
> char[]) is assigned 1001 (a system can contain hundreds of classes with
> thousands of different methods).  So if a class implements the method
> doSomething(int, char[]), it already has a vtable at least 1002 items
> long.  Most slots of the vtable will be unfilled.

"Most slots of vtable will be unfilled."

That's false. In a vtable system not every function has a globally unique ID. Every function has a unique ID in the vtable of the class that introduced the polymorphic function. If a class has 7 virtual functions, its vtable will be 7 entries long. If a class implements two interfaces, each with 10 different virtual functions, it has 2 vtables with 10 entries each, so that's 20 entries in total. Having 1000 unfilled entries is not true; you can watch C++'s vtables in action in ddd/gdb in the debugger. The tables always had exactly as many entries as the virtual functions I implemented. Why can IDs be duplicated? Because the compiler knows in a call to foo() that this function is in the foo interface, so it can take the ID from the foo vtable.

> When using dispatch tables, each method takes its own unique MAGIC_KEY,
> called a selector.  But the dispatch table is not a linear array; it is
> a hash, where the selector is the key and the method address is the
> value.  Moreover, this hash contains only those selector:method pairs
> that are defined in the class, not those that are inherited.  The cache
> contains only those selector:method pairs that are actually used at
> runtime for this class, not all selector:method pairs in the system.

Ahh, I understand now.... Still, I think the system is pretty good for a loosely bound language. For strongly typed ones, where all possibilities can be precalculated at compile time, I see neither a speed nor a memory gain.

Take a class bork that implements the interfaces cork and dork. cork and
dork each have 10 functions. Then the tables end up using:
dork:   10 entries
cork:   10 entries
bork:   20 entries
        --
        40 entries in total.

Now with a dispatch table imagined to be flat, it would be 20 functions for all 3 classes, so that's 60 entries.

Okay, now with hashing you can save the nulls, so the accounting is per function rather than per class. The 10 functions from dork have to be counted 2 times (for dork and for bork), and the 10 functions from cork are also counted two times (for cork and for bork). So that's, hmmm, 40 entries. Okay, I understand :)

Still, I'm not convinced that the memory requirements justify the hassle of having to hash.

> But garbage collection is also not as efficient as manual memory management.  So why are garbage-collected programs faster and less error prone than those using manual memory management?

Well, this is a whole science in itself. Garbage-collected programs -can- be faster than conventional ones, since a lot of object copying can sometimes be saved. But that is not a given. gcc 3.0 vs 2.95 in particular did not convince me: 3.0 now uses GC internally, but together with all the other changes it compiles far slower than 2.95. However, to say that's due to the GC would be wrong, since it's the sum of a lot of new stuff, more sensitive warning checking, etc.

- Axel


September 09, 2001
>Well, this is a whole science in itself. Garbage-collected programs -can- be faster than conventional ones, since a lot of object copying can sometimes be saved. But that is not a given. gcc 3.0 vs 2.95 in particular did not convince me: 3.0 now uses GC internally, but together with all the other changes it compiles far slower than 2.95. However, to say that's due to the GC would be wrong, since it's the sum of a lot of new stuff, more sensitive warning checking, etc.

I'd be a little surprised if gc makes the compiler slower. Most compilations will not need more than available memory to compile, and if the gc is tuned to that, it will rarely need to actually run a collection.