September 09, 2001
Walter wrote:

>>Well, this is a whole science in itself. Garbage-collected programs -can- be faster than conventional ones, since a lot of object copying can sometimes be saved. But that is not guaranteed. Especially gcc 3.0 vs 2.95 did not convince me: 3.0 now uses GC internally, but together with all the other changes it compiles far more slowly than 2.95. However, to say that's due to the GC would be false, since it's the sum of a lot of new stuff, more sensitive warning checking, etc.
> 
> I'd be a little surprised if gc makes the compiler slower. Most compilations will not need more than available memory to compile, and if the gc is tuned to that, it will rarely need to actually run a collection.

I know, but gcc has made advertising GC difficult, since it was one of the most famous projects that really used garbage collection, and with this release it turned out to take at least twice as long to compile :/ I understand that the two have nothing to do with each other, but it's a slap in the face for the "marketing". Try to name a really well-known project that runs fast with GC. Tell people gcc 3.0 and you make a fool of yourself. Say Java and people will laugh at you. I don't know of any others, and I understand that the real reasons for the speed penalty lie elsewhere, but try telling people that :/

- Axel
September 09, 2001
Axel Kittenberger wrote in message <9ngagf$1isq$1@digitaldaemon.com>...
>Say Java and people will laugh at you. I don't know of any others, and I
>understand that the real reasons for the speed penalty lie elsewhere, but try
>telling people that :/


I've worked with Java and gc. GC can make a program faster, given:

1) write code that makes use of gc - for example, in C, people frequently copy strings to avoid memory ownership bugs. With gc, copy a reference, not the data (a short sketch follows after this list).

2) close cooperation with the language. This, of course, doesn't work with C++.

3) much temporary generation in C++ can go away, again because of (1).

4) Java programs can be slow because the String class is inefficient, and because in general the language requires a lot more heap allocated objects than C/C++. File I/O is slow in Java, and of course, a poorly implemented JIT or GC will also make it slow <g>.
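
Here is the sketch promised in point (1). It is a minimal illustration written in C++ only because that is what most readers here compile daily; the "with GC" half just assumes *some* collector keeps the shared characters alive (Boehm-style, or a GC'd language such as D or Java) -- it is not a specific D API:

#include <string>

// Without GC: ownership is unclear, so the callee makes a defensive copy.
struct TitleCopied {
    std::string title;                       // copies all the characters
    explicit TitleCopied(const std::string& t) : title(t) {}
};

// With GC: the data stays alive as long as anyone can reach it, so the
// callee can simply keep the reference and copy nothing.
struct TitleShared {
    const char* title;                       // copies only the pointer
    explicit TitleShared(const char* t) : title(t) {}
};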


September 10, 2001
Axel Kittenberger wrote:
> 
> Take the class bork that implements the interfaces cork and dork. cork and
> dork each have 10 functions. Then the tables end up using:
> dork:   10 entries
> cork:   10 entries
> bork:   20 entries
>         --
>         50 entries in total.
> 
> Now with a dispatch table imagined to be flat, it would be 20 functions for 3 classes, so that's 60 entries.

A vtable contains entries for all of a class's methods, i.e. both those inherited and those newly defined in the class.  Some of the inherited methods can be overridden -- this doesn't increase the vtable size.  The vtable size only increases when new methods are added.

In the current D spec each class also defines an interface.  Each class MUST inherit from one class and MAY implement any number of interfaces.  Now imagine a class A that inherits from B.  B inherits from C and implements D and E.

  +-----+    +-----+    +-----+
  |  C  |    |  D  |    |  E  |
  +-----+    +-----+    +-----+
     |inher     |          |
  +-----+       |          |
  |  B  |<-impl-/<-impl----+
  +-----+
     |inher
  +-----+
  |  A  |
  +-----+

vtables for class A:

  vtable1 - contains new A + new B + all D + all E + all C methods,
  vtable2 - contains all D methods,
  vtable3 - contains all E methods.

vtable1 implements interfaces A, B, C.  vtable2 implements D interface. vtable3 implements E interface.

vtables for class B:

  vtable1 - contains new B + all D + all E + all C methods,
  vtable2 - contains all D methods,
  vtable3 - contains all E methods.

vtable1 implements interfaces B and C.  vtable2 implements D interface. vtable3 implements E interface.

Let's imagine that the vtables for classes C, D, E are trivial -- just one vtable each -- since C, D, E are root classes.

Now let's imagine that each class implements 10 new methods and count the sizes of the tables:

  E: 10 entries
  D: 10 entries
  C: 10 entries
  B: (vt1=40)+(vt2=10)+(vt3=10)=60 entries
  A: (vt1=50)+(vt2=10)+(vt3=10)=70 entries
all: 160 entries

Now try using dispatch tables.  A class's dispatch table contains entries for newly defined methods and for overridden methods (not for all inherited ones).  So we can calculate minimum and maximum dispatch table sizes.

Minimum size (no inherited methods overridden):

  E: 10
  D: 10
  C: 10
  B: 10
  A: 10
all: 50

Maximum size (all inherited methods overridden):

  E: 10
  D: 10
  C: 10
  B: (inherited=30)+(new=10)=40
  A: (inherited=40)+(new=10)=50
all: 120

The actual size is somewhere in the middle (say, with 50% of inherited methods
overridden):

  (120+50)/2=85

This value depends on what percentage of methods are overridden in a typical project.  For GUI frameworks it is typically much less (about 20%).  So memory consumption is lower than with vtables.

As I already said, another advantage of dispatch tables is that we need only one table per class, not a vtable for each interface the class inherits or implements.  The implementation of such single-table dispatching is clean and easy to understand.  Personally, I don't see how to implement multiple vtables per class.  Should it be vtable[INTERFACE_INDEX][METHOD_INDEX]?  But any future class can potentially implement any interface, so how do you generate consistent INTERFACE_INDEXes?
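
For what it's worth, one common answer (this is roughly how C++ compilers and COM lay things out, not necessarily how D will) is to avoid a global INTERFACE_INDEX altogether: the object carries one vtable pointer per implemented interface, and a reference to interface X simply points at the slot whose vtable describes X.  A hand-rolled C++ sketch, with all names invented for illustration:

struct D_vtbl { void (*dmethod)(void* self); };
struct E_vtbl { void (*emethod)(void* self); };

struct B_object {
    const void*   primary_vptr;   // combined C+B vtable (details omitted)
    const D_vtbl* d_vptr;         // secondary vtable for interface D
    const E_vtbl* e_vptr;         // secondary vtable for interface E
    // ... instance data follows ...
};

// "Casting" a B to its D interface just hands out the address of the D
// slot, so no class ever needs to agree on a numeric interface index:
inline const D_vtbl** as_D(B_object* b) { return &b->d_vptr; }

// A call through such an interface reference; the slot address doubles as
// 'this', and the method body recovers the full object by a fixed offset:
inline void call_dmethod(const D_vtbl** iface) {
    (*iface)->dmethod(iface);
}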
September 10, 2001
Walter wrote:

> 
> Axel Kittenberger wrote in message <9ngagf$1isq$1@digitaldaemon.com>...
>>Say Java and people will laugh at you. I don't know of any others, and I
>>understand that the real reasons for the speed penalty lie elsewhere, but try
>>telling people that :/
> 
> 
> I've worked with Java and gc. GC can make a program faster, given:
> 
> 1) write code that makes use of gc - for example, in C, people frequently copy strings to avoid memory ownership bugs. With gc, copy a reference, not the data.

I know that, but as I said, marketing and technology are two different things;
just put your finger on an open project that runs fast and uses GC :/

> 3) much temporary generation in C++ can go away, again because of (1).

I guess you mean what the compiler generates itself in the background, right? I don't know, some C++ things are a mystery to me, and I believe a lot of constructs are weird because they still wanted to use existing linkers. I believe that if the final linker were aware of some language features, a lot of optimization and simplification could take place here (like really functional inlining, or the whole dynamic-type stuff could be done much more simply if only the final linker would assign IDs, instead of having to output the whole class descriptions into the object file).

> 4) Java programs can be slow because the String class is inefficient, and because in general the language requires a lot more heap allocated objects than C/C++. File I/O is slow in Java, and of course, a poorly implemented JIT or GC will also make it slow <g>.

Java programs are slower because they are still interpreted/JIT compiled. However, I must strongly defend Java here: Sun's HotSpot VM (JDK 1.3), or JRE, or whatever their marketing calls it today, is the fastest VM I've ever seen, and that holds in general, not just for Java. It runs blazingly fast for something that has to JIT-compile / interpret in the background, and today one can even render graphics in a modern Java VM at reasonable speed.

- Axel
September 11, 2001
Axel Kittenberger wrote:
> 
> Walter wrote:

> Java programs are slower because they are still interpreted/JIT compiled. However, I must strongly defend Java here: Sun's HotSpot VM (JDK 1.3), or JRE, or whatever their marketing calls it today, is the fastest VM I've ever seen, and that holds in general, not just for Java. It runs blazingly fast for something that has to JIT-compile / interpret in the background, and today one can even render graphics in a modern Java VM at reasonable speed.

	Java is a dismal sloth where I work.  We had one decrepit machine
running several server processes in C.  Due to politics we switched to
Java, and now each server has to have its own bleeding-edge machine just to
crawl along with insufferable performance.  I think it's more than the string
implementation.
	I like what HotSpot does, but if I may echo the words of another, it's a
shame we don't do more with self-modifying code.  Using HotSpot to make
up for Java's pathetic performance is as bad as the arguments that have
been used to justify micro-kernels.  Any optimization hoops you leap
backwards through to try to make the micro-kernel tolerable could be
applied to the monolith to further humiliate the micro-kernel's (lack of)
performance.
	If we could devise smarter systems for optimizing running native code,
it would help Java and native code alike.  I had a bit of hope for
Transmeta's work, but that does not seem to be their focus.

	OK, so most of the arguments against micro-kernels are based on Mach,
which is about as micro as a gas giant.  Likewise, Java's VM wasn't
designed for performance.  Well, they didn't get it, and everything
since is just an attempted apology.

Dan
October 24, 2001
I've got two words for you.  Virtual and inheritance.

Say you have this situation:

class A
{
}

class B : A
{
}

class C : A
{
}

class D : B, C
{
}

Now how many instances of A are in D?  Two?  Or one... but which one, the one from B or the one from C?  What if both B and C try to use the same A? Will it behave ok, or will it be a bug?  Can you tell the compiler which one to keep, or to merge the two, or to keep both and have duplicate, possibly conflicting data?  How do clients figure out how to get a pointer to the A part of a D?

There is runtime overhead to the solution C++ came up with (virtual inheritance) to allow one to specify that the compiler should merge the A's together.
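
Concretely, a minimal C++ sketch of both answers (nothing D-specific here; D1 and D2 just stand in for the two variants of the class D above): without virtual inheritance you get two A's and must name the path yourself; with virtual inheritance the compiler merges them, paying with an extra indirection in B and C.

struct A { int value = 0; };

struct B  : A {};                 // plain inheritance
struct C  : A {};
struct D1 : B, C {};              // two A's: D1::B::value and D1::C::value

struct Bv : virtual A {};         // virtual inheritance
struct Cv : virtual A {};
struct D2 : Bv, Cv {};            // exactly one shared A

int main() {
    D1 d1;
    // d1.value = 1;              // error: ambiguous -- which A?
    d1.B::value = 1;              // must name the path explicitly
    d1.C::value = 2;

    D2 d2;
    d2.value = 3;                 // unambiguous: Bv and Cv share one A
    return 0;
}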

MI really just opens up a whole can of worms that is better left closed.  I've argued the opposite point too, years ago, but lately I really do agree that if you need MI to do something besides exposing interfaces, you would most of the time be better off rethinking your class hierarchy, because it's likely flawed.  It's easy to abuse OOP when you start down a flawed path, and to make it work you have to tie up some loose ends... MI enables both of these things, but they're both unnecessary.  MI is appealing conceptually, until you've had to throw away a few spaghetti projects because you couldn't disentangle them.

A way to auto-generate forwarding functions would be nice though; it would save a lot of typing.  Maybe as part of the inheritance mechanism you could say you want to implement an interface, but have all functionality requests *by default* be routed to a member object which exposes that same interface.  Of course you could then override any of them you cared to... that would enable some very MI-like objects, but it's all has-a, not is-a; much less complicated.
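
For illustration, here is the manual version of that idea in plain C++ today (the names are made up; the proposal above is essentially to have the compiler write the forwarding stubs for you):

struct Printer {                            // the interface to expose
    virtual void print(const char* s) = 0;
    virtual ~Printer() {}
};

struct ConsolePrinter : Printer {
    void print(const char* s) override { (void)s; /* write s somewhere */ }
};

struct Report : Printer {                   // is-a on the outside...
    ConsolePrinter impl;                    // ...has-a on the inside
    void print(const char* s) override { impl.print(s); }   // forwarding stub
};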

Sean

"Eric Gerlach" <egerlach@canada.com> wrote in message news:3B8E33D8.3060502@canada.com...
> >  I believe the D way is code duplication.  I personally don't like this,
> > but it was a design decision in order to prevent the inheritance mess
> > that C++ owes us an apology for.  I know C++'s syntax and semantics are
> > rather nasty and I hear it makes the compiler internals a mess so I
> > can't complain if MI is left out.  I just dislike how some folk pretend
> > that there isn't a loss of functionality without MI.
> > I don't know if D will support MI, since there has been a strong
> > reaction against it, but if you know of a better way that it could be
> > implemented without sacrificing the other design goals of D (including
> > an easy-to-implement compiler), I'd love to hear it.  Since the time when
> > I only knew Fortran 77 and Commodore BASIC, I've wanted MI functionality.
> > I just didn't know what it was called until I learned C++.
>
> I was thinking about this this morning... you *do* lose a nice feature without MI.  The real reason, I think, is that it's a bitch to get it right.  There are so many cases of conflicting functions... how do you resolve them and know which function to call?
>
> Suppose classes A and B both define foo().  If C inherits from both A and B, which one does it inherit?  What if foo() in A and B were redefined from a common ancestor, Z?  What if one of them is final?  There's just too much to think about.
>
> Here's my thought:  What about allowing MI, as long as there are *no* conflicts?  None.  Whatsoever.  If there are, it's an error.  That allows the useful part of MI without getting into the mess.
>
> Now, if anyone has a good way of resolving conflicts (I hear Eiffel is good at that) I think we'd all be willing to hear it.  But disallowing conflicts shouldn't affect the compiler too much (it just fills in the vtable slot it was going to leave blank), and it gives us a nice feature to play with.
>
> Comments on that?
>
> Eric
>


October 24, 2001
I've always thought that if you divide by zero you should get infinity (if the dividend was positive) or negative infinity (if the dividend was negative).  With modulo by zero the answer would always be zero.  I'm not so sure I agree with my old line of thinking these days though.  It's probably better to organize the code to prevent division by zero than to respond to it after the fact (it may save a division, but it requires some comparisons first, as well as some knowledge of the limitations of your floating point representation).
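
For reference, a small C++ sketch of what you actually get today, assuming IEEE 754 floating point (which is what D and most C/C++ targets use); integer division is the case that genuinely needs the up-front test:

#include <cstdio>

int main() {
    double pos = 2.0, neg = -2.0, zero = 0.0;

    std::printf("%f\n", pos / zero);    // inf   (the old intuition above)
    std::printf("%f\n", neg / zero);    // -inf
    std::printf("%f\n", zero / zero);   // nan   (the genuinely undefined case)

    // Integer division by zero traps on most hardware and is undefined
    // behaviour in C/C++, so it has to be tested for beforehand:
    int a = 2, b = 0;
    if (b != 0)
        std::printf("%d\n", a / b);
    return 0;
}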

I just don't know, I'm so used to DBZ by now it doesn't bother me anymore.

Sean


> Geez!  Maybe we should just use variable length text to represent numbers.  Of course that still doesn't fix divide by zero.  I still say we drop it.
>
> Dan


October 24, 2001
Sean L. Palmer wrote:

> I've always thought that if you divide by zero you should get infinity (if
> the dividend was positive) or negative infinity (if the dividend was
> negative).  With modulo by zero the answer would always be zero.  I'm not so
> sure I agree with my old line of thinking these days though.  It's probably
> better to organize the code to prevent division by zero than to respond to
> it after the fact (it may save a division, but it requires some comparisons
> first, as well as some knowledge of the limitations of your floating point
> representation).

Mathematically a division by zero does not give infinity; it is undefined.  A division by _nearly_ zero gives _nearly_ infinity.

Take the following identity:
    from:  a / b = c   --it follows that-->  a = b * c
I guess that's a pretty *solid* statement :o)

Now take, for example, the proposal:
   2 / 0 = infinity
then it would have to follow that
   0 * infinity = ? 2 ?
---
See? Infinity is not quite right.

Maths can calculate with _nearly_ zero and _nearly_ infinity, as in

               a * sin(x)
   lim         ----------  =  a
  x -> 0           x

which gives a mathematically defined result, but then we're doing programming here, not hardcore maths :o)
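
Just to make that concrete, a quick numerical check (a C++ sketch, taking a = 2): the quotient approaches a as x shrinks toward zero, even though at x == 0 the expression itself is 0/0:

#include <cmath>
#include <cstdio>

int main() {
    const double a = 2.0;
    for (double x = 1.0; x > 1e-9; x /= 10.0)
        std::printf("x = %g   a*sin(x)/x = %.12f\n", x, a * std::sin(x) / x);
    return 0;
}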

> Geez!  Maybe we should just use variable length text to represent numbers.  Of course that still doesn't fix divide by zero.  I still say we drop it.

Division by zero is an error, and should raise an exception, a hardware trap, etc.

October 24, 2001
"Sean L. Palmer" wrote:

> I've got two words for you.  Virtual and inheritance.

[ ... snip ... ]

AMEN!

Jan


October 24, 2001
Axel Kittenberger wrote:

> Maths can calculate with _nearly_ zero and _nearly_ infinity, as in
>
>                a * sin(x)
>    lim         ----------  =  a
>   x -> 0           x

Also remember that you can have "positive zero" and "negative zero" when dealing with limits:

               abs(x)
   lim         ------  =  1
  x -> 0+        x

               abs(x)
   lim         ------  = -1
  x -> 0-        x
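
Incidentally, IEEE 754 floating point (used by D and most C/C++ targets) carries exactly this distinction around at run time as "signed zeros" -- a tiny C++ sketch, nothing more:

#include <cstdio>

int main() {
    double pz = +0.0, nz = -0.0;
    std::printf("%f  %f\n", 1.0 / pz, 1.0 / nz);   // inf  -inf
    return 0;
}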

--
The Villagers are Online! villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]