Thread overview
An interesting read: Scalable Computer Programming Languages
Jul 26, 2003
Andrew Edwards
Jul 26, 2003
Helmut Leitner
Jul 26, 2003
Ilya Minkov
Aug 17, 2003
Walter
Jul 27, 2003
Sean L. Palmer
Aug 17, 2003
Walter
Jul 27, 2003
DeadCow
Jul 27, 2003
Ilya Minkov
Jul 28, 2003
Bill Cox
Jul 28, 2003
Ilya Minkov
Jul 29, 2003
Bill Cox
Jul 29, 2003
Sean L. Palmer
Jul 29, 2003
Ilya Minkov
Aug 07, 2003
Ilya Minkov
Aug 17, 2003
Walter
Aug 17, 2003
Walter
Aug 18, 2003
Mike Wynn
Jul 27, 2003
Helmut Leitner
Jul 26, 2003
Matthew Wilson
Jul 26, 2003
Matthew Wilson
Jul 28, 2003
Matthew Wilson
Jul 28, 2003
Matthew Wilson
Jul 28, 2003
Sean L. Palmer
Sep 08, 2003
Walter
Sep 09, 2003
Matthew Wilson
Jul 27, 2003
Mark Evans
Jul 27, 2003
Andrew Edwards
July 26, 2003
http://www.cs.caltech.edu/~mvanier/hacking/rants/scalable_computer_programming_languages.html


July 26, 2003

Andrew Edwards wrote:
> 
> http://www.cs.caltech.edu/~mvanier/hacking/rants/scalable_computer_programming_languages.html

It *is* interesting, but basically the view of a teacher.

--
Helmut Leitner    leitner@hls.via.at Graz, Austria   www.hls-software.com
July 26, 2003
"Helmut Leitner" <helmut.leitner@chello.at> wrote in message news:3F226DD2.8CDD04DA@chello.at...
>
>
> Andrew Edwards wrote:
> >
> >
> > http://www.cs.caltech.edu/~mvanier/hacking/rants/scalable_computer_programming_languages.html
>
> It *is* interesting, but basically the view of a teacher.
>
> --
> Helmut Leitner    leitner@hls.via.at
> Graz, Austria   www.hls-software.com


I am sorry to say this, but the person that wrote this article is largely ignorant. Here is a piece of it:

"There is a cost to GC, both in time and space efficiency."

A GC that uses a thread is surely not a good solution, since GC can be done without threads.

"Well-designed garbage collectors (especially generational GC) can be
extremely efficient"

Java sucks speedwise. Show me a real-life language with good garbage collection that does not hamper performance.

" (more efficient, for instance, than naive approaches such as reference
counting). "

But GC uses reference counting. If it did not, how will the GC mechanism know whether something is referenced or not ? Furthermore, I don't see why reference counting is bad. Even with cyclic references, objects can be manually deleted (and thus break the cycle).

"However, in order to do this they tend to have significantly greater space usages than programs without GC (I've heard estimates on the order of 50% more total space used)."

I don't know about them, but I have done a complete C++ framework with reference counting and without any memory problems. Each object gets one more int. How is that memory consuming ?

"On the other hand, a program that leaks memory has the greatest space usage of all. I've wasted way too much of my life hunting down memory leaks in large C programs, and I have no interest in continuing to do so".

This is because he did not use proper software engineering techniques. And
he did not use C++ (or D ;-)).

"However, the reverse is also often true; many program optimizations (normally performed automatically by the compiler) are rendered much more difficult or impossible in code that uses pointers. In other words, languages that enable micro-optimizations often make macro-optimizations impossible. "

I've heard this a lot of times. But no one cares to put up an example. Until I see a real-life example of pointers doing bad speedwise, I'll believe otherwise.

"The author of the Eiffel language, Bertrand Meyer, has said that (I'm paraphrasing) "you can have pointer arithmetic, or you can have correct programs, but you can't have both". I agree with him. I think that direct memory access through pointers is the single biggest barrier to programming language scalability. "

But there are languages that are extremely powerful, using pointers, but they have no memory leaks. Take ADA, for example. A fine example of a programming language. It uses pointers, but constrained.

"The usual reason why many programmers don't like static type checking is that type declarations are verbose and detract from the purity of the algorithm. "

Yes, but if you come back to the code a year later, which one makes more sense ?

x = 0;

or

double x = 0;

I like the C type system. Automatic type inference may save a few strokes, but it makes compiling slower and it prohibits me from quickly catching what a piece of code does (by eye scanning). Imagine being in the middle of a two-page algorithm with no types!!! hell!!!

"The relative virtues of static versus dynamic type checking are one of the great holy wars amongst computer language researchers and users".

Well, in real-life (and not in a research lab), static checking wins every
time.

"This feature is so useful that almost all new languages incorporate it. Interestingly, exceptions also work much better in the presence of garbage collection; avoiding memory leaks in a language like C++ that has exception handling but has no GC is quite tricky (see Scott Meyers' books Effective C++ and More Effective C++ for an extensive description of this issue). This is yet another argument for garbage collection (as if we needed one). "

Nope, it's not tricky. It's much more deterministic. And if your objects are coded the right way, there is no memory leak, since stack unwinding will release resources properly.

Speaking of the stack, languages with no object allocation on the stack Suck
(*cough* Java *cough*).

Oh come on now. Please. A program is as good as its engineers are. NASA writes programs in C (or even in assembler!!!), which the author calls a "primitive language", yet the bug percentage is less than 1%!!! On the other hand, I've seen programmers (newbies) make Java crawl, simply because they must allocate everything, even the smallest integer objects, on the heap!!!

(and before somebody jumps at me, let me tell you that I am a software engineer for a small company that does defense subcontracting for THALES (ex Thomson) and TRS (THALES RAYTHEON), and I have quite a lot of experience in C, C++, Java and ADA. Java being better than C++ is a myth. Its only advantage is the "write once, run everywhere" and the huge array of classes already made. But this has nothing to do with the language itself).



July 26, 2003
Achilleas Margaritis wrote:
> I am sorry to say this, but the person that wrote this article is largely
> ignorant. Here is a piece of it:

Like most people. :)

> "There is a cost to GC, both in time and space efficiency."
> 
> A GC that uses a thread is surely not a good solution, since GC can be done
> without threads.

If a GC is to cope with a threaded environment anyway, it better be a thread. Thus it can only pause one or two threads and leave the rest running.

> "Well-designed garbage collectors (especially generational GC) can be
> extremely efficient"
> 
> Java sucks speedwise. Show me a real-life language with good garbage
> collection that does not hamper performance.

Examples:
 * C with Boehm GC
 * OCaml

A very minor slowdown. Not comparable with that of Java. About 10% slowdown with all-scan option, and almost no slowdown if you hand-tune the allocation type for each allocation. Like "no pointers", "don't delete", "pointers in the first x bytes only", and so on.

> " (more efficient, for instance, than naive approaches such as reference
> counting). "
> 
> But GC uses reference counting. If it did not, how the GC mechanism will
> know if something is referenced or not ? 

No, it doesn't. A GC tracks allocation of all objects, and whenever the time comes it scans the stack for pointers to allocated objects. These are in turn scanned for pointers. Each object which the GC comes across in this process is marked as "reachable". Afterwards, all objects which have not been marked can be deleted.

> Furthermore, I don't see why
> reference counting is bad. Even with cyclic references, objects can be
> manually deleted (and thus break the cycle).

Manual refcounting is fast but error-prone. Automated one incurs cost per each assignment and parameter pass at function call. Even the "obvious" cases are not optimised out.

Thus, it turns out that "total" GC is significantly less overhead than "total" reference counting.

> "However, in order to do this they tend to have significantly greater space
> usages than programs without GC (I've heard estimates on the order of 50%
> more total space used)."
> 
> I don't know about them, but I have done a complete C++ framework with
> reference counting and without any memory problems. Each object gets one
> more int. How is that memory consuming ?

I haven't read the article, but i believe this means the programs without refcounting. You usually allocate your data in a pool or some other structure. That means, that if aliasing is possible, you cannot reliably say when to delete a single value. You can only delete a pool after the job is done. Another strategy is to forbid aliasing, and thus waste some space. It's a decision to make - depending on what is cheaper. Can't say that a memory overhead is always evil. A GC by itself may consume large amounts of memory.

> "On the other hand, a program that leaks memory has the greatest space usage
> of all. I've wasted way too much of my life hunting down memory leaks in
> large C programs, and I have no interest in continuing to do so".
> 
> This is because he did not use proper software engineering techniques. And
> he did not use C++ (or D ;-)).

D does not use reference counting. :) And if someone pursues a small memory footprint at any cost, that's what he gets - either C++-style refcounting, which tends to be slow if overused, or memory leaks...

> "However, the reverse is also often true; many program optimizations
> (normally performed automatically by the compiler) are rendered much more
> difficult or impossible in code that uses pointers. In other words,
> languages that enable micro-optimizations often make macro-optimizations
> impossible. "
> 
> I've heard this a lot of times. But no one cares to put up an example. Until
> I see a real-life example of pointers doing bad speedwise, I'll believe
> otherwise.

That's a reason why there are e.g. unaliased objects of 2 kinds in Sather.

In Sather, e.g. INT is a library object, however, because it's immutable it works just as fast as C int. And in fact resolves one-to-one to it, with stack storage, copying, and all. You can create your own types which behave like that easily.

> "The author of the Eiffel language, Bertrand Meyer, has said that (I'm
> paraphrasing) "you can have pointer arithmetic, or you can have correct
> programs, but you can't have both". I agree with him. I think that direct
> memory access through pointers is the single biggest barrier to programming
> language scalability. "
> 
> But there are languages that are extremely powerful, using pointers, but
> they have no memory leaks. Take ADA, for example. A fine example of a
> programming language. It uses pointers, but constrained.

How come it doesn't have memory leaks? Sorry, I don't know ADA. Either it uses a kind of automatic memory management, or it *does* have memory leaks. What kind of constraint is there? I have some Delphi experience, and Pascal/Delphi is quite prone to leaks, even if they are not so frequent, be it due to possibilities for better program organisation or similar things.

> "The usual reason why many programmers don't like static type checking is
> that type declarations are verbose and detract from the purity of the
> algorithm. "
> 
> Yes, but if you come back to the code a year after, which one make more
> sense ?
> 
> x = 0;
> 
> or
> 
> double x = 0;
> 
> I like the C type system. Automatic type inference may save a few strokes,
> but it makes compiling slower and it prohibits me from quickly catching what
> a piece of code does (by eye scanning). Imagine being in the middle of a
> two-page algorithm with no types!!! hell!!!

You are right. OCaml manual says something like "no need to state the obvious over and over again", while things are not that obvious. One has to keep track of types anyway, and writing them down just helps. If one doesn't want to encode type into names or comments, one has to rely on some static system.

Sather has a system not too different from other languages, but it has two short-notation forms: you can leave out the name of a constructor if you construct an object of the same type as the variable it is placed in (and not a subtype), which saves typing the same type twice on the same line; and a "::=" type-inference assignment operator, for where the type is obvious to the compiler anyway. There is no real type inference, and this simple plug helps separate long expressions into a few more readable parts. The manual discourages overuse of the latter practice.

> "The relative virtues of static versus dynamic type checking are one of the
> great holy wars amongst computer language researchers and users".
> 
> Well, in real-life (and not in a research lab), static checking wins every
> time.

I don't think dynamic typechecking has any chance in a research lab. And on the contrary: there is such a vast amount of users writing in Perl with its guess-your-type system which couldn't be much worse, that it gets really scary.

> "This feature is so useful that almost all new languages incorporate it.
> Interestingly, exceptions also work much better in the presence of garbage
> collection; avoiding memory leaks in a language like C++ that has exception
> handling but has no GC is quite tricky (see Scott Meyers' books Effective
> C++ and More Effective C++ for an extensive description of this issue). This
> is yet another argument for garbage collection (as if we needed one). "
> 
> Nope, it's not tricky. It's much more deterministic. And if your objects are
> coded the right way, there is no memory leak, since stack unwinding will
> release resources properly.

Have you read EC++ and MEC++?
Well, it does require some effort and some thought, while GC requires none.

> (and before somebody jumps at me, let me tell you that I am a software
> engineer for a small company that does defense subcontracting for THALES (ex
> Thomson) and TRS (THALES RAYTHEON), and I have quite a lot of experience in C,
> C++, Java and ADA. Java being better than C++ is a myth. Its only advantage
> is the "write once, run everywhere" and the huge array of classes already
> made. But this has nothing to do with the language itself).

C++ being a [paste-anything-here] language is a myth as well. But hey, there have been so many libraries written for it, we can't flush them all down the drain, can we? :) There's no doubt that Java is way more primitive and that the guys have actually missed a chance to make it somewhat better than C++ ... Like, why are there no properties? Quite a surprise for a language which is not performance-centric. And yet many, many things.

[jump!] :>

-i.

July 26, 2003
Academic piffle


July 26, 2003
Well that's a bit strong perhaps.

Had I not answered within 10 minutes of achieving consciousness this morning - before my polite hormones started flowing - I would say that I am always sceptical of any statements asserting one language is better than another.

I believe C++ to be a superior language to Java, though I use it a lot more, so I am probably biased. But I would not choose to use C++ to implement an e-commerce back-end, when J2EE is so simple, ubiquitous and reliable.

I would not use C, C++, C# or Java to write text file processing code. I use Perl or Python (depending on whether I need more powerful regex or want to do a bit of OO in there).

And the list goes on and on.

I've (thankfully) never written a line of COBOL, but no less an authority than Robert Glass says it is still the preeminent language for certain classes of business software, and I believe him. Why? Because no language is perfect, almost all features are useful to someone at some time, and the idea that a single language and its set of features will in any way compare in importance to the intelligence and experience of practitioners is fanciful and does a disservice to us all.



July 27, 2003
I agree with most of your points but...

Achilleas Margaritis wrote:
> I am sorry to say this, but the person that wrote this article is largely ignorant. Here is a piece of it:
> 
> "There is a cost to GC, both in time and space efficiency."

What is wrong with this?

> "Well-designed garbage collectors (especially generational GC) can be
> extremely efficient"
> 
> Java sucks speedwise. Show me a real-life language with good garbage collection that does not hamper performance.

I found the MS Windows implementation (JIT) rather efficient and typically
only about 20-30% slower than comparable C code. I would not use "suck".
But it seems that there were a number of slow implementations where you
had to pay >100%.

> " (more efficient, for instance, than naive approaches such as reference
> counting). "
> 
> But GC uses reference counting. If it did not, how the GC mechanism will know if something is referenced or not ?

Books list 3 main GC methods:
  - reference counting
  - mark / sweep
  - copying GC

> Furthermore, I don't see why
> reference counting is bad. Even with cyclic references, objects can be
> manually deleted (and thus break the cycle).

That's not considered safe. But there seem to be methods to solve the cycle problem of the naive reference-counting implementation.

--
Helmut Leitner    leitner@hls.via.at Graz, Austria   www.hls-software.com
July 27, 2003
"Ilya Minkov" <midiclub@8ung.at> wrote in message news:bfunit$g7v$1@digitaldaemon.com...
> Achilleas Margaritis wrote:
> > I am sorry to say this, but the person that wrote this article is
> > largely ignorant. Here is a piece of it:
>
> Like most people. :)
>
> > "There is a cost to GC, both in time and space efficiency."
> >
> > A GC that uses a thread is surely not a good solution, since GC can be
> > done without threads.
>
> If a GC is to cope with a threaded environment anyway, it better be a thread. Thus it can only pause one or two threads and leave the rest running.

But if it is a thread, it means that for every pointer that can be accessed by the GC, it has to provide synchronization. Which in turn means providing mutex locking for each pointer. Which in turn means entering the kernel a lot of times. Now, a program can have thousands of pointers lying around. I am asking you, what is the fastest way ? To enter the kernel 1000 times to protect each pointer, or to pause a little and clean up the memory ? I know what I want. Furthermore, a 2nd thread makes the implementation terribly complicated. When Java's GC kicks in, although in theory running in parallel, the program freezes.

GC is a mistake, in my opinion. I've never had memory leaks with C++, since I always 'delete' what I 'new'.

>
> > "Well-designed garbage collectors (especially generational GC) can be
> > extremely efficient"
> >
> > Java sucks speedwise. Show me a real-life language with good garbage collection that does not hamper performance.
>
> Examples:
>   * C with Boehm GC
>   * OCaml
>
> A very minor slowdown. Not comparable with that of Java. About 10% slowdown with all-scan option, and almost no slowdown if you hand-tune the allocation type for each allocation. Like "no pointers", "don't delete", "pointers in the first x bytes only", and so on.

But if you have to hand-tune the allocation type, it breaks the promise of "just allocate the objects you want, and forget about everything else". And this "hand-tuning" you speak of is a tough nut to crack. For example, a lot of code goes into our Java applications for reusing objects. Well, if I have to make such a big effort to "hand-tune", I'd rather take over memory allocation and delete the objects myself.

And I am talking again about real-life programming languages.

>
> > " (more efficient, for instance, than naive approaches such as reference
> > counting). "
> >
> > But GC uses reference counting. If it did not, how the GC mechanism will know if something is referenced or not ?
>
> No, it doesn't. A GC tracks allocation of all objects, and whenever the time comes it scans the stack for pointers to allocated objects. These are in turn scanned for pointers. Each object which the GC comes across in this process is marked as "reachable". Afterwards, all objects which have not been marked can be deleted.

It can't be using a stack, since a stack is a LIFO thing. Pointers can be nullified in any order. Are you saying that each 'pointer' is allocated from a special area in memory ? if it is so, what happens with member pointers ? what is their implementation in reality ? Is a member pointer a pointer to a pointer in reality ? if it is so, it's bad. Really bad.

And how does the GC mark an object as unreachable ? It has to count how many pointers refer to it. Otherwise, it does not know how many references there are to it. So, it means reference counting, in reality.

If it does not use any way of reference counting as you imply, it has first to reset the 'reachable' flag for every object, then scan pointers and set the 'reachable' flag for those objects that they have pointers that point to them. And I am asking you, how is that more efficient than simple reference counting (which is local, i.e. only when a new pointer is created/destroyed, the actual reference counter integer is affected).

>
> > Furthermore, I don't see why
> > reference counting is bad. Even with cyclic references, objects can be
> > manually deleted (and thus break the cycle).
>
> Manual refcounting is fast but error-prone. Automated one incurs cost per each assignment and parameter pass at function call. Even the "obvious" cases are not optimised out.

Manual refcounting is error-prone, I agree. Automated refcounting incurs no cost at all, unless you are saying that increasing and decreasing an integer is a serious cost for modern CPUs. Furthermore, not every pointer needs to be reference counted. In my implementation, only those pointers that are concerned with the object's lifetime manage reference counting. Every method that accepts a pointer as a parameter takes a normal C++ pointer. In other words, only member pointers are special pointers that do reference counting. Temporary pointers allocated on the stack do not do reference counting. And there is a reason for it: since they are temporary, they are guaranteed to release the reference when destroyed.

Of course, you may say now that some call may destroy the object and leave the stack pointers dangling. And I will say to you, that it's your algorithm's fault, not of the library's: since the inner call destroyed the object, it was not supposed to be accessed afterwards.

So, as you can see, automated refcounting works like a breeze. And you also get the benefit of determinism: you know when destructors are called; and then, you can have stack objects that, when destroyed, do away with all the side effects (for example, a File object closes the file automatically when destroyed).

>
> Thus, it turns out that "total" GC is significantly less overhead than "total" reference counting.

Nope, it does not, as I have demonstrated above.

>
> > "However, in order to do this they tend to have significantly greater
space
> > usages than programs without GC (I've heard estimates on the order of
50%
> > more total space used)."
> >
> > I don't know about them, but I have done a complete C++ framework with reference counting and without any memory problems. Each object gets one more int. How is that memory consuming ?
>
> I haven't read the article, but i believe this means the programs without refcounting. You usually allocate your data in a pool or some other structure. That means, that if aliasing is possible, you cannot reliably say when to delete a single value. You can only delete a pool after the job is done. Another strategy is to forbid aliasing, and thus waste some space. It's a decision to make - depending on what is cheaper. Can't say that a memory overhead is always evil. A GC by itself may consume large amounts of memory.

If the working set is not in the cache, it means a lot of cache misses, thus a slow program. Refcounting only gives 4 bytes extra to each object. If you really want to know when to delete an object, I'll tell you the right moment: when it is no more referenced. And how do you achieve that ? with refcounting.

>
> > "On the other hand, a program that leaks memory has the greatest space
usage
> > of all. I've wasted way too much of my life hunting down memory leaks in large C programs, and I have no interest in continuing to do so".
> >
> > This is because he did not use proper software engineering techniques.
> > And he did not use C++ (or D ;-)).
>
> D does not use reference counting. :) And if someone pervades the small memory footprint at any cost that's what he gets - either C++-style refcounting which tends to be slow if overused, or memory leaks...

As I told earlier, the trick is to use refcounting where it must be used. In other words, not for pointers allocated on the stack.

>
> > "However, the reverse is also often true; many program optimizations (normally performed automatically by the compiler) are rendered much
more
> > difficult or impossible in code that uses pointers. In other words, languages that enable micro-optimizations often make macro-optimizations impossible. "
> >
> > I've heard this a lot of times. But no one cares to put up an example.
> > Until I see a real-life example of pointers doing bad speedwise, I'll believe otherwise.
>
> That's a reason why there are e.g. unaliased objects of 2 kinds in Sather.
>
> In Sather, e.g. INT is a library object, however, because it's immutable it works just as fast as C int. And in fact resolves one-to-one to it, with stack storage, copying, and all. You can create your own types which behave like that easily.

Real-life programming languages only, please. You still don't give me an example of how initialization fails with aliasing.

>
> > "The author of the Eiffel language, Bertrand Meyer, has said that (I'm paraphrasing) "you can have pointer arithmetic, or you can have correct programs, but you can't have both". I agree with him. I think that
direct
> > memory access through pointers is the single biggest barrier to
programming
> > language scalability. "
> >
> > But there are languages that are extremely powerful, using pointers, but they have no memory leaks. Take ADA, for example. A fine example of a programming language. It uses pointers, but constrained.
>
> How come it doesn't have memory leaks? Sorry, I don't know ADA. Either it uses a kind of automatic memory management, or it *does* have memory leaks. What kind of constraint is there? I have some Delphi experience, and Pascal/Delphi is quite prone to leaks, even if they are not so frequent, be it due to possibilities for better program organisation or similar things.

At first I thought, too, that ADA was similar to PASCAL. Well, it is syntactically similar, but that's about it. Its pointer usage is constrained. For example, you can do pointer arithmetic, but it is bounds-checked. You can't have pointer casting, unless it is explicitly specified as an alias on the stack.

>
> > "The usual reason why many programmers don't like static type checking
is
> > that type declarations are verbose and detract from the purity of the algorithm. "
> >
> > Yes, but if you come back to the code a year after, which one make more sense ?
> >
> > x = 0;
> >
> > or
> >
> > double x = 0;
> >
> > I like the C type system. Automatic type inference may save a few
> > strokes, but it makes compiling slower and it prohibits me from quickly
> > catching what a piece of code does (by eye scanning). Imagine being in the middle of a two-page algorithm with no types!!! hell!!!
>
> You are right. OCaml manual says something like "no need to state the obvious over and over again", while things are not that obvious. One has to keep track of types anyway, and writing them down just helps. If one doesn't want to encode type into names or comments, one has to rely on some static system.
>
> Sather has a system not too different from other languages, but it has two short-notation forms: you can leave out the name of a constructor if you construct an object of the same type as the variable it is placed in (and not a subtype), which saves typing the same type twice on the same line; and a "::=" type-inference assignment operator, for where the type is obvious to the compiler anyway. There is no real type inference, and this simple plug helps separate long expressions into a few more readable parts. The manual discourages overuse of the latter practice.

A cleverer solution would be to have automatic type insertion from the IDE: when I type

'x = 0.0',

the IDE converts it to:

'double x = 0.0'.

After all, it's a typing problem, right ? We are frustrated at having to type things that the computer should understand by itself. But that has nothing to do with what the program should look like.

Here is a little thought about Java's lack of templates, which is related to the problem of going back to the code and instantly realizing what's happening:

in C++, when I go back to the code, I can easily remember what type the 'list' or 'map' held, because it is mentioned in the template parameters. In Java, I can't do that, since everything works at the Object level. So, I have to go back to the point where objects are inserted into the list or map and check the type of object inserted. This has two solutions, neither of which is very elegant: either name the collection after the types it uses, for example:

TreeMap intToStringMap;

or use the Javadoc comments to explicitly note which kind of object the map holds. For example:

/** maps strings to integers */
TreeMap nameIds;

This has another consequence: two different programmers may put different kinds of objects into the same map, and only discover it when the program runs and throws an exception.

This is why explicit statement of types is very important. We should not confuse the 'fast typing' problem with the actual programming language.

>
> > "The relative virtues of static versus dynamic type checking are one of
the
> > great holy wars amongst computer language researchers and users".
> >
> > Well, in real-life (and not in a research lab), static checking wins
every
> > time.
>
> I don't think dynamic typechecking has any chance in a research lab. And on the contrary: there is such a vast amount of users writing in Perl with its guess-your-type system which couldn't be much worse, that it gets really scary.
>
> > "This feature is so useful that almost all new languages incorporate it. Interestingly, exceptions also work much better in the presence of
garbage
> > collection; avoiding memory leaks in a language like C++ that has
exception
> > handling but has no GC is quite tricky (see Scott Meyers' books
Effective
> > C++ and More Effective C++ for an extensive description of this issue).
This
> > is yet another argument for garbage collection (as if we needed one). "
> >
> > Nope, it's not tricky. It's much more deterministic. And if your objects
are
> > coded the right way, there is no memory leak, since stack unwinding will release resources properly.
>
> Have you read EC++ and MEC++?
> Well, it does require some effort and some thought, while GC requires
> none.

Nope, but I don't have any memory leaks in my apps, except when I forget to delete things. But that's my problem. It's an engineering problem, not a language problem.

>
> > (and before somebody jumps at me, let me tell you that I am a software engineer for a small company that does defense subcontracting for THALES
> > (ex Thomson) and TRS (THALES RAYTHEON), and I have quite a lot of
> > experience in C, C++, Java and ADA. Java being better than C++ is a myth.
> > Its only advantage is the "write once, run everywhere" and the huge array
> > of classes already made. But this has nothing to do with the language itself).
>
> C++ being a [paste-anything-here] language is a myth as well. But hey, there have been so many libraries written for it, we can't flush them all down the drain, can we? :) There's no doubt that Java is way more primitive and that the guys have actually missed a chance to make it somewhat better than C++ ... Like, why are there no properties? Quite a surprise for a language which is not performance-centric. And yet many, many things.
>
> [jump!] :>
>
> -i.
>

Nope, it isn't. C++ is the only language that cuts it for me:

1) you always know what is happening. It is deterministic.
2) it has quite straightforward syntax, unlike ADA.
3) supports generics in the best way I have seen (except D of course :-) ).
This is very important.
4) lots of things can be automated, including memory management.
5) supports every programming technique and paradigm God knows.

ADA is too strict, Java sucks, Basic is good for only small projects and
prototyping.
I also have knowledge of ML (and Haskell), although I would not say that it
is a programming language to build large applications with.

All the other languages are good only on a theoretical basis. The only problem I see with C++ is the lack of standard (and free!!!) libraries across different operating systems, especially for the UI.
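Point 3 above refers to templates. A minimal sketch of what C++ generics give you: one definition, instantiated and type-checked per type at compile time (largest is an illustrative name):

```cpp
#include <string>

// One generic definition; the compiler stamps out a fully type-checked
// version for each T it is used with. The only requirement on T is
// that it supports operator<.
template <typename T>
T largest(T a, T b) {
    return b < a ? a : b;
}
```

The same definition works for int, double, std::string, or any user-defined type with operator<, and a call mixing incompatible types fails at compile time.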



July 27, 2003
"Helmut Leitner" <helmut.leitner@chello.at> wrote in message news:3F236F13.B37603FB@chello.at...
> I agree with most of your points but...
>
> Achilleas Margaritis wrote:
> > I am sorry to say this, but the person that wrote this article is largely
> > ignorant. Here is a piece of it:
> >
> > "There is a cost to GC, both in time and space efficiency."
>
> What is wrong with this?
>
> > "Well-designed garbage collectors (especially generational GC) can be
> > extremely efficient"
> >
> > Java sucks speedwise. Show me a real-life language with good garbage collection that does not hamper performance.
>
> I found the MS Windows implementation (JIT) rather efficient and typically only about 20-30% slower than comparable C code. I would not use "suck". But it seems that there were a number of slow implementations where you had to pay >100%.

20-30% is not slow enough? Any slowness that I can notice while working (for example, the program freezing every now and then for a little while) deserves the word 'sucks' for me. Maybe I am too strict, but that's just me.

>
> > " (more efficient, for instance, than naive approaches such as reference
> > counting). "
> >
> > But GC uses reference counting. If it did not, how would the GC mechanism know whether something is referenced or not?
>
> Books list 3 main GC methods:
>   - reference counting
>   - mark / sweep
>   - copying GC
>
> > Furthermore, I don't see why
> > reference counting is bad. Even with cyclic references, objects can be
> > manually deleted (and thus break the cycle).
>
> That's not considered safe. But there seem to be methods to solve the cycle problem of the naive reference counting implementation.
>
> --
> Helmut Leitner    leitner@hls.via.at
> Graz, Austria   www.hls-software.com
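The cycle problem under discussion can be demonstrated with std::shared_ptr, a reference-counted smart pointer from later C++ standards (this thread predates it). The sketch below shows a cycle that pure reference counting never reclaims, and the manual break the post suggests (Node and g_alive are illustrative names):

```cpp
#include <memory>

static int g_alive = 0;  // how many Node objects currently exist

struct Node {
    std::shared_ptr<Node> next;  // reference-counted link
    Node()  { ++g_alive; }
    ~Node() { --g_alive; }
};

// Two nodes point at each other; when the outside references die,
// each node's count stays at 1, so neither is ever destroyed: a leak.
void leakCycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;  // cycle: a <-> b
}

// Same setup, but the cycle is broken by hand before the pointers
// go out of scope, as the post suggests; then both nodes are reclaimed.
void breakCycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;
    a->next.reset();  // manually break the cycle
}
```

Mark/sweep and copying collectors do not have this problem, because they trace reachability from the roots rather than counting references.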


July 27, 2003
"Matthew Wilson" <matthew@stlsoft.org> wrote in message news:bfv0op$oeh$1@digitaldaemon.com...
> Well that's a bit strong perhaps.
>
> Had I not answered within 10 minutes of achieving consciousness this
> morning - before my polite hormones started flowing - I would say that I am
> always sceptical of any statements asserting one language is better than another.
>
> I believe C++ to be a superior language to Java, though I use it a lot more so am probably biased. But I would not choose to use C++ to implement an e-commerce back-end, when J2EE is so simple, ubiquitous and reliable.

You would not use it because it lacks something like J2EE, not because Java is a better language. We have at least to differentiate between 'language', 'libraries' and 'environment'. Although C++ is a better language, it totally lacks the Java 'environment' and the Java 'libraries'.

>
> I would not use C, C++, C# or Java to write text file processing code. I use
> Perl or Python (depending on whether I need more powerful regex or want to
> do a bit of OO in there).

Again a problem of available libraries.

>
> And the list goes on and on.
>
> I've (thankfully) never written a line of COBOL, but no less an authority
> than Robert Glass says it is still the preeminent language for certain
> classes of business software, and I believe him. Why? Because no language is
> perfect, almost all features are useful to someone at some time, and the idea
> that a single language and its set of features will in any way compare in
> importance to the intelligence and experience of practitioners is fanciful
> and does a disservice to us all.
>
>
>


