February 28, 2003
> The Oz book (of the subject line) agrees with that sentiment in a certain way,
> though it would not apportion blame in that puerile, uninformed manner.
> Nonacademics are at fault, and the book is an attempt to help them.  What the
> book blames academics for, if anything, is their failure to communicate results
> in a way 'the masses' can understand.

I find that much (not all) academic research has a research focus, rather than a pragmatic focus (which makes sense to me).  The result tends to be slower languages that would hurt acceptance of my application, which gets benchmarked against the competition for every sale.  Every new concept is naturally pushed by its inventor as hard as possible (publish or perish), regardless of its real value.  There are exceptions, and I'll list Sather as one.

The other thing I find is that experience in both language design and application design is required to be able to put together a new language well.  That's why Walter seems a natural for this.  Frankly, D addresses needs that I only knew I had after years of intensive coding for high-performance commercial applications.

For example, no pointers.  It's counter-intuitive, but every C and C++ application I've worked on where programmers were encouraged to avoid them ran faster than the competing tools.  Programmers are required to typedef away pointers to classes, and many hacks that were used instead of proper modeling with classes go away.

The bar for commercial acceptance of a language feature is different than in academics.  If an idea is too complicated, we can't use it in industry.  There are many such features of good academic languages in this category, and some in C++ that hinder its use.  Most C++ programmers I know don't know about the virtual function table pointer, and use, but don't write, templates.  I was looking at some older C++ code yesterday, and the authors were hopelessly lost in a sea of features they didn't know how to use.  It was a real mess.

> The book explicitly recognizes the constant reinvention of wheels by language
> designers who ignore academic research.  To address that problem, it adopts a
> 'programmer-friendly' presentation of fundamental concepts.

Personally, I appreciate your efforts to enlighten us in this group of academic work, both good and bad.  I've certainly learned a lot.

I'd have to disagree that academics aren't re-inventing the wheel very often.  For example, Sather's "include" construct solves the same problem as "virtual classes", "framework templates", and covariation.  There must be 5 to 10 academic languages out there that re-invented this.

> So you have in Oz the opposite of 'nothing in the language resembles anything in
> any other language you've ever seen.' In fact everything in the kernel language
> resembles everything you've ever seen.  It shows exactly how many wheels have
> been reinvented and names them all.  Next time some hacker trots out one of
> these wheels, and says 'behold a new language,' you can evaluate his work
> objectively.

Ok... I'll read the book!

> The Cult of Originality is a hacker phenomenon.  Academic research is all about
> classification, taxonomy, and discovery.  Academics face vicious peer review --
> the ultimate Darwinian struggle?  Non-academics have cult followings and flame
> wars, with very little technical substance.  Change the syntax a bit, and voila!
> a new language is born.  Add a new feature, and voila! a new cult following.

Hmmm...  When I try to list languages with cult followings, I get some industry born stuff, but mostly academic languages.  I would put the following in this category, and this is just the tip of the iceberg:

Lisp, Scheme, ML and variants, Prolog, APL, Nice, Kiev, Nickel, Eiffel... I probably have some errors here, but the list is very, very long.  These have one thing in common - they're not really quite right for developing most CPU-intensive commercial applications, yet their backers keep pushing them as such.  If I had a dollar for each time I've heard "just get a faster CPU if you need speed" ...

Industry came up with Forth, Perl, and Java.  I hear game hackers came up with C.  There are more, but not so many as from academics.  The common characteristic of these languages is that they are hammers for solving the immediate problems faced by programmers in industry.  Forth is memory-lean.  Perl rocks for simple text manipulation.  Java is portable and good for networking apps.
> No better example than Perl fits the reinvented-wheel category:  pure marketing
> and hackerdom, nothing original in a language sense, and horrid syntax, possibly
> the absolute worst on earth.  I hope for better things of D.  BTW I have plenty
> of colleagues who use Perl at work, and their comments range from 'horrid' to
> 'write-once language'...so even those using it are not always fond of it.

Agreed, at least as far as being a well-designed language goes.  I cringe not only at the language, but at most of the code written in it.

Perl seems to have at least two things going for it:

1) It focuses on a real problem where a solution was needed.
2) It offers the masses every simple feature that they could possibly want.

There's nothing hard to understand in original Perl.  Most features are aimed at saving a few keystrokes (like the <> variable).  Most programmers love that.  The excitement I've seen in programmers' eyes as they read the spec reminds me of my children at Christmas.

> My impressions of Perl wizards is that they have little exposure to better
> languages, and therefore think Perl, and only Perl, offers what it offers.  One
> way to achieve success is to put blinders on the customers so they can't see the
> competition.

I'd have to guess you're still in your youthful 20's.  I envy your idealistic enthusiasm for language design.  The real world is Dilbert Land.

Most Perl wizards aren't ignorant...  They CHOSE to be Perl wizards, and would choose so again knowing everything you know.

Bill

February 28, 2003
"Bill Cox" <bill@viasic.com> wrote in message news:3E5F4733.8070809@viasic.com...
> That's why Walter seems a natural for this.  Frankly, D
> addresses needs that I only knew I had after years of intensive coding
> for high-performance commercial applications.

D is simply the language I always wanted to use <g>.

> Personally, I appreciate your efforts to enlighten us in this group of
>   academic work, both good and bad.  I've certainly learned a lot.

I agree.

> I hear game hackers came
> up with C.

One thing that is never mentioned about C is that I believe the PC is what
pushed C into becoming the mass-accepted language that it is. The reasons are:
1) PCs were slow and small. Writing high performance, memory efficient apps
was a requirement, not a luxury.
2) C was a good fit for the PC architecture.
3) C was the ONLY high level language that had a decent compiler for it in
the early PC days. (The only other options were BASIC and assembler.) Sure,
there were shipping FORTRAN and Pascal compilers for the PC early on, but
the implementations were so truly terrible they were useless (and I and my
coworkers really did try to get them to work).


March 01, 2003
Bill Cox says,

> I find that much (not all) academic research has a research focus, rather than a pragmatic focus (which makes sense to me).

Could you supply 3 examples of each type? Otherwise I'll just dismiss the assertion as an unsupported, vague generalization.

Industry funds much research, as does government. C and C++ came out of a lab from a Ph.D. in mathematics named Dennis M. Ritchie and a Ph.D. in computer science named Bjarne Stroustrup. You could hardly ask for a more academic birthplace.

The dozens of research URLs recently supplied to the D project all have direct applicability. The notion that computer scientists have no interest in practical results, no design sense, or somehow don't listen to industry, is astonishingly silly. I am worn out listening to this kind of thing from D folks. Computer scientists have common sense and sometimes even good taste, along with other skills.

> The result tends to be slower languages that would hurt acceptance of my application, which gets benchmarked against the competition for every sale.

The idea that every new language construct is automatically suspect as a threat to performance is another diatribe I tire of hearing. It ain't so.

Any dynamically typed language will be slower than statically typed languages, but no one is proposing that D go dynamic.

> Every new concept is naturally pushed by its inventor as hard as possible (publish or perish), regardless of its real value.

Again I would appreciate 3 specific examples. My impression of the D folks is that nobody reads any research material to speak of.

Some languages are designed to explore limits of certain paradigms. Even then, you find that CS folks listen and adapt. The wonderful INRIA folks put OO features into SML, and bequeathed to us O'Caml -- a real screamer of a language, right up there with C. Compiled O'Caml gives about 50% the speed of raw C with probably 50 times the expressiveness and complete C/C++ interfacing.

> There are exceptions, and I'll list Sather as one.

Sather is no longer developed, sadly. (What that fact says about its exceptional status in your eyes, I do not know.)

> The other thing I find is that experience in both language design, and application design are required to be able to put together a new language well.  That's why Walter seems a natural for this.

Walter wrote games, but has he designed any languages beyond D? Todd Proebsting has worked on many languages -- his stuff is worth reading. See the Disruptive Languages presentation. He is quite interested in what makes a language "sell." You see, academics are not unconcerned about such things.

> Frankly, D addresses needs that I only [k]new I had after years of intensive coding for high-performance commercial applications.

You're not alone. C/C++ users know their flaws and agonies. That's why I'm here, too. That's also part of the motivation behind Java, C#, ...

> The bar for commercial acceptance of a language feature is different than in academics. If an idea is too complicated, we can't use it in industry.

The fallacy here is that a 'language feature' automatically increases complexity. Features increase expressiveness, or in other words, make programming simpler and less error-prone. True, they make writing a compiler more difficult, but I would rather have the complexity in the compiler than in my code.

The phrase 'commercial acceptance of a language feature' is strange. I picture a software manager talking to subordinates: 'No Bob we won't let you use C++, because it has generics.' 'Sally you can't use O'Caml, because it has recursion.' I don't see that scenario.

A car with five gear ratios and a V8 engine is more capable than one with three gear ratios and a 4-cylinder. Nonetheless you can drive it like the 4-cylinder. The point is to have the extra power on board when needed, and use it only under expert control.

Either type of driving will get you home, but one is more pleasant, more fun, and faster. If you give the car to granny or junior, just tell them to keep it under 25 MPH.

> Personally, I apriciate you're [sic] efforts to enlighten us in this group of academic work, both good and bad. I've certainly learned a lot.

Thanks! Do you care to identify the 'bad' research?

> I'd have to disagree that academics aren't re-inventing the wheel very often.  For example, Sather's "include" construct solves the same problem as "virtual classes", "framework templates", and covariation. There must be 5 to 10 academic languages out there that re-invented this.

The Oz book addresses academics too, you know. Given any problem domain, I expect that every conceivable solution has probably been explored by some academician -- somewhere. That is their job.

> Ok... I'll read the book!

Glad to get a commitment. Many D people have expressed opinions without reading it.

> Hmmm...  When I try to list languages with cult followings, I get some industry born stuff, but mostly academic languages.  I would put the following in this category, and this is just the tip of the iceberg: Lisp, Scheme, ML and variants, Prolog, APL, Nice, Kiev, Nickel, Eiffel...

Er, Nickel has a cult following? Are we certain it has *any* following? Do you have any citations, or is this more guesswork? If we're doing guesswork, then I'm as qualified as you, and will strikethrough all items on your list except perhaps Eiffel.

When I call Perl a cult, I am just echoing its adherents. They admit it! "Because Perl enjoys this social club air, comprehension of the subject is sometimes informed by a certain sense of smugness." (etc.) http://63.236.73.146/Authoring/Languages/Perl/PerlfortheWeb/index22.html

O'Caml and Alice ML are exceptionally pragmatic languages useful for production work. I don't see a cult around them. O'Caml folks use other languages liberally, and in fact that is one of its strengths, foreign function interfacing.

> I probably have some errors here, but the list is very very
> long.  These have one thing in common - they're not really quite right
> for developing most cpu intensive commercial applications, yet their
> backers keep pushing them as such.

I don't buy that. Don't most of these languages offer C interfacing of some kind? Why do you suppose they do that? Because they acknowledge the need for speed.

> If I had a dollar for each time I've
> heard "just get a faster CPU if you need speed" ...

Speed is great, really great, but D folks are too paranoid about the supposed costs of expressiveness. Let's push the envelope on performance, and even make it priority #1 -- fine. Now let's also push the expressiveness envelope as a close second priority.

D should push both sides to their limits, maximizing the language power. My fear is that there is so much narrow attention on #1 that nobody is really worried about #2. In fact there is this paranoid knee-jerk reaction which kicks in, putting #2 in artificial competition with #1 when in fact they can often cohabitate.

We have in D something with better expressiveness than C++. D will always be better than C++. OK. But are we content with a language that is "just" better than C++? I want a language with maximum performance and maximum expressiveness, not just "better than C++" expressiveness. C++ is such an ugly kludge that almost any evolution is better (Java, C#, what have you).

Todd Proebsting's "Disruptive Languages" talk prophesies that the next disruptive language will be slower than its predecessor because what really counts is programmer productivity. Before someone whacks me on the head, I understand that D puts CPU cycles above expressiveness. Todd just happens to think that industry will value expressiveness more. The main point I wish to make is that expressiveness should not be ignored or considered a threat. D has priorities, but should strive to max them all out.

Just because D lets us go "under the hood" doesn't mean D should force us under the hood. That's what expressiveness is all about -- I express the logic in the most appropriate manner. The language gives me tradeoff choices. If I, the programmer, am willing to give up CPU cycles to finish my job faster, then D should permit that to the maximum extent of the law.

> Industry came up with Forth, Perl, and Java.  I hear game hackers came up with C.  There are more, but not so many as from academics.

No, Ph.D. academics working in a research lab came up with C.

> The characteristics of these languages is that they are hammers for solving the immediate problems faced by programmers in industry. Forth is memory lean. Perl rocks for simple text manipulation. Java is portable and good for networking apps.

Academic projects are also started to solve specific problems. The industry workers you admire were trained in academia by the way.

Mark


March 01, 2003
"Mark Evans" <Mark_member@pathlink.com> wrote in message news:b3pg8m$19te$1@digitaldaemon.com...
> > The other thing I find is that experience in both language design, and application design are required to be able to put together a new language well.  That's why Walter seems a natural for this.
> Walter wrote games, but has he designed any languages beyond D?

Yes: ABEL, the Advanced Boolean Expression Language. The language was a big commercial success for Data I/O, and has lasted for 15 years or so. It's now obsolete because the hardware chips it was targeted for are gone.

My experiences, however, are more in implementing languages than designing them.


March 01, 2003
"Walter" <walter@digitalmars.com> writes:
> "Bill Cox" <bill@viasic.com> wrote in message news:3E5F4733.8070809@viasic.com...
>> I hear game hackers came
>> up with C.
>
> One thing that is never mentioned about C is that I believe the PC is what
> pushed C into the mass accepted language that it is. The reasons are:
> 1) PCs were slow and small. Writing high performance, memory efficient apps
> was a requirement, not a luxury.
> 2) C was a good fit for the PC architecture.
> 3) C was the ONLY high level language that had a decent compiler for it in
> the early PC days. (The only other options were BASIC and assembler.) Sure,
> there were shipping FORTRAN and Pascal compilers for the PC early on, but
> the implementations were so truly terrible they were useless (and I and my
> coworkers really did try to get them to work).

I recently visited my university's library and examined some programming language textbooks from the late 1970's and the early 1980's.  Many of them didn't even mention C, and those that did didn't usually consider it to be much of a language.  Algol, Pascal, Cobol and Fortran were the languages of the day, with an occasional side note for Lisp, Prolog and Smalltalk.  Although C had existed from around 1973, it was still a cryptic-looking, system-oriented niche language used only by some UNIX researchers and didn't seem to be of much interest.

-Antti
March 01, 2003
Mark Evans <Mark_member@pathlink.com> wrote in news:b3pg8m$19te$1@digitaldaemon.com:

Hi,

in your last post you often mentioned the term "language expressiveness".

What is this exactly?
Is there any academic definition for it (one that is easily understood)?
How can you measure it? [ the kernel language? :-) ]


Why focus on performance:

I think programmers focus so much on performance because it is easy to
benchmark performance.
When I write some code that runs faster than the code of a fellow
programmer, I can say: "My code is better than yours. It is faster." (Of
course this is true only if performance is the number one goal for the
application to be written.)
The fellow programmer would not be offended, since the statement is
objective. He could learn more about programming and algorithms, rewrite
his code and beat my code.

When I write some code that I think is more expressive than the code of a fellow programmer, I'd better not say: "My code is better than yours. It is more expressive." The fellow programmer would be offended. He would think: "Why should your code be more expressive than mine? He's doing a smear campaign against my coding practices!" The problem is that the two programmers would spend more time arguing about what the most expressive code is than on doing their job.


For similar reasons, performance is often used for marketing campaigns. Performance is usually the trait of software that is easiest to measure and to compare. Price is much harder, due to the complex and different licensing practices <g>.


More comments are embedded.


Farmer

>> The result tends to be slower languages that would hurt acceptance of my application, which gets benchmarked against the competition for every sale.
> 
> The idea that every new language construct is automatically suspect as a threat to performance is another diatribe I tire of hearing. It ain't so.
> 
I guess that computer languages evolved from simple CPU instructions to
higher-abstraction instructions. For simple assembler-like languages, it is
possible to code a reasonably fast implementation of an algorithm without the
help of a profiler.
Fancy new language features *may* have adverse effects on performance,
since they are designed to appeal to man's brain instead of hardware. So,
when I look e.g. at virtual fields in Kiev, I reject this feature until I
know their good points (better support for MI?) and their bad points (some
CPU cycles are likely to be wasted, aren't they?). Could I use virtual fields
in performance-critical code sections? Or should I use virtual fields only
for prototypes/research work?
You may be tired of hearing that stuff. But programmers ask these questions
for good reasons.


>> Every new concept is naturally pushed by its inventor as hard as possible (publish or perish), regardless of its real value.
> 
> Again I would appreciate 3 specific examples. My impression of the D folks is that nobody reads any research material to speak of.

I guess, you are right on this one.

But I also have the impression that researchers don't read any source code material to speak of. So researchers tend to come up with great solutions for problems that are of minor importance for mainstream applications.

> Some languages are designed to explore limits of certain paradigms. Even then, you find that CS folks listen and adapt. The wonderful INRIA folks put OO features into SML, and bequeathed to us O'Caml -- a real screamer of a language, right up there with C. Compiled O'Caml gives about 50% the speed of raw C with probably 50 times the expressiveness and complete C/C++ interfacing.

I wonder what O'Caml's figures for memory consumption are?

>> The other thing I find is that experience in both language design, and application design are required to be able to put together a new language well.  That's why Walter seems a natural for this.
> 
> Walter wrote games, but has he designed any languages beyond D? Todd Proebsting has worked on many languages -- his stuff is worth reading. See the Disruptive Languages presentation. He is quite interested in what makes a language "sell." You see, academics are not unconcerned about such things.

Todd Proebsting's presentation was a great read. I really enjoyed it.


> Walter wrote games, but has he designed any languages beyond D?

"What is an Architect? He designs a house for another to build and someone else to inhabit."

Taken from "How Java’s Floating-Point Hurts Everyone Everywhere" (was posted in another thread).

I'm glad that Walter designs a house for himself to build, and for himself and everybody else to inhabit.


I think that designing games and designing languages have some similar
traits these days:
-It's NOT about how many FEATURES or ORIGINAL IDEAS you put in. The KEY IS
that EVERYTHING fits WELL. Simple games can be fun. More feature-rich games
can be even more fun (just my personal opinion), though just adding
features to games does not make them fun.

-Every designer has access to the same set of features: generics,
interfaces, inheritance, multimethods, static type system, easy interfacing
to C, portability, etc.
So by adding features to the language you cannot gain any significant
advantage over your competitors.

-Market success is only loosely coupled with the superiority of games: even average games can outsell truly outstanding games. This happens because of the designer's/vendor's reputation, hype, mainstream conformance, or simply luck (appearing at the right time).


>> The bar for commercial acceptance of a language feature is different than in academics. If an idea is too complicated, we can't use it in industry.
> 
> The fallacy here is that a 'language feature' automatically increases complexity. Features increase expressiveness, or in other words, make programming simpler and less error-prone. True, they make writing a compiler more difficult, but I would rather have the complexity in the compiler than in my code.

Adding language features does increase complexity. The language spec becomes bigger, book authors will have to write more pages about D, programmers will have to read and understand more pages about D, vendors of development tools (I do not mean compiler vendors here) are faced with more complexity. Simplicity is a language feature by itself. From doing Java programming I learnt that it is a major feature.

> 
> The phrase 'commercial acceptance of a language feature' is strange. I picture a software manager talking to subordinates: 'No Bob we won't let you use C++, because it has generics.' 'Sally you can't use O'Caml, because it has recursion.' I don't see that scenario.

Just imagine : 'No Bob we won't let you use C++, because it has POINTERS.' 'Sally you can't use C#, because it supports GOTOs.'

Actually I believe [but I can hardly remember, so things may be a bit different] that some embedded programming folks made a subset of C++ that banned templates. They said templates would add too much complexity to compiler implementations. Stroustrup could not convince them that templates do not interfere with embedded programming (you don't have to use them, and many C++ compilers already support templates).


> A car with five gear ratios and a V8 engine is more capable than one with three gear ratios and a 4-cylinder. Nonetheless you can drive it like the 4-cylinder.

A car with a V8 engine is more expensive: The engine is more expensive. Because of the car's extra power, extra weight and extra size, the car must also be improved on other parts, like brakes or tires. That makes the car even more costly to manufacture.

I drive a car with 4 cylinders; I don't want to drive one with a V8 engine. It just consumes more fuel but does not increase the car's usefulness. Though I drive in one of the very few countries that have no speed limit on autobahns, I could not save any time to speak of if I drove a car with a V8 engine.

>The point is to have the extra power on board
> when needed, and use it only under expert control.
Consider that I'm not an expert driver; still, I drive my car.

> 
> Either type of driving will get you home, but one is more pleasant, more fun, and faster. If you give the car to granny or junior, just tell them to keep it under 25 MPH.
Junior will nod, then he will accelerate the car to 150 MPH and crash into a tree. Junior is dead.

Worse, your car is broken!

> 
>> Personally, I apriciate you're [sic] efforts to enlighten us in this group of academic work, both good and bad. I've certainly learned a lot.
> 
> Thanks! Do you care to identify the 'bad' research?

The paper "A Critique of C++" from Ian Joyner that you had posted. It's like a sh*tty marketing paper that wears the clothes of science.



>> I probably have some errors here, but the list is very very
>> long.  These have one thing in common - they're not really quite
>> right for developing most cpu intensive commercial applications, yet
>> their backers keep pushing them as such.
> 
> I don't buy that. Don't most of these languages offer C interfacing of some kind? Why do you suppose they do that? Because they acknowledge the need for speed.
Maybe also to reuse existing libraries.
Having to interface with C for performance-critical code is an ugly kludge.
Why not put the ability for speed right into the language instead?


> 
>> If I had a dollar for each time I've
>> heard "just get a faster CPU if you need speed" ...
> 
> Speed is great, really great, but D folks are too paranoid about the supposed costs of expressiveness. Let's push the envelope on performance, and even make it priority #1 -- fine. Now let's also push the expressiveness envelope as a close second priority.

I agree. Sometimes I wish that D would always make safe behaviour the default and fast but bug-prone ways of doing things merely possible.

> 
> D should push both sides to their limits, maximizing the language power. My fear is that there is so much narrow attention on #1 that nobody is really worried about #2. In fact there is this paranoid knee-jerk reaction which kicks in, putting #2 in artificial competition with #1 when in fact they can often cohabitate.
So I suggest that D should include any feature that could increase performance AND (not or) could increase expressiveness.

> 
> We have in D something with better expressiveness than C++. D will always be better than C++. OK. But are we content with a language that is "just" better than C++? I want a language with maximum performance and maximum expressiveness, not just "better than C++" expressiveness. C++ is such an ugly kludge that almost any evolution is better (Java, C#, what have you).
Neither Java nor C# is better than C++. But for many/some/a few applications today, Java or C# is the better choice. Neither language was designed to be a successor to C++.

> 
> Todd Proebsting's "Disruptive Languages" talk prophesies that the next disruptive language will be slower than its predecessor because what really counts is programmer productivity. Before someone whacks me on the head, I understand that D puts CPU cycles above expressiveness. Todd just happens to think that industry will value expressiveness more. The main point I wish to make is that expressiveness should not be ignored or considered a threat. D has priorities, but should strive to max them all out.

Before I read Todd's presentation, I thought it would be better if D put less emphasis on performance. But now I think that D must put strong emphasis on performance to survive against a forthcoming language created by MS that focuses on programmer productivity.

My own prophecy is that the next disruptive language will succeed because of a great library. The speed or expressiveness of the language itself will be of secondary importance. The library will have the greatest influence on programmer productivity and performance.


> Just because D lets us go "under the hood" doesn't mean D should force us under the hood. That's what expressiveness is all about -- I express the logic in the most appropriate manner. The language gives me tradeoff choices. If I, the programmer, am willing to give up CPU cycles to finish my job faster, then D should permit that to the maximum extent of the law.

Yes, it would be nice if D could be flexible enough to provide both.

> Mark
> 
> 
March 01, 2003
Mark Evans <Mark_member@pathlink.com> writes:
> Todd Proebsting has worked on many languages -- his stuff is worth reading.

Indeed.  I found this exploratory article quite interesting:

ftp://ftp.research.microsoft.com/pub/tr/tr-2000-54.ps

"We propose a new language feature, a program history, that significantly reduces bookkeeping code in imperative programs. A history represents previous program state that is not explicitly recorded by the programmer. By reducing bookkeeping, programs are more convenient to write and less error-prone. Example program histories include a list that represents all the values previously assigned to a given variable, an accumulator that represents the sum of values assigned to a given variable, and a counter that represents the number of times a given loop has iterated. Many program histories can be implemented with low overhead."

-Antti
March 01, 2003
I certainly wouldn't want the compiler keeping profiling histories in my code, at least not if they weren't used.  Perhaps the virtual properties would exist only if you tried to use them.

Sean

"Antti Sykari" <jsykari@gamma.hut.fi> wrote in message news:87ptpac4bd.fsf@hoastest1-8c.hoasnet.inet.fi...
> Mark Evans <Mark_member@pathlink.com> writes:
> > Todd Proebsting has worked on many languages -- his stuff is worth reading.
>
> Indeed.  I found this exploratory article quite interesting:
>
> ftp://ftp.research.microsoft.com/pub/tr/tr-2000-54.ps
>
> "We propose a new language feature, a program history, that significantly reduces bookkeeping code in imperative programs. A history represents previous program state that is not explicitly recorded by the programmer. By reducing bookkeeping, programs are more convenient to write and less error-prone. Example program histories include a list that represents all the values previously assigned to a given variable, an accumulator that represents the sum of values assigned to a given variable, and a counter that represents the number of times a given loop has iterated. Many program histories can be implemented with low overhead."
>
> -Antti


March 01, 2003
"Antti Sykari" <jsykari@gamma.hut.fi> wrote in message news:87smu7b1sx.fsf@hoastest1-8c.hoasnet.inet.fi...
> "Walter" <walter@digitalmars.com> writes:
> > One thing that is never mentioned about C is that I believe the PC is
> > what pushed C into the mass accepted language that it is. The reasons
> > are:
> > 1) PCs were slow and small. Writing high performance, memory efficient
> > apps was a requirement, not a luxury.
> > 2) C was a good fit for the PC architecture.
> > 3) C was the ONLY high level language that had a decent compiler for it
> > in the early PC days. (The only other options were BASIC and assembler.)
> > Sure, there were shipping FORTRAN and Pascal compilers for the PC early
> > on, but the implementations were so truly terrible they were useless
> > (and I and my coworkers really did try to get them to work).
> I recently visited my university's library and examined some programming language textbooks from the late 1970's and the early 1980's.  Many of them didn't even mention C, and those that did usually didn't consider it to be much of a language.  Algol, Pascal, Cobol and Fortran were the languages of the day, with an occasional side note for Lisp, Prolog and Smalltalk. Although C had existed from around 1973, it was still a cryptic-looking, system-oriented niche language used only by some UNIX researchers and didn't seem to be of much interest.

You're right. I started programming in 1975, and had heard of all those languages (and many more like bliss, simula, APL) except for C, which I never heard of until 1983.


March 01, 2003
"Farmer" <itsFarmer.@freenet.de> wrote in message news:Xns9331BF31FC6CAitsFarmer@63.105.9.61...
> But I also have the impression that researchers don't read any source code material to speak of. So researchers tend to come up with great solutions for problems that are of minor importance for mainstream applications.

One thing I do bring to the table is 18 years of doing tech support for compilers.


> I'm glad that Walter designs a house for him to build and him and
> everybody else to inhabit.

The interesting thing about my career is that when I've built a product to the marketing department's specifications, or listened to what Bob tells me that Fred wants, the resulting product was invariably a failure. When I designed a product that *I* wanted to use, and put in features that Bob told me that *Bob* needed, the products were a success.


> I think that designing games and designing languages have some similar
> traits, these days:
> -It's NOT about how many FEATURES or ORIGINAL IDEAS you put in. The KEY IS
> that EVERYTHING fits WELL. Simple games can be fun. More feature-rich
> games can be even more fun (just my personal opinion), though just adding
> features to games does not make them fun.

The interesting thing about my game Empire is that it had very few features. You could show someone how to play it in a minute or so. I think that was a large factor in its success: simple rules, but complex play resulting from those rules.

> Adding language features does increase complexity. The language spec
> becomes bigger, book authors will have to write more pages about D,
> programmers will have to read and understand more pages about D, vendors
> of development tools (I do not mean compiler vendors here) are faced with
> more complexity.
> Simplicity is a language feature by itself. From doing Java programming I
> learnt that it is a major feature.

You're right. But there is a downside to it: I think the way Java does inner classes and closures is too complicated. They just aren't a natural extension of the way Java does other things.
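A minimal illustration of that complexity, using only the real `java.lang.Runnable` interface: Java (as it stood at the time, before lambdas) offers anonymous inner classes as its closest substitute for closures. They can capture only final local variables, so mutating captured state requires a workaround such as a one-element array whose reference is final:

```java
public class ClosureDemo {
    public static void main(String[] args) {
        // Anonymous inner class standing in for a closure. The holder-array
        // trick is needed because only final locals may be captured.
        final int[] counter = {0};
        Runnable increment = new Runnable() {
            public void run() { counter[0]++; }
        };
        increment.run();
        increment.run();
        System.out.println(counter[0]); // prints 2
    }
}
```

The ceremony (a named interface, an override, the holder trick) is exactly the kind of non-orthogonal extension being criticized here: the mechanism works, but it doesn't feel like a natural outgrowth of the rest of the language.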

> Actually I believe [But I can hardly remember, so things may be a bit
> different] that some embedded programming folks made a subset of C++ that
> banned templates. They said templates would add too much complexity to
> compiler implementations. Stroustrup could not convince them that
> templates do not interfere with embedded programming. (You don't have to
> use them; many C++ compilers already support templates.)

I remember it too; I think it was called "Embedded C++".

> Having to interface with C for performance critical code is an ugly
> kludge. Why not put the ability for speed right into the language instead?

You're right. What those languages are in effect saying is: "if you want to program in X, you must learn both X and C."

> Neither Java nor C# is better than C++. But for many/some/a few
> applications today, Java or C# is the better choice. Neither language was
> designed to be a successor to C++.

Right. D has an entirely different purpose than C# or Java has.

> Before I read Todd's presentation, I thought it would be better if D put
> less emphasis on performance. But now I think that D must put strong
> emphasis on performance to survive against a forthcoming language created
> by MS that focuses on programmer productivity.

There are a lot of other programming languages that de-emphasize performance to achieve some other attribute. What I'm trying to do with D is show that you can get substantial programmer-productivity improvements over C++ without sacrificing performance. D even promises to be able to generate *faster* code than C++. That is the hook that makes D different and appealing to C++ programmers: they won't be giving up what attracted them to C++ in the first place, performance and control.