Thread overview
Death to C/C++
Feb 04, 2002
D
Feb 04, 2002
Sean L. Palmer
Feb 04, 2002
Alex Vincent
Feb 04, 2002
Alex Vincent
Feb 04, 2002
Russell Borogove
Feb 04, 2002
OddesE
Feb 05, 2002
D
Feb 05, 2002
D
Feb 05, 2002
user
Feb 05, 2002
D
Feb 05, 2002
Pavel Minayev
Feb 05, 2002
D
Feb 05, 2002
Russell Borogove
Feb 05, 2002
D
Feb 14, 2002
Tom Howard
Mar 04, 2002
d
Dec 29, 2005
February 04, 2002
I have just read with great interest the overview of your "D" language, contained within the distribution file dmdalpha.zip and thought I would add my 2 cents.

Well, ok.... 2K of uncommon sense.

Let me start by saying that I loathe both C and C++.  These languages are abominations, abortions, unworthy of existence, a pox on the earth, etc... C fails on so many levels that I don't know where to begin.  Even the standard I/O library is an abomination....

Consider the function "gets".  It takes a special kind of moron to write such a function.  It takes a special class of mindless wonders to actually decide to incorporate such a piece of filth into a "standard" library.  Those who wrote gets in the first place, and those who voted to put it in the standard, are simply too stupid to justify their continued existence.  Even torture and death is an inadequate punishment for their criminal stupidity.  Words cannot convey my loathing for these worthless vermin.
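For the unfamiliar: the danger is concrete.  gets() has no way to learn the size of its destination buffer, so any sufficiently long input line writes past the end of it.  A minimal sketch of the standard workaround, fgets() plus newline stripping (the read_line name is my own invention):

```cpp
#include <cstdio>
#include <cstring>

// gets(buf) cannot see sizeof(buf): a long input line silently
// overruns the array.  fgets() takes the buffer size and stops
// in time, at the cost of leaving the '\n' in the buffer.
bool read_line(char *buf, std::size_t size, std::FILE *f) {
    if (!std::fgets(buf, static_cast<int>(size), f))
        return false;                  // EOF or read error
    std::size_t len = std::strlen(buf);
    if (len > 0 && buf[len - 1] == '\n')
        buf[len - 1] = '\0';           // drop the trailing newline
    return true;
}
```

With this, the caller's buffer size is part of the contract, which is all gets() ever needed to be safe.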

However, having got that off my chest, C/C++ still do have some nice characteristics.  And while the languages themselves are also unworthy of existence, they can be salvaged.  I long for the day when C is replaced by a similarly featured language that solves most of its ample list of failings.

I come from an assembly language background so I am very familiar with how machines operate at the lowest levels.  I'm intimately familiar with pointers, and register sets, memory allocation, and rolling my own functions and syntax as needed.

I appreciate the utility of high level languages, although I am turned off by the pathetic level of optimization they provide.  To this day, compilers still produce code that is 2 to 4 times larger and 2 to 4 times slower than I can produce by hand.  Of course they can produce it a zillion times faster than I can, so in the current environment that is the more significant factor.  I am also turned off by the lack of low level control that most high level languages provide.

C/C++ strike a minimally acceptable balance between low level capability and high level convenience.

Having said that, I recognize that C/C++ fail to provide adequate low level control for many common low level functions.

These limitations are mostly a result of K&R (may they burn eternally in hell) defining a language that appeals to the lowest common denominator among all machines.  I.e., it makes no assumptions about register sizes or the manner in which variable data is represented.  Variables are considered simply abstractly numeric and not necessarily stored internally as a set of binary bits.

Given that C/C++ is used to code I/O drivers and other very low level code, the lack of the concept of a "bit" in the language necessarily makes these programs non-compliant with proper C coding standards, and hence non-portable.  It was foolish of K&R to create a median level language for low level coding while defining it in such a manner as to place the production of low level code outside the sanctioned scope of the language.

Insanity!

As you observe, C/C++ can't be fixed.  At least not while remaining anywhere near compatible.  It needs to be replaced.  You have my vote for "D".  It's a step in the right direction.

By far your best decision has been to abandon the concept of code compatibility with C/C++.  The need to maintain code compatibility with C clearly placed nasty constraints on the development of C++, and as you observe the language "standard" is vastly too convoluted to manage, let alone patch.

As I implied above, in my view C/C++ is a very poor language because its underlying core philosophy is often contradictory and simply wrongheaded.  Consistency is replaced by special cases, irrational behaviour and inconsistent nomenclature.  Not only does this cause the standard to become bloated and confused, but it also makes the language more difficult to use, and the resulting programs more error prone.

There was a time when programs were written like spaghetti.  Programmers, it was observed, didn't have the self discipline to follow rational coding standards.  As a result, the structured programming paradigm was invented in order to provide a consistent approach to writing programs.  The result was higher code quality, greater code simplicity and less error prone software.

The same argument is at the heart of the object model of programming.  Programmers, it was observed, didn't have the self discipline to follow rational coding standards and properly encapsulate the functionality of their programs.  The object programming paradigm imposes, for the most part, the "correct" behaviour on the programmer, or at least points him/her in the proper direction.  Encapsulation is now foremost on everyone's mind.

Odd, I've been encapsulating Assembler functions since before the days of C++.  The justification for encapsulation was clear.  The method was generic and applicable to many other languages.  But not easily applicable to C, because the language is so poorly defined.

The purpose of any language above assembler is twofold.  First, to make it easier for programmers to write code, and second, to make it more probable that the code produced will properly do its intended job.

Portability is often raised as a reason for the existence of high level languages, but this is not a legitimate argument since low level languages can be made to be just as portable as high level ones.  Java byte code and other languages that generate intermediate code or Pcode have repeatedly proven this.  JIT compilers are the final nail in the coffin.  Portability is a feature of any language, and not as it is so often claimed by members of the C/C++ religions, a characteristic of high level languages.

Ease of use and program correctness are related of course.  One pretty much implies the other.  When a language is easy to use, it will typically be easier to spot programming errors.  The opposite is also true.  When it is difficult to write correct programs, a language is generally difficult to use.  It's just common sense.

C is poor because its subtle behavioural anomalies and inconsistencies make it difficult to follow its code.  Type conversions are often hidden, assumptions must often be made about a variable's size and internal representation which are outside of the language's specification, etc.

C is poor because it relies on the conscious effort of the programmer to avoid hidden or concealed pitfalls, rather than avoiding them in the first place through proper ergonomic language design.

The C philosophy is contradictory since, as a higher level language, its reason for existence is convenience.  But it then burdens the programmer with its own irrational and inconsistent behaviours, making the language inconvenient and error prone.

At its very core the C philosophy is contradictory.  It is a structured language (clearly K&R recognized the superiority of structure in the creation of well written programs).  But having recognized this, K&R then foolishly refused to implement many important structural features that are highly desirable, and trivially easy to implement.

Consider that C omits block typing.  Those structured languages that don't descend from C, typically identify different kinds of blocks with different bracketing keywords.  Do/While, If/Endif, Repeat/Until etc.

When reading a block typed program, if the programmer sees the word "until", he/she knows that the statement ends a "repeat" block.

With C on the other hand the same delimiters are used for all blocks.  The bracket that ends a C block could be the end of an if block or any other kind of block.  The programmer is forced, when reading the code, to search back through the program to find which block is being closed.  For convenience, the strategy to combat this language failing is to progressively indent blocks so that the start of a block can easily be identified.  And of course, since C is a free format language, we immediately run into the problem of people having different indentation styles that typically make following code written by another person difficult.

The indent requirement is utter lunacy.  No language should be able to have its meaning made essentially human unreadable through a simple loss of indentation.

Insanity!


The C philosophy is that the language should be self documenting, yet the language provides inadequate syntactic guards, redundancy and signposts that would make it self documenting.

Specifically the omission of keywords like "then" after "if" beg the question "if what?"

Yes, in an "if then" statement the keyword "then" is a redundancy. Redundancy is a very good thing.

Natural language is full of redundancy.  It exists for a purpose.  If this were not so, it would have been evolved out of the language.

Redundancy is maintained through the evolution of a language for a reason. The reason is that it facilitates the conveyance of the intended meaning of what is said.  A person may miss, or misinterpret one key idea in a sentence, but through the language redundancy will often be guided to the intended meaning.

Practical redundancy to enhance readability is a very good thing.  Syntactic minimalism is a wish for the ignorant, and is to be avoided in any rationally defined language.  APL is proof enough of that.

The more cryptic a language the more likely errors will be made.  The C "religion" accepts that C is superior to assembler because the C syntax is more readable and more convenient to use than assembler.  I agree.  However, C dogma then goes on to reject alterations in the syntax that would provide further benefits to readability and every other language convenience on the grounds that they don't add any value to the language itself.  Clearly this dogma is false.

I strongly urge you to move further away from the C syntax, and implement block typing. If Then/End if, While/Whend, Do/Loop, SelectCase/EndCase, Begin/End.

Note that I say SelectCase rather than Switch.  The word switch has no contextual connection to the operation it identifies in C/C++.  In the real world, a switch is a binary device.  It is either on or off.  In the real world a switch does not allow the selection of multiple settings.  Dials, and other kinds of SELECTORS do that kind of thing, and when they make discrete choices, they are composed of <separate> "switches".  The keyword "switch" should therefore be changed to something meaningful, something that describes what is being done.  How about using "Select"?  Seems to work for other languages.

The "switch" statement also has the unfortunate characteristic of not allowing multiple comparisons for each case.  Also unfortunate is the need for the "break" keyword to exit the statement.

Switch would be syntactically cleaner if it allowed multiple entries, and the break keyword is removed.

Select (expression)
   Case a,b,c;
   Case d,e,f;
   default;
end select;

Rather than

switch (expression) {
   case a:
   case b:
   case c:
      break;
   case d:
   case e:
   case f:
      break;
   default:
      break;
}
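In real C the hazard is the silent fall-through: omit one break and execution tumbles into the next group of cases without warning.  A small illustration (the classify function is invented for the example):

```cpp
// Returns 1 for 'a'..'c', 2 for 'd'..'f', 0 otherwise.
// The stacked labels fall through INTO the shared body on purpose;
// the break after each body is what stops execution from falling
// through into the NEXT group.  Delete the first break and
// classify('a') would silently return 2 instead of 1.
int classify(char ch) {
    int group = 0;
    switch (ch) {
        case 'a': case 'b': case 'c':
            group = 1;
            break;
        case 'd': case 'e': case 'f':
            group = 2;
            break;
        default:
            group = 0;
    }
    return group;
}
```

A select/case form with multiple values per case and no break would make the grouping explicit and the bug impossible.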

The scope rules for C and C++ are also inconsistent.  All variables inside a block are visible only within the block.  All variables inside a procedure are visible only within the procedure.  C is inconsistent in that variables and functions that are inside a translation unit or module, (whatever you wish to call it), have global scope by default.

This behaviour is also plainly irrational.

Modules should present <NO> external linkages at all unless those linkages are explicitly indicated.  With the default behaviour set to make all module level definitions global in scope, the programmer is forced to perform extra work in order to code properly.  The unwary end up wasting compile resources and throwing away potential optimizations when procedures are left to the default global scope.

Insanity!  In order to minimize error and make a language convenient, default behaviour should be that behaviour that is most consistent with good programming practice.
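In C and C++ the fix exists but is opt-in, which is exactly the backwards default complained about here: a file-scope name is external unless the programmer remembers to write static.  A sketch (function names are my own):

```cpp
// Without 'static', helper would be visible to every other
// translation unit in the program, whether intended or not.
// With it, the name has internal linkage: private to this file.
static int helper(int x) {
    return x * 2;
}

// Only this function is meant to be the module's interface;
// it is the only name that should get external linkage.
int module_api(int x) {
    return helper(x) + 1;
}
```

The argument in the text is that the defaults should be reversed: private unless exported, not public unless hidden.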

C's method of defining variables is also fundamentally flawed. Not in the syntax used, but in the manner in which the variables are sized.  C/C++ provides no clean and rational mechanism for programmers to know if the variables they will be using are of adequate size or type to hold the data they wish to represent.

C dogma holds that either the programmer write to the lowest common denominator, or use conditional compilation to select appropriately sized variables for the task at hand.

That's not portability.  It's Insanity!

It is an extreme burden for the programmer to constantly have to worry if his program is going to run correctly because the language may have altered the size or type of his variable in some environments.  How many subtle programming errors can be introduced into a program when its integer size is altered from 8 to 32 bits?  Or, more catastrophically, the opposite?

The proper way to solve the problem is to provide the programmer with a set of variable types that are guaranteed to be implemented in the language.  It is the duty of the language to conform to the programmers demands.  If the programmer wishes to use an 8 bit integer, and it is a sanctioned size, it is the obligation of the language to synthesize one, out of the given CPU registers even if the size is not natively supported.

Undefined variable sizes produce errors.

Compilers are often burdened to provide support for floating point types on CPU's that don't have floating point hardware.  On some CPU's floats must be synthesized out of raw integers.  The programmer need not concern himself as to whether the target CPU actually has native register support for these variables.  Integers should be no different.

It is not relevant if the underlying CPU or environment supports 8 bit integers or not.  Such variables can be constructed quite easily with 16 or 32 bit registers.  Program correctness must trump efficiency.  In the case of variable definitions, C places efficiency above program correctness.

Insanity!

For D, your decision to forget about 8 bit CPU's will limit this problem to some extent, but as a practical matter, integer variables are never going to exceed 128 bits in size, and number representation is not going to stray from typical binary representation found in modern CPU's.   So, this places reasonable limits on the number of variable types that must be supported.

Some machines will fail to conform.  Too bad, those machines shouldn't exist.

Rather than providing a set of integer variables of unknown size, I strongly recommend that you define a set of integer variables of fixed size and sign type.

I would suggest the following...

Byte   (unsigned 8 bit)
SByte  (Signed 8 bit)
Word   (unsigned 16 bit)
SWord  (signed 16 bit)
Dword  (unsigned 32 bits)
SDword (Signed 32 bits)
Qword  (unsigned 64 bits)
SQword (Signed 64 bits)
OWord  (unsigned 128 bits)
SOWord (Signed 128 bits)

or ...

int8    (unsigned 8 bit)
sint8   (signed 8 bit)
int16   (unsigned 16 bit)
sint16  (signed 16 bit)
int32   (unsigned 32 bits)
sint32  (signed 32 bits)
int64   (unsigned 64 bits)
sint64  (signed 64 bits)
int128  (unsigned 128 bits)
sint128 (signed 128 bits)
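As it happens, C eventually conceded this exact point: C99's <stdint.h> (<cstdint> in C++) defines fixed-width types along these lines, with the implementation obliged to synthesize them if needed.  A sketch of what the exact-width guarantee buys:

```cpp
#include <cstdint>

// std::uint8_t is exactly 8 bits on every platform where it exists,
// and unsigned arithmetic wraps modulo 2^8 by definition -- the same
// behaviour everywhere, which is the whole argument above.
std::uint8_t wrap_add(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>(a + b);
}
```

With these types, a program that stores a value in a uint8_t behaves identically on every conforming implementation, instead of depending on what "char" or "int" happens to mean that day.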

Let's have no optional behaviour in the language spec.  None of this - a character is an 8 bit signed number on some machines, and an unsigned number on others - nonsense.

Another aspect of variable definition that should be changed is the names of the types themselves.  Again the problem is ambiguity.  Ambiguity breeds error.

Single.  Single what?    "flapjack" has just as much meaning.
Double.  Double what?    Double the flapjack of course.  How about "tall"?
Long.    Long what?      How long is a long?  Shouldn't "dint" be "short"?

With the integer data types shown previously there is no ambiguity.

I see nothing wrong with...

Float1
Float2
Float3

or

FSingle
FDouble
FExtra

As long as the floating point nature of the variable, if not its absolute size, is made abundantly clear.

Specifying pointer sizes is clearly an issue.  As a practical matter no CPU that I know of has more than one pointer size, so not specifying the size in bits is a legitimate option.  It does however raise the issue of how the program can ensure that a pointer subtraction can be contained within a register of known size.

Fortunately pointer subtraction doesn't occur very often so in my view a typedef based on variable size seems the most pragmatic and acceptable solution.
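For what it's worth, standard C does define exactly this typedef: ptrdiff_t in <stddef.h> (<cstddef> in C++) is an integer type the implementation sizes to hold the subtraction of two pointers into the same array.  A sketch:

```cpp
#include <cstddef>

// The difference of two pointers into the same array has type
// std::ptrdiff_t; the implementation picks a width that fits
// its own pointer size, which is the pragmatic solution
// suggested above.
std::ptrdiff_t distance(const int *a, const int *b) {
    return b - a;   // measured in elements, not bytes
}
```

The result is signed and measured in elements, so distance(&arr[2], &arr[7]) is 5 regardless of sizeof(int).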


C also fails when it comes to character types, as it does not specify if they are signed or unsigned.  Defining Char and SChar, WChar and SWChar will solve that problem for both ASCII and wide Unicode characters.

But why should characters be treated as numbers at all, when they are characters?  Doing so is a rejection of type definition itself.  Rational languages are strongly typed, and conversion can be indicated through explicit casting.

While I am on the subject of variables, I observe that very many programming errors in C are caused by improper array bounds checking.  I have long thought that a secondary "secure" pointer type should be implemented that places bounds on the pointer's value. This would be a complex type that contains not only the pointer itself, but the upper and lower bounding values.

Of course you can implement bounded pointers through overloading, but who actually does it?  Using a composite variable is also inefficient when implemented in software by the compiler.  Ideally this type should be implemented in hardware so that bounding tests can be performed in parallel as the content of the pointer register is changed.
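A software sketch of the idea, as a struct that carries its bounds with it (the names and layout are my own; a hardware version would perform the range test in parallel with the pointer update):

```cpp
// A "secure" pointer: the raw pointer travels with its legal range.
struct BoundedPtr {
    int *p;    // current position
    int *lo;   // first valid element
    int *hi;   // one past the last valid element
};

// Checked dereference: refuses to touch memory outside [lo, hi)
// instead of silently reading past the buffer.
bool bounded_read(const BoundedPtr &bp, int &out) {
    if (bp.p < bp.lo || bp.p >= bp.hi)
        return false;
    out = *bp.p;
    return true;
}
```

The software version pays two comparisons per access, which is exactly why the text argues the check belongs in hardware.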

Why this kind of thing isn't already implemented in hardware is beyond me. It's not needed I guess.  All those buffer overflow problems that are the source of 90% of all the security exploits must be figments of my imagination...

Insanity!

In C and C++ /*Comments*/ can't nest.  I've never understood the justification for such a limitation.  Certainly the preprocessor strips out the comments before the compiler sees them, so it should have been trivial to simply have the pre-processor use a counter rather than a boolean to determine if it was inside a comment.
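The counter approach really is that simple.  A sketch of a stripper that handles nested /* */ with a depth count (string literals are ignored to keep it short):

```cpp
#include <string>

// Copies input to output, dropping /* ... */ comments.
// 'depth' counts currently-open comments, so /* a /* b */ c */
// strips cleanly -- a boolean "in comment" flag would end the
// comment at the FIRST */ and leave " c */" behind.
std::string strip_comments(const std::string &src) {
    std::string out;
    int depth = 0;
    for (std::size_t i = 0; i < src.size(); ++i) {
        if (i + 1 < src.size() && src[i] == '/' && src[i + 1] == '*') {
            ++depth; ++i;              // comment opens; skip "/*"
        } else if (depth > 0 && i + 1 < src.size()
                   && src[i] == '*' && src[i + 1] == '/') {
            --depth; ++i;              // comment closes; skip "*/"
        } else if (depth == 0) {
            out += src[i];             // ordinary code text
        }
    }
    return out;
}
```

The only change from the boolean version is that one int replaces the flag, which is the point being made.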

C/C++ are free format languages, and there is nothing wrong with that.

However, it is often argued by C religionists that the mandatory use of semicolons is required if the language is to remain free format.  This is clearly false.

Missing semicolons are the #1 bugaboo of all C programmers no matter what their level of experience.  Why not remove their requirement?

This should be as simple as defining <EOL> to be equivalent to a semicolon, and then defining a line continuation character that causes the next occurrence of <EOL> to be ignored.  Semicolons can be reserved for putting multiple statements on one logical line.

begin
  dint a,b,c
  c = b + a; b = a + c
end

Rather than...

{
  dint a;
  dint b;
  dint c;
  c = b + a;
  b = a + c;
}

a = FtnCall(VariableA, VariableB, _
            VariableC, VariableD)

Rather than...

a = FtnCall(VariableA, VariableB,
            VariableC, VariableD);



Your choice in "D" is to remove operator overloading from the language spec. Excellent, I concur with you that the ability to redefine operators causes many more problems than it solves.  However, having the ability to use operator notation does greatly simplify the coding of equations involving complex numbers etc.

The idea of converting function calls to an operator syntax is a good one.  C++ (and other oop languages) simply take the wrong tack.  Rather than overload the existing operators, it is a much better idea to allow the creation of new operators.  New operators should all have equal precedence.

Many languages have the facility to implement new operators rather than simply overloading the existing ones.  If properly used this can provide a means of operator typing and therefore improve code clarity.  You could for example write statements such as...

c .c= a .c+ b

Where <.c=> is defined as complex assignment
      <.c+> is defined as complex add

Recc .Rec= Reca .Rec_Name_Greater Recb

Where .Rec= is defined as record assignment
      .Rec_Name_Greater is defined as a comparison between named portions of two records.

Once identified, the above syntax can easily be converted to a function-like syntax by a preprocessor, as follows.

Define Operator Rec_Name_Greater = as type Rec Rec_Name_Greater(type Rec, type Rec)

The precompiler would then translate the statement ..

Recc .Rec= Reca .Rec_Name_Greater Recb

into the call ...

Recc = Rec_Name_Greater(Reca,Recb)

This implies that the compiler should be smart enough to be able to automatically handle assignment to complex data types via a block move as necessary.

It also implies that the internal organization of structures be defined within the language.

Defining the internal variable arrangement of structures poses difficulties for language portability and efficiency.  I would suggest that there be two types of structures.  One defined for portability between platforms, and one for internal use with the stipulation that the one defined for portability between platforms be used only where structures are necessarily shared.

Since data formats are assumed to be binary and in the form of 8, 16, 32, 64 or 128 bits and strings, it is necessary to worry about endian formats, both bitwise and bytewise. The most convenient way to do this would be to allow the definition of new big endian/little endian variable types within structures.  The best way to do this would be to define the types as "overrides" in the structure type definition itself.

I.E.

new iotype wombat BigBitEndian, BigByteEndian
   int8  a ,b ,c
   int16 a1,b1,c1
   int32 a2,b2,c2
   ...
end type

iotypes should not be allowed to support any operation other than assignment.  This will force the immediate conversion of such types to the native format used by the machine.
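The assignment-only conversion at the boundary can be sketched as explicit byte-order loads; the load_be16/load_be32 names are my own:

```cpp
#include <cstdint>

// Assemble a native integer from big-endian bytes, regardless of
// the host's own byte order.  This is the "immediate conversion to
// native format" step: after the load, only native integers exist.
// Mirror-image load_le* functions would index the bytes in the
// opposite order.
std::uint16_t load_be16(const unsigned char *b) {
    return static_cast<std::uint16_t>((b[0] << 8) | b[1]);
}

std::uint32_t load_be32(const unsigned char *b) {
    return (static_cast<std::uint32_t>(b[0]) << 24) |
           (static_cast<std::uint32_t>(b[1]) << 16) |
           (static_cast<std::uint32_t>(b[2]) << 8)  |
            static_cast<std::uint32_t>(b[3]);
}
```

Because the code indexes bytes rather than reinterpreting memory, it gives the same answer on big- and little-endian hosts alike.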

Again, coming from a machine language environment, I prefer to keep things as compact as possible.  To that end, I often define one integer of storage space and use it to hold multiple boolean flags.

const wombat  equ 0x0000000000000001
const wombat1 equ 0x0000000000000010
const wombat2 equ 0x0000000000000100
...
WombatFlg     dw  0x0000000000000000

You can do similar things in most other languages of course, but in every case that I know of you end up with a whopping number of similarly named and unassociated constants that may have the same numerical values.  Not good.

I would like to have the ability to associate a constant with a particular data type.  For example

new type wombat
   const wombat  equ 0x0000000000000001
   const wombat1 equ 0x0000000000000010
   const wombat2 equ 0x0000000000000100
   ...
   WombatFlg     dw  0x0000000000000000
end type wombatx

wombatx.WombatFlg |= wombat.wombat1
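C++ (though not C) can approximate this today by nesting the constants inside the type itself, so the flag names are scoped to it rather than polluting the global namespace.  A sketch with invented names:

```cpp
#include <cstdint>

// The flag constants live inside the type, reached as Wombat::Flag1
// etc., so another type can reuse the same flag names and even the
// same numeric values without any collision.
struct Wombat {
    static constexpr std::uint64_t Flag0 = 0x0000000000000001;
    static constexpr std::uint64_t Flag1 = 0x0000000000000010;
    static constexpr std::uint64_t Flag2 = 0x0000000000000100;
    std::uint64_t flags = 0;   // packed boolean storage
};
```

Usage mirrors the wished-for syntax above: `Wombat w; w.flags |= Wombat::Flag1;`.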


The way C and C++ implement preincrement and postincrement is also fundamentally wrongheaded.  The C standard stipulates that there is no guarantee where in an expression the pre or post increment will occur, only that they will have been performed by the end of the expression.  Multiple appearances of a variable in an expression provide different results for the expression if the variable is incremented/decremented within the expression.

Insanity!

At the very least, C/C++ should check for such problems and REFUSE to compile any program that contains such a problem.  Hopefully spitting out a meaningful error message.

I would prefer the following...

The definition should stipulate that preincrements occur before the first reference to the variable in question, and post increments occur after the last reference to a variable.  Multiple increments/decrements of the same variable in an expression should trigger a compile error.


Automatic variable type conversion is a legitimate convenience feature that C provides. However, once again the language default behaviour is contrary to common sense.

Casting errors are extremely common and often difficult to find particularly when they occur in a function call.  To make this less likely, variable casting should only occur if explicitly stated.  I would even go so far as to require explicit casting where arrays are converted to pointer references in function calls.  This may be going too far, but anything that will make the conversion more explicit is beneficial.
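The kind of silent damage at issue: assigning an int to a narrower type quietly discards the high bits, and C requires no diagnostic at all.

```cpp
// 300 does not fit in 8 bits; the implicit conversion keeps only
// the low byte: 300 mod 256 = 44.  C compiles this without
// complaint -- the explicit-cast rule argued for above would force
// the programmer to acknowledge the truncation in the source.
unsigned char narrow(int x) {
    unsigned char c = x;   // silent truncation happens here
    return c;
}
```

Under an explicit-cast regime this line would be a compile error until written as `c = (unsigned char)x;`, making the data loss visible at the call site.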

I believe I read that "D" will pass arrays rather than converting them to pointers.  This is a mistake, as passing arrays by value will clearly be very inefficient.  Any attempt to pass an array as an array rather than an explicitly cast pointer should generate an error.

Label case sensitivity is also an issue.  Here in the real world, words do not change their meaning when they are capitalized.  A Banana is a BANANA.  So it should also be with programming labels and keywords.  All compilers should ignore the case of all names and keywords, yet optionally keep the case intact for the purpose of export and linkage.  Linkage within the language itself should also be case insensitive.  I would recommend the conversion of all exported labels to upper case.

Keeping case sensitivity in names and keywords promotes poor programming practices like defining variables with names like HWin, hWin, hWIN, etc.  Oh yeah, like that isn't found in every Windows program ever written.

Insanity!

In the current C/C++ dogma, case sensitivity is considered to be good because it provides slightly greater flexibility in naming, but this benefit is predicated on the creation of identically named identifiers that differ only by case, yet creating such labels is considered (correctly so) bad programming practice.  So the benefit is only had through bad programming practice.  Another inconsistency.

Insanity!


Other features that I would like to see is the ability to perform initializations in the following manner

int16 a,b,c = 7;
int16 a,b,c = 7,8,9;

static int16 a,b,c = 0;


You have gotten rid of the . vs -> fiasco.  Wonderful.
Lost the forward reference, predefinition kludge.  Wonderful.
Lost macros.  Wonderful, as long as a better facility is provided.  Being able to define your own operators goes a long way in this respect.
Improved error trapping.  Wonderful.
Decided to include strings as a native type.  Wonderful.

Improvements in object implementation.  Wonderful.

I wish you success in the development of "D", and I pray for a world rational enough to abandon the abominations that are now being used, and adopt your language.

Unfortunately, members of the C religion are far too deep into the plak tow - the blood fever - to listen to reason.

Landru - guide us!

All is chaos!





February 04, 2002
That's quite a rant (27Kb worth!!!)

I hear you, brother!  ;)

Sean

"D" <s_nudds@hotmail.com> wrote in message news:a3le07$nf4$1@digitaldaemon.com...
> I have just read with great interest the overview of your "D" language, contained within the distribution file dmdalpha.zip and thought I would
add
> my 2 cents.
>
> Well, ok.... 2K of uncommon sense.
>
> Let me start by saying that I loath both C and C++.  These languages are abominations, abortions, unworthy of existence, a pox on the earth, etc... C fails on so many levels that I don't know where to begin.  Even the standard I/O library is an abomination....
>
> Consider the function "GETS".  It takes a special kind of moron to write such a function.  It takes a special class of mindless wonders to actually decide to incorporate such a piece of filth into a "standard" library. Those who wrote included Gets in the first place, and those who voted to
put
> it in the standard simply are too stupid to justify their continued existence.  Even torture and death is an inadequate punishment for their criminal stupidity.  Words can not convey my loathing for these worthless vermin.
>
> However having got that off my chest, C/C++ still do have some nice characteristics. And while the languages themselves are also unworthy of existence, they can be salvaged.  I long for a day when C is replaced by a similarly featured, language that solves most of it's ample list of failings.
>
> I come from an assembly language background so I am very familiar with how machines operate at the lowest levels.  I'm intimately familiar with pointers, and register sets, memory allocation, and rolling my own
functions
> and syntax as needed.
>
> I appreciate the utility of high level languages, although I am turned off by the pathetic level of optimizations they provide. To this day,
compilers
> still produce code that is 2 to 4 times larger and 2 to 4 times smaller
than
> I can produce by hand.  Of course they can produce it a zillion times
faster
> than I can so in the current environment that is the more significant factor.  I am also turned off by the lack of low level control that most high level languages provide.
>
> C/C++ strike a minimally acceptable balance between low level capability
and
> high level convenience.
>
> Having said that, I recognize that C/C++ fail to provide adequate low
level
> control for many common low level functions.
>
> These limitations are mostly a result of K&R (may they burn eternally in hell), defining a language that appeals to the lowest common denominator among all machines.  IE. makes no assumptions about register sizes or the manner in which variable data is represented.  Variables are considered as simply abstractly numeric and not necessarily stored internally as a set
of
> binary bits.
>
> Given that C/C++ is used to code I/O drivers, and write other very low
level
> code, the lack of concept of "bit" in the language necessarily makes these programs non-compliant with proper C coding standards, and hence non-portable.  It was foolish of K&R to create a median level language for low level coding while defining it in such a manner as to make the production of low level code, outside the sanctioned scope of the
language.
>
> Insanity!
>
> As you observe, C/C++ can't be fixed.  At least not while remaining
anywhere
> near compatible.  It needs to be replaced.  You have my vote for "D".
It's
> a step in the right direction.
>
> By far your best decision has been to abandon the concept of code compatibility with C/c++.  The need to maintain code compatibility with C clearly placed nasty constraints on the development of C++, and as you observe the language "standard" is vastly too convoluted to manage let
alone
> patch.
>
> As I implied above, in my view, C/C++ is a very poor language because it's underlying core philosophy is often contradictory and simply wrong headed. Consistency is replaced by special cases, and irrational behaviour and inconsistent nomenclature.  Not only does this cause the standard to
become
> bloated and confused, but it also makes the language more difficult to
use,
> and the resulting programs more error prone.
>
> There was a time when programs were written like spaghetti.  Programmers,
it
> was observed, didn't have the self decipline to follow rational coding standards.  As a result, the structured programming paradigm was invented
in
> order to provide a consistent approach to writing programs.  The result
was
> higher code quality, greater code simplicity and less error prone
software.
>
> The same argument is at the heart of the object model of programming.
> Programmers, it was observed, didn't have the self discipline to follow
> rational coding standards and properly encapsulate the functionality of
> their programs.  The object programming paradigm imposes, for the most
> part, the "correct" behaviour on the programmer, or at least points
> him/her in the proper direction.  Encapsulation is now foremost on
> everyone's mind.
>
> Odd, I've been encapsulating assembler functions since before the days
> of C++.  The justification for encapsulation was clear.  The method was
> generic and applicable to many other languages.  But not easily
> applicable to C, because the language is so poorly defined.
>
> The purpose of any language above assembler is twofold.  First, to make
> it easier for programmers to write code, and second, to make it more
> probable that the code produced will properly do its intended job.
>
> Portability is often raised as a reason for the existence of high level languages, but this is not a legitimate argument since low level languages can be made to be just as portable as high level ones.  Java byte code and other languages that generate intermediate code or Pcode have repeatedly proven this.  JIT compilers are the final nail in the coffin.  Portability is a feature of any language, and not as it is so often claimed by members of the C/C++ religions, a characteristic of high level languages.
>
> Ease of use and program correctness are related of course.  One pretty
> much implies the other.  When a language is easy to use, it will
> typically be easier to spot programming errors.  The opposite is also
> true.  When it is difficult to write correct programs, a language is
> generally difficult to use.  It's just common sense.
>
> C is poor because its subtle behavioural anomalies and inconsistencies
> make it difficult to follow its code.  Type conversions are often
> hidden, and assumptions must often be made about a variable's size and
> internal representation which are outside of the language's
> specification.
>
> C is poor because it relies on the conscious effort of the programmer to avoid hidden or concealed pitfalls, rather than avoiding them in the first place through proper ergonomic language design.
>
> The C philosophy is contradictory since, as a higher level language, its reason for existence is convenience.  But it then burdens the programmer with its own irrational and inconsistent behaviours, making the language inconvenient and error prone.
>
> At its very core the C philosophy is contradictory.  It is a structured language (clearly K&R recognized the superiority of structure in the creation of well written programs).  But having recognized this, K&R then foolishly refused to implement many important structural features that are highly desirable, and trivially easy to implement.
>
> Consider that C omits block typing.  Those structured languages that
> don't descend from C typically identify different kinds of blocks with
> different bracketing keywords.  Do/While, If/Endif, Repeat/Until etc.
>
> When reading a block typed program, if the programmer sees the word
> "until", he/she knows that the statement ends a "repeat" block.
>
> With C on the other hand, the same delimiters are used for all blocks.
> The bracket that ends a C block could be the end of an if block or any
> other kind of block.  The programmer is forced, when reading the code,
> to search back through the program to find which block is being closed.
> For convenience, the strategy to combat this language failing is to
> progressively indent blocks so that the start of a block can easily be
> identified.  And of course, since C is a free format language, we
> immediately run into the problem of people having different indentation
> styles that typically make following code written by another person
> difficult.
>
> The indent requirement is utter lunacy.  No language should be able to
> have its meaning made essentially human-unreadable through a simple loss
> of indentation.
>
> Insanity!
>
>
> The C philosophy is that the language should be self documenting, yet
> the language provides inadequate syntactic guards, redundancy and
> signposts that would make it self documenting.
>
> Specifically the omission of keywords like "then" after "if" beg the question "if what?"
>
> Yes, in an "if then" statement the keyword "then" is a redundancy. Redundancy is a very good thing.
>
> Natural language is full of redundancy.  It exists for a purpose.  If this were not so, it would have been evolved out of the language.
>
> Redundancy is maintained through the evolution of a language for a
> reason.  The reason is that it facilitates the conveyance of the
> intended meaning of what is said.  A person may miss, or misinterpret,
> one key idea in a sentence, but through the language's redundancy will
> often be guided to the intended meaning.
>
> Practical redundancy to enhance readability is a very good thing.
> Syntactic minimalism is a wish for the ignorant, and is to be avoided in
> any rationally defined language.  APL is proof enough of that.
>
> The more cryptic a language, the more likely errors will be made.  The C
> "religion" accepts that C is superior to assembler because the C syntax
> is more readable and more convenient to use than assembler.  I agree.
> However, C dogma then goes on to reject alterations in the syntax that
> would provide further benefits to readability and every other language
> convenience, on the grounds that they don't add any value to the
> language itself.  Clearly this dogma is false.
>
> I strongly urge you to move further away from the C syntax, and implement block typing. If Then/End if, While/Whend, Do/Loop, SelectCase/EndCase, Begin/End.
>
> Note that I say SelectCase rather than Switch.  The word switch has no
> contextual connection to the operation it identifies in C/C++.  In the
> real world, a switch is a binary device.  It is either on or off.  In
> the real world a switch does not allow the selection of multiple
> settings.  Dials, and other kinds of SELECTORS, do that kind of thing,
> and when they make discrete choices, they are composed of <separate>
> "switches".  The keyword "switch" should therefore be changed to
> something meaningful, something that describes what is being done.  How
> about using "Select"?  Seems to work for other languages.
>
> The "switch" statement also has the unfortunate characteristic of not allowing multiple comparisons for each case.  Also unfortunate is the need for the "break" keyword to exit the statement.
>
> Switch would be syntactically cleaner if it allowed multiple entries, and the break keyword is removed.
>
> Select (expression)
>    Case a,b,c;
>    Case d,e,f;
>    default;
> end select;
>
> Rather than
>
> switch (expression) {
>    case a:
>    case b:
>    case c:
>       break;
>    case d:
>    case e:
>    case f:
>       break;
>    default:
>       break;
> }
>
> The scope rules for C and C++ are also inconsistent.  All variables
> inside a block are visible only within the block.  All variables inside
> a procedure are visible only within the procedure.  C is inconsistent in
> that variables and functions that are inside a translation unit or
> module (whatever you wish to call it) have global scope by default.
>
> This behaviour is also plainly irrational.
>
> Modules should present <NO> external linkages at all unless those
> linkages are explicitly indicated.  With the default behaviour set to
> make all module level definitions global in scope, the programmer is
> forced to perform extra work in order to code properly.  The unwary end
> up wasting compile resources and throwing away potential optimizations
> when procedures are left to the default global scope.
>
> Insanity!  In order to minimize error and make a language convenient,
> default behaviour should be that behaviour which is most consistent with
> good programming practice.
>
> C's method of defining variables is also fundamentally flawed. Not in the syntax used, but in the manner in which the variables are sized.  C/C++ provides no clean and rational mechanism for programmers to know if the variables they will be using are of adequate size or type to hold the data they wish to represent.
>
> C dogma holds that either the programmer write to the lowest common
> denominator, or use conditional compilation to select appropriate sized
> variables for the task at hand.
>
> That's not portability.  It's Insanity!
>
> It is an extreme burden for the programmer to constantly have to worry
> whether his program is going to run correctly because the language may
> have altered the size or type of his variable in some environments.  How
> many subtle programming errors can be introduced into a program when its
> integer size is altered from 8 to 32 bits?  Or, more catastrophically,
> the opposite?
>
> The proper way to solve the problem is to provide the programmer with a
> set of variable types that are guaranteed to be implemented in the
> language.  It is the duty of the language to conform to the programmer's
> demands.  If the programmer wishes to use an 8 bit integer, and it is a
> sanctioned size, it is the obligation of the language to synthesize one
> out of the given CPU registers, even if the size is not natively
> supported.
>
> Undefined variable sizes produce errors.
>
> Compilers are often burdened to provide support for floating point types
> on CPUs that don't have floating point hardware.  On some CPUs floats
> must be synthesized out of raw integers.  The programmer need not
> concern himself as to whether the target CPU actually has native
> register support for these variables.  Integers should be no different.
>
> It is not relevant if the underlying CPU or environment supports 8 bit integers or not.  Such variables can be constructed with 16 or 32 bit registers quite easily.  Program correctness must trump efficiency.  In the case of variable definitions, C places efficiency above program correctness.
>
> Insanity!
>
> For D, your decision to forget about 8 bit CPUs will limit this problem
> to some extent, but as a practical matter, integer variables are never
> going to exceed 128 bits in size, and number representation is not going
> to stray from the typical binary representation found in modern CPUs.
> So this places reasonable limits on the number of variable types that
> must be supported.
>
> Some machines will fail to conform.  Too bad, those machines shouldn't exist.
>
> Rather than providing a set of integer variables of unknown size, I
> strongly recommend that you define a set of integer variables of fixed
> size and sign type.
>
> I would suggest the following...
>
> Byte   (unsigned 8 bit)
> SByte  (Signed 8 bit)
> Word   (unsigned 16 bit)
> SWord  (signed 16 bit)
> Dword  (unsigned 32 bits)
> SDword (Signed 32 bits)
> Qword  (unsigned 64 bits)
> SQword (Signed 64 bits)
> OWord  (unsigned 128 bits)
> SOWord (signed 128 bits)
>
> or ...
>
> int8    (unsigned 8 bit)
> sint8   (signed 8 bit)
> int16   (unsigned 16 bit)
> sint16  (signed 16 bit)
> int32   (unsigned 32 bits)
> sint32  (signed 32 bits)
> int64   (unsigned 64 bits)
> sint64  (signed 64 bits)
> int128  (unsigned 128 bits)
> sint128 (signed 128 bits)
>
> Let's have no optional behaviour in the language spec.  None of this - a
> character is an 8 bit signed number on some machines, and an unsigned
> number on others - nonsense.
>
> Another aspect of variable definition that should be changed is the
> names of the types themselves.  Again the problem is ambiguity.
> Ambiguity breeds error.
>
> Single.  Single what?    "flapjack" has just as much meaning.
> Double.  Double what?    Double the flapjack of course.  How about "tall"?
> Long.    Long what?      How long is a long?  Shouldn't "dint" be "short"?
>
> With the integer data types shown previously there is no ambiguity.
>
> I see nothing wrong with...
>
> Float1
> Float2
> Float3
>
> or
>
> FSingle
> FDouble
> FExtra
>
> As long as the floating point nature of the variable, if not its absolute size, is made abundantly clear.
>
> Specifying pointer sizes is clearly an issue.  As a practical matter, no
> CPU that I know of has more than one pointer size, so not specifying the
> size in bits is a legitimate option.  It does however raise the issue of
> how the program can ensure that a pointer subtraction can be contained
> within a register of known size.
>
> Fortunately, pointer subtraction doesn't occur very often, so in my view
> a typedef based on variable size seems the most pragmatic and acceptable
> solution.
>
>
> C also fails when it comes to character types, as it does not specify if
> they are signed or unsigned.  Defining Char and SChar, WChar and SWChar
> will solve that problem for both ASCII and wide Unicode characters.
>
> But why should characters be treated as numbers at all, when they are
> characters?  Doing so is a rejection of type definition itself.
> Rational languages are strongly typed, and conversion can be indicated
> through explicit casting.
>
> While I am on the subject of variables, I observe that very many
> programming errors in C are caused by improper array bounds checking.  I
> have long thought that a secondary "secure" pointer type should be
> implemented that places bounds on the pointer's value.  This would be a
> complex type that contains not only the pointer itself, but the upper
> and lower bounding values.
>
> Of course you can implement bounded pointers through overloading, but
> who actually does it?  Using a composite variable is also inefficient
> when implemented in software by the compiler.  Ideally this type should
> be implemented in hardware so that bounding tests can be performed in
> parallel as the content of the pointer register is changed.
>
> Why this kind of thing isn't already implemented in hardware is beyond me. It's not needed I guess.  All those buffer overflow problems that are the source of 90% of all the security exploits must be figments of my imagination...
>
> Insanity!
>
> In C and C++, /*comments*/ can't nest.  I've never understood the
> justification for such a limitation.  Certainly the preprocessor strips
> out the comments before the compiler sees them, so it should have been
> trivial to have the preprocessor use a counter rather than a boolean to
> determine whether it was inside a comment.
>
> C/C++ are free format languages, and there is nothing wrong with that.
>
> However, it is often argued by C religionists that the mandatory use of semicolons is required if the language is to remain free format.  This is clearly false.
>
> Missing semicolons are the #1 bugaboo of all C programmers no matter what their level of experience.  Why not remove their requirement?
>
> This should be as simple as defining <EOL> to be equivalent to a
> semicolon, and then defining a line continuation character that causes
> the next occurrence of <EOL> to be ignored.  Semicolons can be reserved
> for putting multiple statements on one logical line.
>
> begin
>   dint a,b,c
>   c = b + a; b = a + c
> end
>
> Rather than...
>
> {
>   dint a;
>   dint b;
>   dint c;
>   c = b + a;
>   b = a + c;
> }
>
> a = FtnCall(VariableA, VariableB, _
>             VariableC, VariableD)
>
> Rather than...
>
> a = FtnCall(VariableA, VariableB,
>             VariableC, VariableD);
>
>
>
> Your choice in "D" is to remove operator overloading from the language
> spec.  Excellent, I concur with you that the ability to redefine
> operators causes many more problems than it solves.  However, having the
> ability to use operator notation does greatly simplify the coding of
> equations involving complex numbers etc.
>
> The idea of converting function calls to an operator syntax is a good
> one.  C++ (and other OOP languages) simply take the wrong tack.  Rather
> than overload the existing operators, it is a much better idea to allow
> the creation of new operators.  New operators should all have equal
> precedence.
>
> Many languages have the facility to implement new operators rather than simply overloading the existing ones.  If properly used this can provide a means of operator typing and therefore improve code clarity.  You could
for
> example write statements such as...
>
> c .c= a .c+ b
>
> Where <.c=> is defined as complex assignment
>       <.c+> is defined as complex add
>
> Recc .Rec= Reca .Rec_Name_Greater Recb
>
> Where .Rec= is defined as record assignment
>       .Rec_Name_Greater is defined as a comparison between named
>       portions of two records.
>
> Once identified, the above syntax can easily be converted to a
> function-like syntax by a preprocessor, as follows.
>
> Define Operator Rec_Name_Greater = as type Rec Rec_Name_Greater(type
> Rec, type Rec)
>
> The precompiler would then translate the statement ..
>
> Recc .Rec= Reca .Rec_Name_Greater Recb
>
> into the call ...
>
> Recc = Rec_Name_Greater(Reca,Recb)
>
> This implies that the compiler should be smart enough to be able to automatically handle assignment to complex data types via a block move as necessary.
>
> It also implies that the internal organization of structures be defined within the language.
>
> Defining the internal variable arrangement of structures poses
> difficulties for language portability and efficiency.  I would suggest
> that there be two types of structures: one defined for portability
> between platforms, and one for internal use, with the stipulation that
> the one defined for portability between platforms be used only where
> structures are necessarily shared.
>
> Since data formats are assumed to be binary, in the form of 8, 16, 32,
> 64 or 128 bits and strings, it is necessary to worry about endian
> formats, both bitwise and bytewise.  The most convenient way to do this
> would be to allow the definition of new big endian/little endian
> variable types within structures.  The best way to do this would be to
> define the types as "overrides" in the structure type definition itself.
>
> I.E.
>
> new iotype wombat BigBitEndian, BigByteEndian
>    int8  a ,b ,c
>    int16 a1,b1,c1
>    int32 a2,b2,c2
>    ...
> end type
>
> iotypes should not be allowed to support any operation other than assignment.  This will force the immediate conversion of such types to the native format used by the machine.
>
> Again, coming from a machine language environment, I prefer to keep
> things as compact as possible.  To that end, I often define one integer
> of storage space and use it to hold multiple boolean flags.
>
> const wombat  equ 0x0000000000000001
> const wombat1 equ 0x0000000000000010
> const wombat2 equ 0x0000000000000100
> ...
> WombatFlg     dw  0x0000000000000000
>
> You can do similar things in most other languages of course, but in
> every case that I know of you end up with a whopping number of similarly
> named and unassociated constants that may have the same numerical
> values.  Not good.
>
> I would like to have the ability to associate a constant with a particular data type.  For example
>
> new type wombat
>    const wombat  equ 0x0000000000000001
>    const wombat1 equ 0x0000000000000010
>    const wombat2 equ 0x0000000000000100
>    ...
>    WombatFlg     dw  0x0000000000000000
> end type wombat
>
> wombatx.wombatflg |= wombat.wombat1
>
>
> The way C and C++ implement preincrement and postincrement is also
> fundamentally wrong-headed.  The C standard stipulates that there is no
> guarantee where in an expression the pre or post increment will occur,
> only that they will have been performed by the end of the expression.
> Multiple appearances of a variable in an expression provide different
> results for the expression if the variable is incremented/decremented
> within the expression.
>
> Insanity!
>
> At the very least, C/C++ should check for such problems and REFUSE to
> compile any program that contains such a problem, hopefully spitting out
> a meaningful error message.
>
> I would prefer the following...
>
> The definition should stipulate that preincrements occur before the first reference to the variable in question, and post increments occur after the last reference to a variable.  Multiple increments/decrements of the same variable in an expression should trigger a compile error.
>
>
> Automatic variable type conversion is a legitimate convenience feature
> that C provides.  However, once again, the language's default behaviour
> is contrary to common sense.
>
> Casting errors are extremely common, and often difficult to find,
> particularly when they occur in a function call.  To make this less
> likely, variable casting should only occur if explicitly stated.  I
> would even go so far as to require explicit casting where arrays are
> converted to pointer references in function calls.  This may be going
> too far, but anything that makes the conversion more explicit is
> beneficial.
>
> I believe I read that "D" will pass arrays rather than converting them
> to pointers.  This is a mistake, as passing arrays by value will clearly
> be very inefficient.  Any attempt to pass an array as an array rather
> than as a cast pointer should generate an error.
>
> Label case sensitivity is also an issue.  Here in the real world, words
> do not change their meaning when they are capitalized.  A Banana is a
> BANANA.  So it should also be with programming labels and keywords.  All
> compilers should ignore the case of all names and keywords, yet
> optionally keep the case intact for the purpose of export and linkage.
> Linkage within the language itself should also be case insensitive.  I
> would recommend the conversion of all exported labels to upper case.
>
> Keeping case sensitivity in names and keywords promotes poor programming
> practices, like defining variables with names like HWin, hWin, hWIN,
> etc.  Oh ya, like that isn't found in every Windows program ever
> written.
>
> Insanity!
>
> In the current C/C++ dogma, case sensitivity is considered to be a good
> thing because it provides slightly greater flexibility in naming, but
> this benefit is predicated on the creation of identically named
> identifiers that differ only by case, yet creating such labels is
> considered (correctly so) bad programming practice.  So the benefit is
> only had through bad programming practice.  Another inconsistency.
>
> Insanity!
>
>
> Other features that I would like to see is the ability to perform initializations in the following manner
>
> int16 a,b,c = 7;
> int16 a,b,c = 7,8,9;
>
> static int16 a,b,c = 0;
>
>
> You have gotten rid of the . vs -> fiasco.  Wonderful.
> Lost the forward reference, predefinition kludge.  Wonderful.
> Lost macros.  Wonderful as long as a better facility is provided.
>               Being able to define your own operators goes a long way in
> this respect.
> Improved error trapping.  Wonderful.
> Decided to include strings as a native type.  Wonderful.
>
> Improvements in object implementation.  Wonderful.
>
> I wish you success in the development of "D", and I pray for a world rational enough to abandon the abominations that are now being used, and adopt your language.
>
> Unfortunately, members of the C religion are far too deep into the Plauktau - the blood fever - to listen to reason.
>
> Landru - guide us!
>
> All is chaos!
>


February 04, 2002
> Unfortunately, members of the C religion are far too deep into the
> Plauktau - the blood fever - to listen to reason.
> 
> Landru - guide us!
> 
> All is chaos!

Quite a rant indeed, from someone who doesn't speak Vulcan.  Palk'tow, I believe, is the correct spelling for Trekkies.  :-)

I''l be honest:  I do not have any experience in coding C/C++.  But D looks similar enough to ECMAScript to be passable.

February 04, 2002
Alex Vincent wrote:
>> Unfortunately, members of the C religion are far too deep into the
>> Plauktau - the blood fever - to listen to reason.
>>
>> Landru - guide us!
>>
>> All is chaos!
> 
> 
> Quite a rant indeed, from someone who doesn't speak Vulcan.  Palk'tow, I believe, is the correct spelling for Trekkies.  :-)

Damn it, I meant, "Plak'tow".

> 
> I''l be honest:  I do not have any experience in coding C/C++.  But D looks similar enough to ECMAScript to be passable.

And "I'll".


February 04, 2002
D wrote:

> Let me start by saying that I loath both C and C++.  These languages are
> abominations, abortions, unworthy of existence, a pox on the earth, etc...
> C fails on so many levels that I don't know where to begin.  Even the
> standard I/O library is an abomination....


You should really stop beating around the bush and tell us how
you really feel.

Let me start by saying that C is like MS Windows, or like democracy.
It's pretty much the most loathsome choice imaginable -- except
for any of the others.


> I am also turned off by the lack of low level control that most
> high level languages provide.


I am skeptical of the ability of any language to provide both
the safety that you seem to be calling for in most of your
posts, and more low level control simultaneously.


> Having said that, I recognize that C/C++ fail to provide adequate low level
> control for many common low level functions.
> 
> These limitations are mostly a result of K&R (may they burn eternally in
> hell), defining a language that appeals to the lowest common denominator
> among all machines. 

> [snip]
> It was foolish of K&R to create a median level language for
> low level coding while defining it in such a manner as to make the
> production of low level code, outside the sanctioned scope of the
> language.

I assume K&R's view on this was roughly "if you want assembly
language, you know where to find it." At the time, the stuff that
had to be low level would be written in assembly, and C would
provide some small relief from the tedium of writing everything
else in asm. That C has been applied, since, to things it might
not be optimally suited for is hardly K&R's fault.


> Portability is often raised as a reason for the existence of high level
> languages, but this is not a legitimate argument since low level languages
> can be made to be just as portable as high level ones.  Java byte code and
> other languages that generate intermediate code or Pcode have repeatedly
> proven this.  JIT compilers are the final nail in the coffin.  Portability
> is a feature of any language, and not as it is so often claimed by members
> of the C/C++ religions, a characteristic of high level languages.


As long as we're on the topic of Java, is there some reason it's
not suitable for you?


> Consider that C omits block typing.  Those structured languages that don't
> descend from C, typically identify different kinds of blocks with different
> bracketing keywords.  Do/While, If/Endif, Repeat/Until etc.

> [snip]

It's entirely possible that K&R were aware of the argument you
make, and rejected it on the grounds that, first, it's at least
as common to have a tower of nested "for"s or nested "if"s, which
don't benefit from your strategy: if (a) if (b) if (c) if (d)
else (f) if (g) foo; endif endif endif endif endif -- and secondly,
they were working in an environment where limited storage space
and the nature of the teletype encouraged the evolution of extremely
abbreviated syntax -- cf. all the two-letterisms of Unix.


> Yes, in an "if then" statement the keyword "then" is a redundancy.
> Redundancy is a very good thing.


Not at 300 baud.


> The more cryptic a language the more likely errors will be made.  The C
> "religion" accepts that C is superior to assembler because the C syntax is
> more readable and more convenient to use than assembler.  I agree.  However,
> C dogma then goes on to reject alterations in the syntax that would provide
> further benefits to readability and every other language convenience on the
> grounds that they don't add any value to the language itself.  Clearly this
> dogma is false.


I think C dogma rejects alterations in the syntax because of
investment in existing codebases.


> It is not relevant if the underlying CPU or environment supports 8 bit
> integers or not. Such variables can be constructed with 16 or 32 bit
> registers as quite easily.  Program correctness must trump efficiency. 


All sweeping generalizations are false.

Some of us don't have the luxury of taking more than 16.6 ms
to finish one pass through the main loop of our program.


> Of course you can implement bounded pointers through overloading, but who
> actually does it?


Well, I do it, in any language wherein fascists aren't telling
me that operator overloading is _dangerous_. Fortunately, Walter
has "overloaded" the traditional C/C++ syntax to support bounds
checking in D.

Why do you support operator overloading to avoid having to write:

  if (myArray.Check( index ))
  {
     foo = myArray.GetElement( index );
  }
  else
  {
     throw ArrayBoundsException;
  }

instead of

  foo = myArray[ index ];

while not supporting being able to write:

  myColor += otherColor;

instead of:

  myColor.Add( otherColor );
  myColor.Clamp( 0, 0, 0, 255, 255, 255 );


> Why this kind of thing isn't already implemented in hardware is beyond me.
> It's not needed I guess.  All those buffer overflow problems that are the
> source of 90% of all the security exploits must be figments of my
> imagination...


That well known brain-damaged CPU architecture called x86 has
had hardware bounds-checking available since the 286. Let me
ask you, when writing in assembly, do you check the index
every time you do an indexed memory access?


> Missing semicolons are the #1 bugaboo of all C programmers no matter what
> their level of experience.  Why not remove their requirement?


Missing semicolons don't bother me. The compiler bitches, I put
the semicolon in, the problem is solved. 99% of the time, if I
left the semicolon out, it's because I was rewriting code in a
hurry and it's just as well that the compiler is making me take
a second look.


> Unfortunately, members of the C religion are far too deep into the
> Plauktau - the blood fever - to listen to reason.
> 
> Landru - guide us!
> 
> All is chaos!


Yeah, thank goodness we have level headed pragmatic rationalists
around to save us from the C zealots.

-Russell Borogove

February 04, 2002
"Russell Borogove" <kaleja@estarcion.com> wrote in message news:3C5ECE2D.6060500@estarcion.com...
> D wrote:
>
> > Let me start by saying that I loath both C and C++.  These languages
> > are abominations, abortions, unworthy of existence, a pox on the
> > earth, etc...  C fails on so many levels that I don't know where to
> > begin.  Even the standard I/O library is an abomination....
>
>
> You should really stop beating around the bush and tell us how you really feel.
>
> Let me start by saying that C is like MS Windows, or like democracy. It's pretty much the most loathsome choice imaginable -- except for any of the others.
>
>
> > I am also turned off by the lack of low level control that most high level languages provide.
>
>
> I am skeptical of the ability of any language to provide both the safety that you seem to be calling for in most of your posts, and more low level control simultaneously.
>
>
> > Having said that, I recognize that C/C++ fail to provide adequate low
> > level control for many common low level functions.
> >
> > These limitations are mostly a result of K&R (may they burn eternally
> > in hell), defining a language that appeals to the lowest common
> > denominator among all machines.
>
>  > [snip]
>  > It was foolish of K&R to create a median level language for
>  > low level coding while defining it in such a manner as to make the
>  > production of low level code, outside the sanctioned scope of the
>  > language.
>
> I assume K&R's view on this was roughly "if you want assembly language, you know where to find it." At the time, the stuff that had to be low level would be written in assembly, and C would provide some small relief from the tedium of writing everything else in asm. That C has been applied, since, to things it might not be optimally suited for is hardly K&R's fault.
>
>
> > Portability is often raised as a reason for the existence of high
> > level languages, but this is not a legitimate argument since low level
> > languages can be made to be just as portable as high level ones.  Java
> > byte code and other languages that generate intermediate code or Pcode
> > have repeatedly proven this.  JIT compilers are the final nail in the
> > coffin.  Portability is a feature of any language, and not as it is so
> > often claimed by members of the C/C++ religions, a characteristic of
> > high level languages.
>
>
> As long as we're on the topic of Java, is there some reason it's not suitable for you?
>
>
> > Consider that C omits block typing.  Those structured languages that don't descend from C typically identify different kinds of blocks with different bracketing keywords.  Do/While, If/Endif, Repeat/Until etc.
>
>  > [snip]
>
> It's entirely possible that K&R were aware of the argument you
> make, and rejected it on the grounds that, first, it's at least
> as common to have a tower of nested "for"s or nested "if"s, which
> don't benefit from your strategy: if (a) if (b) if (c) if (d)
> else (f) if (g) foo; endif endif endif endif endif -- and secondly,
> they were working in an environment where limited storage space
> and the nature of the teletype encouraged the evolution of extremely
> abbreviated syntax -- cf. all the two-letterisms of Unix.
>


I can't say I really like all those abbreviations though...
To me they are a source of errors.
I still hate if...endif, if..fi, do..done, repeat..until (although
that one not as much) and even pascal's begin..end.
{..} is just good enough. If you really want to show what
line the closing } belongs to, just copy it and put it there
as a comment.
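For illustration, the closing-brace comment convention described above might look like this in C (a sketch; note that the trailing labels are only comments, so the compiler never verifies them):

```c
#include <assert.h>

/* Count even numbers below n, closing each block with a comment
 * naming the statement its brace belongs to. */
static int count_evens(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) {
            count++;
        } /* if (i % 2 == 0) */
    }     /* for (i) */
    return count;
}
```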


>
> > Yes, in an "if then" statement the keyword "then" is a redundancy. Redundancy is a very good thing.
>
>
> Not at 300 baud.
>

Nothing is a good thing at 300 baud. Buy a new modem.
Redundancy in communication with/to humans definitely is
A Good Thing (tm). D has a very good point. I still hate
if..fi, if..endif, while..endwhile, do..done....Oh sorry, getting
redundant again... :)

> > The more cryptic a language the more likely errors will be made.  The C "religion" accepts that C is superior to assembler because the C syntax is more readable and more convenient to use than assembler.  I agree.  However, C dogma then goes on to reject alterations in the syntax that would provide further benefits to readability and every other language convenience on the grounds that they don't add any value to the language itself.  Clearly this dogma is false.
>
>
> I think C dogma rejects alterations in the syntax because of investment in existing codebases.
>


Which does not matter for D. I still hate if..fi, oh...did I already say that? :)


>
> > It is not relevant if the underlying CPU or environment supports 8 bit integers or not. Such variables can be constructed with 16 or 32 bit registers as quite easily.  Program correctness must trump efficiency.
>
>
> All sweeping generalizations are false.
>

"All sweeping generalizations are false."
The ironic humor being that this statement is then also false... :)
But it is true of course. Without efficiency some programs
just can't exist, period.


> Some of us don't have the luxury of taking more than 16.6 ms to finish one pass through the main loop of our program.
>
>
> > Of course you can implement bounded pointers through overloading, but who actually does it?
>
>
> Well, I do it, in any language wherein fascists aren't telling me that operator overloading is _dangerous_. Fortunately, Walter has "overloaded" the traditional C/C++ syntax to support bounds checking in D.
>
> Why do you support operator overloading to avoid having to write:
>
>    if (myArray.Check( index ))
>    {
>       foo = myArray.GetElement( index );
>    }
>    else
>    {
>       throw ArrayBoundsException;
>    }
>
> instead of
>
>    foo = myArray[ index ];
>
> while not supporting being able to write:
>
>    myColor += otherColor;
>
> instead of:
>
>    myColor.Add( otherColor );
>    myColor.Clamp( 0, 0, 0, 255, 255, 255 );
>
>
> > Why this kind of thing isn't already implemented in hardware is beyond me.  It's not needed I guess.  All those buffer overflow problems that are the source of 90% of all the security exploits must be figments of my imagination...
>
>
> That well known brain-damaged CPU architecture called x86 has had hardware bounds-checking available since the 286. Let me ask you, when writing in assembly, do you check the index every time you do an indexed memory access?
>
>
> > Missing semicolons are the #1 bugaboo of all C programmers no matter what their level of experience.  Why not remove their requirement?
>
>
> Missing semicolons don't bother me. The compiler bitches, I put the semicolon in, the problem is solved. 99% of the time, if I left the semicolon out, it's because I was rewriting code in a hurry and it's just as well that the compiler is making me take a second look.
>
>

You need to be able to separate statements....semicolons are
fine. I actually hate using newlines as separators.
Almost just as much as I hate if..fi and (*Aargh...annoyed
crowd attacks poster that takes "redundancy is A Good Thing"
just a tad too far.... :) )


> > Unfortunately, members of the C religion are far too deep into the Plauktau - the blood fever - to listen to reason.
> >
> > Landru - guide us!
> >
> > All is chaos!
>
>
> Yeah, thank goodness we have level headed pragmatic rationalists around to save us from the C zealots.
>
> -Russell Borogove
>

:)


--
Stijn
OddesE_XYZ@hotmail.com
http://OddesE.cjb.net
__________________________________________
Remove _XYZ from my address when replying by mail



February 05, 2002
OddesE <OddesE_XYZ@hotmail.com> wrote in message news:a3n5l3$20ts$1@digitaldaemon.com...
> I can't say I really like all those abbreviations though...
> To me they are a source of errors.
> I still hate if...endif, if..fi, do..done, repeat..until (although
> that one not as much) and even pascal's begin..end.
> {..} is just good enough. If you really want to show what
> line the closing } belongs to, just copy it and put it there
> as a comment.

Exactly <WRONG>.  Comments aren't types.  You might as well argue that variable types aren't needed: just put a comment on your variant so you know what type it is.

If block typing is not checked by the compiler, then there is <NO> block typing.


OddesE repeats:
> Nothing is a good thing at 300 baud. Buy a new modem.
> Redundancy in communication with/to humans definitely is
> A Good Thing (tm). D has a very good point. I still hate
> if..fi, if..endif, while..endwhile, do..done....Oh sorry, getting
> redundant again... :)

Exactly <WRONG>.  Comments aren't types.  You might as well argue that variable types aren't needed: just put a comment on your variant so you know what type it is.

If block typing is not checked by the compiler, then there is <NO> block typing.



> > I think C dogma rejects alterations in the syntax because of investment in existing codebases.
>
OddesE writes:
> Which does not matter for D. I still hate if..fi, oh...did I already say that? :)

Actually it does matter.  But that issue can be solved for the most part by translating C to D.  The syntactic changes I have mentioned typically can be implemented by something only slightly more complex than a "pretty print" reformat utility.  It's simply a matter of omitting opening brackets for most blocks, remembering which block you are in, and replacing its terminating bracket with the appropriate block-ending keyword.
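A sketch of the bookkeeping such a translator would need (the end-keywords chosen here -- endif, endwhile, endfor -- are my own illustration, not part of any actual D proposal):

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch of the reformatter's core: a stack records which kind
 * of block each opening bracket started, so the matching closing
 * bracket can be replaced with the appropriate end-keyword. */
#define MAXDEPTH 64

static const char *block_stack[MAXDEPTH];
static int block_depth = 0;

/* Map a block-opening keyword to its closing keyword. */
static const char *closer_for(const char *opener) {
    if (strcmp(opener, "if") == 0)    return "endif";
    if (strcmp(opener, "while") == 0) return "endwhile";
    if (strcmp(opener, "for") == 0)   return "endfor";
    return "end";                     /* fallback: function bodies etc. */
}

/* Called when the translator sees "keyword ... {" in the C source. */
static void open_block(const char *keyword) {
    block_stack[block_depth++] = keyword;
}

/* Called on "}": returns the keyword to emit instead of the bracket. */
static const char *close_block(void) {
    return closer_for(block_stack[--block_depth]);
}
```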


> > > It is not relevant if the underlying CPU or environment supports 8 bit integers or not. Such variables can be constructed with 16 or 32 bit registers as quite easily.  Program correctness must trump efficiency.
> >
> > All sweeping generalizations are false.
> >

OddesE writes:
> "All sweeping generalizations are false."
> The ironic humor being that this statement is then also false... :)
> But it is true of course. Without efficiency some programs
> just can't exist, period.

  Then use assembler.





February 05, 2002
Russell Borogove <kaleja@estarcion.com> wrote in message
> Let me start by saying that C is like MS Windows, or like democracy. It's pretty much the most loathsome choice imaginable -- except for any of the others.

C is the DOS of programming languages.  It's long past due for something rational.


> > I am also turned off by the lack of low level control that most high level languages provide.

Russell Borogove <kaleja@estarcion.com> wrote in message
> I am skeptical of the ability of any language to provide both the safety that you seem to be calling for in most of your posts, and more low level control simultaneously.

   Low level control can be handled by specifying a "virtual cpu", providing
the HLL constructs needed to interface to its assembly language, and allowing
that assembly language inline.  I am not suggesting that D provide such a
feature at this time, but it should be in the back of the author's mind.

Outside of that, the low level functionality excluded from C includes any guarantee that data is stored as a sequence of binary digits, with signed numbers in modulo (two's complement) form.  Also missing is the ability to select between 1, 2, 4, and 8 byte signed and unsigned integers, to perform efficient rolls of those integers, etc.
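As it happens, C99 later added exact-width integer types in <stdint.h>, but a roll (rotate) still has no operator; the portable idiom is a pair of shifts, sketched here:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate a 32-bit unsigned value left by n bits. The masks keep the
 * shift counts in range, so the behavior is defined even for n == 0
 * or n >= 32. */
static uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}
```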


> > These limitations are mostly a result of K&R (may they burn eternally in hell), defining a language that appeals to the lowest common denominator among all machines.
>
>  > [snip]
>  > It was foolish of K&R to create a median level language for
>  > low level coding while defining it in such a manner as to make the
>  > production of low level code, outside the sanctioned scope of the
>  > language.

Russell Borogove <kaleja@estarcion.com> wrote in message
> I assume K&R's view on this was roughly "if you want assembly language, you know where to find it." At the time, the stuff that had to be low level would be written in assembly, and C would provide some small relief from the tedium of writing everything else in asm. That C has been applied, since, to things it might not be optimally suited for is hardly K&R's fault.

  K&R created filth.  They are responsible for the filth.  It's as simple as
that.


> > Portability is often raised as a reason for the existence of high level languages, but this is not a legitimate argument since low level languages can be made to be just as portable as high level ones.  Java byte code and other languages that generate intermediate code or Pcode have repeatedly proven this.  JIT compilers are the final nail in the coffin.  Portability is a feature of any language, and not as it is so often claimed by members of the C/C++ religions, a characteristic of high level languages.

Russell Borogove <kaleja@estarcion.com> wrote in message
> As long as we're on the topic of Java, is there some reason it's not suitable for you?

Java is essentially C++.  C++ is filth.  Hence Java is filth.


> > Consider that C omits block typing.  Those structured languages that don't descend from C typically identify different kinds of blocks with different bracketing keywords.  Do/While, If/Endif, Repeat/Until etc.
>
>  > [snip]
>

Russell Borogove <kaleja@estarcion.com> wrote in message
> It's entirely possible that K&R were aware of the argument you
> make, and rejected it on the grounds that, first, it's at least
> as common to have a tower of nested "for"s or nested "if"s, which
> don't benefit from your strategy: if (a) if (b) if (c) if (d)
> else (f) if (g) foo; endif endif endif endif endif -- and secondly,
> they were working in an environment where limited storage space
> and the nature of the teletype encouraged the evolution of extremely
> abbreviated syntax -- cf. all the two-letterisms of Unix.

  Their stated justification was that they wished to produce a language with
the minimum number of tokens.  If code size was their concern, then it is
clear to me that their requirement for indentation to make the code readable
would have been contradictory to their goals.  I am forced to conclude, then,
that if code size was their concern, they are still fools for having
implemented the C language structure.


> > Yes, in an "if then" statement the keyword "then" is a redundancy. Redundancy is a very good thing.

Russell Borogove <kaleja@estarcion.com> wrote in message
> Not at 300 baud.

  Even at 300 baud.  At 30 characters per second, it would take an extra
1/10th of a second to replace } with next.

Is that a brace I typed, by the way?  I can't tell with any of the Windows fonts. Snicker.

With 25/2 lines of nexts instead of braces, that represents an extra 1.5 seconds per screen.  Whoop-de-do!


> > The more cryptic a language the more likely errors will be made.  The C "religion" accepts that C is superior to assembler because the C syntax is more readable and more convenient to use than assembler.  I agree.  However, C dogma then goes on to reject alterations in the syntax that would provide further benefits to readability and every other language convenience on the grounds that they don't add any value to the language itself.  Clearly this dogma is false.

Russell Borogove <kaleja@estarcion.com> wrote in message
> I think C dogma rejects alterations in the syntax because of investment in existing codebases.

  The syntactic changes I have proposed can for the most part be made through
simple code translation that isn't much more complex than that used by print
formatting utilities.  The existing code base can be converted in an
automated manner.



> > It is not relevant if the underlying CPU or environment supports 8 bit integers or not. Such variables can be constructed with 16 or 32 bit registers as quite easily.  Program correctness must trump efficiency.

Russell Borogove <kaleja@estarcion.com> wrote in message
> All sweeping generalizations are false.

Time flies like an arrow. Fruit flies like a banana.



Russell Borogove <kaleja@estarcion.com> wrote in message
> Some of us don't have the luxury of taking more than 16.6 ms to finish one pass through the main loop of our program.

  The time it takes will be highly dependent on the machine you are running
the program on.  If you are writing machine specific code that may not run
on alternate platforms, then you have already violated C portability
standards.

   In the case of the language syntax I have suggested, you would select the
variable size appropriate for the platform you are targeting.

   Quite simple really.  What is it that you don't understand?
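The point above can be sketched in C: on a machine with only wider registers, an 8-bit unsigned variable can be emulated by masking back to 8 bits after each operation (the helper name here is mine, purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Emulate 8-bit unsigned addition using a wider register: compute at
 * native width, then mask so the result wraps modulo 256 exactly as a
 * true one-byte integer would. */
static uint32_t add8(uint32_t a, uint32_t b) {
    return (a + b) & 0xFFu;
}
```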


> > Of course you can implement bounded pointers through overloading, but who actually does it?

Russell Borogove <kaleja@estarcion.com> wrote in message
> Well, I do it, in any language wherein fascists aren't telling me that operator overloading is _dangerous_. Fortunately, Walter has "overloaded" the traditional C/C++ syntax to support bounds checking in D.

  Overloading the syntax is insufficient since the variables are not native
to the language.  Hence there is no direction to the hardware guys to
implement them at the register level where they belong.

  As to operator overloading, you have already lost that argument multiple
times.




Russell Borogove <kaleja@estarcion.com> wrote in message
> Why do you <not> support operator overloading to avoid having to write:
>
>    if (myArray.Check( index ))
>    {
>       foo = myArray.GetElement( index );
>    }
>    else
>    {
>       throw ArrayBoundsException;
>    }
>
> instead of
>
>    foo = myArray[ index ];
>
> while not supporting being able to write:
>
>    myColor += otherColor;
>
> instead of:
>
>    myColor.Add( otherColor );
>    myColor.Clamp( 0, 0, 0, 255, 255, 255 );

Clearly because range restricted pointers are best implemented as an internal type rather than a derived class.

I'm not interested in hiding failure and sloth behind syntactic candy.



> > Why this kind of thing isn't already implemented in hardware is beyond me.  It's not needed I guess.  All those buffer overflow problems that are the source of 90% of all the security exploits must be figments of my imagination...
>


Russell Borogove <kaleja@estarcion.com> wrote in message
> That well known brain-damaged CPU architecture called x86 has had hardware bounds-checking available since the 286. Let me ask you, when writing in assembly, do you check the index every time you do an indexed memory access?

  Segments do not provide adequate bounds checking since segments are
typically very large.  So essentially there is no effective bounds checking
on pointer operations.

  And yes, in those cases where I am not iterating over a loop where the
loop counter is set to the size of the buffer I am working on, I check
bounds every time the pointer changes value.

  It would be much more efficient to simply load the pointer register with
the initial pointer and the start/end of the allowed range, and then
iterate, with the pointer automatically checked on each update for a range
violation.  At the CPU level, the comparisons can be performed in parallel
with the operations, and since all range violations can be considered
critical program errors, there is no requirement that the check raise its
exception before the operation completes; it can fault after the fact.
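In software today, the bounded pointer described above can only be approximated as a struct that carries its legal range and checks every update; the names and the abort-on-violation policy here are illustrative, not any real hardware or library interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* A pointer bundled with the [lo, hi) range it may legally address. */
typedef struct {
    int *cur;
    int *lo;
    int *hi;
} bounded_ptr;

static bounded_ptr bp_make(int *base, size_t len) {
    bounded_ptr p = { base, base, base + len };
    return p;
}

/* Move the pointer; any step outside the range is treated as a
 * critical program error, mirroring the proposed hardware fault. */
static void bp_advance(bounded_ptr *p, ptrdiff_t delta) {
    int *next = p->cur + delta;
    if (next < p->lo || next >= p->hi)
        abort();
    p->cur = next;
}

static int bp_read(const bounded_ptr *p) {
    return *p->cur;
}
```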



> > Missing semicolons are the #1 bugaboo of all C programmers no matter what their level of experience.  Why not remove their requirement?

Russell Borogove <kaleja@estarcion.com> wrote in message
> Missing semicolons don't bother me. The compiler bitches, I put the semicolon in, the problem is solved. 99% of the time, if I left the semicolon out, it's because I was rewriting code in a hurry and it's just as well that the compiler is making me take a second look.

In other words, you omit semicolons regularly and then are forced to recompile due to an error that need not have occurred in the first place if the semicolon requirement were not there.

Thank you for illustrating my point so clearly.



> > Unfortunately, members of the C religion are far too deep into the Plauktau - the blood fever - to listen to reason.
> >
> > Landru - guide us!
> >
> > All is chaos!

Russell Borogove <kaleja@estarcion.com> wrote in message
> Yeah, thank goodness we have level headed pragmatic rationalists around to save us from the C zealots.

  We have not been saved from the C Zealots.  If we had been, the world
wouldn't be flooded by buggy software written in C/C++ by the brainwashed
code monkeys.

  By the way, I just clipped the above paragraph and pasted it into my text
editor.  Two-thirds of the text was missing.

  C/C++ are among the worst languages ever written.

  I recommend immediate replacement.





February 05, 2002
>>Yeah, thank goodness we have level headed pragmatic rationalists
>>around to save us from the C zealots.
>>
> 
>   We have not been saved from the C Zealots.  If we had been, the world
> wouldn't be flooded by buggy software written in C/C++ by the brainwashed
> code monkeys
> 
>   By the way, I just clipped the above paragraph and pasted it into my text
> editor.  2/3's of the text was missing.
> 
>   C/C++ are among the worst languages ever written.
> 
>   I recommend immediate replacement.

So, where is _YOUR_ language and compiler????

February 05, 2002
"D" <s_nudds@hotmail.com> wrote in message news:a3nu2r$2ecd$1@digitaldaemon.com...

> Java is essentially C++.  C++ is filth.  Hence Java is filth.

ROTFLMAO!!!


> Even at 300 baud.  At 30 characters per second, it would take an extra 1/10th of a second to replace } with next.
>
> Is that a brace I typed, by the way?  I can't tell with any of the Windows fonts. Snicker.

Write programs in full-screen text mode. It's the way all true coders do this - and so do I =)



