October 19, 2001
In article <9qp606$i8b$1@digitaldaemon.com>, "Axel Kittenberger" <axel@dtone.org> wrote:

>> Why stop at assert?  Why not have a modifier for function parameters which says that, whenever the function is called, the expression used to generate that parameter can't have side effects?  This would be useful for writing your own assert handler, and perhaps for other issues such as thread safety and order of expression evaluation.
> 
> Why stop even here? Why not make a language that does not allow any
> stupid side effects at all :)))))
> 
> => ending up in the Java-Pascal line again? :o)

I am thinking of a cross between Ada, Python and C -- preferably the best bits from each. :)

I don't like the overall feel of Java.
October 19, 2001
> Why stop even here? Why not make a language that does not allow any stupid
> side effects at all :)))))
> 
> => ending up in the Java-Pascal line again? :o)

When I think more about statements with side effects, looking back today on 20-odd years of C, I think one can say that side effects like ++, --, += and all the rest were stupid ideas and have brought nothing but problems in practice.

I think a clean language should either forbid these constructs altogether (like Java or the Pascal family) or have defined behavior for things like:

   a[i]=i++;
   printf("%d %d", i++, i);
   std::cout << (i *= 2) << (i *= 2);
   ... etc...

Or guarantee a compiler error. The problem is that the expressions can sometimes be so complicated that it is nearly impossible to track all the dependencies between sequence points. GCC nowadays produces a warning in many situations, but still cannot catch them all :(
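
For instance, with defined behavior the compiler would have to commit to (and document) exactly one reading of a[i] = i++. A sketch, assuming i == 1 on entry (today this is formally undefined in C; in practice a compiler produces one of these two):

   a[1] = 1;  i = 2;    /* index read before the increment */
   a[2] = 1;  i = 2;    /* index read after the increment  */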

-- 
|D) http://www.dtone.org
- Axel
October 20, 2001
Axel Kittenberger wrote:
> 
> > Why stop even here? Why not make a language that does not allow any stupid
> > side effects at all :)))))
> >
> > => ending up in the Java-Pascal line again? :o)
> 
> When I think more about statements with side effects, looking back today on 20-odd years of C, I think one can say that side effects like ++, --, += and all the rest were stupid ideas and have brought nothing but problems in practice.

	Why stop here.  Get rid of all operators.  There is NO ambiguity in
lisp!  When you look at the source, you're lookin' at the parse tree.
	For what it's worth, there are schools of thought that say there should
not be any assignment operator.  There are others that believe that x+=y
is better than x=x+y since it better resembles an accumulator.

Dan
October 20, 2001
Russell Borogove wrote in message <3BCB289D.7375676A@estarcion.com>...
>If your systems are that sensitive to timing, then aren't you going to run into all sorts of problems down the road, when your model Foo microcontrollers are phased out by the company that makes them and replaced with the Foo-Plus-Turbo models which are binary-compatible but 2 to 5 times faster depending on the instruction mix?


Back when I did some hardware design, the rule of thumb was to design it so that replacing parts with faster ones would work. It was ok for it to fail if you plugged in slower ones.


October 21, 2001
> Why stop here.  Get rid of all operators.  There is NO ambiguity in lisp!

Pah! Sarcasm, that's a bad move in a discussion. :o(

Operators are something of a convenience; I know you don't need them, but they make things easier to read.

> When you look at the source, you're lookin' at the parse tree.
> For what it's worth, there are schools of thought that say there should
> not be any assignment operator.  There are others that believe that x+=y
> is better than x=x+y since it better resembles an accumulator.

That += argument is so old that it is no longer valid. In K&R C around 1980 it may have mattered whether you wrote x+=y or x=x+y, since compilers in those days emitted assembly directly during parsing, with no syntax tree as an intermediate step; but today -every- compiler can see from the syntax tree how to compile x=x+y optimally.

October 21, 2001
Axel Kittenberger wrote:
> 
> > Why stop here.  Get rid of all operators.  There is NO ambiguity in lisp!
> 
> Pah! Sarcasm, that's a bad move in a discussion. :o(
> 
> Operators are something of a convenience; I know you don't need them, but they make things easier to read.

	I stand by my sarcasm.  I suspect some lisp fanatics would back me
here.  (They might not even be sarcastic when they say it.)

> > When you look at the source, you're lookin' at the parse tree.
> > For what it's worth, there are schools of thought that say there should
> > not be any assignment operator.  There are others that believe that x+=y
> is better than x=x+y since it better resembles an accumulator.
> 
> That += argument is so old that it is no longer valid. In K&R C around 1980 it may have mattered whether you wrote x+=y or x=x+y, since compilers in those days emitted assembly directly during parsing, with no syntax tree as an intermediate step; but today -every- compiler can see from the syntax tree how to compile x=x+y optimally.

	As an optimization, += is not needed.  That still does not change the
fact that it provides the abstraction of an accumulator better than the
alternative.
	Even with optimizers, there is still an important difference between +=
and the alternative if the lvalue contains a function call.

	*(f(x)) = *(f(x)) + y
	*(f(x)) += y

These two lines could be very different.
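	For instance, suppose f() has a side effect of its own (a made-up
sketch; f() hands out slots from a buffer and advances a cursor on each
call):

	static int buf[16];
	static int cursor;

	int *f(int x)
	{
		return &buf[cursor++ + x];  /* side effect: moves the cursor */
	}

With that f(), the first line calls f() twice, so it reads one element and
writes a different one (and which is which is unspecified), while the second
line calls f() once and updates a single element.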
	Lastly, += style operators may not be needed (and I won't commit to
that for now) but at the very least, they are something of a convenience
and, for those of us who know the language, they make things easier to
read.
October 21, 2001
> *(f(x)) = *(f(x)) + y
> *(f(x)) += y
> 
> These two lines could be very different.

Are they?

There are two possibilities to consider: either f() depends on global variables that it also changes, or it does not. If it does, the first expression is invalid in C anyway; you may not rely in that case on any particular order in which f() is called (or on whether it is called twice at all). If it does not depend on global variables that it changes (which the compiler could assume, since otherwise the statement would be invalid), then it does not matter whether it is called twice or once, and since x is equal in both cases the compiler is also free to call f() only once.

Actually, for cleanliness f() should have some 'const' attribute or other contract stating that it does not depend on or change global variables. In my opinion, if it does, an error should be raised either way; if it does not, there is no difference between x = x + y and x += y.
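
GCC already has a function attribute along these lines, for example (just a sketch, not a proposal for D's syntax):

   /* 'const' promises that f() neither reads nor changes global state,
      so the compiler may fold repeated calls with the same argument
      into a single call. */
   int *f(int x) __attribute__((const));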

And after all, which is easier to understand, in your opinion?

*(f(x)) += y;

or, for example:

int &p = *f(x);
p = p + y;

> Lastly, += style operators may not be needed (and I won't commit to that for now) but at the very least, they are something of a convenience and, for those of us who know the language, they make things easier to read.

I have worked with C for years now, and honestly, in my opinion they do not make things easier to read when you are reading -other people's- code. Variables that change in places where you do not expect them to can make code horrific to understand, especially when the changes are combined with && and || tokens.

Something, somewhere, in a context like this:
(a++ == 1) && (x *= 2);

What does it do?  Written out in plain statements:

if (a == 1) {
   x = x * 2;
}
a = a + 1;

I've nothing against shortcut operators if people think it's worth the three characters saved in typing. But I spoke against side-effect operations, i.e. any assignment operation buried inside other expressions or inside function calls.

This is something I would also have reservations about, no matter what the operator syntax looks like:

fprintf(stdout, "%d", x = x + 1);

- Axel
-- 
|D) http://www.dtone.org

October 21, 2001
a wrote:

> Axel Kittenberger wrote:
> > That += argument is so old that it is no longer valid. In K&R C around 1980 it may have mattered whether you wrote x+=y or x=x+y, since compilers in those days emitted assembly directly during parsing, with no syntax tree as an intermediate step; but today -every- compiler can see from the syntax tree how to compile x=x+y optimally.
>
>         As an optimization, += is not needed.  That still does not change the
> fact that it provides the abstraction of an accumulator better than the
> alternative.
>         Even with optimizers, there is still an important difference between +=
> and the alternative if the lvalue contains a function call.
>
>         *(f(x)) = *(f(x)) + y
>         *(f(x)) += y
>
> These two lines could be very different.
>         Lastly, += style operators may not be needed (and I won't commit to
> that for now) but at the very least, they are something of a convenience
> and, for those of us who know the language, they make things easier to
> read.

In general, for any LHS that has a side-effect, the extended assignment operators are very useful.  They serve to avoid adding intermediate variables merely to bypass the side effects (variables that many compilers fail to optimize away).

This especially applies when accessing hardware registers!  If you've ever written a low-level device driver, you'd know.
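
For instance, setting a single bit in a memory-mapped control register (a sketch; the register address and bit mask are invented for illustration):

#define DEV_CTRL (*(volatile unsigned char *)0x40001000)

void set_ready_bit(void)
{
    DEV_CTRL |= 0x01;   /* one expression, one read-modify-write, no named temporary */
}

void set_ready_bit_verbose(void)
{
    unsigned char tmp = DEV_CTRL;   /* intermediate variable the compiler may not optimize away */
    DEV_CTRL = tmp | 0x01;
}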

In several compilers, you can flag such expressions as atomic, so their execution cannot be interrupted partway through.  This simple enhancement eliminates the need for many hard and soft mutexes (for SMP and multi-threaded code) and minimizes the time that interrupts have to be disabled.  I like it when the compiler can help me write low-latency, thread-safe and highly reentrant code.  Consider it a "wish-list" item for the complex assignment operators in D!

Again, this is all based on D aiming to be not just a general-purpose language, but also an excellent low-level systems programming language, while being easier, safer and more powerful than C, C++ and Java.

I am aware of no other language under active development with these goals, with the possible exception of the EC++ standardization effort (in which I was a minor participant).  The goal there was to find the right balance between C and C++, optimized for embedded systems.  It was initiated by the Japanese car manufacturers, and soon spread world-wide.  The process involved in creating the language had the explicit limitation that it could NOT create a "new" language: It had to be a strict subset of ANSI C++, so it would be guaranteed to compile under any C++ compiler.

That means they could only delete things from the C++ spec to create the EC++ language spec, though there was also a lot of work on creating versions of the STL and the other standard libraries that were optimized for EC++.  P.J. Plauger's Dinkumware company has produced both free and commercial versions of these libraries, and they help make EC++ SCREAM in embedded environments!

At least two compilers have provided some form of support for EC++:  GCC and Green Hills.

However, the EC++ effort has failed to obtain a huge following, and the reasons are obvious:

1. So much of C++ is "anti-real-time" and "anti-embedded" that some of the most powerful features of C++ had to be left behind.  However, as compilers get smarter, and more effective and efficient implementation strategies become available, EC++ is expected to backtrack on some of its earlier "butchering" of the C++ spec.  This presently means that it is very hard to train a C++ person to become a good EC++ programmer.

2. Very few real-time and embedded (RT&E) systems are designed and built from an OO perspective.  There are many reasons for this:  There are few tools that are truly useful for supporting OOA&D for RT&E systems.  This is slowly changing, but I have yet to see a tool or suite I'd recommend for the systems I've had to create.  Also, many RT&E engineers were educated before the OO "boom" of the late 80's and early 90's.  Furthermore, today's CS curriculum is not producing many engineers of the kind needed to create RT&E systems.  This two-way education gap has to close before OO will become common in RT&E systems.  As senior RT&E engineers retire, it is becoming ever harder to fill their positions (believe me, I know!).

So, things need to change.  We need to bring the power of OO to systems and RT&E programming.  IMHO, the D language may well be the best effort in that direction I've seen since the EC++ effort.

And that's one (long) reason why I want to keep the complex assignment operators!


-BobC


October 21, 2001
Axel Kittenberger wrote:

> > *(f(x)) = *(f(x)) + y
> > *(f(x)) += y
> >
> > These two lines could be very different.
>
> Are they?
>
> There are two possibilities to consider: either f() depends on global variables that it also changes, or it does not.

Careful here.  You're thinking single threaded...and for a guy like me, who hopes to use D as the basis for a multithreaded library, that's fatal! :(

f() could depend on volatile global data that f() does not change.  It grabs a lock, reads it, and by the time that the 2nd function call comes, another thread has grabbed the lock and changed the underlying data.
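
A rough sketch of that hazard, with made-up names and a pthreads-style lock just for illustration:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int table[16];
static int current;          /* updated by some other thread */

int *f(int x)
{
    int *p;
    pthread_mutex_lock(&lock);
    p = &table[current + x]; /* reads shared state, changes nothing */
    pthread_mutex_unlock(&lock);
    return p;
}

In "*(f(x)) = *(f(x)) + y" another thread may change 'current' between the two calls, so the element that is read and the element that is written can differ; "*(f(x)) += y" calls f() once and touches a single element.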

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]


October 21, 2001
Hmmm...thought some more, and that last comment seemed hasty.  I apologize if it wasn't thoughtful enough.

Let me qualify it by noting that in C/C++ you can assume that non-volatile variables won't change like that.  But eventually that means that I, with a multithreaded library, pretty much have to declare ALL of my variables volatile, since all of them might be changed by actions on a peer thread.  Thus, a modern language should allow more clarity.

For example, you might want to declare something as "volatile when lock x is not held," which would cover most of what I'm looking for.  It would require that the language be aware of locks, but would be good because it could optimize some things (when it knows that the lock is held) and not optimize others (when the lock is not held).  Imagine this pseudo-syntax:

Lock foo;  // this declares a Lock object
volatile-when-not-held(foo) int bar;
foo.Lock();  // this performs the lock action
while(bar != 0)
{
   foo.Unlock();
   // wait on some signal here
   foo.Lock();
}
baz(bar);  // calls the function
foo.Unlock();

In this example, the only place the compiler may optimize accesses to bar is around the loop test; the lock is held there, so it knows it is legal to treat bar as non-volatile.  However, every time you cycle the lock and unlock on foo, it forgets the cached value of bar and reloads it, assuming that things might have changed.

It would also be good to have an "atomic" keyword.  Some accesses are naturally atomic on some architectures; for places where they are not, you could either raise a compile error or, if it was workable and the implementer decided to do it, have the compiler implement a hidden lock to protect that variable.
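
A rough sketch of that hidden-lock fallback in today's C terms (purely illustrative; the names are made up, and a real compiler would generate this for you):

#include <pthread.h>

static pthread_mutex_t counter_hidden_lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;              /* the would-be "atomic int counter" */

void counter_add(int y)          /* what "counter += y" could lower to on a
                                    target without a suitable atomic instruction */
{
    pthread_mutex_lock(&counter_hidden_lock);
    counter += y;
    pthread_mutex_unlock(&counter_hidden_lock);
}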

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]