September 09, 2001
> The distinction between slots and methods is stupid. Doing foo.x should be defined to be equivalent to foo.x(), with lexical magic for ``foo.x = ...'' assignment. Compilers should be trivially able to inline zero-argument accessor methods to be inline object+offset loads. That way programmers wouldn't break every single one of their callers when they happen to change the internal implementation of something from something which happened to be a ``slot'' to something with slightly more complicated behavior.

However, I believe in a principle I call 'context-free reading'. It means that when I look at a function implementation, or even a single line of it, I can say what it will do. I don't have to look at the context in which it occurs, and I don't have to look into other files that actually change the behavior of the code. Getters and setters are nice, but they destroy context-free reading. Foo.a = 5; looked at without context, this can be anything from the obvious, an integer a that gets assigned the value 5, to a function that formats the /dev/hda5 partition.
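To make that concrete, here is a minimal C++ sketch (Foo and its proxy member are invented for illustration) of how an assignment that reads like a plain store can run arbitrary code:

        // Foo's member 'a' is a proxy object, so assigning to it runs
        // Proxy::operator= -- which could do anything at all.
        #include <cstdio>

        struct Foo {
            struct Proxy {
                int value;
                Proxy& operator=(int v) {
                    std::printf("arbitrary side effect before storing %d\n", v);
                    value = v;
                    return *this;
                }
            } a;
        };

        int main() {
            Foo foo;
            foo.a = 5;   // reads like a field write, actually a function call
            return 0;
        }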

And as far as I can see, comparing languages, C implements context-free reading best: a line of code will always do what it says. Macros, of course, violate C's context-freeness; my guess is that that's the very reason so many people hate them.
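A small sketch of the kind of macro meant here (the names reset and do_reset are made up): the call site reads like a harmless by-value call, but the macro silently takes the address, so a changes.

        // The macro hides the address-of operator, so the call site lies
        // about whether its argument can change.
        #include <cstdio>

        static void do_reset(int* p) { *p = -1; }

        #define reset(x) do_reset(&(x))   // '&' is invisible at the call site

        int main() {
            int a = 1;
            reset(a);                      // reads like pass-by-value
            std::printf("a = %d\n", a);    // prints a = -1
            return 0;
        }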

C++ also violates context-free reading, the obvious example being operator overloading. I have nothing against special operators, as long as I can see at the point of use that there is actually a hidden call. Concatenating two strings with something like:

        A :=: B :+: C;

is fine with me, since it's obvious that this is not normal arithmetic. I can understand what this code will do, and roughly what the assembler will look like, without having to look at the classes A, B and C.
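By contrast, with ordinary C++ operator overloading the hidden call is invisible at the use site. A minimal sketch (using std::string just for illustration):

        // At the use site the concatenation is indistinguishable from integer
        // arithmetic, yet it hides calls to operator+ and std::string's
        // constructors, plus a heap allocation.
        #include <string>

        int main() {
            std::string B = "hello, ";
            std::string C = "world";
            std::string A = B + C;   // reads like arithmetic, is a hidden call
            return 0;
        }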

Reference parameters or out parameters also destroy context-free reading in my sense. One doesn't usually expect a function argument to be changed.

Take the following C++ code:

        int a = 1;
        check(a);
        if (a > 0) {
                // do you think this will be executed?
        }

Wrong: check could declare its first parameter as a reference, or as an out parameter. One cannot see this at the call without looking at how check is declared.

By contrast, in C:

        check(&a);

It's obvious at first sight that a can be altered by the callee. That's why, for my project, I stuck with this style of reference parameter for calls. I know the compiler doesn't need it, but the programmer does, and even more so the maintainer or debugger of source somebody else has written will appreciate it a lot.

'Reference' does not mean exactly the same as 'address of', although it resembles it about 90%. The difference: the reference of a reference parameter is still the same reference, whereas the address of an address is something different.
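A small C++ sketch of that distinction, assuming nothing beyond standard reference and pointer semantics:

        // Taking the address of a reference parameter yields the address of
        // the original object; a pointer is its own object, so &p differs.
        #include <cstdio>

        static void by_reference(int& r) {
            std::printf("&r = %p\n", (void*)&r);             // same as &a in main
        }

        static void by_pointer(int* p) {
            std::printf(" p = %p, &p = %p\n", (void*)p, (void*)&p);   // &p is new
        }

        int main() {
            int a = 1;
            std::printf("&a = %p\n", (void*)&a);
            by_reference(a);
            by_pointer(&a);
            return 0;
        }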

- Axel

September 10, 2001
"Axel Kittenberger" <axel@dtone.org> wrote in message news:9ngeo2$1lal$1@digitaldaemon.com...

> > The distinction between slots and methods is stupid. Doing foo.x should be [...]

> However, I believe in a principle I call 'context-free reading'. It means that when I look at a function implementation, or even a single line of it, I can say what it will do. I don't have to look at the context in which it occurs, and I don't have to look into other files that actually change the behavior of the code. Getters and setters are nice, but they destroy context-free reading. Foo.a = 5; looked at without context, this can be anything from the obvious, an integer a that gets assigned the value 5, to a function that formats the /dev/hda5 partition.

I have heard that argument before, and I don't buy it. There is no language construct, no language even, that can save your butt if your co-programmers are malicious morons. *Whenever* you use an object, with or without properties, you look at the public interface, temporarily ignore the implementation, and just assume that the functions do something related to their names & the docs. Even if you do read the implementation, you don't have an absolute guarantee that it will stay the same, just a vague assurance that even if the implementation changes, it will try to reach the same 'goal'.

A settor invoked when you set foo.a that does some action unrelated to storing a value is just bad code. If foo.a = 5 or foo.setA(5) reformats the HD, then the public interface of that class is just wrong. If your property use destroys your ability to read what the class is doing, then you are using properties wrong. I don't think it's hard - I've seldom if ever seen anyone get it badly wrong in practice. My practical experience of using properties in Delphi for several years is that (all other things being equal & co-workers being relatively sane) using public properties makes your code cleaner & easier to read, even if you only look at the class interface.

OO programming is partly about letting the implementation change to better support a public interface without the users of that interface having to know about it. Properties help with this by allowing you to add or revise gettor & settor methods, but they cannot protect you from a bad interface. But then nothing can - except good design.
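For illustration only, a minimal sketch (an invented Rectangle class, in C++ rather than Delphi) of an accessor pair that behaves the way a reader of the public interface would expect:

        // The accessor pair validates and stores, nothing more, so callers can
        // keep thinking of 'width' as if it were a plain field.
        class Rectangle {
        public:
            int  width() const   { return width_; }              // as cheap as a load
            void setWidth(int w) { width_ = (w < 0) ? 0 : w; }    // validate, then store
        private:
            int width_ = 0;
        };

        int main() {
            Rectangle r;
            r.setWidth(-3);       // clamped to 0 inside the setter
            return r.width();     // returns 0
        }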

Anthony




September 10, 2001
> I have heard that argument before, and I don't buy it. There is no language construct, no language even, that can save your butt if your co-programmers are malicious morons. *Whenever* you use an object, with or without properties, you look at the public interface, temporarily ignore the implementation, and just assume that the functions do something related to their names & the docs. Even if you do read the implementation, you don't have an absolute guarantee that it will stay the same, just a vague assurance that even if the implementation changes, it will try to reach the same 'goal'.

However, it's the same with a function call: strcat() can also format the hard disk. But I will still know that this is a function call; I know that it will push stuff on the stack, jump elsewhere and return. Unless the function is inlined, of course, but the result is the same. I still want to see calls at first sight, not have them hidden behind something that looks like a field access; it just makes debugging in hard cases difficult. And by hard cases I mean when you're experimenting with new hardware that doesn't yet run 100%. In some worst-case situations (and they do happen) you end up stepping through the generated assembler instruction by instruction, until you find an instruction that is not handled by the hardware, because, for example, these two instructions in combination trigger a silicon bug, or create a bus burst that corrupts the memory contents, reads the next instruction into the cache incorrectly, or whatever. But even with -O2 optimization turned on, viewing the assembler in one window with the C source in the other, one can easily draw the lines between the two in one's mind. Having a call hidden behind a plain Foo.x destroys this completely.

That's the reason why C became a systems language, and this is the purpose it was created for, and I would expect something that calls itself its successor to handle this the same way, or to call itself something different.

- Axel
September 10, 2001
Axel Kittenberger wrote:

> However, it's the same with a function call: strcat() can also format the hard disk. But I will still know that this is a function call; I know that it will push stuff on the stack, jump elsewhere and return. Unless the function is inlined, of course, but the result is the same. I still want to see calls at first sight, not have them hidden behind something that looks like a field access; it just makes debugging in hard cases difficult. And by hard cases I mean when you're experimenting with new hardware that doesn't yet run 100%. In some worst-case situations (and they do happen) you end up stepping through the generated assembler instruction by instruction, until you find an instruction that is not handled by the hardware, because, for example, these two instructions in combination trigger a silicon bug, or create a bus burst that corrupts the memory contents, reads the next instruction into the cache incorrectly, or whatever. But even with -O2 optimization turned on, viewing the assembler in one window with the C source in the other, one can easily draw the lines between the two in one's mind. Having a call hidden behind a plain Foo.x destroys this completely.

This really seems like a serious double-standard in your logic.  You're assuming a situation where the user is doing some really serious low-level debugging of hardware.  This implies someone who has a very in-depth knowledge of the system and of computers in general.

At the same time, you assume that this programmer is not aware of gettors and settors in D, or if he is, that he didn't take the time to look at the definition of the class.

If you are doing something as low-level as you are describing, then you should not be using weird class libraries you don't know well (or whose implementation somebody is still modifying), nor should you be using a language that you are not an expert in.  The whole idea of the programmer being "surprised" simply indicates that he should have used another language that he knew better...be it C, assembly, or whatever.

This same argument could be used against inlined functions, macros, typedefs, or even structs.  If the programmer doesn't know what the language does, he shouldn't be using it for this kind of debugging.  If he doesn't understand how the compiler works, he shouldn't be trying to step through the assembly that it generated.

September 11, 2001
> This really seems like a serious double-standard in your logic. You're assuming a situation where the user is doing some really serious low-level debugging of hardware. This implies someone who has a very in-depth knowledge of the system and of computers in general.
>
> At the same time, you assume that this programmer is not aware of gettors and settors in D, or if he is, that he didn't take the time to look at the definition of the class.
>
> If you are doing something as low-level as you are describing, then you should not be using weird class libraries you don't know well (or whose implementation somebody is still modifying), nor should you be using a language that you are not an expert in. The whole idea of the programmer being "surprised" simply indicates that he should have used another language that he knew better...be it C, assembly, or whatever.
>
> This same argument could be used against inlined functions, macros, typedefs, or even structs. If the programmer doesn't know what the language does, he shouldn't be using it for this kind of debugging. If he doesn't understand how the compiler works, he shouldn't be trying to step through the assembly that it generated.

Actually, the class Foo does not need to be a standard class, and very often you're debugging code someone else has written. It's just as you said: with getters & setters you need to know how Foo is constructed to see what it turns into in assembler; in the context-free paradigm you don't need to. It's a paradigm, like it or not. I think it's important for a) huge projects where you can never have a global view of things, and b) projects more than one person works on; for one-man projects everything changes significantly.

And well, one can understand C very well and be an expert in it, but you're assuming that when I, for example, debug the Linux kernel, I also have to have global expert knowledge of it. Actually, that's not the case: in C you can just debug the part that causes problems, without knowing the surroundings or having to look at other implementations elsewhere in the code.

- Axel
September 11, 2001
I agree with you here, though it seems like a contradiction.  Your previous post (the one I was responding to) was presuming that you were debugging *hardware*. That's much lower level than even an OS kernel, much less complex software written by somebody else.  IMHO, at that (hardware) level, it's reasonable to expect that the debugging programmer knows the software intimately.
