May 08, 2008
Nick Sabalausky wrote:
> "Michael Neumann" <mneumann@ntecs.de> wrote in message
> news:fvvd1a$7p3$1@digitalmars.com...
>> terranium wrote:
>>> Michael Neumann Wrote:
>>>
>>>> Another example which leads to hard to read code and potential
>>>> bugs is (ignoring compiler warnings):
>>>>
>>>>      class A
>>>>      {
>>>>        int i;
>>>>
>>>>        //
>>>>        // add some 25 lines of code here
>>>>        //
>>>>
>>>>        void foo(int i)
>>>>        {
>>>>          // what is "i" here?
>>>>        }
>>>>      }
>>>>
>>>> This is solved in Ruby by using a separate namespace for instance
>>>> variables ("@i" for instance variable "i", and "i" for local variable
>>>> "i").
>>> In C family languages this is ruled out by naming convention.
>> Which, in the case of using an m_ prefix, leads to hard(er) to read code.
>> And then there is no standard naming convention, and who actually uses
>> such a naming convention? Without that, you can't easily distinguish a
>> local variable from an instance variable from a global variable.
>>
>
> In "good" C family languages, the instance variable is referred to by
> prefixing it with something like "this.". I think there are some that do it
> differently (ECMAScript, IIRC), but I'd argue those ones are making a big
> mistake.
>
> However, that does bring up an inconsistency inherent to the C-style.
> Following your example, if I do this:
>
> class A{
> int i;
> void foo(int i) {}
> void foo() {}
> void bar() {}
> }
>
> In that case, "i" means one thing if you're in "foo(int)", and another thing
> if you're in "foo()" or "bar()". Of course, you could decide to *always* use
> "this." when referring to an instance variable, but that's kinda long, and
> you still end up with a hidden bug if you decide to use a local var named
> "i" and forget to declare it.

Yeah, you are right. It is inconsistent to define the instance variable
as "i" while accessing it as "@i":

  class A {
    int i;

    void foo(int i) { @i = i; }
  };

But then it's no less inconsistent than using "this.i".

> There are things about Ruby I don't like, but the @instanceVar syntax is one
> of the things I think it got spot-on. I would be totally in favor of
> adopting that.

Btw, I wrote a C++ preprocessor script that, during compilation,
transparently replaces every occurrence of "@" with "this->".
It would be nice to have this built into D directly.
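
Very roughly, the idea is just a textual rewrite like this (an untested
sketch in present-day D rather than the actual C++ script; the regex
approach is only an approximation and naively ignores string literals
and comments):

  import std.regex : regex, replaceAll;
  import std.stdio : stdin, writeln;

  void main()
  {
      // Rewrite "@name" to "this->name". A real tool would also have
      // to skip string literals, comments and the like.
      auto atIdent = regex(`@([A-Za-z_][A-Za-z0-9_]*)`);
      foreach (line; stdin.byLine())
          writeln(replaceAll(line.idup, atIdent, "this->$1"));
  }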

Regards,

  Michael
May 08, 2008
"Michael Neumann" <mneumann@ntecs.de> wrote in message news:48236605.9050806@ntecs.de...
> Yeah, you are right. It is inconsistent to define the instance variable as "i" while accessing it as "@i":
>
>   class A {
>     int i;
>
>     void foo(int i) { @i = i; }
>   };
>

That's not really what I meant (see below for clarification). Actually, I hadn't even thought of that, but it is a good point.

> Btw, I wrote a C++ preprocessor script that, during compilation, transparently replaces every occurrence of "@" with "this->". It would be nice to have this built into D directly.
>

I agree, that would be nice (provided, of course, it didn't interfere with deliberate uses of "@", such as within a string). Unfortunately, in C++ or D it still wouldn't solve the problem of accidentally clobbering the instance variable "i" by intending to use a local var "i" but forgetting to declare it:

// Note: untested
class A {
  int i=0;

  void foo() {
    // Accidentally clobbers "this.i" aka "@i"
    for(i=0; i<77; i++)
      {/* Do stuff */}
  }

  invariant() {
    assert(this.i==0); // Fails after foo() is called
  }
}

I still like the @ thing, though.


May 08, 2008
Janice Caron, on May 8 at 15:51, you wrote:
> 2008/5/8 Leandro Lucarella <llucax@gmail.com>:
> > I don't know what's the
> >  point of putting shitty and unreal examples to prove a point
> 
> You won't win any arguments by calling examples "shitty". All that will achieve is that people who don't want to be insulted will stop debating with you, and D won't change.

Great excuse to not answer a mail with a lot of good points...

Very convenient.

Ok, you are right about "shitty", my apologies for using an inappropriate word. Please remove " shitty and" in my previous mail and tell me what flaws you find in my proposal?

Thank you.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
THE FIRST LITTLE MONKEY OF THE MILLENNIUM...
	-- Crónica TV
May 09, 2008
On 08/05/2008, Leandro Lucarella <llucax@gmail.com> wrote:
>  Please remove " shitty and" in my previous mail and tell me what
>  flaws you find in my proposal?

Sure, but I think it's kinda obvious. In answer to your question: "What's the point of putting unreal examples to prove a point?", the reason is /to prove a point/.

The onus is on you to prove that no examples are ambiguous. I only have to provide a single counterexample. That, I have done, as have many other people. Whether an example is "real" or "unreal" is in the eye of the beholder, but certainly I simplified my examples so as to demonstrate /only/ the point being made. Superfluous information just sidetracks the issue.

I'm not sure there's anything further to say, however. The reason is, when you said "Yes, in my suggestion you need to escape false-newline-end-of-statement", you effectively closed all ambiguities. My arguments were predicated on the assumption that there would be no newline-escaping. That means, your plan /is/ workable - but the price (having to escape newlines) is higher than I want to pay. It becomes a matter of taste. I prefer C-style; you prefer Python-style. Maybe you (or someone else who wants this) can write that conversion tool I mentioned somewhere.

But even with the "you need to escape false-newline-end-of-statement" rule, there is still room for silent bugs to creep in. For example:

    foo(int x)  // Danger!
    {
        /* stuff */
    }

Under your scheme that would have to be rewritten as either

    foo(int x)  {
        /* stuff */
    }

or

    foo(int x)  \
    {
        /* stuff */
    }

or else run the risk of being misparsed as

    foo(int x);
    {
        /* stuff */
    }

which will often be valid D. (Not always, but sometimes - e.g. as an inner function.) So unless you go "all the way" and aim for full Python style, you run the risk of introducing some very hard-to-find bugs.
May 09, 2008
Michael Neumann wrote:
> Don wrote:
>  > Walter Bright wrote:
>  >> Nick Sabalausky wrote:
>  >>> Python's semantically-meaningful indentation was intended to fix the
>  >>> problem of poorly-indented code by enforcing proper indentation in
>  >>> the language and compiler. But the problem is, it *doesn't* actually
>  >>> enforce it. In fact, it *can't* enforce it because it doesn't have
>  >>> enough information to enforce it. All it really does (and all it's
>  >>> able to do) is run around *assuming* your code is properly indented
>  >>> while silently drawing semantic conclusions from those (obviously not
>  >>> always correct) assumptions.
>  >>>
>  >>> In fact it's really the same root problem as "no variable
>  >>> declarations". In both cases, the compiler does nothing but assume
>  >>> that what you wrote is what you meant, thus silently introducing
>  >>> hidden bugs 1. Whenever you didn't *really* want the new variables
>  >>> "my_reponse" and "my_responce" in additon to "my_response"
>  >>> (VB/VBScript coders use "option explicit" *for a reason*), and 2.
>  >>> Whenever you didn't *really* want to break out of that loop/conditional.
>  >>
>  >> That goes back to the point that a language needs redundancy in order
>  >> to detect errors. Having semantically-meaningful indentation, removing
>  >> redundant semicolons, and implicitly declaring variables all remove
>  >> redundancy at the (high) cost of inability to detect common bugs.
>  >>
>  >> Those things are fine for scripting language programs that are fairly
>  >> short (like under a screenful). It gets increasingly bad as the size
>  >> of the program increases.
>  >
>  > Implicitly declared variables are probably the greatest of all false
>  > economies in the programming world.
>  >
>  > bugs(no variable declarations) > 100 * bugs(dangling pointers).
> 
> Is that your own experience? Only practice tells the truth!

Yes. And such bugs can be horrible to track down. When I use such languages I seem to spend most of my time hunting for typos which the compiler should have caught.

> 
> Would you say that Smalltalk is a scripting language? See where it is
> used, and notice the size of the applications written in it.
> 
> I am sure every C program includes more errors than the worst
> Ruby/Python program you can ever write. Not so sure about other
> scripting language... :)

Ignoring the obvious exaggeration (look at the bug lists for Knuth's code, for an example of bug-free C code) -- there are many causes of bugs in C, other than dangling pointers!
And I think that many bugs attributed to dangling pointers are actually _uninitialized variable_ bugs. An uninitialised pointer containing random garbage is a horrible thing. In my experience, the problem is almost always in the initialisation, not in the use of pointers.
May 09, 2008
terranium Wrote:

> As to Java and .NET, a standard naming convention is provided by the standard library.
> 

They say Ruby's code is very good... hmm... maybe, but I've seen its C sources too... :) they're not just bad, they're chaotic evil :)))
May 09, 2008
Don wrote:
> Michael Neumann wrote:
>> Don wrote:
>>  > Walter Bright wrote:
>>  >> Nick Sabalausky wrote:
>>  >>> Python's semantically-meaningful indentation was intended to fix the
>>  >>> problem of poorly-indented code by enforcing proper indentation in
>>  >>> the language and compiler. But the problem is, it *doesn't* actually
>>  >>> enforce it. In fact, it *can't* enforce it because it doesn't have
>>  >>> enough information to enforce it. All it really does (and all it's
>>  >>> able to do) is run around *assuming* your code is properly indented
>>  >>> while silently drawing semantic conclusions from those (obviously
>> not
>>  >>> always correct) assumptions.
>>  >>>
>>  >>> In fact it's really the same root problem as "no variable
>>  >>> declarations". In both cases, the compiler does nothing but assume
>>  >>> that what you wrote is what you meant, thus silently introducing
>>  >>> hidden bugs 1. Whenever you didn't *really* want the new variables
>>  >>> "my_reponse" and "my_responce" in additon to "my_response"
>>  >>> (VB/VBScript coders use "option explicit" *for a reason*), and 2.
>>  >>> Whenever you didn't *really* want to break out of that
>> loop/conditional.
>>  >>
>>  >> That goes back to the point that a language needs redundancy in order
>>  >> to detect errors. Having semantically-meaningful indentation,
>> removing
>>  >> redundant semicolons, and implicitly declaring variables all remove
>>  >> redundancy at the (high) cost of inability to detect common bugs.
>>  >>
>>  >> Those things are fine for scripting language programs that are fairly
>>  >> short (like under a screenful). It gets increasingly bad as the size
>>  >> of the program increases.
>>  >
>>  > Implicitly declared variables are probably the greatest of all false
>>  > economies in the programming world.
>>  >
>>  > bugs(no variable declarations) > 100 * bugs(dangling pointers).
>>
>> Is that your own experience? Only practice tells the truth!
>
> Yes. And such bugs can be horrible to track down. When I use such
> languages I seem to spend most of my time hunting for typos which the
> compiler should have caught.

How can the compiler prevent you from making typos? Imagine mixing up
two variables "i" and "j". This can happen to you in any language!

And how is the following any better (taken from another post):

  class A {
    int i=0;

    void foo() {
      // Accidentally clobbers "this.i" aka "@i"
      for(i=0; i<77; i++)
        {/* Do stuff */}
    }

    invariant() {
      assert(this.i==0); // Fails after foo() is called
    }
  }

Sure, the compiler will issue a warning.

>> Would you say that Smalltalk is a scripting language? See where it is
>> used, and notice the size of the applications written in it.
>>
>> I am sure every C program includes more errors than the worst
>> Ruby/Python program you can ever write. Not so sure about other
>> scripting language... :)
>
> Ignoring the obvious exaggeration (look at the bug lists for Knuth's
> code, for an example of bug-free C code) -- there are many causes of
> bugs in C, other than dangling pointers!

Agreed.

> And I think that many bugs attributed to dangling pointers are actually
> _uninitialized variable_ bugs.

But in Ruby, to mention a scripting language, instance variables are
*always* initialized! In C, an uninitialized variable will silently
produce wrong results, which are among the hardest bugs to find (IMHO).
In Ruby, on the other hand, you will very likely get a method-missing
runtime exception, so the problem is easy to spot!
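
(For what it's worth, D default-initializes variables as well, so at
least that particular source of silent garbage goes away. A tiny
illustrative snippet, not taken from the original discussion:)

  import std.math : isNaN;

  void main()
  {
      int i;     // 0 by default in D (C would leave garbage here)
      int* p;    // null by default
      double d;  // NaN by default, so accidental use is easy to spot
      assert(i == 0);
      assert(p is null);
      assert(isNaN(d));
  }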

> An uninitialised pointer containing
> random garbage is a horrible thing. In my experience, the problem is
> almost always in the initialisation, not in the use of pointers.

Lets replace "declarations" with "initialization" in your
original statement, and I agree 100%:

bugs(no variable initialization) > 100 * bugs(dangling pointers).

Regards,

  Michael
May 09, 2008
terranium wrote:
> terranium Wrote:
>
>> As to Java and .NET, a standard naming convention is provided by the standard library.
>>
>
> They say Ruby's code is very good... hmm... maybe, but I've seen its C sources too... :) they're not just bad, they're chaotic evil :)))

After working with the C sources of Ruby for a while, they become less
evil, even though they may still be evil :). But hey, it's C!
Just wondering if the sources of the .NET or Java compilers/runtimes are
any better. For sure they are 10x as much code ;-)

And then, there is a lot of bad Ruby code out there as well! But given
the choice between 500 lines of bad Java code and 50 lines of bad Ruby
code, I'd just take the 50 lines of bad Ruby code and rewrite them myself ;-)

Regards,

  Michael
May 09, 2008
Nick Sabalausky wrote:
> // Note: untested
> class A {
>   int i=0;
>
>   void foo() {
>     // Accidentally clobbers "this.i" aka "@i"
>     for(i=0; i<77; i++)
>       {/* Do stuff */}
>   }
>
>   invariant() {
>     assert(this.i==0); // Fails after foo() is called
>   }
> }
>
> I still like the @ thing, though.

Very good example!

A solution could be to force the programmer to use the "this." notation,
or at least to issue a compiler warning when it is not used.  The latter
could be implemented in the compiler without any changes to the
syntax/language.

The next step would be to have "@" as a synonym for "this.", since typing
"this." all the time is annoying, and as such is either ignored or leads
to less readable code (IMHO).
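
Roughly, what I have in mind (the "@" line is hypothetical syntax, not
valid D today, so it is shown only as a comment):

  class A
  {
    int i = 0;

    void foo(int i)
    {
      this.i = i;   // explicit form; the proposal would require (or warn about) this
      // @i = i;    // hypothetical shorthand for "this.i"
    }
  }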

Regards,

  Michael
May 09, 2008
Michael Neumann Wrote:

> But hey, it's C!

It's not an excuse for writing junk. The excuse is that it was written by one man.