April 15, 2005
In article <d3pegp$1be7$1@digitaldaemon.com>, Matthew says...
>
>Do you mean Catchable, or Quenchable? They have quite different implications. AFAIK, only Sean has mentioned anything even uncatchable-like.

And my suggestion was only that Errors should be unrecoverable (unquenchable by your terminology?)--they can be re-thrown six ways to Sunday.

>What I've been proposing is that CP violations should be unquenchable. This in no way prevents the word processor doing its best to save your work.

I think Walter meant quenchable.  That assertion failures currently throw an exception rather than halting the app immediately tells me that Walter isn't against tidying the cabin before going down with the ship.


Sean


April 15, 2005
"Ben Hinkle" <bhinkle@mathworks.com> wrote in message news:d3h2sd$40v$1@digitaldaemon.com...
> I think Walter has made the right choices - except the hierarchy has
> gotten out of whack. Robust, fault-tolerant software is easier to
> write with D than C++.

D is easier to write robust code in than C++ because it is more fault *intolerant* than C++ is. The idea is not to paper over bugs and soldier on, which would be fault-tolerant, but to set things up so that any faults cannot be ignored. Here's an example:

C++:

    class Foo { ... void func(); ... };
    ...
    void bar()
    {
        Foo *f;    // oops, forgot to initialize it
        ...
        f->func();
    }

f is initialized with garbage. The f->func() may or may not fail. If it doesn't fail, the bug may proceed unnoticed. Does this kind of problem happen in the wild? Happens all the time.

D:

    class Foo { ... void func(); ... }
    ...
    void bar()
    {
        Foo f;    // oops, forgot to initialize it
        ...
        f.func();
    }

D will provide a default initialization of f to null. Then, if f is dereferenced as in f.func(), you'll get a null pointer exception. Every time, in the same place. This helps with two main needs for writing robust code: 1) flushing out the bugs so they appear and 2) having the symptoms of the bug be repeatable.

Here's another one:

C++:
    Foo *f = new Foo;
    Foo *g = f;
    ...
    delete f;
    ...
    g->func();

That'll appear to 'work' most of the time, failing only rarely, and such problems are typically very hard to track down.

D:
    Foo f = new Foo;
    Foo g = f;
    ...
    delete f;
    ...
    g.func();

You're much more likely to get a null pointer exception with D, because when delete deletes a class reference, it nulls out the vptr. It's not perfect, as in the meantime a new class object could be allocated using that same chunk of memory, but in my own experience it has been very helpful in exposing bugs, much more so than C++'s method.


April 16, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d3phdv$1dkv$1@digitaldaemon.com...
>
> "Ben Hinkle" <bhinkle@mathworks.com> wrote in message news:d3h2sd$40v$1@digitaldaemon.com...
>> I think Walter has made the right choices - except the hierarchy has
>> gotten out of whack. Robust, fault-tolerant software is easier to
>> write with D than C++.
>
> D is easier to write robust code in than C++ because it is more fault
> *intolerant* than C++ is. The idea is not to paper over bugs and
> soldier on, which would be fault-tolerant, but to set things up so
> that any faults cannot be ignored.
> [examples snipped]
> You're much more likely to get a null pointer exception with D,
> because when delete deletes a class reference, it nulls out the vptr.
> It's not perfect, as in the meantime a new class object could be
> allocated using that same chunk of memory, but in my own experience it
> has been very helpful in exposing bugs, much more so than C++'s
> method.

True. But I believe I did say that my skepticism was from a personal perspective, as, I believe, is the proposition itself.

Simply put, I think D does not fulfil the promise to me. I don't doubt that it will do so to others. But my suspicion is that it addresses the easy/trivial/neophytic gotchas - which is a great thing, to be sure - while failing to address the larger/deeper/harder problems and in some cases exacerbating them. The non-auto nature of exceptions is one, albeit quite arcane. The use of root-level potentially meaningless methods - opCmp() et al - is another. These, to me personally, present a much less robust programming environment than the one I am experienced in. I don't say that that's going to be a common perspective, much less the majority one, but it is a valid perspective, and one which is shared by others. Thus, my skepticism about D's ability to make such claims.

And to be clear and forestall any misrepresentation of my position: I think the above examples show that D is making welcome advances in facilitating good coding. I just happen to think it's also taking some backwards steps, and that those areas are the ones which present a far greater challenge to people of all levels of experience, skill and diligence.



April 16, 2005
"Sean Kelly" <sean@f4.ca> wrote in message news:d3pgmo$1ctk$1@digitaldaemon.com...
> In article <d3pegp$1be7$1@digitaldaemon.com>, Matthew says...
>>Do you mean Catchable, or Quenchable? They have quite different
>>implications. AFAIK, only Sean has mentioned anything even
>>uncatchable-like.
>
> And my suggestion was only that Errors should be unrecoverable
> (unquenchable by your terminology?)--they can be re-thrown six ways
> to Sunday.

I wasn't trying to say that you'd said that things _should_ be uncatchable, merely that you'd entertained a solution whereby irrecoverability could be achieved within the current definition of the language, by 'throwing' auto classes.

>
>>What I've been proposing is that CP violations should be unquenchable.
>>This in no way prevents the word processor doing its best to save your
>>work.
>
> I think Walter meant quenchable.

Then if he did, he would need to supply a motivating example for "having a class of errors that is not catchable at all would be a mistake", because the word processor does not represent such. It can (do its best to) save the data without violating irrecoverability.

Anyway, this has been debated ad nauseam, and I'm quite content that people should have the options to make weighted decisions of risk/severity. My only contention is that the Principle of Irrecoverability has no exceptions, and its consequences should be weighed when writing software.

Further, I am now almost completely convinced, for practical reasons, by Georg's proposition that D libraries (incl runtime libs) should be in either CP or no-CP form, and that one should build an app one way or the other. I humbly suggest that the only remaining point worthy of debate on the issue is what happens in a process consisting of dynamic link-units when some are built with CP and some without.



April 16, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d3po2n$1iek$1@digitaldaemon.com...
> True. But I believe I did say that my skepticism was from a personal
> perspective, as, I believe, is the proposition itself. Simply put, I
> think D does not fulfil the promise to me. I don't doubt that it will
> do so to others. But my suspicion is that it addresses the
> easy/trivial/neophytic gotchas - which is a great thing, to be sure -
> while failing to address the larger/deeper/harder problems and in
> some cases exacerbating them.
> [snip]

I'm going to posit that your 10+ years of professional experience in C++ might be skewing your perceptions here. I have nearly 20 years experience writing C++, and I rarely make mistakes any more that are not algorithmic (rather than dumb things like forgetting to initialize). I am so *used* to C++ that I've learned to compensate for its faults, potholes, and goofball error prone issues, such that they aren't a personal issue with me anymore. I suspect the same goes for you.

But then you work with D, and it's different. The comfortable old compensations are inapplicable, and even when it's the same, assume it's different (like the &this issue you had in another thread). Until you've programmed enough in D to build up a comfortable mental model of how it works, I wouldn't be surprised at all if *initially* you're going to be writing buggier code than you would in C++.

To me it's like driving a manual transmission all my life. I am so used to its quirks. Then, I try to drive an automatic. When stopping for a red light, my left foot comes down hard on the clutch. There is no clutch, and it hits the extended power brake pedal instead. The car slows down suddenly. My reflex is to hit the clutch harder so the engine won't stall. The car stands on its nose, much to the amusement of my passengers. It's not that the auto transmission is badly designed, it just requires a different mental model to operate.

P.S. I said these C++ problems rarely cause me trouble anymore in my own code. So why fix them? Because I remember the trouble that they used to cause me, and see the trouble they cause other people. Being in the compiler business, I wind up helping a lot of people with a very wide variety of code, so I tend to see what kinds of things bring on the grief.

P.P.S. I believe the C++ uninitialized variable problem is a far deeper and costlier one than you seem willing to credit it for. It just stands head, shoulders, and trunk above most of the other problems with the exception of goofing up managing memory allocations. It's one of those things that I have lost enormous time to, and as a consequence have learned a lot of compensations for. I suspect you have, too.

P.P.P.S. I know for a fact that I spend significantly less time writing an app in D than I would the same thing in C++, and that's including debugging time. There's no question about it. When I've translated working & debugged apps from C++ to D, D's extra error checking exposed several bugs that had gone undetected by C++ (array overflows were a big one), even one with the dreaded C++ implicit default break; in the switch statement <g>.


April 16, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d3po2o$1iek$2@digitaldaemon.com...
> Then if he did, he would need to supply a motivating example for "having a class of errors that is not catchable at all would be a mistake", because the word processor does not represent such. It can (do its best to) save the data without violating irrecoverability.

Here's one. One garbage collection strategy is to set pages in the gc heap as 'read only'. Then set up an exception handler to catch the GP faults from writes to those pages. Internally mark those pages as 'modified' and then turn off the read only protection on the page. Restart the instruction that caused the GP fault.

There are other things you can do by manipulating the page protections and intercepting the generated faults. The operating system, for example, catches stack overflow faults and extends the stack.


April 16, 2005
"Maxime Larose" <mlarose@broadsoft.com> wrote in message news:d3llc3$pas$1@digitaldaemon.com...
> My offer to implement stack tracing still stands. The more I think
> about it, the more it seems to me like the way to go. I am still
> waiting on Walter's reply on that issue (hoping the email address I
> had was good.)

If you can make it work in a reasonable fashion, I'll add it in.


April 16, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d3otcn$sja$1@digitaldaemon.com...
>
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3hnug$flk$1@digitaldaemon.com...
>> Hard failures during debugging are fine. My own experience in code
>> robustness comes from working with engineering companies (who use
>> MATLAB to generate code for cars and planes) where an unrecoverable
>> error means your car shuts down when you are doing 65 on the highway.
>> Or imagine if that happens with an airplane. That is not acceptable.
>> They have drilled over and over into our heads that a catastrophic
>> error means people die. I don't mean to be overly dramatic but it is
>> a fact.
>
> The reason airliners are safe is because they are designed to be
> tolerant of any single failure. Computer systems are very unreliable,
> and the first thing the designer thinks of is "assume the computer
> system goes berserk and does the worst thing possible; how do I design
> the system to prevent that from bringing down the airliner?"
>
> Having worked on airliner design, I know how computer controlled
> subsystems handle self-detected faults. They do it by shutting
> themselves down and switching to the backup system. They don't try to
> soldier on. To do so would be to, by definition, be operating in an
> undefined, untested, and unknown configuration. I wouldn't want to
> bet my life on that.
>
> Even if the software was perfect, which it never is, the chips
> themselves are both prone to random failure and are uninspectable.
> Therefore, in my opinion from having worked on systems that must be
> safe, a system that cannot stand a catastrophic failure of a computer
> system is an inherently unsafe design to begin with. Making the
> computer more reliable does not solve the problem.
>
> What CP provides is another layer of security offering the capability
> of a program to self-detect a fault. The only reasonable thing it can
> do then is shut itself down and engage the backup.
>
> But if you're writing, say, a word processor, one might decide to
> attempt to save the user's data upon a CP violation and hope for the
> best. In a word processor, safety and security aren't the top
> priority.
>
> There isn't one size that fits all applications; the engineer writing
> the program will have to decide. Therefore, having a class of errors
> that is not catchable at all would be a mistake.

My analogy with systems on an airplane is that the code that realized a subsystem failed and decided to shut that particular subsystem down effectively "caught" the problem in the subsystem. The same approach applies to a piece of software that catches a problem in a subsystem and deals with it. Just as an error in a subsystem on an airplane doesn't shut down the entire airplane, an error in a subsystem in a piece of software shouldn't shut down the entire program. I agree the level of safety and redundancy is much higher in an airplane, but my argument was that *something* on that airplane realized the subsystem was in error and dealt with it.


April 16, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d3prc1$1k77$1@digitaldaemon.com...
>
> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d3po2n$1iek$1@digitaldaemon.com...
>> True. But I believe I did say that my skepticism was from a personal
>> perspective, as, I believe, is the proposition itself. Simply put, I
>> think D does not fulfil the promise to me.
>> [snip]
>
> I'm going to posit that your 10+ years of professional experience in
> C++ might be skewing your perceptions here. I have nearly 20 years
> experience writing C++, and I rarely make mistakes any more that are
> not algorithmic (rather than dumb things like forgetting to
> initialize). I am so *used* to C++ that I've learned to compensate
> for its faults, potholes, and goofball error prone issues, such that
> they aren't a personal issue with me anymore. I suspect the same goes
> for you.
> [snip]
> P.P.P.S. I know for a fact that I spend significantly less time
> writing an app in D than I would the same thing in C++, and that's
> including debugging time. There's no question about it. When I've
> translated working & debugged apps from C++ to D, D's extra error
> checking exposed several bugs that had gone undetected by C++ (array
> overflows were a big one), even one with the dreaded C++ implicit
> default break; in the switch statement <g>.

But, my friend, you've just done your usual and not answered the point I made. I disagree with none of the contents of this post, save for the implication that it addresses my point.

The issue I have with many languages that appear, on the surface, to be 'better' than C++ is that the limits they place on the experienced programmer are usually hard limits. An example we've seen with D in recent days is that we need compiler support for irrecoverability.

Now whether as a result of the singular genius of Dr Stroustrup, or as an emergent property of its complexity, C++ has, for the ingenious/intrepid, almost infinite capacity for providing fixes to the language by dint of libraries. Look at my fast string concatenation, at Boost's Lambda, etc. etc. Other languages are left for dead in this regard.

In this regard, D is a second class citizen to C++, probably by design, and probably for good reason. After all, most of the ingenious solutions in C++ are not accessible to most programmers, even very experienced ones, as it's just so complex and susceptible to dialecticism. But if D proscribes this (arguably semi-insane) level of ingenuity, then, IMO, it should address the fundamental "big" issues more seriously than it does. Otherwise, people who can reach answers for their biggest questions in other languages are going to, as I currently do, find it impossible to digest blanket statements about D's superiority.



April 16, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d3pt4i$1lb6$1@digitaldaemon.com...
>
> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d3po2o$1iek$2@digitaldaemon.com...
>> Then if he did, he would need to supply a motivating example for
>> "having a class of errors that is not catchable at all would be a
>> mistake", because the word processor does not represent such. It can
>> (do its best to) save the data without violating irrecoverability.
>
> Here's one. One garbage collection strategy is to set pages in the gc
> heap as 'read only'. Then set up an exception handler to catch the GP
> faults from writes to those pages. Internally mark those pages as
> 'modified' and then turn off the read only protection on the page.
> Restart the instruction that caused the GP fault.

Sorry to be so brusque, but that's pure piffle. In such a case, the programmer who is designing their software is designating the (processing of the) hardware exception as a part of the normal processing of their application. It in no way represents a violation of the design, since it's part of the design.

As I've said innumerable times throughout this thread, what does and does not contradict the design of a component is entirely within the purview of the author of that component.

By the way, stack expansion is carried out by exactly the process you describe on many operating systems, as I mention in Chapter 32 of Imperfect C++. Given your implication, every program run on, say, Win32 is violating its design. (Here's where everyone can plug in their winks ;)

> There are other things you can do by manipulating the page
> protections and intercepting the generated faults. The operating
> system, for example, catches stack overflow faults and extends the
> stack.

Bugger, I just said that above.

Nice to see you contradict your own point within proximity of the next paragraph, anyway. <g>