July 19, 2004 Re: OT: checked exceptions. was Re: That override keyword ...
Posted in reply to Andy Friesen

On Mon, 19 Jul 2004 11:02:10 -0700, Andy Friesen wrote:

>> Actually, although having to continually write throws ... in my code is a pain, I rather like that Java requires the caller to write a try-catch block. While this is also a pain, it makes for better, more stable code.
>
> There's a *great* interview on artima with Anders Hejlsberg on this issue.
> <http://www.artima.com/intv/handcuffs.html>
>
> Basically, it amounts to two big things: programmers are lazy oafs (it takes one to know one) and will circumvent checked exceptions, and most people just want the exception to go to a toplevel error handler, so the error can be reported to the user in one place. Try/finally blocks ensure that everything gets released properly on such an occasion.
>
> Besides, all that error checking is a lot of code. The whole point of exceptions is to make sure that error checking is NOT a lot of code.

In my experience checked exceptions force you not to put a lot of checking code everywhere, but to put checking code at the right locations, assuming you've done a decent job designing your classes and interfaces. The biggest code bloat, and the pain in the ass, is adding throws clauses to method signatures. This is what having non-checked exceptions rids one of.

The biggest problem I have with non-checked exceptions is that there's no way to tell what exceptions might be thrown without looking at the source (unless it's documented, which I guarantee it won't be), so unless you're going to be catching the general Exception a lot, there's a strong possibility for lots of exceptions to propagate up to main(). How does one get around this?

>> I believe it's a general rule in programming that writing stable code to deal with all situations is a pain in the ass.
>
> I feel pretty confident saying that, if it's a pain in the ass, then the tools didn't do their job.
> (in their defense, they have hard jobs: there are lots of common problems for which there are no sufficiently powerful tools)

This is true. The tools available for Java make dealing with the checked exceptions MUCH easier, so I use them without thinking twice. However, as the current eclipseD programmer, I'll admit the tools for D right now suck. This will probably be true for some time. It's taken years for the good Java tools to emerge, and the C/C++ tools (I feel) are still pretty bad (at least compared to the Java ones.)

John
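John's point about throws-clause bloat versus catching "at the right locations" can be sketched in Java. The class and exception names below are hypothetical, chosen only to illustrate how a checked exception forces every intermediate signature to redeclare it until one caller finally handles it:

```java
// Hypothetical sketch of checked-exception propagation in Java.
// ConfigException is checked, so every method on the call path must
// either catch it or redeclare it in its own throws clause.
public class Checked {
    static class ConfigException extends Exception {
        ConfigException(String msg) { super(msg); }
    }

    // The low-level method that can actually fail.
    static String readSetting(String key) throws ConfigException {
        if (key == null) {
            throw new ConfigException("missing key");
        }
        return "value-of-" + key;
    }

    // Intermediate layers do no handling, yet each signature must
    // repeat "throws ConfigException" -- the bloat John mentions.
    static String lookup(String key) throws ConfigException {
        return readSetting(key);
    }

    // The "right location": a single catch near the top level.
    static String lookupOrDefault(String key, String fallback) {
        try {
            return lookup(key);
        } catch (ConfigException e) {
            return fallback;
        }
    }
}
```

The intermediate `throws` repetition is exactly what unchecked exceptions remove; the single top-level catch is what survives either way.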
July 19, 2004 Re: OT: checked exceptions. was Re: That override keyword ...
Posted in reply to Andy Friesen

Andy Friesen wrote:

> It's entirely reasonable to plow through a whole algorithm, expecting any thrown exceptions to be handled by the caller. What's less reasonable is letting open sockets dangle around because of it. (is there any reason at all to want this to occur?)

That would be a killer feature of this "exception" detector. The problem is, how does the compiler (or the external tool) know what is a "cleanable" resource and what is not?

> For this reason, it would be better if a warning was raised when the compiler can verifiably prove that such a resource would be left hanging in the case of an uncaught exception.

The best solution would probably be that all those resources are implemented as objects with proper destructors freeing the resource. Then the GC (for non-auto types) takes care of the rest. Of course this will not always be the case.

> (the trick is telling the compiler what's expensive. A pragma might be overkill for such a specific thing. Maybe it's worth it)

I think it would be worth it, but I see it more like a 2.0 feature.
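The dangling-resource concern discussed here is what Java's try/finally addresses (the destructor-based objects Juanjo describes are essentially RAII-style cleanup). A minimal sketch, with a hypothetical `Resource` class standing in for an open socket:

```java
// Minimal sketch: release a resource even when an exception escapes,
// the "dangling socket" situation the posts above worry about.
// Resource is hypothetical; a real case would be a socket or file.
public class Cleanup {
    static class Resource {
        boolean open = true;
        void close() { open = false; }  // release the resource
    }

    static Resource last;  // lets a caller inspect the resource afterwards

    // The finally block releases the resource whether or not the
    // exception propagates to the caller.
    static void work(boolean fail) {
        Resource r = new Resource();
        last = r;
        try {
            if (fail) {
                throw new RuntimeException("boom");
            }
        } finally {
            r.close();  // runs on both the normal and exceptional path
        }
    }
}
```

A compiler warning of the kind proposed above would amount to detecting paths where such a `close()` call is missing.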
July 20, 2004 Re: That override keyword ...
Posted in reply to Andy Friesen

"Andy Friesen" wrote
> It makes more sense to me if D forces us to emphasize the links which are there, as opposed to the ones which are not. :)
Yes indeed. The other thing about positive assertion is exemplified by the current use of the keyword: if the superclass has the original overridden method /removed/ or /renamed/ then the compiler can tell you about it. That's what override currently does, and no-one is suggesting changing that behaviour; it's very helpful.
I think there's a distinction emerging here; if I may be so bold I get the feeling that those in favour have extensive and sordid experience with long-term maintenance, while those detracting from the notion do so because they don't want to type in an additional keyword now and then. Enforcing the use of override does not restrict the language in any manner identified thus far. Rather, it simply enhances it.
Let's face it: actually writing the algorithmic 'decoration' for the compiler (the method-body, braces, data types, etc) is the very lowest on the scale of time consumption. It's the design, implementation, testing, documenting and debugging that consume at least 99%, right? Arguing that the "override" should not be required because one doesn't wish to type in the word is like saying you don't wish to type in the "class" or "struct" keyword.
Another vague detraction is that of "snow blindness". I think that clearly identifying those methods that override from those that don't is effective in helping either/or stand out against the background; regardless of whether your classes mostly/typically override or not. However, this is hardly an argument against something that's guaranteed to reduce the subsequent maintenance and debugging costs. So let's talk about that.
The overriding (heh heh) financial cost of any long-lived software project is typically borne long after the initial release has been shipped. It's the cycle of updates, bug fixes, "enhancements" and so on that really suck up the dollars. We're not talking about some dorm project that goes out to a few buddies and is then dropped after a month or two. We're talking projects involving potentially hundreds of man-years. Anything, and I really do mean *anything*, that a computer language can do to reduce the element of 'surprise' during that long-drawn-out-phase is a huge boon in terms of overall productivity and in terms of hard currency. This latter part is what gets the attention of management. If the use of a language can reduce the bottom-line over time, then it gets a great big pat on the back. DbC is one such notion embraced by Walter; a stricter application of the override keyword would be another. You start adding up all these little features, and pretty soon you have something that the commercial development sector will start to take notice of (purely from a bottom-line perspective).
Anyone who argues against such features based on personal laziness simply paints themselves as an ignorant fool. If you're too lazy to add in an appropriate keyword here and there, then you're almost certain to be unspeakably lazy elsewhere also (error condition? what error condition?). Frankly, I'd hate to see any code by such an individual, and they certainly would not get a job at my company.
- Kris
July 20, 2004 Re: That override keyword ...
Posted in reply to Kris

> That was intended only to place the implied importance of Berin's "experience" claim into perspective for him. But I may have stretched the point, and you make a fair comment.
>
> Perhaps there's a whiff of this going on <g>: http://www.ars-technica.com/news/posts/20040717-4003.html
>
> - Kris

I'm on the choice side of the fence, although I don't have deep roots there. I am impressed by your main sponsor http://dmawww.epfl.ch/roso.mosaic/dm/murphy.html :-)
July 20, 2004 Re: That override keyword ...
Posted in reply to Kris

Kris wrote:

> "Berin Loritsch" wrote ...
>
>> He should have placed a call to super.pause() to have everything working
>> as expected. However that is done in D.
>
> With respect Berin, may I gently suggest that you go back and read the
> initial posts on this topic? It's clear that you don't quite grasp the issue
> at hand. Take a look at those who are supporting the notion of a stricter
> use of "override":
>
> Matthew
> Andy Friesen
> tecDruid
> Blandger
> Derek
> Juanjo Álvarez
> Vathix
> Daniel Horn
> Kris

Please add my name to the list. :) I don't claim to have the same amount of experience as the others arguing this position, but you make a compelling argument. It's a small price to pay for the prevention of subtle bugs.

--
Justin (a/k/a jcc7)
http://jcc_7.tripod.com/d/
July 20, 2004 Re: That override keyword ...
Posted in reply to Bent Rasmussen

"Bent Rasmussen" wrote:
> I am impressed by your main sponsor http://dmawww.epfl.ch/roso.mosaic/dm/murphy.html
Right! Although Murphy was apparently an optimist ... O'Brian's Law stipulates that:
"Tis' fu%*ed up already, tae be shure now"
<g>
July 20, 2004 Re: OT: checked exceptions. was Re: That override keyword ...
Posted in reply to Juanjo Álvarez

Juanjo Álvarez wrote:
> Andy Friesen wrote:
>
>>There's a *great* interview on artima with Anders Hejlsberg on this
>>issue. <http://www.artima.com/intv/handcuffs.html>
>
> True, great interview. I'm a little in the middle about checked
> exceptions. I think they could be one of the places where (with some
> parameter, like -Wuncatched or -Euncatched) it could make sense to have
> compiler warnings, or just errors, like:
>
> "Warning||Error: method cl.foo (line 300) can throw FooException and
> you're not catching it."
>
> Then you could decide if you need to catch the exception or not (because
> depending on how you use cl.foo it could be impossible to
> trigger FooException, or you just want the program to abort in that case)
>
> Wouldn't that be nice?
Having experience with a language that supports both checked and unchecked exceptions, I must say that if you are going to err at all, do it on the side of unchecked exceptions. I like checked exceptions, but I wouldn't want to have to check all of them all the time. The only time a checked exception should be considered, IMO, is when something happens outside the control of the runtime (in the D case, something that happens in the OS/device interaction).
I have worked on several projects where too many exceptions were required to be checked, and the only real gain from it was code bloat. While the intent is to make people better programmers, ultimately, lazy beasts that we are, the cost is too high, so people circumvent dealing with exceptions altogether.
If there are going to be checked and unchecked exceptions, they should be chosen intelligently. For example, all the Java formatters (DateFormatter, CurrencyFormatter, MessageFormatter, etc.) require the user to catch a ParseException. The only time that might be necessary is if we are dealing with user input--nine times out of ten, the strings being parsed are already debugged.
However, if you do have a checked exception, you want it to be checked all the time.
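A concrete instance of the ParseException complaint: in the real Java API, `java.text.NumberFormat` (the post's "DateFormatter"/"CurrencyFormatter" correspond roughly to `java.text.DateFormat` and `NumberFormat`) declares the checked `ParseException` on `parse()`, so even a hard-coded, obviously valid string forces a try/catch:

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class Parsing {
    // Even for a debugged, constant input the checked ParseException
    // must be caught or redeclared -- the code bloat described above.
    static long parseOrZero(String s) {
        NumberFormat nf = NumberFormat.getIntegerInstance(Locale.US);
        try {
            return nf.parse(s).longValue();
        } catch (ParseException e) {
            return 0;  // only reachable for genuinely malformed input
        }
    }
}
```

Only when `s` really is user input does the catch branch earn its keep.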
I would learn from Java and provide a mechanism to handle exceptions that terminate a thread. In the more recent versions of Java there is a callback for Thread termination to deal with exceptions not handled in code. That will provide a decent mechanism to deal with things that slip through the cracks in user code.
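The callback referred to here is presumably Java's uncaught-exception handler for threads (`Thread.setUncaughtExceptionHandler`, added in J2SE 5.0, around the time of this thread). A sketch of using it as the last-resort handler for exceptions that slip through user code:

```java
import java.util.concurrent.atomic.AtomicReference;

public class LastResort {
    // Runs a task on its own thread and records any exception that
    // escapes the task, via the thread's uncaught-exception callback.
    static Throwable runAndCapture(Runnable task) {
        AtomicReference<Throwable> captured = new AtomicReference<>();
        Thread t = new Thread(task);
        t.setUncaughtExceptionHandler((thread, err) -> captured.set(err));
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return captured.get();  // null if the task completed normally
    }
}
```

In a real application the handler would log or report the error instead of storing it; the point is that the thread's death is observed in exactly one place.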
July 20, 2004 Re: That override keyword ...
Posted in reply to Arcane Jill

Arcane Jill wrote:

> In article <cdcbrr$1np2$1@digitaldaemon.com>, Vathix says...
>
>> I always use override; this sounds good to me.
>
> And I. (Well, almost - see below). Now, pardon me for taking this reasoning one
> step further, but, if override is to be compulsory, then we don't actually need
> the keyword, do we?

And here is where I resonate.

> That is, if it is to be compulsory that a function in a derived class which has
> the same name as a function in a base class must have the same signature (which
> is what "override" dictates), then we might as well simply have the compiler
> enforce this at all times - in which case the keyword "override" becomes
> entirely redundant. We can dispense with it. Throw it away.
>
> But ... might there be times when you /want/ a subclass to provide a same-name
> function with a different signature? I can think of a good example - my Int
> class sensibly overrides (with override keyword in place) the function
> opEquals(Object) ... but it /also/ has a function opEquals(int), allowing you to
> write stuff like:
>
> # Int n;
> # if (n == 4) // calls opEquals(int)
>
> Now, I agree with majority opinion here, in the sense that overriding is what
> you /normally/ want to do. But, like everything else, sometimes there are
> exceptions.
>
> Maybe another approach might work. How about this:
> (1) Ditch the "override" keyword - member functions override by default.
> (2) Introduce a new keyword to allow same-name-different-signature functions.
> Just as a thought, the keyword "new" springs to mind as possibly appropriate.

Of course then there is the argument that we might want to *replace* a method by design. For example:

class A { void foo(); // buggy code }
class B : A { void foo(); // never calls super.foo() }

Currently this compiles and doesn't provide any info to the user invoking the compiler. This is the source of the discussion.
In most cases we want the "overrides" semantics, but in a few cases we actually intend to replace the method. Truthfully, I am always of the mindset that whatever is most common should be easiest to do, and whatever is least common should require a little more. So in this case using the "new" keyword in this context would provide semantic clues as to what is intended by design. Alternatively one could use "replaces" as a keyword, but if "new" can do it, why add another keyword?

I don't think anyone (not even me) is arguing that the behavior identified by overrides is bad. The only thing being discussed is whether it should be mandated all the time. My thinking is that if the language requires it to be used all the time, then it is probably what the default should be. Differences would be signified with a different keyword. Obviously, the final keyword would not lose any of its semantics. Once a method is declared final it cannot (or at least should not) be overridden or replaced.
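For comparison, Java later took the opt-in route with the `@Override` annotation (added in Java 5, shortly after this thread): marking a method makes the compiler reject it if the superclass method is renamed or removed, which is exactly the protection debated here. A sketch with hypothetical class names:

```java
public class Dispatch {
    static class A {
        String foo() { return "A.foo (buggy)"; }
    }

    static class B extends A {
        // With @Override the compiler verifies that a matching A.foo()
        // still exists; renaming or removing A.foo() would now be a
        // compile error instead of a silent non-override.
        @Override
        String foo() { return "B.foo (never calls super.foo())"; }
    }

    // Dynamic dispatch picks the most-derived foo().
    static String call(A a) { return a.foo(); }
}
```

The thread's proposal differs only in making such a marker mandatory rather than optional.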
July 21, 2004 OT: was Re: That override keyword ...
Posted in reply to Kris

"Kris" <someidiot@earthlink.dot.dot.dot.net> wrote in news:cdhnbk$svg$1@digitaldaemon.com:

[snip]

> I think there's a distinction emerging here; if I may be so bold I get
> the feeling that those in favour have extensive and sordid experience
> with long-term maintenance, while those detracting from the notion do so
> because they don't want to type in an additional keyword now and then.
> Enforcing the use of override does not restrict the language in any
> manner identified thus far. Rather, it simply enhances it.
>
> Let's face it: actually writing the algorithmic 'decoration' for the
> compiler (the method-body, braces, data types, etc) is the very lowest
> on the scale of time consumption. It's the design, implementation,
> testing, documenting and debugging that consume at least 99%, right?

No comments really. Just repeated this paragraph because it's so rare to read such musings, here.

> Arguing that the "override" should not be required because one doesn't
> wish to type in the word is like saying you don't wish to type in the
> "class" or "struct" keyword.

How about preparing a proposal for 'implicit class and struct declaration'? I bet it gets plenty of supporters but very few (if any) detractors. After all, OO-folks must write *so many* class declarations.

> Another vague detraction is that of "snow blindness". I think that
> clearly identifying those methods that override from those that don't is
> effective in helping either/or stand out against the background;
> regardless of whether your classes mostly/typically override or not.
> However, this is hardly an argument against something that's guaranteed
> to reduce the subsequent maintenance and debugging costs. So let's talk
> about that.
>
> The overriding (heh heh) financial cost of any long-lived software
> project is typically borne long after the initial release has been
> shipped. It's the cycle of updates, bug fixes, "enhancements" and so on
> that really suck up the dollars.

But what about all that short-lived software: projects that get canceled before they are finished, or software that is abandoned shortly after its initial release? The overriding number of projects belong to this category. And the more effort is put in upfront (e.g. into maintainability), the more likely the project is to be canceled.

> We're not talking about some dorm
> project that goes out to a few buddies and is then dropped after a month
> or two. We're talking projects involving potentially hundreds of
> man-years. Anything, and I really do mean *anything*, that a computer
> language can do to reduce the element of 'surprise' during that
> long-drawn-out-phase is a huge boon in terms of overall productivity and
> in terms of hard currency.

You really do mean *anything*? Wow, that's uncompromising, but real-world computer languages put your considerations into the 'nice to have' basket at best.

> This latter part is what gets the attention
> of management. If the use of a language can reduce the bottom-line over
> time, then it gets a great big pat on the back. DbC is one such notion
> embraced by Walter; a stricter application of the override keyword would
> be another. You start adding up all these little features, and pretty
> soon you have something that the commercial development sector will
> start to take notice of (purely from a bottom-line perspective).

Probably, management doesn't care much for the maintenance phase of software projects, since successful managers have moved on to new projects before that stage is reached. Consider how Java took management by storm, although Java promotes several programming practices that are prone to bite maintainers, and dropped some of C++'s features to improve maintainability.

I believe that the DbC thing isn't really about maintainability in the first place; it's about cranking out reasonably bug-free code *fast*: If I crank out code with no other help than compile-time type-checking, I lose much time fixing all my bugs with a debugger. Writing full-blown test-cases with 100% test-coverage is a time trap, too. But D's integrated DbC feature might reduce the time to write prototype-quality code, since it catches a fair amount of bugs but requires only a modest time to write and maintain.

> Anyone who argues against such features based on personal laziness
> simply paints themselves as an ignorant fool. If you're too lazy to add
> in an appropriate keyword here and there, then you're almost certain to
> be unspeakably lazy elsewhere also (error condition? what error
> condition?). Frankly, I'd hate to see any code by such an individual,
> and they certainly would not get a job at my company.
>
> - Kris

Unfortunately, it's always those belonging to the minority that are considered fools, no matter how foolishly the majority (of developers) acts. Individuals with 'personal laziness' get job offers from most other companies, so they don't have to care. Frankly, I've heard of developers that got fired for not getting the job done, or not getting it done _fast enough_. But I've never heard of developers fired for writing code that isn't maintainable enough. My impression is that as long as you agree to format your code according to the company's coding style, you're fine in the 'code maintainability' department.

Farmer.
July 21, 2004 Re: That override keyword ...
Posted in reply to Kris

In article <cdhnbk$svg$1@digitaldaemon.com>, Kris says...

> Another vague detraction is that of "snow blindness". I think that
> clearly identifying those methods that override from those that don't is
> effective in helping either/or stand out against the background;
> regardless of whether your classes mostly/typically override or not.
> However, this is hardly an argument against something that's guaranteed
> to reduce the subsequent maintenance and debugging costs. So let's talk
> about that.

Kind of an aside, but in C++ I always declare methods that override parent methods "virtual" even though the virtual label is inherited. It makes it easy for me to see what's overriding inherited behavior and what's new. I've never been bitten by this particular bug myself, but it might be a nice rule to enforce. It would be an interesting feature, as traditional languages such as C++ only allow for setting overload requirements in a top-down manner while this is almost bottom-up.

Sean
Copyright © 1999-2021 by the D Language Foundation