On 6/4/13 12:53 AM, Manu wrote:
> Great. I don't buy the flexibility argument as a plus. I think that's a
> mistake, but I granted that's a value judgement.
You're framing the matter all wrongly. Changing a method from virtual to
final breaks the code of people who chose to override it - i.e. EXACTLY
those folks who found it useful to TAP into the FLEXIBILITY of the design.

> But it's a breaking change to the API no matter which way you slice it,
> and I suspect this will be the prevalent pattern.
> So it basically commits to a future of endless breaking changes when
> someone wants to tighten up the performance of their library, and
> typically only after it has had time in the wild to identify the problem.
Do you understand how you are wrong about this particular little thing?
> Situation: I have a closed source library I want to use. I test and find
> that it doesn't meet our requirements for some trivial matter like
> performance (super common, I assure you).
> The author is not responsive, possibly because it would be a potentially
> breaking change to all the other customers of the library, I've now
> wasted a month of production time in discussions in an already tight
> schedule, and I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present
> with final-by-default, but it will be MUCH WORSE with
> virtual-by-default. I don't want to step backwards on this front.
Situation: I have a closed source library I want to use. I test and find
that it doesn't meet our requirements for some trivial matter like the
behavior of a few methods (super common, I assure you).

The author is not responsive, possibly because it would be a potentially
breaking change to all the other customers of the library, I've now
wasted a month of production time in discussions in an already tight
schedule, and I begin the process of re-inventing the wheel.

I've spent 10 years repeating this pattern. It will still be present with
virtual-by-default, but it will be MUCH WORSE with final-by-default. I
don't want to step backwards on this front.
Destroyed?
> Even with C++ final-by-default, we've had to avoid libraries because C++
> developers can be virtual-tastic, sticking it on everything.

Oh, so now the default doesn't matter. The amount of self-destruction is
high in this post.
> D will magnify this issue immensely with virtual-by-default.

It will also magnify the flexibility benefits.
> At least in C++, nobody ever writes virtual on trivial accessors.
> virtual accessors/properties will likely eliminate many more libraries
> on the spot for being used in high frequency situations.

I don't think a "high frequency situation" would use classes designed
naively. Again, the kind of persona you are discussing is a very weird chap.
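For concreteness, here is a minimal D sketch of the accessor case being
argued about; the class and method names are made up for illustration:

// In D, class methods are virtual by default, so even a trivial
// accessor is dispatched through the vtable unless the author opts out.
class Particle
{
    private float x_ = 0;

    // Virtual by default: in a hot loop this is an indirect call that
    // the compiler generally cannot inline without extra information.
    float x() { return x_; }

    // Opting out: a `final` method can be called directly and inlined.
    final float xFinal() { return x_; }
}

float sum(Particle[] ps)
{
    float total = 0;
    foreach (p; ps)
        total += p.x(); // virtual dispatch on every iteration
    return total;
}

Whether a compiler can devirtualize the first call depends on the
implementation and on what whole-program information it has; `final`
removes the question entirely.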
> Again, refer to Steven's pattern. Methods will almost always be virtual
> in D (because the author didn't care), until someone flags the issue
> years later... and then can it realistically be changed? Is it too late?
> Conversely, if virtual needs to be added at a later time, there are no
> such nasty side effects. It is always safe.

Again:
- changing a method final -> overridable is nonbreaking. YOU ARE RIGHT HERE.
- changing a method overridable -> final will break PRECISELY code that was finding that design choice USEFUL. YOU SEEM TO BE MISSING THIS.
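A minimal sketch of that asymmetry, with a hypothetical library class and
hypothetical user code:

// Library, version 1: step() is overridable (virtual by default in D).
class Task
{
    void step() { /* default behavior */ }
}

// User code that taps into exactly that flexibility:
class LoggingTask : Task
{
    override void step()
    {
        import std.stdio : writeln;
        writeln("stepping");
        super.step();
    }
}

// Library, version 2 tightens things up:
//     final void step() { ... }
// LoggingTask now fails to compile - it can no longer override step().
// The reverse change, final to overridable, breaks no existing user.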
> And I argue the subjective opinion, that code can't possibly be correct
> if the author never considered how the API may be used outside his
> design premise, and can never test it.
I think you are wrong in thinking traditional procedural testing
methods should apply to OOP designs. I can see how that fails indeed.
> Can you elaborate?
> And can you convince me that an author of a class that can be
> transformed/abused in any way that he may have never even considered,
> can realistically reason about how to design his class well without
> being explicit about virtuals?
I can try. You don't understand at least this aspect of OOP (honest affirmation, not intended to offend). If class A chooses to inherit class B, it shouldn't do so to reuse B, but to be reused by code that manipulates Bs. In a nutshell: "inherit not to reuse, but to be reused". I hope this link works: http://goo.gl/ntRrt
(If all A wants is to reuse B, it just uses composition.)
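A small illustrative sketch of the distinction, with made-up names:

// Inherit in order to BE REUSED: code that manipulates Widgets can now
// also manipulate Buttons through the Widget interface.
class Widget
{
    void draw() { /* generic drawing */ }
}

class Button : Widget
{
    override void draw() { /* button-specific drawing */ }
}

void render(Widget[] scene)
{
    foreach (w; scene)
        w.draw(); // Button is reused by code written against Widget
}

// If Button merely wanted to REUSE Widget's code, composition would do,
// and no inheritance relationship is published at all:
class ComposedButton
{
    private Widget skin;          // reuse by delegation, not inheritance
    this() { skin = new Widget; }
    void draw() { skin.draw(); }
}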
You should agree as a simple matter that there's no reasonable way one
can design a software library anticipating every way it will be
transformed, abused, and misused. Although class designers should
definitely design to make good use easy and bad use difficult, they
routinely are unable to predict all the different ways in which clients
will use the class, so designing with flexibility in mind is the safest
route (unless concern for performance overrides it). Your concern with
performance overrides the one for flexibility, and that's entirely fine.
What I disagree with is that you believe it's what's best for everybody.
> I've made the point before that the sorts of super-polymorphic classes
> that might have mostly-virtuals are foundational classes, written once
> and used many times.

I don't know what a super-polymorphic class is, and Google fails to list
it: http://goo.gl/i53hS

I won't believe this just because you said it (inventing terminology in
the process); it doesn't rhyme with my experience, so do you have any
factual evidence to back that up?
> These are not the classes that programmers sitting at their desk are
> banging out day after day. These are not the common case. Such a
> carefully designed and engineered base class can afford a moment to type
> 'virtual:' at the top.