June 04, 2013
On Mon, 03 Jun 2013 23:39:25 -0400, Manu <turkeyman@gmail.com> wrote:

> On 4 June 2013 12:50, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
>
>> On Mon, 03 Jun 2013 12:25:11 -0400, Manu <turkeyman@gmail.com> wrote:
>>
>>> You won't break every single method, they already went through that
>>> recently when override was made a requirement.
>>> It will only break the base declarations, which are far less numerous.
>>>
>>
>> Coming off the sidelines:
>>
>> 1. I think in the general case, virtual by default is fine.  In code that
>> is not performance-critical, it's not a big deal to have virtual functions,
>> and it's usually more useful to have them virtual.  I've experienced plenty
>> of times with C++ where I had to go back and 'virtualize' a function.  Any
>> time you change that, you must recompile everything, it's not a simple
>> change.  It's painful either way.  To me, this is simply a matter of
>> preference.  I understand that it's difficult to go from virtual to final,
>> but in practice, breakage happens rarely, and will be loud with the new
>> override requirements.
>>
>
> I agree that in the general case, it's 'fine', but I still don't see how
> it's a significant advantage. I'm not sure what the loss is, but I can see
> clear benefits to being explicit from an API point of view about what is
> safe to override, and implicitly, how the API is intended to be used.
> Can you see my point about general correctness? How can a class be correct
> if everything can be overridden, but it wasn't designed for it, and
> certainly never been tested?

Since when is that on the base class author?  Doctor, I overrode this class, and it doesn't work.  Well, then don't override it :)

Also, there is the possibility of a class that isn't designed from the start to be overridden, but where overriding one or two methods works and has no adverse effects.  Then it is a happy accident.  And it even enables designs that take advantage of this default, like mock objects.  I would point out that in Objective-C, ALL methods are virtual, even class methods and properties.  It seems to work fine there.

What I'm really trying to say is, when final is the default, and you really should have made some method virtual (but didn't), then you have to pay for it later when you update the base class.  When virtual is the default, and you really wanted it to be final (but didn't do that), then you have to pay for it later when you update the base class.  There is no way that is advantageous to *everyone*.

>> 2. I think your background may bias your opinions :)  We aren't all
>> working on making lightning fast bare-metal game code.
>>
>
> Of course it does. But what I'm trying to do is show the relative merits of
> one default vs the other. I may be biased, but I feel I've presented a fair
> few advantages to final-by-default, and I still don't know what the
> advantages to virtual-by-default are, other than people who don't care
> about the matter feel it's an inconvenience to type 'virtual:'. But that
> inconvenience is going to be forced upon one party either way, so the
> choice needs to be based on relative merits.

It's advantageous to a particular style of coding.  If you know everything is virtual by default, then you write code expecting that.  Like mock objects.  Or extending a class simply to change one method, even when you weren't expecting that to be part of the design originally.
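
For illustration, a minimal D sketch of that style (the Connection/MockConnection names are invented; it works precisely because the method happens to be virtual):

import std.stdio;

// A library class whose author never marked anything final.
class Connection
{
    void send(string msg) { writeln("over the wire: ", msg); }
    void greet() { send("hello"); }
}

// A test double that overrides exactly one method to capture traffic.
class MockConnection : Connection
{
    string[] log;
    override void send(string msg) { log ~= msg; } // no real I/O
}

void main()
{
    auto mock = new MockConnection;
    mock.greet();                  // the base-class code calls the override
    assert(mock.log == ["hello"]);
}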

I look at making methods final specifically for optimization.  It doesn't occur to me that the fact that it's overridable is a "leak" in the API, it's at your own peril if you want to extend a class that I didn't intend to be extendable.  Like changing/upgrading engine parts in a car.

>> 3. It sucks to have to finalize all but N methods.  In other words, we
>> need a virtual *keyword* to go back to virtual-land.  Then, one can put
>> final: at the top of the class declaration, and virtualize a few methods.
>>  This shouldn't be allowed for final classes though.
>>
>
> The thing that irks me about that is that most classes aren't base classes,
> and most methods are trivial accessors and properties... why cater to the
> minority case?

I think it is unfair to say most classes are not base classes.  This would mean most classes are marked as final.  I don't think they are.  One of the main reasons to use classes in the first place is for extendability.

Essentially, making virtual the default enables the *extender* to determine whether it's a good base class, when the original author doesn't care.

I think classes fall into 3 categories:

1. Declared a base class (abstract)
2. Declared NOT a base class (final)
3. Don't care.

I'd say most classes fall in category 3.  For that, I think having virtual by default isn't a hindrance, it's simply giving the most flexibility to the user.
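
Roughly, in today's D the three categories look like this (class names invented for illustration):

// 1. Declared a base class: meant to be extended, cannot be instantiated.
abstract class Shape
{
    abstract double area();
}

// 2. Declared NOT a base class: no further derivation allowed.
final class Point
{
    double x, y;
}

// 3. Don't care: implicitly extendable, methods virtual under the current default.
class Logger
{
    void log(string msg) { }
}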

> It also doesn't really address the problem where programmers just won't do
> that. Libraries suffer, I'm still inventing wheels 10 years from now, and
> I'm wasting time tracking down slip ups.
> What are the relative losses if it were geared the other way?

The losses are that if category 3 were simply always final, some other anti-Manu who wanted to extend everything has to contact all the original authors to get them to change their classes to virtual :)

BTW, did you know you can extend a base class and simply make the extension final, and now all the methods on that derived class become non-virtual calls?  Much easier to do than making the original base virtual (Note I haven't tested this to verify, but if not, it should be changed in the compiler).
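
A rough sketch of that trick (equally untested here; whether the compiler actually devirtualizes such calls is implementation-dependent, and the names are invented):

// Library-provided base class; methods are virtual under the current default.
class Stream
{
    void write(const(ubyte)[] data) { /* ... */ }
}

// User-side extension: since FastStream is final, nothing can override its
// methods, so calls made through a FastStream reference are candidates for
// direct (non-virtual) dispatch.
final class FastStream : Stream
{
}

void pump(FastStream s, const(ubyte)[] chunk)
{
    s.write(chunk); // static type is a final class => may be called directly
}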

> My one real experience on this was with dcollections.  I had not declared
>> anything final, and I realized I was paying a performance penalty for it.
>>  I then made all the classes final, and nobody complained.
>>
>
> The userbase of a library will grow with time. Andrei wants a million D
> users, that's a lot more opportunities to break peoples code and gather
> complaints.
> Surely it's best to consider these sorts of changes sooner than later?

I think it vastly depends on the intent of the code.  If your classes simply don't lend themselves to extending, then making them final is a non-issue.

> And where is the most likely source of those 1 million new users to migrate
> from? Java?

From all over the place, I would say.  D seems to be an island of misfit programmers.

-Steve
June 04, 2013
On 6/4/13 12:13 AM, Manu wrote:
> The fact that virtual is a one way trip, and it can not safely be
> revoked later and therefore a very dangerous choice as the default is a
> maintenance problem.

Certainly you're omitting a good part of the setup, which I assume has to do with binary compatibility and prebuilt binaries. In other setups, final is the one-way trip by definition - it restricts potential flexibility.

> The fact that I'm yet to witness a single programmer ever declare their
> final methods at the time of authoring is a problem.

Too narrow a social circle? :o)

> The fact that many useful libraries might become inaccessible to what
> I'm sure is not an insignificant niche of potential D users is a problem.

Not getting this. I dare believe that a competent library designer would be able to choose which functions ought to be overridden and which oughtn't. The moment the issue gets raised, the way the default goes is irrelevant. (But maybe I'm just not getting this.)

> And I argue the subjective opinion, that code can't possibly be correct
> if the author never considered how the API may be used outside his
> design premise, and can never test it.

I think you are wrong in thinking traditional procedural testing methods should apply to OOP designs. I can see how that fails indeed.


Andrei
June 04, 2013
On Tuesday, 4 June 2013 at 04:13:57 UTC, Manu wrote:
> And I argue the subjective opinion, that code can't possibly be correct if
> the author never considered how the API may be used outside his design
> premise, and can never test it.

This very sentence shows that you miss the point of OOP and the Liskov substitution principle.

To make the argument clearer, let's consider a lib with a class A. The whole lib uses A and must not know about subclasses of A. Not even A itself.

As a consequence, A doesn't need to be tested against every possible future override.

If I, as a programmer, create a class B that extends A, it is my responsibility to ensure that my class really behaves as an A. As a matter of fact, the lib doesn't know anything about B, and that is the whole point of OOP.
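
A small sketch of that in D (names invented): the lib is written and tested against A's contract only; if my B violates it, that's on me.

// Library code: knows only about A and is tested only against A's contract.
class A
{
    // Informal contract: returns a non-negative value.
    int cost() { return 0; }
}

int total(A[] items)
{
    int sum = 0;
    foreach (item; items)
        sum += item.cost(); // relies on A's contract, never on a subclass
    return sum;
}

// Client code: B must behave like an A (Liskov substitution).
class B : A
{
    override int cost() { return 42; } // fine: respects the contract
    // Returning a negative value here would break `total`, and the fault
    // would be mine, not the library's.
}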
June 04, 2013
On 4 June 2013 14:23, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/4/13 12:13 AM, Manu wrote:
>
>> The fact that virtual is a one way trip, and it can not safely be revoked later and therefore a very dangerous choice as the default is a maintenance problem.
>>
>
> Certainly you're omitting a good part of the setup, which I assume has to do with binary compatibility and prebuilt binaries. In other setups, final is the one-way trip by definition - it restricts potential flexibility.


I don't buy the flexibility argument as a plus. I think that's a mistake, but I granted that's a value judgement.

>> The fact that I'm yet to witness a single programmer ever declare their
>> final methods at the time of authoring is a problem.
>>
>
> Too narrow a social circle? :o)


Well, let's consider Steven's example from a short while ago. He didn't
write final anywhere, and at some later time retroactively introduced it
because he realised it was a performance burden.
At which point, refer to point #1. He was lucky that he was able to do this
without any complaints from customers.
But it's a breaking change to the API no matter which way you slice it, and
I suspect this will be the prevalent pattern.
So it basically commits to a future of endless breaking changes when
someone wants to tighten up the performance of their library, and typically
only after it has had time in the wild to identify the problem.


>> The fact that many useful libraries might become inaccessible to what
>> I'm sure is not an insignificant niche of potential D users is a problem.
>>
>
> Not getting this. I dare believe that a competent library designer would be able to choose which functions ought to be overridden and which oughtn't. The moment the issue gets raised, the way the default goes is irrelevant. (But maybe I'm just not getting this.)


Situation: I have a closed source library I want to use. I test and find
that it doesn't meet our requirements for some trivial matter like
performance (super common, I assure you).
The author is not responsive, possibly because it would be a potentially
breaking change to all the other customers of the library, I've now wasted
a month of production time in discussions in an already tight schedule, and
I begin the process of re-inventing the wheel.
I've spent 10 years repeating this pattern. It will still be present with
final-by-default, but it will be MUCH WORSE with virtual-by-default. I
don't want to step backwards on this front.

Even with C++ final-by-default, we've had to avoid libraries because C++
developers can be virtual-tastic sticking it on everything.
D will magnify this issue immensely with virtual-by-default. At least in
C++, nobody ever writes virtual on trivial accessors.
virtual accessors/properties will likely eliminate many more libraries on
the spot for being used in high frequency situations.

Again, refer to Steven's pattern. Methods will almost always be virtual in D (because the author didn't care), until someone flags the issue years later... and then can it realistically be changed? Is it too late? Conversely, if virtual needs to be added at a later time, there are no such nasty side effects. It is always safe.

>> And I argue the subjective opinion, that code can't possibly be correct
>> if the author never considered how the API may be used outside his design premise, and can never test it.
>>
>
> I think you are wrong in thinking traditional procedural testing methods should apply to OOP designs. I can see how that fails indeed.


Can you elaborate?
And can you convince me that an author of a class that can be
transformed/abused in any way that he may have never even considered, can
realistically reason about how to design his class well without being
explicit about virtuals?

I've made the point before that the sorts of super-polymorphic classes that
might have mostly-virtuals are foundational classes, written once and used
many times.
These are not the classes that programmers sitting at their desk are
banging out day after day. These are not the common case. Such a carefully
designed and engineered base class can afford a moment to type 'virtual:'
at the top.


June 04, 2013
On Tue, 04 Jun 2013 06:16:45 +0200, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> I think it is unfair to say most classes are not base classes.  This would mean most classes are marked as final.  I don't think they are.  One of the main reasons to use classes in the first place is for extendability.

This is false. Consider this hierarchy: A->B->C, where x->y means 'y
derives from x'. There is only one base class (A), and only one class
that may be marked final (C). This will often be the case.
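
In D terms (placeholder names):

class A { }             // the root of the hierarchy
class B : A { }         // derives from A, and C derives from it
final class C : B { }   // the leaf; the only one that may be marked final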


> BTW, did you know you can extend a base class and simply make the extension final, and now all the methods on that derived class become non-virtual calls?  Much easier to do than making the original base virtual (Note I haven't tested this to verify, but if not, it should be changed in the compiler).

This does not, however, help one iota when you have a reference to a base
class. This will also often be the case.

-- 
Simen
June 04, 2013
On Tuesday, 4 June 2013 at 04:53:48 UTC, Manu wrote:
> Situation: I have a closed source library I want to use. I test and find
> that it doesn't meet our requirements for some trivial matter like
> performance (super common, I assure you).
> The author is not responsive, possibly because it would be a potentially
> breaking change to all the other customers of the library, I've now wasted
> a month of production time in discussions in an already tight schedule, and
> I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present with
> final-by-default, but it will be MUCH WORSE with virtual-by-default. I
> don't want to step backwards on this front.
>
> Even with C++ final-by-default, we've had to avoid libraries because C++
> developers can be virtual-tastic sticking it on everything.
> D will magnify this issue immensely with virtual-by-default. At least in
> C++, nobody ever writes virtual on trivial accessors.
> virtual accessors/properties will likely eliminate many more libraries on
> the spot for being used in high frequency situations.
>

Paragraph 2 destroys paragraph 1.

> Again, refer to Steven's pattern. Methods will almost always be virtual in
> D (because the author didn't care), until someone flags the issue years
> later... and then can it realistically be changed? Is it too late?
> Conversely, if virtual needs to be added at a later time, there are no such
> nasty side effects. It is always safe.
>

The solution has been crystal clear to me from the beginning: you pay the price when you actually override a method, not when you merely have the opportunity to do so.

You simply don't want to consider that option because it breaks your way of doing something that is currently unsupported (shared objects), even though it provides a greater benefit and doesn't break everybody else's code.
June 04, 2013
On 4 June 2013 14:16, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Mon, 03 Jun 2013 23:39:25 -0400, Manu <turkeyman@gmail.com> wrote:
>
>  On 4 June 2013 12:50, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
>>
>>  On Mon, 03 Jun 2013 12:25:11 -0400, Manu <turkeyman@gmail.com> wrote:
>>>
>>>> You won't break every single method, they already went through that
>>>
>>>> recently when override was made a requirement.
>>>> It will only break the base declarations, which are far less numerous.
>>>>
>>>>
>>> Coming off the sidelines:
>>>
>>> 1. I think in the general case, virtual by default is fine.  In code that
>>> is not performance-critical, it's not a big deal to have virtual
>>> functions,
>>> and it's usually more useful to have them virtual.  I've experienced
>>> plenty
>>> of times with C++ where I had to go back and 'virtualize' a function.
>>>  Any
>>> time you change that, you must recompile everything, it's not a simple
>>> change.  It's painful either way.  To me, this is simply a matter of
>>> preference.  I understand that it's difficult to go from virtual to
>>> final,
>>> but in practice, breakage happens rarely, and will be loud with the new
>>> override requirements.
>>>
>>>
>> I agree that in the general case, it's 'fine', but I still don't see how
>> it's a significant advantage. I'm not sure what the loss is, but I can see
>> clear benefits to being explicit from an API point of view about what is
>> safe to override, and implicitly, how the API is intended to be used.
>> Can you see my point about general correctness? How can a class be correct
>> if everything can be overridden, but it wasn't designed for it, and
>> certainly never been tested?
>>
>
> Since when is that on the base class author?  Doctor, I overrode this class, and it doesn't work.  Well, then don't override it :)
>

Because it wastes your time (and money). And perhaps it only fails/causes
problems in edge cases, or obscure side effects, or in internal code that
you have no ability to inspect/debug.
You have no reason to believe you're doing anything wrong; you're using the
API in a perfectly valid way... it just happens that it is wrong (the
author never considered it), and it doesn't work.


> Also there is the possibility that a class that isn't designed from the
> start to be overridden.  But overriding one or two methods works, and has no adverse effects.  Then it is a happy accident.  And it even enables designs that take advantage of this default, like mock objects.  I would point out that in Objective-C, ALL methods are virtual, even class methods and properties.  It seems to work fine there.
>

Even Apple professes that Obj-C is primarily useful for UI code, and they use
C for tonnes of other stuff.
UI code is extremely low frequency by definition. I can't click my mouse
very fast ;)


> What I'm really trying to say is, when final is the default, and you really
> should have made some method virtual (but didn't), then you have to pay for it later when you update the base class.


I recognise this, but I don't think that's necessarily a bad thing. It
forces a moment of consideration about making the change and whether it will
affect anything else. If it feels like a significant change, you'll treat
it as such (which it is).
Even though you do need to make the change, it's not a breaking change, and
you don't risk any side effects.



> When virtual is the default, and you really wanted it to be final (but didn't do that), then you have to pay for it later when you update the base class.  There is no way that is advantageous to *everyone*.
>

But unlike the first situation, this is a breaking change. If you are not the only user of your library, then this can't be done safely.


>>> 2. I think your background may bias your opinions :)  We aren't all
>>> working on making lightning fast bare-metal game code.
>>>
>>>
>> Of course it does. But what I'm trying to do is show the relative merits
>> of
>> one default vs the other. I may be biased, but I feel I've presented a
>> fair
>> few advantages to final-by-default, and I still don't know what the
>> advantages to virtual-by-default are, other than people who don't care
>> about the matter feel it's an inconvenience to type 'virtual:'. But that
>> inconvenience is going to be forced upon one party either way, so the
>> choice needs to be based on relative merits.
>>
>
> It's advantageous to a particular style of coding.  If you know everything is virtual by default, then you write code expecting that.  Like mock objects.  Or extending a class simply to change one method, even when you weren't expecting that to be part of the design originally.
>

If you write code like that, then write 'virtual:'; it doesn't hurt anyone else. The converse is not true.
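
To be concrete about what's being proposed (neither final-by-default nor a 'virtual' keyword exists in current D; this is purely the hypothetical syntax under discussion):

// Hypothetical final-by-default D:
class Visitor
{
virtual:                    // opt these members back into virtual dispatch
    void visitLeaf() { }
    void visitNode() { }

final:                      // and keep the performance-critical bits direct
    void reset() { }
}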


> I look at making methods final specifically for optimization.  It doesn't
> occur to me that the fact that it's overridable is a "leak" in the API, it's at your own peril if you want to extend a class that I didn't intend to be extendable.  Like changing/upgrading engine parts in a car.
>

Precisely, this highlights one of the key issues. Optimising has now become a dangerous breaking process.


>>> 3. It sucks to have to finalize all but N methods.  In other words, we
>>> need a virtual *keyword* to go back to virtual-land.  Then, one can put
>>> final: at the top of the class declaration, and virtualize a few methods.
>>>  This shouldn't be allowed for final classes though.
>>>
>>>
>> The thing that irks me about that is that most classes aren't base
>> classes,
>> and most methods are trivial accessors and properties... why cater to the
>> minority case?
>>
>
> I think it is unfair to say most classes are not base classes.  This would mean most classes are marked as final.  I don't think they are.  One of the main reasons to use classes in the first place is for extendability.
>

People rarely use the final keyword on classes, even though they could 90%
of the time.
Class hierarchies typically only extend to a certain useful depth, but
people usually leave the option to go further anyway. And the deeper the
average hierarchy, the more leaves there are - and the less drastic this
change seems in contrast.


> Essentially, making virtual the default enables the *extender* to determine
> whether it's a good base class, when the original author doesn't care.
>
> I think classes fall into 3 categories:
>
> 1. Declared a base class (abstract)
> 2. Declared NOT a base class (final)
> 3. Don't care.
>
> I'd say most classes fall in category 3.  For that, I think having virtual by default isn't a hindrance, it's simply giving the most flexibility to the user.


Precisely, we're back again at the only real argument for virtual-by-default: it'll slightly annoy some people to type 'virtual', but that goes both ways. I don't think this supports one position or the other.


>> It also doesn't really address the problem where programmers just won't do
>> that. Libraries suffer, I'm still inventing wheels 10 years from now, and
>> I'm wasting time tracking down slip ups.
>> What are the relative losses if it were geared the other way?
>>
>
> The losses are that if category 3 were simply always final, some other anti-Manu who wanted to extend everything has to contact all the original authors to get them to change their classes to virtual :)
>

Fine, they'll probably be receptive since it's not a breaking change.
Can you guess how much traction I have when I ask an author of a popular
library to remove some 'virtual' keywords in C++ code?
"Oh we can't really do that, it could break any other users!", so then we
rewrite the library.

Who has been more inconvenienced in this scenario?

Additionally, if it's the sort of library that's so polymorphic as you
suggest, then what are the chances it also uses a lot of templates, and
therefore you have the source code...
I think the type of library you describe has a MUCH higher probability of
being open-source, or that you have the source available.

> BTW, did you know you can extend a base class and simply make the extension
> final, and now all the methods on that derived class become non-virtual calls?  Much easier to do than making the original base virtual (Note I haven't tested this to verify, but if not, it should be changed in the compiler).
>

One presumes that the library that defines the base class deals with its own base pointers internally, and as such, the functions that I may have finalised in my code will still be virtual in the place that it counts.


>>> My one real experience on this was with dcollections.  I had not declared
>>
>>> anything final, and I realized I was paying a performance penalty for it.
>>>  I then made all the classes final, and nobody complained.
>>>
>>>
>> The userbase of a library will grow with time. Andrei wants a million D
>> users, that's a lot more opportunities to break peoples code and gather
>> complaints.
>> Surely it's best to consider these sorts of changes sooner than later?
>>
>
> I think it vastly depends on the intent of the code.  If your classes simply don't lend themselves to extending, then making them final is a non-issue.
>
>
>  And where is the most likely source of those 1 million new users to
>> migrate
>> from? Java?
>>
>
> From all over the place, I would say.  D seems to be an island of misfit programmers.
>
> -Steve
>


June 04, 2013
On 6/4/13 12:53 AM, Manu wrote:
> I don't buy the flexibility argument as a plus. I think that's a
> mistake, but I granted that's a value judgement.

Great.

> But it's a breaking change to the API no matter which way you slice it,
> and I suspect this will be the prevalent pattern.
> So it basically commits to a future of endless breaking changes when
> someone wants to tighten up the performance of their library, and
> typically only after it has had time in the wild to identify the problem.

You're framing the matter all wrongly. Changing a method from virtual to final breaks the code of people who chose to override it - i.e. EXACTLY those folks who found it useful to TAP into the FLEXIBILITY of the design.

Do you understand how you are wrong about this particular little thing?

> Situation: I have a closed source library I want to use. I test and find
> that it doesn't meet our requirements for some trivial matter like
> performance (super common, I assure you).
> The author is not responsive, possibly because it would be a potentially
> breaking change to all the other customers of the library, I've now
> wasted a month of production time in discussions in an already tight
> schedule, and I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present
> with final-by-default, but it will be MUCH WORSE with
> virtual-by-default. I don't want to step backwards on this front.

Situation: I have a closed source library I want to use. I test and find that it doesn't meet our requirements for some trivial matter like the behavior of a few methods (super common, I assure you).
The author is not responsive, possibly because it would be a potentially breaking change to all the other customers of the library, I've now wasted a month of production time in discussions in an already tight schedule, and I begin the process of re-inventing the wheel.
I've spent 10 years repeating this pattern. It will still be present with virtual-by-default, but it will be MUCH WORSE with final-by-default. I don't want to step backwards on this front.

Destroyed?

> Even with C++ final-by-default, we've had to avoid libraries because C++
> developers can be virtual-tastic sticking it on everything.

Oh, so now the default doesn't matter. The amount of self-destruction is high in this post.

> D will magnify this issue immensely with virtual-by-default.

It will also magnify the flexibility benefits.

> At least in
> C++, nobody ever writes virtual on trivial accessors.
> virtual accessors/properties will likely eliminate many more libraries
> on the spot for being used in high frequency situations.

I don't think a "high frequency situation" would use classes designed naively. Again, the kind of persona you are discussing are very weird chaps.

> Again, refer to Steven's pattern. Methods will almost always be virtual
> in D (because the author didn't care), until someone flags the issue
> years later... and then can it realistically be changed? Is it too late?
> Conversely, if virtual needs to be added at a later time, there are no
> such nasty side effects. It is always safe.

Again:

- changing a method final -> overridable is nonbreaking. YOU ARE RIGHT HERE.

- changing a method overridable -> final will break PRECISELY code that was finding that design choice USEFUL. YOU SEEM TO BE MISSING THIS.
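
Concretely (a made-up example; exact compiler messages will vary):

// Library v1: draw() is overridable, whether by default or by choice.
class Renderer
{
    void draw() { }
}

// Client code written against v1:
class SvgRenderer : Renderer
{
    override void draw() { /* custom output */ }
}

// Library v2 tightens up:
//     final void draw() { }
// SvgRenderer now fails to compile (it can no longer override draw),
// breaking exactly the clients that relied on the flexibility.
// Going the other direction (final in v1, overridable in v2) breaks no one.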

>         And I argue the subjective opinion, that code can't possibly be
>         correct
>         if the author never considered how the API may be used outside his
>         design premise, and can never test it.
>
>
>     I think you are wrong in thinking traditional procedural testing
>     methods should apply to OOP designs. I can see how that fails indeed.
>
>
> Can you elaborate?
> And can you convince me that an author of a class that can be
> transformed/abused in any way that he may have never even considered,
> can realistically reason about how to design his class well without
> being explicit about virtuals?

I can try. You don't understand at least this aspect of OOP (honest affirmation, not intended to offend). If class A chooses to inherit class B, it shouldn't do so to reuse B, but to be reused by code that manipulates Bs. In a nutshell: "inherit not to reuse, but to be reused". I hope this link works: http://goo.gl/ntRrt

(If all A wants is to reuse B, it just uses composition.)
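
A tiny sketch of the distinction (Widget, Button and Toolbar standing in for the A and B above):

// "Inherit to be reused": Button is a Widget so that existing code that
// manipulates Widgets can drive Buttons without knowing about them.
class Widget
{
    void paint() { }
}

class Button : Widget
{
    override void paint() { /* draw a button */ }
}

void render(Widget[] scene)
{
    foreach (w; scene)
        w.paint();          // reuses Button through the Widget interface
}

// "Just reuse": if all Toolbar wants is Widget's functionality,
// composition is enough; no inheritance required.
class Toolbar
{
    private Widget background;            // has-a, not is-a
    this() { background = new Widget; }
    void paint() { background.paint(); }
}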

You should agree as a simple matter that there's no reasonable way one can design a software library that anticipates every way it might be transformed, abused, and misused. Although class designers should definitely design to make good use easy and bad use difficult, they routinely are unable to predict all the different ways in which clients will use the class, so designing with flexibility in mind is the safest route (unless concern for performance overrides it). Your concern for performance overrides your concern for flexibility, and that's entirely fine. What I disagree with is your belief that you know what's best for everybody.

> I've made the point before that the sorts of super-polymorphic classes
> that might have mostly-virtuals are foundational classes, written once
> and used many times.

I don't know what a super-polymorphic class is, and google fails to list it: http://goo.gl/i53hS

> These are not the classes that programmers sitting at their desk are
> banging out day after day. These are not the common case. Such a
> carefully designed and engineered base class can afford a moment to type
> 'virtual:' at the top.

I won't believe this just because you said it (inventing terminology in the process); it doesn't rhyme with my experience. So do you have any factual evidence to back that up?


Andrei
June 04, 2013
On 6/4/13 1:05 AM, Simen Kjaeraas wrote:
> On Tue, 04 Jun 2013 06:16:45 +0200, Steven Schveighoffer
> <schveiguy@yahoo.com> wrote:
>
>> I think it is unfair to say most classes are not base classes. This
>> would mean most classes are marked as final. I don't think they are.
>> One of the main reasons to use classes in the first place is for
>> extendability.
>
> This is false. Consider this hierarchy: A->B->C, where x->y means 'y
> derives from x'. There is only one base class (A), and only one class
> that may be marked final (C). This will often be the case.

You two are in violent agreement. (B is also a base class, in addition to being a derived class.)


Andrei
June 04, 2013
Manu, I'm wondering whether perhaps you should not be using classes at all. You can still create a similar overridable scheme for struct methods, and although it may not be as convenient, it will work. However, a big failure point with structs is the lack of inheritance.

Structs would IMO be far more useful if they had inheritance. Inheritance can be fully separated from the rest of polymorphism, so there's no reason why structs, which are not polymorphic, cannot inherit.

Actually, I'd like to know why structs cannot inherit. I hate it when I end up creating classes when I have no reason to create a class other than the ability to inherit.
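
For what it's worth, a rough sketch of the kind of scheme mentioned above: 'alias this' gives a struct a limited form of subtyping by composition, and a delegate member gives you one hand-rolled "overridable" hook (names invented; it's nowhere near a full replacement for class inheritance):

import std.stdio;

struct Base
{
    int x;
    void delegate() onUpdate;   // a hand-rolled "overridable" slot

    void update()
    {
        if (onUpdate !is null)
            onUpdate();                          // dispatch through the hook
        else
            writeln("default update, x = ", x);  // or fall back
    }
}

struct Derived
{
    Base base;
    alias base this;            // Derived forwards to and converts to Base

    int y;
}

void main()
{
    Derived d;
    d.x = 1;                    // reaches Base.x via alias this
    d.update();                 // default behaviour

    d.onUpdate = delegate() { writeln("customised update"); };
    d.update();                 // now runs the "override"
}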

--rt