On 4 June 2013 12:50, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
On Mon, 03 Jun 2013 12:25:11 -0400, Manu <turkeyman@gmail.com> wrote:

You won't break every single method; they already went through that
recently when override was made a requirement.
It will only break the base declarations, which are far less numerous.

Coming off the sidelines:

1. I think in the general case, virtual by default is fine.  In code that is not performance-critical, it's not a big deal to have virtual functions, and it's usually more useful to have them virtual.  I've run into plenty of cases in C++ where I had to go back and 'virtualize' a function.  Any time you change that, you must recompile everything; it's not a simple change.  It's painful either way.  To me, this is simply a matter of preference.  I understand that it's difficult to go from virtual to final, but in practice, breakage happens rarely, and will be loud with the new override requirements.
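
For illustration, a minimal sketch of the current rules and the new override requirement (the class names are hypothetical):

class Base
{
    int size() { return 0; }  // implicitly virtual today
}

class Derived : Base
{
    // 'override' is now mandatory when redefining a base method;
    // leaving it out is a compile error, so breakage from changing
    // the base declaration shows up loudly rather than silently.
    override int size() { return 1; }
}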

I agree that in the general case it's 'fine', but I still don't see how it's a significant advantage. I'm not sure what the loss is, but I can see clear benefits to being explicit, from an API point of view, about what is safe to override and, implicitly, how the API is intended to be used.
Can you see my point about general correctness? How can a class be correct if everything can be overridden, but it wasn't designed for that, and has certainly never been tested for it?
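
To illustrate the correctness concern, a hypothetical class where every public method is implicitly an extension point, whether or not the author designed or tested for that:

class Widget
{
    // implicitly virtual: a subclass can override this even though
    // the author never considered, let alone tested, that case
    void draw() { /* ... */ }

    // even this trivial accessor is virtual, and can be overridden
    // to return something it was never designed to return
    int width() { return w; }

    private int w;
}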
 
2. I think your background may bias your opinions :)  We aren't all working on making lightning-fast bare-metal game code.

Of course it does. But what I'm trying to do is show the relative merits of one default vs the other. I may be biased, but I feel I've presented a fair few advantages of final-by-default, and I still don't know what the advantages of virtual-by-default are, other than that people who don't care about the matter find it an inconvenience to type 'virtual:'. But that inconvenience is going to be forced upon one party either way, so the choice needs to be based on relative merits.
 
3. It sucks to have to finalize all but N methods.  In other words, we need a virtual *keyword* to go back to virtual-land.  Then one can put final: at the top of the class declaration and virtualize a few methods.  This shouldn't be allowed for final classes, though.
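
Something like this hypothetical sketch, assuming a 'virtual' keyword were added (no such keyword exists in D today):

class Container
{
    private size_t count;

final:  // everything from here down is final by default

    size_t length() { return count; }
    bool empty() { return count == 0; }

    // the proposed 'virtual' keyword would opt a method back in:
    //virtual void clear() { count = 0; }
}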

The thing that irks me about that is that most classes aren't base classes, and most methods are trivial accessors and properties... why cater to the minority case?
It also doesn't really address the problem that programmers just won't do that. Libraries suffer, I'm still reinventing wheels 10 years from now, and I'm wasting time tracking down slip-ups.
What are the relative losses if it were geared the other way?

My one real experience on this was with dcollections.  I had not declared anything final, and I realized I was paying a performance penalty for it.  I then made all the classes final, and nobody complained.
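
For what it's worth, a minimal sketch of that kind of change (a hypothetical class, not dcollections' actual code):

// a final class: its methods can't be overridden, so the compiler
// is free to devirtualize and inline trivial calls like this one
final class IntList
{
    size_t length() { return items.length; }
    private int[] items;
}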

The user base of a library will grow with time. Andrei wants a million D users; that's a lot more opportunities to break people's code and gather complaints.
Surely it's best to consider these sorts of changes sooner rather than later?

And where is the most likely source of those 1 million new users to migrate from? Java?