June 04, 2013
On 2013-06-04 09:38, Jonathan M Davis wrote:

> That would be good regardless of whether virtual or non-virtual is the
> default. In general, the function attributes other than access level specifiers
> and @safety attributes suffer from not being able to be undone once you use
> them with a colon or {}.

Something like !final would be a good idea. It can be generalized for all attributes. No new keywords need to be introduced.
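
For example (purely hypothetical syntax, nothing the compiler accepts today; the class and method names are just placeholders):

class Foo
{
final:              // everything from here on is final
    void a();
    void b();

!final:             // hypothetical: switch the attribute back off
    void c();       // virtual again
    void d();
}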

-- 
/Jacob Carlborg
June 04, 2013
On Tuesday, 4 June 2013 at 00:19:39 UTC, Manu wrote:

>
> But this is a practical solution. The only counter-proposal I've heard is
> Andrei's 'all methods use ufcs' idea, but I think that one would be a much
> harder sell to the community. I'm certainly not convinced.

It would be hard to sell for at least one reason - protected non-virtual methods are quite common:

module a;

class A
{
}

// should be accessible to derived classes
// and is not part of A's public interface
private void foo(A this_);

----
module b;
import a;

class B : A
{
    virtual void bar()
    {
        foo(); // How?
    }
}
June 04, 2013
On Tuesday, 4 June 2013 at 05:58:32 UTC, Andrei Alexandrescu wrote:
> On 6/4/13 1:16 AM, Manu wrote:
>> But unlike the first situation, this is a breaking change. If you are
>> not the only user of your library, then this can't be done safely.
>
> Same fallacy all over again, for the third time in this thread. You keep on going about "breaking change" without recognizing that the now broken code was happily taking advantage of the very flexibility that you argue was useless and needed fixing.

I believe Manu's point is that the original flexibility was a mistake: the author of the library never intended for the method to be overridden; it was an accident of virtual-by-default. The fact that overriding methods on the class works for a client is just a coincidence, and could be dangerous.

The problem occurs when, later, the author profiles and notices that the virtual methods are causing performance issues, or notices that the method should not have been virtual (perhaps he relies on the base behaviour's semantics). Marking it final now would be a breaking change for the client relying on the accidental virtual functions.

If things were final by default, then you would have to opt-in to virtual, so it is unlikely that you will mark a method virtual by accident. There is no breakage changing a method from final to virtual.
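
To make the asymmetry concrete, here's a small sketch in today's D (the module and class names are invented):

// shapes.d -- version 1 of a library: draw() is implicitly virtual
module shapes;

class Shape
{
    void draw() { /* ... */ }
}

// app.d -- a client written against version 1
module app;
import shapes;

class DashedShape : Shape
{
    override void draw() { /* ... */ }   // compiles fine against version 1
}

// If version 2 of shapes.d marks draw() as final (say, after profiling),
// DashedShape stops compiling with an error along the lines of "cannot
// override final function shapes.Shape.draw". That is, virtual -> final
// breaks clients; going the other way, final -> virtual, breaks nothing.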

FWIW: I think making methods final by default now would cause too much breakage, but I do agree with Manu that it would have been a better choice in the beginning.
June 04, 2013
On 4 June 2013 15:58, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/4/13 1:16 AM, Manu wrote:
>
>> But unlike the first situation, this is a breaking change. If you are not the only user of your library, then this can't be done safely.
>>
>
> Same fallacy all over again, for the third time in this thread. You keep on going about "breaking change" without recognizing that the now broken code was happily taking advantage of the very flexibility that you argue was useless and needed fixing.
>

And the same fallacious response.
The code you refer to wouldn't exist, because it wasn't possible to write,
because what they have allegedly overridden isn't virtual. Nobody's code
can break.
They may have lost an opportunity to twist some code into an unexpected use
case, but chances are, it wasn't their only possible solution, and it's
possible that it might even have been dangerous.

This is exactly the kind of argumentation that I disagree with,
> philosophically. Instead of carefully pondering the goods and the bads in a complex web of tradeoffs, it just clumsily gropes for the desired conclusion in ignorance of all that doesn't fit.


And this pisses me off, because you're implying that it's okay for you to
disagree on some points in principle, but not for anyone else. I've clearly
(and repeatedly) given my reasoning, and you haven't responded to many of
my points head-on.
I don't agree that the sacrifice is anywhere near as significant as you
suggest; other than the breakage, which we can quantify, it's entirely
theoretical for a start, whereas my points are all taken from a decade of
experience that I don't want to see become worse in the future.
I've also acknowledged that these are my opinions, but you can't say that I
haven't put any thought into my position, or that I'm 'clumsily groping for
desired conclusions' out of ignorance. That's just basically offensive.

If I'm found to be wrong by the majority here, that will become evident soon enough.

>> If you write code like that, then write 'virtual:', it doesn't hurt
>> anyone else. The converse is not true.
>>
>
> Fourth.
>
>>     I look at making methods final specifically for optimization.  It
>>     doesn't occur to me that the fact that it's overridable is a "leak"
>>     in the API, it's at your own peril if you want to extend a class
>>     that I didn't intend to be extendable.  Like changing/upgrading
>>     engine parts in a car.
>>
>>
>> Precisely, this highlights one of the key issues. Optimising has now become a dangerous breaking process.
>>
>
> Fifth.
>
> [snip]
>
> Allow me to summarize my understanding of the most valuable parts of your argument.
>
> * At the top level you believe ultimate efficiency should be default and OOP flexibility should be opt-in.
>
> * Classes routinely should make most methods final because it's hard to imagine why one would override all but a few. Since those are a minority, it's so much the better to make final the default.
>
> * Even the most performance-conscious people would not care to annotate classes and methods. It just doesn't happen. In contrast, people who want flexibility will annotate things for the simple reason they have to, otherwise overriding won't work.
>
> * You don't consider it a problem that one must go back to base classes
> and change methods from final to overridable, whenever such a need arises.
>
> (It would be awesome if you had a similar list with the opposite
> arguments.)
>
> If the above is an accurate summary, I'd say it's a matter in which reasonable people might disagree. I take issue with each of the points above (not flat out disagree with each, more like amend and qualify etc).
>
> Unless fresh arguments, facts, or perspectives come about, I am personally not convinced, based on this thread so far, that we should operate a language change.



I'll summarise my arguments, though I've done this at least 3 times now.
Sorry, I 'value' more of my points than you do, so my summary is quite a
bit longer. These are all supporting reasons why I think it would be a good
change, and naturally, some are of lower significance than others.
I'd like to think that most of them would have to be objectively rejected,
or the counter-arguments list would have to grow a whole lot, to justify
the insults you offer:

* At top level I believe D aspires to be a systems language, and
performance should certainly be a key concern.
  - 'Flexibility' [at the expense of performance] should be opt-in. It
comes at the expense of what I presume should be a core audience for a
systems language.
  - x86 is the most tolerant architecture _by far_, and we're committed to
a cost that isn't even known yet on the vast majority of computers in the
world.

* virtual is a one-way trip. It can't be undone without risking breaking
code once released to the wild. How can that state be a sensible default?
  - Can not be un-done by the compiler/linker like it can in other
(dynamic) languages. No sufficiently smart compiler can ever address this
problem as an optimisation.

* The result of said performance concern has a cost in time and money for
at least one of D's core audiences (realtime systems programming).
  - I don't believe the converse case, final-by-default, would present any
comparative loss for users that want to write 'virtual:' at the top of
their class.
  - 'Opportunistic de-virtualisation' is a time consuming and tedious
process, and tends to come up only during crunch times.

* "Classes routinely should make most methods final because it's hard to
imagine why one would override [the intended] few. Since those are a
minority, it's so much the better to make final the default."
  - The majority of classes are leaf's, and there's no reason for leaf
methods to be virtual by default. Likewise, most methods are trivial
accessors (the most costly) which have no business being virtual either.
  - It's also self-documenting. It makes it clear to a customer how the API
is to be used.

* Libraries written in D should hope to be made available to the widest
audience possible.
  - Library authors certainly don't consider (or care about) everyone's
usage cases, but they often write useful code that many people want to make
use of. This is the definition of a library.
  - They are almost certainly not going to annotate their classes with lots
of 'final'.
  - Given hard experience, when asked to revoke virtual, even if authors
agree in principle, they will refuse to do it given the risk of breakage
for unknown customers.
  - Adding final as an optimisation is almost always done post-release, so
it will almost always run the risk of breaking someone's code somewhere.

* Experience has shown that programmers from C++/C# don't annotate 'final'
even when they know they should. Users from Java don't do it either, but
mainly because they don't consider it important.
  - Note: 'the most performance conscious users' that you refer to are
often not the ones writing the code. Programmers work in teams, sometimes
those teams are large, and many programmers are inexperienced.

* final-by-default promotes awareness of virtual-ness, and its associated
costs.
  - If it's hidden, it will soon be forgotten or dismissed as a trivial
detail. It's not... at least, not in a systems language that attracts
high-frequency programmers.

* 'Flexibility' may actually be a fallacy anyway. I personally like the
idea of requiring an explicit change to 'virtual' in the base when a new
and untested usage pattern is to be exploited, it gives me confidence.
  - People are usually pretty permissive when marking functions virtual in
C++, and people like to consider many possibilities.
    - When was the last time you wanted to override a function in C++, but
the author didn't mark it virtual? Is there actually a reduction in
flexibility in practice? Is this actually a frequent reality?
  - Overriding unintended functions may lead to dangerous behaviours never
considered by the author in the first place.
    - How can I be confident in an API when I know the author couldn't
possibly have tested all the obscure possibilities available? And how can I
know the extent of his consideration of usage scenarios when authoring the
class?
      - At best, my obscure use case has never been tested.
    - 'virtual' is self-documenting, succinctly communicating the author's
design/intent.

* Bonus: Improve interoperability with C++, which I will certainly appreciate, but this point came from the D-DFE guys at dconf.
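
To illustrate the accessor point above, a tiny sketch (the class and member
names are made up):

class Vec2
{
    private float _x, _y;

    // virtual by default today: called through the vtable and generally
    // can't be inlined, because some subclass somewhere *might* override it
    float x() { return _x; }

    // final: resolved statically and trivially inlined to a single load
    final float y() { return _y; }
}

float sum(Vec2 v)
{
    return v.x() + v.y();   // one indirect call plus one inlined load
}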


And I'll summarise my perception of the counter-arguments:

* It's a breaking change.

* 'Flexibility'; someone somewhere might want to make use of a class in a
creative way that wasn't intended or tested. They shouldn't be prohibited
from this practice _by default_, in principle.
  - They would have to contact the author to request a method be made
virtual in the unlikely event that source isn't available, and they want to
use it in some obscure fashion that the author never considered.
    - Note: This point exists on both sides, but on this side, the author
is likely to be accommodating to their requests.
  - Authors would have to write 'virtual:' if they want to offer this style
of fully extensible class.


June 04, 2013
On 06/04/2013 01:15 PM, Manu wrote:
> * virtual is a one-way trip. It can't be undone without risking breaking code
> once released to the wild. How can that state be a sensible default?
>   - Can not be un-done by the compiler/linker like it can in other (dynamic)
> languages. No sufficiently smart compiler can ever address this problem as an
> optimisation.

Have to say that for me, this is a bit of a killer point.  If a programmer mistakenly picks the default option instead of the desired qualifier, ideally you want the fix to be non-breaking.

It's complicated by the fact that code might get broken _now_ while changing the default, but the question is whether the price is worth it for saving future pain.
June 04, 2013
> - Can not be un-done by the compiler/linker like it can in other
> (dynamic) languages. No sufficiently smart compiler can ever address this
> problem as an optimisation.

It can be done if you are fine with marking every single class that is supposed to be used across binary boundaries as "export". See deadalnix's explanations in this thread for details.
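
Roughly what I mean ('export' here is just the existing visibility
attribute; the devirtualisation itself is hypothetical, not something the
compiler does today):

// exported: unknown subclasses may exist outside this binary, so calls
// through a base reference have to stay virtual
export class PluginBase
{
    void run() { }
}

// not exported: every possible subclass is visible at link time, so a
// whole-program optimiser could, in principle, devirtualise run()
class InternalHelper
{
    void run() { }
}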
June 04, 2013
On 4 June 2013 21:43, Dicebot <m.strashun@gmail.com> wrote:

> - Can not be un-done by the compiler/linker like it can in other
>> (dynamic) languages. No sufficiently smart compiler can ever address this
>> problem as an optimisation.
>>
>
> It can be done if you are fine marking every single class supposed to be used across the binary boundaries as "export". See deadalnix explanations in this thread for details.
>

I don't see an intuitive connection between 'export' and
'virtual-by-default'. And it doesn't address the accessor/property problem,
which would often remain inlined even across a DLL boundary.
It also relies on non-standard/unexpected behaviour for .so's (which export
everything, right?).


June 04, 2013
On 6/4/13 3:25 AM, Walter Bright wrote:
> On 6/3/2013 10:58 PM, Andrei Alexandrescu wrote:
>> Unless fresh arguments, facts, or perspectives come about, I am
>> personally not
>> convinced, based on this thread so far, that we should operate a
>> language change.
>
> One possibility is to introduce virtual as a storage class that
> overrides final. Hence, one could write a class like:
>
> class C {
> final:
>     void foo();
>     void baz();
>     virtual int abc();
>     void def();
> }
>
> This would not break any existing code, and Manu would just need to get
> into the habit of having "final:" as the first line in his classes.

This is generally good but I'd prefer a way to undo a storage class instead of introducing two keywords for each.

Andrei


June 04, 2013
On 2013-06-04 14:21, Andrei Alexandrescu wrote:

> This is generally good but I'd prefer a way to undo a storage class
> instead of introducing two keywords for each.

I like the idea of !final as someone suggested. That can be generalized for all attributes.

-- 
/Jacob Carlborg
June 04, 2013
On 2013-06-04, 13:15, Manu wrote:

> I'll summarise my arguments, though I've done this at least 3 times now.
>
> [snip]


For whatever it's worth, I have the same impression as you - virtual-by-
default is one-way and unsafe, and its benefits seem questionable.

I admit at first I thought it was a good idea - "great, the compiler'll
figure it out for me!". But that's not the case, so I was forced to revise
my opinion.

-- 
Simen