June 05, 2013
On Wednesday, 5 June 2013 at 14:08:46 UTC, Regan Heath wrote:

> With virtual by default, could D statically verify/deny these?  What about with static by default?  Does it get easier or harder to detect/deny these in either case?

Without knowing dmd internals, I'd expect the implementation effort to be exactly the same regardless of the default. It should also be easy, something like checking a flag on each function call expression:

walk each ast tree item in constructor
    if funcApplication.isVirtual then
       emit warning ...

The bug reports regarding this issue are
http://d.puremagic.com/issues/show_bug.cgi?id=5056
http://d.puremagic.com/issues/show_bug.cgi?id=3393
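
For reference, a minimal D example of the hazard being discussed (the class names are made up): in D the vtable is already set up before any constructor body runs, so a virtual call from a base class constructor reaches the derived override while the derived part is still only default-initialized.

import std.stdio;

class Base
{
    this()
    {
        describe();  // dispatches to Derived.describe before Derived's
                     // constructor body has run
    }
    void describe() { writeln("base"); }
}

class Derived : Base
{
    string name;
    this() { name = "ready"; }
    override void describe() { writeln("derived, name = '", name, "'"); }
}

void main()
{
    new Derived();   // prints: derived, name = ''   (name is still null here)
}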
June 05, 2013
On 05.06.2013 16:08, Regan Heath wrote:
> On Wed, 05 Jun 2013 13:53:58 +0100, Michal Minich
> <michal.minich@gmail.com> wrote:
>
>> On Monday, 3 June 2013 at 17:18:55 UTC, Andrei Alexandrescu wrote:
>>> override is not comparable
>>> because it improves code correctness and maintainability, for which
>>> there is ample prior evidence. It's also a matter for which, unlike
>>> virtual/final, there is no reasonable recourse.
>>
>> Virtual by default makes it simpler to call method on object that is
>> not initialized yet (constructor not called yet). This situation is
>> possible regardless if virtual is default or not (it can just happen
>> more easily).
>
> Yeah, it happened to me in C++ .. same with virtuals in the destructor.
> Lesson learned first time tho :p
>

Me as well. I was used to being able to call virtual methods from the constructor in Object Pascal, and back in the day it took me a while to find out why my C++ program was crashing.



June 05, 2013
On Tuesday, 4 June 2013 at 04:07:10 UTC, Andrei Alexandrescu wrote:
> Choosing virtual (or not) by default may be dubbed a mistake only in a context. With the notable exception of C#, modern languages aim for flexibility and then do their best to obtain performance. In the context of D in particular, there are arguments for the default going either way. If I were designing D from scratch it may even make sense to e.g. force a choice while offering no default whatsoever.

C# chose final-by-default not simply because of performance issues but because of issues with incorrect code. While performance is an issue (much more so in D than in C#), they can work around it with their JIT compiler, just as HotSpot does for Java. The real issue is that overriding methods the author never intended to be overridden is *wrong*: it leads to incorrect code. Making methods virtual relies on certain implementation details staying fixed, such as whether the class's own code calls the property or reads the field behind it. Many people who wrap fields in properties still use the fields directly in various places within the class. Now someone overrides the property and finds that it either makes no difference at all, or that it works everywhere except in the one or two places where the author refers to the field directly. By forcing the author to specifically mark things virtual, you force them to recognize whether they should be using the property or the field. There are many other examples of similar issues, but I feel properties are one of the biggest problems with virtual-by-default, from both a correctness and a performance standpoint. Having @property imply final does seem rather hackish, though.
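
To make the hazard concrete, here is a minimal D sketch of that scenario; the class and member names (Shape, Wide, _width, width, doubled) are made up purely for illustration, and today's virtual-by-default rules are assumed:

import std.stdio;

class Shape
{
    private int _width = 10;

    // implicitly virtual under today's rules, so a subclass may override it
    @property int width() { return _width; }

    // the author reads the field directly, bypassing the property
    int doubled() { return _width * 2; }
}

class Wide : Shape
{
    override @property int width() { return 100; }
}

void main()
{
    auto w = new Wide;
    writeln(w.width);    // 100 -- the override is honored here
    writeln(w.doubled);  // 20  -- but not here, because doubled() reads _width
}

Short of reading the base class source, the subclass author has no way to tell which code paths the override actually affects.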

Anders Hejlsberg talks about why they decided to use final by default in C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual is the Default section. They do this *because* they saw the drawbacks of Java's virtual by default and were able to learn from it.

Switching to final-by-default doesn't have to be an immediate breaking change. If we had a virtual keyword, it could go through a deprecation process where overriding a method not declared virtual results in a warning, just as the override enforcement did.
June 05, 2013
On 6/5/13 4:01 PM, Kapps wrote:
> Anders Hejlsberg talks about why they decided to use final by default in
> C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual
> is the Default section. They do this *because* they saw the drawbacks of
> Java's virtual by default and were able to learn from it.

This is a solid piece of evidence.

Andrei
June 05, 2013
On 05.06.2013 22:01, Kapps wrote:
> On Tuesday, 4 June 2013 at 04:07:10 UTC, Andrei Alexandrescu wrote:
>> Choosing virtual (or not) by default may be dubbed a mistake only in a
>> context. With the notable exception of C#, modern languages aim for
>> flexibility and then do their best to obtain performance. In the
>> context of D in particular, there are arguments for the default going
>> either way. If I were designing D from scratch it may even make sense
>> to e.g. force a choice while offering no default whatsoever.
>
> C# chose final-by-default not simply because of performance issues but
> because of issues with incorrect code. While performance is an issue
> (much more so in D than in C#), they can work around that using their
> JIT compiler, just like HotSpot does for Java. The real issue is that
> overriding methods that the author did not think about being overridden
> is *wrong*. It leads to incorrect code. Making methods virtual relies on
> some implementation details being fixed, such as whether to call fields
> or properties. Many people who wrap fields in properties use the fields
> still in various places in the class. Now someone overrides the
> property, and finds that it either makes no difference at all, or that it
> works everywhere except in the one or two places where the author refers
> to the field directly. By forcing the author to specifically make things virtual you
> force them to recognize that they should or shouldn't be using the
> property vs the field. There are many other examples for similar issues,
> but I feel properties are one of the biggest issues with
> virtual-by-default, both from a correctness standpoint and a performance
> standpoint. Having @property imply final seems rather hackish though
> perhaps.
>
> Anders Hejlsberg talks about why they decided to use final by default in
> C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual
> is the Default section. They do this *because* they saw the drawbacks of
> Java's virtual by default and were able to learn from it.
>

Oh, I thought it was based on his experience with Object Pascal and Delphi, which also use final by default.

Thanks for sharing.

--
Paulo

June 05, 2013
Kapps:

> C# chose final-by-default not simply because of performance issues but because of [...]

One of the best posts of this thread, Kapps :-)

And indeed, the care for versioning is one important part of C# design.

Bye,
bearophile
June 05, 2013
On 6/5/2013 1:13 PM, Andrei Alexandrescu wrote:
> On 6/5/13 4:01 PM, Kapps wrote:
>> Anders Hejlsberg talks about why they decided to use final by default in
>> C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual
>> is the Default section. They do this *because* they saw the drawbacks of
>> Java's virtual by default and were able to learn from it.
>
> This is a solid piece of evidence.

Yup.

June 05, 2013
On 6/5/2013 2:55 PM, Walter Bright wrote:
> On 6/5/2013 1:13 PM, Andrei Alexandrescu wrote:
>> On 6/5/13 4:01 PM, Kapps wrote:
>>> Anders Hejlsberg talks about why they decided to use final by default in
>>> C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual
>>> is the Default section. They do this *because* they saw the drawbacks of
>>> Java's virtual by default and were able to learn from it.
>>
>> This is a solid piece of evidence.
>
> Yup.
>

We can do an upgrade path as follows:

1. Introduce 'virtual' storage class. 'virtual' not only means a method is virtual, but it is an *introducing* virtual, i.e. it starts a new vtbl[] entry even if there's a virtual of the same name in the base classes. This means that functions marked 'virtual' do not override functions marked 'virtual'.

2. Issue a warning if a function overrides a function that is not marked 'virtual'.

3. Deprecate (2).

4. Error (2), and make non-virtual the default.
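
For illustration, a small example of what each step would affect; the class and method names are made up, the code is valid D today, and the comments describe the proposed behavior rather than anything the compiler currently does:

class Base
{
    void poll() {}  // would be marked 'virtual' under the proposal
    void tick() {}  // would stay unmarked, i.e. non-virtual once step 4 lands
}

class Derived : Base
{
    override void poll() {}  // fine at every step, since Base.poll would be
                             // declared 'virtual'
    override void tick() {}  // step 2: warning; step 3: deprecation;
                             // step 4: error, because Base.tick is not 'virtual'
}

void main() {}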
June 05, 2013
On Wednesday, 5 June 2013 at 22:03:05 UTC, Walter Bright wrote:
> 1. Introduce 'virtual' storage class. 'virtual' not only means a method is virtual, but it is an *introducing* virtual, i.e. it starts a new vtbl[] entry even if there's a virtual of the same name in the base classes. This means that functions marked 'virtual' do not override functions marked 'virtual'.

Your upgrade path sounds generally good to me, I can live with that.

But I want to clarify this #1:

class A { virtual void foo(); }
class B : A { virtual void foo(); }

Error, yes? It should be "override void foo();" or "override final void foo();".

(override and virtual together would always be an error, correct?)

Whereas:

class A { virtual void foo(); }
class B : A { virtual void foo(int); }

is OK because foo(int) is a new overload, right?



If I have these right, then yeah, I think your plan is good and should happen.
June 05, 2013
On Wed, 05 Jun 2013 18:32:58 -0400, Adam D. Ruppe <destructionator@gmail.com> wrote:

> On Wednesday, 5 June 2013 at 22:03:05 UTC, Walter Bright wrote:
>> 1. Introduce 'virtual' storage class. 'virtual' not only means a method is virtual, but it is an *introducing* virtual, i.e. it starts a new vtbl[] entry even if there's a virtual of the same name in the base classes. This means that functions marked 'virtual' do not override functions marked 'virtual'.
>
> Your upgrade path sounds generally good to me, I can live with that.
>
> But I want to clearify this #1:
>
> class A { virtual void foo(); }
> class B : A { virtual void foo(); }
>
> Error, yes? It should be "override void foo();" or "override final void foo();".
>
> (override and virtual together would always be an error, correct?)
>
> Whereas:
>
> class A { virtual void foo(); }
> class B : A { virtual void foo(int); }
>
> is OK because foo(int) is a new overload, right?

No, I think it introduces a new foo.  Calling A.foo does not call B.foo.  In other words, it hides the original implementation; there are two vtable entries for foo.

At least, that is how I understood the C# description from that post, and it seems that is what Walter is trying to specify. The idea is that B probably defined foo before A did, and A adding foo should not break B; B didn't even know about A's foo.
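
To put that concretely: in today's D the two foos share one vtable slot, so a call through an A reference reaches B's implementation; under the proposed introducing-virtual semantics, two independently introduced foos would occupy separate slots and the same call would keep hitting A.foo.

import std.stdio;

class A
{
    void foo() { writeln("A.foo"); }
}

class B : A
{
    override void foo() { writeln("B.foo"); }
}

void main()
{
    A a = new B;
    a.foo();  // today: "B.foo" (one shared vtable slot); with two independent
              // introducing virtuals it would print "A.foo" instead, because
              // B's foo would hide A's in a second slot
}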

-Steve