June 05, 2013
On 2013-06-06, 00:32, Adam D. Ruppe wrote:

> But I want to clarify this #1:
>
> class A { virtual void foo(); }
> class B : A { virtual void foo(); }

With C# semantics (as has been suggested as a basis):

class A {
   virtual void foo() {
       writeln("A.foo");
   }
}

class B : A {
    virtual void foo() {
        writeln("B.foo");
    }
}

void bar() {
    B b = new B();
    A a = b;
    a.foo(); // Prints "A.foo"
    b.foo(); // Prints "B.foo"
}

-- 
Simen
June 05, 2013
On 06.06.2013 00:37, Steven Schveighoffer wrote:
> On Wed, 05 Jun 2013 18:32:58 -0400, Adam D. Ruppe
> <destructionator@gmail.com> wrote:
>
>> On Wednesday, 5 June 2013 at 22:03:05 UTC, Walter Bright wrote:
>>> 1. Introduce 'virtual' storage class. 'virtual' not only means a
>>> method is virtual, but it is an *introducing* virtual, i.e. it starts
>>> a new vtbl[] entry even if there's a virtual of the same name in the
>>> base classes. This means that functions marked 'virtual' do not
>>> override functions marked 'virtual'.
>>
>> Your upgrade path sounds generally good to me, I can live with that.
>>
>> But I want to clarify this #1:
>>
>> class A { virtual void foo(); }
>> class B : A { virtual void foo(); }
>>
>> Error, yes? It should be "override void foo();" or "override final
>> void foo();".
>>
>> (override and virtual together would always be an error, correct?)
>>
>> Whereas:
>>
>> class A { virtual void foo(); }
>> class B : A { virtual void foo(int); }
>>
>> is OK because foo(int) is a new overload, right?
>
> No, I think it introduces a new foo.  Calling A.foo does not call
> B.foo.  In other words, it hides the original implementation, there are
> two vtable entries for foo.
>
> At least, that is how I understood the C# description from that post,
> and it seems Walter is trying to specify that.  The idea is that B
> probably defined foo before A did, and A adding foo should not break B,
> B didn't even know about A's foo.
>
> -Steve

The C# semantics for virtual methods are described here:

http://msdn.microsoft.com/en-us/library/6fawty39.aspx
June 05, 2013
On 6/5/2013 3:37 PM, Steven Schveighoffer wrote:
> No, I think it introduces a new foo.  Calling A.foo does not call B.foo.  In
> other words, it hides the original implementation, there are two vtable entries
> for foo.
>
> At least, that is how I understood the C# description from that post, and it
> seems Walter is trying to specify that.  The idea is that B probably defined foo
> before A did, and A adding foo should not break B, B didn't even know about A's
> foo.

That's right.

June 05, 2013
On Wednesday, 5 June 2013 at 22:53:36 UTC, Paulo Pinto wrote:
> The C# semantics for virtual methods are described here:
>
> http://msdn.microsoft.com/en-us/library/6fawty39.aspx

Ah thanks, I've done very little C# so this is all pretty new to me.

This seems OK too. Actually, now that I think about it, override already takes care of my biggest concern: that we'd accidentally hide something. Given:

class A { void foo(); }
class B:A { override void foo(); }
class C:B { override void foo(); }

The error can pretty easily be "B.foo overrides non-virtual function A.foo", and likewise "C.foo overrides non-virtual function A.foo". The warning on C doesn't need to mention B, since B is already marked override. Thus newbies like me won't incorrectly put virtual on B and accidentally hide A.

So this deprecation will easily point us to put virtual in all the right places and none of the wrong places.


I like it.
June 05, 2013
On Wed, 05 Jun 2013 18:56:28 -0400, Walter Bright <newshound2@digitalmars.com> wrote:

> On 6/5/2013 3:37 PM, Steven Schveighoffer wrote:
>> No, I think it introduces a new foo.  Calling A.foo does not call B.foo.  In
>> other words, it hides the original implementation, there are two vtable entries
>> for foo.
>>
>> At least, that is how I understood the C# description from that post, and it
>> seems Walter is trying to specify that.  The idea is that B probably defined foo
>> before A did, and A adding foo should not break B, B didn't even know about A's
>> foo.
>
> That's right.
>

Prompted by Paulo's reference, I see there is another twist not clear from the interview article.

I think it is important to examine the *exact* semantics of C# so we can judge which parts to have.

Here is a complete excerpt from the ECMA submission for C#, which I think is very informative (copy-pasted from this document: http://www.ecma-international.org/publications/files/ECMA-ST-WITHDRAWN/ECMA-334,%202nd%20edition,%20December%202002.pdf ):

8.13 Versioning
Versioning is the process of evolving a component over time in a compatible manner. A new version of a component is source compatible with a previous version if code that depends on the previous version can, when recompiled, work with the new version. In contrast, a new version of a component is binary compatible if an application that depended on the old version can, without recompilation, work with the new version.
Most languages do not support binary compatibility at all, and many do little to facilitate source compatibility. In fact, some languages contain flaws that make it impossible, in general, to evolve a class over time without breaking at least some client code.
As an example, consider the situation of a base class author who ships a class named Base. In the first version, Base contains no method F. A component named Derived derives from Base, and introduces an F. This Derived class, along with the class Base on which it depends, is released to customers, who deploy to numerous clients and servers.

     // Author A
     namespace A
     {
        public class Base // version 1
        {
        }
     }

     // Author B
     namespace B
     {
         class Derived: A.Base
         {
            public virtual void F() {
               System.Console.WriteLine("Derived.F");
            }
         }
     }

So far, so good, but now the versioning trouble begins. The author of Base produces a new version, giving it its own method F.

      // Author A
      namespace A
      {
         public class Base // version 2
         {
            public virtual void F() // added in version 2
            {
               System.Console.WriteLine("Base.F");
            }
         }
      }

This new version of Base should be both source and binary compatible with the initial version. (If it weren't possible to simply add a method then a base class could never evolve.) Unfortunately, the new F in Base makes the meaning of Derived's F unclear. Did Derived mean to override Base's F? This seems unlikely, since when Derived was compiled, Base did not even have an F! Further, if Derived's F does override Base's F, then it must adhere to the contract specified by Base, a contract that was unspecified when Derived was written. In some cases, this is impossible. For example, Base's F might require that overrides of it always call the base. Derived's F could not possibly adhere to such a contract.

C# addresses this versioning problem by requiring developers to state their intent clearly. In the original code example, the code was clear, since Base did not even have an F. Clearly, Derived's F is intended as a new method rather than an override of a base method, since no base method named F exists.
If Base adds an F and ships a new version, then the intent of a binary version of Derived is still clear: Derived's F is semantically unrelated, and should not be treated as an override.

However, when Derived is recompiled, the meaning is unclear: the author of Derived may intend its F to override Base's F, or to hide it. Since the intent is unclear, the compiler produces a warning, and by default makes Derived's F hide Base's F. This course of action duplicates the semantics for the case in which Derived is not recompiled. The warning that is generated alerts Derived's author to the presence of the F method in Base.

If Derived's F is semantically unrelated to Base's F, then Derived's author can express this intent (and, in effect, turn off the warning) by using the new keyword in the declaration of F.

      // Author A
      namespace A
      {
         public class Base // version 2
         {
            public virtual void F() // added in version 2
            {
               System.Console.WriteLine("Base.F");
            }
         }
      }
      // Author B
      namespace B
      {
         class Derived: A.Base   // version 2a: new
         {
            new public virtual void F() {
               System.Console.WriteLine("Derived.F");
            }
         }
      }

On the other hand, Derived's author might investigate further, and decide that Derived's F should override Base's F. This intent can be specified by using the override keyword, as shown below.

      // Author A
      namespace A
      {
         public class Base // version 2
         {
            public virtual void F() // added in version 2
            {
               System.Console.WriteLine("Base.F");
            }
         }
      }
      // Author B
      namespace B
      {
         class Derived: A.Base   // version 2b: override
         {
            public override void F() {
               base.F();
               System.Console.WriteLine("Derived.F");
            }
         }
      }

The author of Derived has one other option, and that is to change the name of F, thus completely avoiding the name collision. Although this change would break source and binary compatibility for Derived, the importance of this compatibility varies depending on the scenario. If Derived is not exposed to other programs, then changing the name of F is likely a good idea, as it would improve the readability of the program: there would no longer be any confusion about the meaning of F.
June 05, 2013
Thank you!
A hyperlink is always so much more substantial than a reasoned claim ;)
On 6 Jun 2013 06:05, "Kapps" <opantm2+spam@gmail.com> wrote:

> On Tuesday, 4 June 2013 at 04:07:10 UTC, Andrei Alexandrescu wrote:
>
>> Choosing virtual (or not) by default may be dubbed a mistake only in a context. With the notable exception of C#, modern languages aim for flexibility and then do their best to obtain performance. In the context of D in particular, there are arguments for the default going either way. If I were designing D from scratch it may even make sense to e.g. force a choice while offering no default whatsoever.
>>
>
> C# chose final-by-default not simply because of performance issues, but because of issues with incorrect code. While performance is an issue (much more so in D than in C#), they can work around that using their JIT compiler, just like HotSpot does for Java. The real issue is that overriding methods that the author did not think about being overridden is *wrong*. It leads to incorrect code.
>
> Making methods virtual relies on some implementation details being fixed, such as whether a method reads a field or a property. Many people who wrap fields in properties still use the fields directly in various places in the class. Now someone overrides the property, and finds that it either makes no change at all, or works everywhere except in the one or two places where the author refers to the field. By forcing the author to specifically make things virtual, you force them to recognize whether they should be using the property or the field. There are many other examples of similar issues, but I feel properties are one of the biggest issues with virtual-by-default, both from a correctness standpoint and a performance standpoint. Having @property imply final seems rather hackish, though, perhaps.
>
> Anders Hejlsberg talks about why they decided to use final by default in
> C# at http://www.artima.com/intv/nonvirtualP.html.
> See the Non-Virtual is the Default section. They do this *because* they saw
> the drawbacks of Java's virtual by default and were able to learn from it.
>
> Switching to final-by-default doesn't have to be an immediate breaking change. If we had a virtual keyword it could go through a deprecation process where overriding a method not declared as virtual results in a warning, exactly like the override enforcement went through.
>


June 06, 2013
On 6/5/13 7:39 PM, Manu wrote:
> Thank you!
> A hyperlink is always so much more substantial than a reasoned claim ;)

It's a hyperlink to an extensive argument. And assuming you refer to yours as the reasoned claim, I'd have to raise a finger to parts of that :o).

That being said, a gentleman must be a gentleman. You destroyed, and I got destroyed. If Walter is on board with the change, I won't oppose. Congratulations.


Andrei
June 06, 2013
On Wednesday, 5 June 2013 at 20:01:06 UTC, Kapps wrote:
> Anders Hejlsberg talks about why they decided to use final by default in C# at http://www.artima.com/intv/nonvirtualP.html. See the Non-Virtual is the Default section. They do this *because* they saw the drawbacks of Java's virtual by default and were able to learn from it.
>

The first point: "Anders Hejlsberg: There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue."

This is blatantly false. Maybe it was true at the time, I don't know, but I find it quite disturbing that the first argument is 100% moot.
June 06, 2013
On 6/5/2013 4:29 PM, Steven Schveighoffer wrote:
> On Wed, 05 Jun 2013 18:56:28 -0400, Walter Bright <newshound2@digitalmars.com>
> wrote:
>
>> On 6/5/2013 3:37 PM, Steven Schveighoffer wrote:
>>> No, I think it introduces a new foo.  Calling A.foo does not call B.foo.  In
>>> other words, it hides the original implementation, there are two vtable entries
>>> for foo.
>>>
>>> At least, that is how I understood the C# description from that post, and it
>>> seems Walter is trying to specify that.  The idea is that B probably defined foo
>>> before A did, and A adding foo should not break B, B didn't even know about A's
>>> foo.
>>
>> That's right.
>>
>
> Prompted by Paulo's reference, I see there is another twist not clear from the
> interview article.
>
> I think it is important to examine the *exact* semantics of C# so we can judge
> which parts to have.

I think we accomplish this in a simpler way:

1. 'virtual' means a method is an "introducing" one.
2. 'override' means a method overrides a base virtual function with a final function.
3. 'override virtual' means override with a non-final function.
4. none means final and non-overriding.

June 06, 2013
On Wednesday, 5 June 2013 at 22:50:27 UTC, Simen Kjaeraas wrote:
> On 2013-06-06, 00:32, Adam D. Ruppe wrote:
>
>> But I want to clarify this #1:
>>
>> class A { virtual void foo(); }
>> class B : A { virtual void foo(); }
>
> With C# semantics (as has been suggested as a basis):
>
> class A {
>    virtual void foo() {
>        writeln("A.foo");
>    }
> }
>
> class B : A {
>     virtual void foo() {
>         writeln("B.foo");
>     }
> }
>
> void bar() {
>     B b = new B();
>     A a = b;
>     a.foo(); // Prints "A.foo"
>     b.foo(); // Prints "B.foo"
> }

If that is true, it is fair to assume that C#'s designers completely missed the point of OOP.

Along the same lines, from the previously linked document: "Every time you say virtual in an API, you are creating a call back hook."

Which suggests that, according to Anders Hejlsberg, OOP is limited to the observer pattern.

Finally, since then, tooling has been introduced in C# to revirtualize everything. This is possible in C# because of the VM, but it won't be possible in D.

The whole case about C# isn't very strong IMO.