June 04, 2013
On Tue, 04 Jun 2013 01:16:22 -0400, Manu <turkeyman@gmail.com> wrote:

> On 4 June 2013 14:16, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
>
>> Since when is that on the base class author?  Doctor, I overrode this
>> class, and it doesn't work.  Well, then don't override it :)
>>
>
> Because it wastes your time (and money). And perhaps it only fails/causes
> problems in edge cases, or obscure side effects, or in internal code that
> you have no ability to inspect/debug.
> You have no reason to believe you're doing anything wrong; you're using the
> API in a perfectly valid way... it just happens that it is wrong (the
> author never considered it), and it doesn't work.

Technically and narrow-mindedly, yes, with final-by-default it will not waste your time and money to *try* extending it -- you will know right up front that you can't use it via extension, and therefore cannot use the library if it doesn't fit exactly what you need.  You will simply waste time and money re-implementing it instead.

There is also the quite real possibility that you have the source to the base class, in which case you can determine whether it's possible to extend.

The view you've taken is that if I *can* do something, then the library developer must have expected that usage, simply because it's possible.  This is a bad way to look at APIs.  Documentation and intent are important to consider.

>> Also there is the possibility of a class that isn't designed from the
>> start to be overridden.  But overriding one or two methods works, and has
>> no adverse effects.  Then it is a happy accident.  And it even enables
>> designs that take advantage of this default, like mock objects.  I would
>> point out that in Objective-C, ALL methods are virtual, even class methods
>> and properties.  It seems to work fine there.
>>
>
> Even Apple professes that Obj-C is primarily useful for UI code, and they use
> C for tonnes of other stuff.

First, I've never heard that statement or read it anywhere (have you got a link?).  Second, the idea that if you use Objective-C objects for your API, then you must use method calls for EVERYTHING is ridiculous.  Pretty much all the OS functionality is exposed via Objective-C objects.  That doesn't mean the underlying implementation is pure objects, like wrapping ints in objects or something -- I don't know of any language that would do that.  The public API is all virtual, including networking, I/O, image processing, threading, etc., and it works quite well.

C is a subset of Objective-C, so it's quite easy to switch back and forth.

>> What I'm really trying to say is, when final is the default, and you really
>> should have made some method virtual (but didn't), then you have to pay for
>> it later when you update the base class.
>
>
> I recognise this, but I don't think that's necessarily a bad thing. It
> forces a moment of consideration wrt making the change, and whether it will
> affect anything else. If it feels like a significant change, you'll treat
> it as such (which it is).
> Even though you do need to make the change, it's not a breaking change, and
> you don't risk any side effects.

I find this VERY ironic :)

Library Author: After careful consideration, we have decided that we are going to make all our classes virtual, to allow more flexibility.

Library user Manu: NOOOOO! That will make all my code horribly slow!

Library Author: Don't worry!  Your code will still compile and work!  It's a non-breaking change with no risk of side effects.

>> When virtual is the default, and you really wanted it to be final (but
>> didn't do that), then you have to pay for it later when you update the base
>> class.  There is no way that is advantageous to *everyone*.
>>
>
> But unlike the first situation, this is a breaking change. If you are not
> the only user of your library, then this can't be done safely.

I think it breaks both ways, just in different ways.

>> It's advantageous to a particular style of coding.  If you know everything
>> is virtual by default, then you write code expecting that.  Like mock
>> objects.  Or extending a class simply to change one method, even when you
>> weren't expecting that to be part of the design originally.
>>
>
> If you write code like that, then write 'virtual:', it doesn't hurt anyone
> else. The converse is not true.

This really is simply a matter of preference.  Your preference for performance over flexibility is biasing your judgment.  You can just as easily write 'final'.  The default is an arbitrary decision.
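
For what it's worth, opting out is a one-liner today.  A minimal sketch (Matrix is just a made-up example):

class Matrix
{
    private float[16] data;

final:  // every method from here down is non-virtual, hence inlinable
    float get(size_t i, size_t j) const { return data[i * 4 + j]; }
    void set(size_t i, size_t j, float v) { data[i * 4 + j] = v; }
}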

When I first came across D, I was experiencing "D euphoria" and I wholeheartedly considered the decision to have virtual-by-default a very wise one.  At this point, I'm indifferent.  It could have been either way, and I think we would be fine.

But to SWITCH mid-stream would be a horrible breaking change, and needs to have a very compelling reason.
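
And to make the mock-object scenario quoted above concrete -- an untested sketch with made-up names:

import std.stdio;

class Database
{
    // Virtual by default, even though the author never planned for mocking.
    string query(string sql) { return "real database result"; }
}

// A test can substitute behavior precisely because query() is virtual:
class MockDatabase : Database
{
    override string query(string sql) { return "canned result"; }
}

void main()
{
    Database db = new MockDatabase;
    writeln(db.query("SELECT 1"));  // prints "canned result"
}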

>>
>> I think it is unfair to say most classes are not base classes.  This would
>> mean most classes are marked as final.  I don't think they are.  One of the
>> main reasons to use classes in the first place is for extendability.
>>
>
> People rarely use the final keyword on classes, even though they could 90%
> of the time.

Let me fix that for you:

"People rarely use the final keyword on classes, even though I wish they would 90% of the time."

A non-final class is, by definition, a base class.  To say that a non-final class is not a base class because it 'could be' final is just denial :)


>> The losses are that if category 3 were simply always final, some other
>> anti-Manu who wanted to extend everything has to contact all the original
>> authors to get them to change their classes to virtual :)
>>
>
> Fine, they'll probably be receptive since it's not a breaking change.
> Can you guess how much traction I have when I ask an author of a popular
> library to remove some 'virtual' keywords in C++ code?
> "Oh we can't really do that, it could break any other users!", so then we
> rewrite the library.

This is a horrible argument.  C++ IS final by default.  Authors HAVE TO opt in to virtual.  You have been spending all this time arguing we should go the C++ route, only to tell me that your experience with C++ is that you can't get what you want there either?!!!

Alternatively, we can say the two situations aren't the same.  In the C++ situation, the author opted for virtuality.  In the D case, the author may have simply not cared.  In the not caring case, they may be much more open to adding final (I did).  In the case where they specifically want virtuality, they aren't going to drop it whether it's the default or not.

>> BTW, did you know you can extend a base class and simply make the extension
>> final, and now all the methods on that derived class become non-virtual
>> calls?  Much easier to do than making the original base virtual (Note I
>> haven't tested this to verify, but if not, it should be changed in the
>> compiler).
>>
>
> One presumes that the library that defines the base class deals with its
> own base pointers internally, and as such, the functions that I may have
> finalised in my code will still be virtual in the place that it counts.

Methods still take the base pointer, but on a final class they become inlinable, and any methods they call will be final and inlinable too.
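
A sketch of what I mean (untested; Widget and FastWidget are made-up names):

class Widget
{
    void update() { }  // virtual by default
}

final class FastWidget : Widget
{
    override void update() { }
}

void use(FastWidget f, Widget w)
{
    f.update();  // static type is a final class: direct call, inlinable
    w.update();  // through the base pointer it is still a virtual call
}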

Any closed-source code is already compiled, and it's too bad you can't fix it.  But that is simply a missed optimization by the library writer.  It's no different than someone having a poorly implemented algorithm, or doing something stupid like unaligned SIMD loads :)

-Steve
June 04, 2013
On Tuesday, 4 June 2013 at 07:33:04 UTC, Dicebot wrote:
> On Tuesday, 4 June 2013 at 05:41:16 UTC, Rob T wrote:
>> Structs would IMO be far more useful if they had inheritance. Inheritance can be fully decoupled from polymorphism, so there's no reason why structs, which are not polymorphic, cannot inherit.
>
> If no polymorphism is needed, there is no reason to use inheritance instead of template mixins.

Mixins make me shudder; however, if you can point out an example of this working for simulating struct inheritance, I'd be interested to have a look. Of course I strongly suspect that it will suffer from the same problems as the other suggested methods: not obvious, and difficult to implement and maintain.
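
My rough understanding of what the suggestion amounts to -- an untested sketch, names made up:

// The shared fields and methods live in a mixin template
// instead of a base struct.
mixin template PointBase()
{
    int x, y;
    void move(int dx, int dy) { x += dx; y += dy; }
}

struct Point
{
    mixin PointBase;
}

struct ColoredPoint
{
    mixin PointBase;  // 'inherits' x, y and move, with no polymorphism
    uint color;
}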

--rt
June 04, 2013
On Tue, 04 Jun 2013 07:32:31 -0400, Joseph Rushton Wakeling <joseph.wakeling@webdrake.net> wrote:

> On 06/04/2013 01:15 PM, Manu wrote:
>> * virtual is a one-way trip. It can't be undone without risking breaking code
>> once released to the wild. How can that state be a sensible default?
>   - Cannot be undone by the compiler/linker like it can in other (dynamic)
>> languages. No sufficiently smart compiler can ever address this problem as an
>> optimisation.
>
> Have to say that for me, this is a bit of a killer point.  If a programmer
> mistakenly picks the default option instead of the desired qualifier, ideally
> you want the fix to be non-breaking.

Define non-breaking.

If your code still compiles but all of a sudden becomes horrendously slow, is that a non-breaking change?

-Steve
June 04, 2013
+1
June 04, 2013
On Tuesday, June 04, 2013 14:41:30 Jerry wrote:
> +1

Please always quote at least some of the post that you're replying to. Posts don't always thread properly on all clients, so it's not always obvious who someone is replying to if they don't quote anything. And some people don't use threading at all when they view posts.

- Jonathan M Davis
June 04, 2013
"Jonathan M Davis" <jmdavisProg@gmx.com> writes:

> On Tuesday, June 04, 2013 14:41:30 Jerry wrote:
>> +1
>
> Please always quote at least some of the post that you're replying to. Posts don't always thread properly on all clients, so it's not always obvious who someone is replying to if they don't quote anything. And some people don't use threading at all when they view posts.

Sorry about that.  I was endorsing Manu's side of the virtual-vs-final-by-default argument for class methods.

I also work with researchers and sloppy research code, and I have spent plenty of time fixing performance problems due to slow methods inside tight loops.  In C++ the culprit is more often people failing to make simple calls inlinable, but I've seen poor choices of virtual functions as well.

Jerry
June 04, 2013
On Sunday, 2 June 2013 at 14:34:43 UTC, Manu wrote:
> Yeah, this is an interesting point. These friends of mine all write C code,
> not even C++.

Maybe you should mention Julia to them. It's quite a good scientific language.
June 05, 2013
On 5 June 2013 02:21, Rob T <alanb@ucora.com> wrote:

> On Tuesday, 4 June 2013 at 05:58:30 UTC, Andrei Alexandrescu wrote:
>
>> On 6/4/13 1:41 AM, Rob T wrote:
>>
>>> Structs would IMO be far more useful if they had inheritance.
>>>
>>
>> We do offer subtyping via alias this.
>>
>> Andrei
>>
>
> Yeah, I saw that method described in another thread. The technique is not even remotely obvious, but the major problem is that it's very limited. After you do one alias this, you can't use alias this again for other things. Maybe that'll eventually change, I don't know. It seems like a hack to me; I'd rather see real inheritance.
>
> On Tuesday, 4 June 2013 at 05:58:58 UTC, Jonathan M Davis wrote:
>
>> How would it even work for a struct to inherit without polymorphism? The
>> whole
>> point of inheritance is to make it so that you can create types that can
>> be
>> used in place of another,
>>
> ....
>
> The other significant reason for inheritance is to reuse pre-built sub-components. I rarely use polymorphism, but I make a lot of use out of inheritance, so what happens is that I end up creating classes when all I really need is structs. I cannot be the only person doing this either, and I suspect it's very common.
>
>  Use composition, and if you want to be able to call members of the inner
>> struct on the outer struct as if they were members of the outer struct,
>> then
>> use alias this or opDispatch to forward them to the inner struct.
>>
>>
> For simulating inheritance, yes, you probably can make use out of inner structs, but how to make it all work seamlessly is not obvious, and using opDispatch to glue it together is time-consuming and error-prone.
>
> On Tuesday, 4 June 2013 at 05:56:49 UTC, deadalnix wrote: ...
>
>> structs are value types. You can't know the size of a polymorphic type, so you'll have trouble sooner than you imagine.
>>
>
> That's not an issue if you cut out the polymorphism nonsense from the feature set, which means that for structs the size is always knowable. I see no reason why structs cannot inherit, and it's unfortunate that D forbids it.
>
> I'd like to hear what Manu says about it, because reading between the lines, I suspect he does not need to be using classes but cannot use structs because they are too limited - that's my guess, but I really don't know. For me, I'd use structs much more often except that they cannot inherit.


I certainly have written, and do write, shallow inheritance structures with
no virtuals; it does occur from time to time, and I have missed struct
inheritance in D, but alias this has met my needs so far.
But I'd say the majority of classes are polymorphic. There's usually at
least some sort of 'update()' or 'doWork()' function that needs to be
virtual, but the vast majority of methods are trivial accessors throughout
the hierarchy.
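
For reference, the alias this pattern I mean -- a minimal, untested sketch
with made-up names:

struct Base2D
{
    float x, y;
    void translate(float dx, float dy) { x += dx; y += dy; }
}

struct Sprite
{
    Base2D base;
    alias base this;  // Sprite forwards to, and implicitly converts to, Base2D
    uint texture;
}

void use()
{
    Sprite s;
    s.translate(1, 2);  // forwarded to s.base.translate
    Base2D b = s;       // implicit 'subtyping' conversion
}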


June 05, 2013
On Monday, 3 June 2013 at 17:18:55 UTC, Andrei Alexandrescu wrote:
> override is not comparable
> because it improves code correctness and maintainability, for which there is ample prior evidence. It's also a matter for which, unlike virtual/final, there is no reasonable recourse.

Virtual by default makes it easier to accidentally call a method on an object that is not fully initialized yet (its constructor has not run yet). This situation is possible regardless of whether virtual is the default or not; it can just happen more easily. I think calling a virtual function in a constructor should generate a warning. (I wouldn't be surprised if there is an enhancement request filed for this already.)

module test;
import std.stdio;

class Base
{
    this () { writeln("Base.this"); foo(); }
    void foo () { writeln("Base.foo"); }
}

class Derived : Base
{
    this () { writeln("Derived.this"); }
    override void foo () { writeln("Derived.foo"); }
}

void main () { auto d = new Derived; }

Program output:
Base.this
Derived.foo // Derived.foo is called before Derived's constructor has run
Derived.this
June 05, 2013
On Wed, 05 Jun 2013 13:53:58 +0100, Michal Minich <michal.minich@gmail.com> wrote:

> On Monday, 3 June 2013 at 17:18:55 UTC, Andrei Alexandrescu wrote:
>> override is not comparable
>> because it improves code correctness and maintainability, for which there is ample prior evidence. It's also a matter for which, unlike virtual/final, there is no reasonable recourse.
>
> Virtual by default makes it easier to accidentally call a method on an object that is not fully initialized yet (its constructor has not run yet). This situation is possible regardless of whether virtual is the default or not; it can just happen more easily.

Yeah, it happened to me in C++ .. same with virtuals in the destructor.  Lesson learned first time tho :p

> I think calling a virtual function in a constructor should generate a warning. (I wouldn't be surprised if there is an enhancement request filed for this already.)

With virtual by default, could D statically verify/deny these?  What about with final by default?  Does it get easier or harder to detect/deny them in either case?
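
To make the question concrete -- an untested sketch of the final-by-default case:

class Base
{
    // Under a final-by-default rule foo() would be non-virtual, so the
    // call in the constructor would bind statically to Base.foo and could
    // never dispatch to Derived.foo before Derived's constructor has run.
    this () { foo(); }
    void foo () { }
}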

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/