June 04, 2013
On Tuesday, 4 June 2013 at 05:22:39 UTC, Andrei Alexandrescu wrote:
> Situation: I have a closed source library I want to use. I test and find that it doesn't meet our requirements for some trivial matter like the behavior of a few methods (super common, I assure you).
> The author is not responsive, possibly because it would be a potentially breaking change for all the other customers of the library. I've now wasted a month of production time in discussions on an already tight schedule, and I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present with virtual-by-default, but it will be MUCH WORSE with final-by-default. I don't want to step backwards on this front.
>
> Destroyed?

I don't buy this.

Overriding a method from a class in a closed source library is only a sane thing to do if the docs explicitly say you can.
If the docs explicitly say you can, then one can assume that the author will have marked it virtual.

This virtual-by-default flexibility only exists when you're working with classes that you understand the internals of.


Consider these situations, assuming lazy but not completely incompetent library authors:

Hidden source, virtual by default:
    You can override most things, but you're playing with fire unless you have a written promise that it's safe to do so.

Open source, virtual by default:
    Once you understand the internals of a class, you can safely override whatever you want. You are exposed to breakage due to implementation detail, but documentation represents a promise of sorts.

Hidden source, final by default:
    You can only override what the author allows you to. This will have at least some connection with what is safe to override.

Open source, final by default:
    Once you understand the internals of a class, you can fork the library and add virtual on the methods you need to override that the author did not consider.*



Basically, final-by-default is safer and faster, virtual-by-default is more convenient when working with open source libraries.
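To make the contrast concrete, here's a minimal sketch in today's virtual-by-default D, where the author has to opt into final explicitly (the names are made up for illustration):

    // Today's default: every method is virtual unless the author seals it.
    class Widget
    {
        void draw() { }        // virtual by default: anyone may override it
        final void id() { }    // explicitly sealed by the author
    }

    // A label can seal a whole class body at once:
    class Sealed
    {
    final:
        void a() { }    // non-virtual
        void b() { }    // non-virtual
    }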



* You might consider this an unacceptable extra burden, especially considering distribution problems. However, I would counter that if you're going to override a method that isn't explicitly intended to be overridden, you are exposing yourself to breakage due to implementation detail, and therefore you would effectively be distributing your own version anyway.
June 04, 2013
On 6/4/13 4:36 AM, Manu wrote:
[snip]

I've read this, thanks for answering. Unfortunately I need to retire from this thread - there's only so many hours in the day, and it seems we got to the point where all sides shout the same malarkey over and over again past one another.

It would be great if this thread results in a language improvement - a means to negate a storage class label inside a class.


Andrei
June 04, 2013
On Tuesday, 4 June 2013 at 12:29:10 UTC, John Colvin wrote:
> On Tuesday, 4 June 2013 at 05:22:39 UTC, Andrei Alexandrescu wrote:
>> Situation: I have a closed source library I want to use. I test and find that it doesn't meet our requirements for some trivial matter like the behavior of a few methods (super common, I assure you).
>> The author is not responsive, possibly because it would be a potentially breaking change for all the other customers of the library. I've now wasted a month of production time in discussions on an already tight schedule, and I begin the process of re-inventing the wheel.
>> I've spent 10 years repeating this pattern. It will still be present with virtual-by-default, but it will be MUCH WORSE with final-by-default. I don't want to step backwards on this front.
>>
>> Destroyed?
>
> I don't buy this.
>
> Overriding a method from a class in a closed source library is only a sane thing to do if the docs explicitly say you can.

For what it's worth, I've done it countless times in software that is in production right now.

> This virtual-by-default flexibility only exists when you're working with classes that you understand the internals of.
>

No, you need to understand its usage, not its internals.

> Basically, final-by-default is safer and faster, virtual-by-default is more convenient when working with open source libraries.
>

Once again, the speed claim fails to address, or even consider, other techniques that can be used to finalize methods.
June 04, 2013
On Tuesday, 4 June 2013 at 12:47:46 UTC, Andrei Alexandrescu wrote:
> It would be great if this thread results in a language improvement - a means to negate a storage class label inside a class.
>

Yes please! I haven't felt that need specifically in the case of final on classes, but in many other contexts I wish this were possible.

I propose ~storageclass, to mimic the this/~this pattern.
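Something like this, with purely hypothetical syntax to illustrate (the commented line is the proposal, not valid D today):

    class C
    {
    final:                  // everything from here down is final
        void locked() { }

        // There is currently no way to return to virtual after a `final:`
        // label. The proposed negation would allow something like:
        // ~final void hook() { }
    }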
June 04, 2013
On 4 June 2013 22:47, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/4/13 4:36 AM, Manu wrote:
> [snip]
>
> I've read this, thanks for answering. Unfortunately I need to retire from this thread - there's only so many hours in the day, and it seems we got to the point where all sides shout the same malarkey over and over again past one another.
>
> It would be great if this thread results in a language improvement - a means to negate a storage class label inside a class.


I think that's required anyway, separately to this discussion.


June 04, 2013
On Tuesday, 4 June 2013 at 13:06:32 UTC, Manu wrote:
> I think that's required anyway, separately to this discussion.

+1
June 04, 2013
On Tuesday, 4 June 2013 at 12:51:35 UTC, deadalnix wrote:
> On Tuesday, 4 June 2013 at 12:29:10 UTC, John Colvin wrote:
>> On Tuesday, 4 June 2013 at 05:22:39 UTC, Andrei Alexandrescu wrote:
>>> Situation: I have a closed source library I want to use. I test and find that it doesn't meet our requirements for some trivial matter like the behavior of a few methods (super common, I assure you).
>>> The author is not responsive, possibly because it would be a potentially breaking change for all the other customers of the library. I've now wasted a month of production time in discussions on an already tight schedule, and I begin the process of re-inventing the wheel.
>>> I've spent 10 years repeating this pattern. It will still be present with virtual-by-default, but it will be MUCH WORSE with final-by-default. I don't want to step backwards on this front.
>>>
>>> Destroyed?
>>
>> I don't buy this.
>>
>> Overriding a method from a class in a closed source library is only a sane thing to do if the docs explicitly say you can.
>
> For what it's worth, I did it a countless number of time in software that is in production right now.
>

What happens when the library author adds some critical book-keeping to a method that you're overriding?

>> This virtual-by-default flexibility only exists when you're working with classes that you understand the internals of.
>>
>
> No you understand its usage.

See my point above: you need to be certain that the exact behaviour of the original function is not in some way critical to the correctness of the class in general.
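A minimal D sketch of that failure mode (the class and names are invented for illustration):

    class Connection
    {
        private int openCount;

        void close()
        {
            --openCount;    // critical book-keeping added by the author later
            // ... release the underlying socket ...
        }
    }

    class LoggingConnection : Connection
    {
        override void close()
        {
            // Logs the close but forgets to call super.close(), so the
            // book-keeping in Connection.close is silently skipped and the
            // class's internal state is now wrong.
        }
    }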

>
>> Basically, final-by-default is safer and faster, virtual-by-default is more convenient when working with open source libraries.
>>
>
> Once again the fast claim fail to address or even consider other technique that can be used to finalize methods.

I agree it would be nice to follow another route on this. Final vs. virtual defaults is probably an endless debate; sidestepping it completely with a clever finalizing technique would be ideal.
June 04, 2013
On Tuesday, 4 June 2013 at 13:26:26 UTC, John Colvin wrote:
> What happens when the library author adds some critical book-keeping to a method that you're overriding?
>

It shouldn't do so in a public method, as the problem is exactly the same as with an override. If the method is private, then the problem goes away. Finally, if the method is protected, it doesn't make sense.

> I agree it would be nice to follow another route on this. Final vs virtual defaults is probably an endless debate, sidestepping it completely with a clever finalizing technique would be ideal.

The information missing for the compiler at link time right now is the overridability of a method in a shared object. This can be solved for a lot of code by enforcing stronger semantics for extern.
June 04, 2013
On Tue, 04 Jun 2013 01:05:28 -0400, Simen Kjaeraas <simen.kjaras@gmail.com> wrote:

> On Tue, 04 Jun 2013 06:16:45 +0200, Steven Schveighoffer <schveiguy@yahoo.com> wrote:
>
>> I think it is unfair to say most classes are not base classes.  This would mean most classes are marked as final.  I don't think they are.  One of the main reasons to use classes in the first place is for extendability.
>
> This is false. Consider this hierarchy: A->B->C, where x->y means 'x
> derives from y'. There is only one base class (A), and only one class
> that may be marked final (C). This will often be the case.

I think you mean the other way around.  x->y means 'y derives from x'.  But I get your point.

However, it's an invalid point.  By this logic there is exactly one base class, Object.  I think it's safe to say that way of thinking is not productive.  Any class that is not final can be a base class.  The classes it derives from are not relevant (including Object).

>> BTW, did you know you can extend a base class and simply make the extension final, and now all the methods on that derived class become non-virtual calls?  Much easier to do than making the original base virtual (Note I haven't tested this to verify, but if not, it should be changed in the compiler).
>
> This does however not help one iota when you have a reference to a base
> class. This will also often be the case.

I believe this is a red herring.  If you are not in control of the creation of the object, the system may actually REQUIRE virtuality, since the base pointer might actually be to a derived type.
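For reference, the trick from the "BTW" above looks like this in D; whether a given compiler actually emits a direct call is implementation-dependent, as noted:

    class Base
    {
        void process() { }    // virtual
    }

    // Sealing the leaf: nothing can derive from FinalDerived, so no further
    // overrides of process are reachable through it.
    final class FinalDerived : Base
    {
        override void process() { }
    }

    void use(FinalDerived d)
    {
        d.process();    // eligible for a direct, non-virtual call
    }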

-Steve
June 04, 2013
On Tuesday, 4 June 2013 at 05:58:30 UTC, Andrei Alexandrescu wrote:
> On 6/4/13 1:41 AM, Rob T wrote:
>> Structs would IMO be far more useful if they had inheritance.
>
> We do offer subtyping via alias this.
>
> Andrei

Yeah, I saw that method described in another thread. The technique is not even remotely obvious, but the major problem is that it's very limited. After you do one alias this, you can't use alias this again for other things. Maybe that'll eventually change, I don't know. It seems like a hack to me; I'd rather see real inheritance.
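For anyone following along, the alias this subtyping under discussion looks roughly like this:

    struct Base
    {
        int x;
        void describe() { }
    }

    struct Derived
    {
        Base base;
        alias base this;    // lookups and conversions forward to base
        int y;
    }

    void takesBase(Base b) { }

    void main()
    {
        Derived d;
        d.x = 1;          // forwarded to d.base.x
        d.describe();     // forwarded to d.base.describe()
        takesBase(d);     // implicit conversion via alias this
    }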

On Tuesday, 4 June 2013 at 05:58:58 UTC, Jonathan M Davis wrote:
> How would it even work for a struct to inherit without polymorphism? The whole
> point of inheritance is to make it so that you can create types that can be
> used in place of another,
....

The other significant reason for inheritance is to reuse pre-built sub-components. I rarely use polymorphism, but I make a lot of use of inheritance, so what happens is that I end up creating classes when all I really need is structs. I cannot be the only person doing this either, and I suspect it's very common.

> Use composition, and if you want to be able to call members of the inner
> struct on the outer struct as if they were members of the outer struct, then
> use alias this or opDispatch to forward them to the inner struct.
>

For simulating inheritance, yes, you probably can make use of inner structs, but how to make it all work seamlessly is not obvious, and using opDispatch to make it stick together is time-consuming and error-prone.
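As a rough illustration of the opDispatch forwarding Jonathan describes (a sketch, not production code):

    import std.stdio;

    struct Inner
    {
        void greet() { writeln("hello from Inner"); }
    }

    struct Outer
    {
        Inner inner;

        // Any member not found on Outer is forwarded to inner by name.
        auto opDispatch(string name, Args...)(Args args)
        {
            return mixin("inner." ~ name ~ "(args)");
        }
    }

    void main()
    {
        Outer o;
        o.greet();    // dispatched to o.inner.greet()
    }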

On Tuesday, 4 June 2013 at 05:56:49 UTC, deadalnix wrote:
...
> Structs are value types. You can't know the size of a polymorphic type. So you'll have trouble sooner than you imagine.

That's not an issue if you cut the polymorphism nonsense out of the feature set, which means that for structs the size is always knowable. I see no reason why structs cannot inherit, and I find it unfortunate that D forbids it.

I'd like to hear what Manu says about it, because reading between the lines, my guess is that he probably does not need to be using classes but cannot use structs because they are too limited - that's my guess, but I really don't know. For me, I'd use structs much more often, except that they cannot inherit.

--rt