June 04, 2013
On Tuesday, June 04, 2013 00:12:30 Walter Bright wrote:
> On 6/3/2013 11:49 PM, deadalnix wrote:
> > We can do it in a D-specific way (using our own metadata and providing an optimization pass for LLVM), but most likely we won't even need to, as the same feature is planned to be added to clang and we can most likely simply reuse clang's metadata.
> 
> There is another way.
> 
> D can be made aware that it is building an executable (after all, that is why it invokes the linker). If you shove all the source code into the compiler in one command, for an executable, functions that are not overridden can be made final.

Shared libraries kill that - especially those that get loaded explicitly rather than linked to. They could have classes which derive from classes in the executable which are never derived from within the executable itself.

- Jonathan M Davis
June 04, 2013
On Tuesday, 4 June 2013 at 07:39:04 UTC, Jonathan M Davis wrote:
> On Tuesday, June 04, 2013 00:25:39 Walter Bright wrote:
>> On 6/3/2013 10:58 PM, Andrei Alexandrescu wrote:
>> > Unless fresh arguments, facts, or perspectives come about, I am personally
>> > not convinced, based on this thread so far, that we should operate a
>> > language change.
>> One possibility is to introduce virtual as a storage class that overrides
>> final. Hence, one could write a class like:
>> 
>> class C {
>>    final:
>>      void foo();
>>      void baz();
>>      virtual int abc();
>>      void def();
>> }
>> 
>> This would not break any existing code, and Manu would just need to get into
>> the habit of having "final:" as the first line in his classes.
>
> That would be good regardless of whether virtual or non-virtual is the
> default. In general, the function attributes other than access level specifiers
> and @safety attributes suffer from not being able to be undone once you use
> them with a colon or {}.
>
> - Jonathan M Davis

Yeah, it's basically removing D's inherent bias against programmers concerned with performance as opposed to flexibility by allowing performance people such as Manu to structure their code however they want. The price is a keyword new to D but so common elsewhere it hardly seems noticeable as such.
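For comparison, the `final:`-plus-`virtual` pattern Walter proposes maps directly onto what C++ (non-virtual by default) already does. A minimal C++ sketch of the same class shape — member names taken from the example quoted above, bodies invented purely for illustration:

```cpp
#include <cassert>

// C++ analogue of the proposed D class: every member is non-virtual
// ("final", in the proposal's terms) unless explicitly marked virtual.
class C {
public:
    void foo() {}                    // statically dispatched
    void baz() {}                    // statically dispatched
    virtual int abc() { return 1; }  // opted in to dynamic dispatch
    void def() {}                    // statically dispatched
    virtual ~C() = default;
};

class Sub : public C {
public:
    int abc() override { return 2; } // only abc() can be overridden
};

// Dispatch through a base reference reaches the override only for abc().
int callAbc(C& c) { return c.abc(); }
```

The proposed D `final:` label would simply make this C++ default reachable without writing `final` on every member.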
June 04, 2013
On Tuesday, 4 June 2013 at 07:50:31 UTC, Zach the Mystic wrote:
> Yeah, it's basically removing D's inherent bias against programmers concerned with performance as opposed to flexibility by allowing performance people such as Manu to structure their code however they want. The price is a keyword new to D but so common elsewhere it hardly seems noticeable as such.

Right?
June 04, 2013
On 6/4/2013 2:46 AM, Walter Bright wrote:
> On 6/4/2013 12:32 AM, Sean Cavanaugh wrote:
>> The problem isn't going to be in your own code, it will be in using
>> everyone elses.
>
> If you're forced to use someone else's code and are not allowed to
> change it in any way, then you're always going to have problems with
> badly written APIs.
>
> Even Manu mentioned that he's got problems with C++ libraries because of
> this, and C++ has non-virtual by default.
>

Changing third-party libraries is a maintenance disaster, as they can rename files and make other changes that cause your local modifications to disappear into the ether after a merge. We have a good number of customizations to wxWidgets here, and I had to carefully port them all up to the current wx codebase because the old one wasn't safe to use in 64-bit builds on Windows.

Also, final-izing a third party library is going to be a pretty dangerous thing to do and likely introduce some serious bugs along the way.


June 04, 2013
On Tuesday, 4 June 2013 at 07:39:19 UTC, Flamaros wrote:
> On Tuesday, 4 June 2013 at 07:12:34 UTC, Walter Bright wrote:
>> On 6/3/2013 11:49 PM, deadalnix wrote:
>>> We can do it in a D specific way (using our own metadata and providing an
>>> optimization pass for LLVM) but most likely we won't even need to as the same
>>> feature is planned to be added to clang and we can most likely simply reuse
>>> clang's metadata.
>>
>> There is another way.
>>
>> D can be made aware that it is building an executable (after all, that is why it invokes the linker). If you shove all the source code into the compiler in one command, for an executable, functions that are not overridden can be made final.
>
> I think this is interesting, because all open source software can be built from source, and the same can be done with some commercial products under certain conditions. And compiling the world with D is realistic, given its short compilation times.
>
> I also don't understand why compilers use a linker instead of generating the executable directly, as they already know the binary format and do the optimization. In the case of DMD, which takes all the source files in one go, I don't see any issues. Does DMD do better inlining optimization than the linker when it gets all the sources as parameters?


Because C language tooling still persists around us.

In the Pascal family of languages, the linker is part of the compiler, like in Turbo Pascal, Delphi, Oberon, ....

--
Paulo
June 04, 2013
On Tuesday, June 04, 2013 09:51:31 Zach the Mystic wrote:
> On Tuesday, 4 June 2013 at 07:50:31 UTC, Zach the Mystic wrote:
> > Yeah, it's basically removing D's inherent bias against programmers concerned with performance as opposed to flexibility by allowing performance people such as Manu to structure their code however they want. The price is a keyword new to D but so common elsewhere it hardly seems noticeable as such.
> 
> Right?

Yes. That's essentially right. For some attributes, it's not currently possible to mark functions with them en masse while keeping that attribute off some of the functions (this is particularly bad with the colon syntax, since it affects the rest of the file rather than just a specific scope like the braces do). Adding virtual as a keyword that would undo a final: or final {} on any functions it's marked with would be useful.
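C++ already offers this kind of per-member granularity with its `final` specifier — roughly the mirror image of the proposed `virtual`-undoes-`final:`. A small illustrative sketch (names are invented):

```cpp
#include <cassert>

class Base {
public:
    virtual int id() { return 0; }
    virtual ~Base() = default;
};

class Mid : public Base {
public:
    // Overrides Base::id() and seals it against further overriding.
    // The specifier applies to this one member, not the whole class.
    int id() final { return 1; }
};

class Leaf : public Mid {
    // int id() override { return 2; }  // would not compile: id() is final in Mid
};

int idOf(Base& b) { return b.id(); }  // still dynamically dispatched
```

The sealed method remains virtual (callers through `Base&` still dispatch dynamically); it simply cannot be overridden further down the hierarchy.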

- Jonathan M Davis
June 04, 2013
On 2013-06-04 02:19, Manu wrote:

> That's not quite the case though. Even if I could retrain internal staff
> to start doing that everywhere

I have an idea that could potentially help you here. Have a look at my new thread: "Idea to verify virtual/final methods".

http://forum.dlang.org/thread/kok86c$126l$1@digitalmars.com

-- 
/Jacob Carlborg
June 04, 2013
On 4 June 2013 15:22, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 6/4/13 12:53 AM, Manu wrote:
>
>> I don't buy the flexibility argument as a plus. I think that's a mistake, but I granted that's a value judgement.
>>
>
> Great.


That doesn't mean it's wrong, just that there are other opinions.

>> But it's a breaking change to the API no matter which way you slice it,
>> and I suspect this will be the prevalent pattern.
>> So it basically commits to a future of endless breaking changes when
>> someone wants to tighten up the performance of their library, and
>> typically only after it has had time in the wild to identify the problem.
>>
>
> You're framing the matter all wrongly. Changing a method from virtual to final breaks the code of people who chose to override it - i.e. EXACTLY those folks who found it useful to TAP into the FLEXIBILITY of the design.
>
> Do you understand how you are wrong about this particular little thing?


Well, first, there's a very high probability that the number of people in
that group is precisely zero; but since you can't know the size of your
audience, library devs will almost always act conservatively on that
matter.
In the alternate universe, those folks that really want to extend the class
in unexpected ways may need to contact the author and request the change.
Unlike the situation where I need to do that (where it will probably be
rejected), the author will either give them advice about a better solution,
or will probably be happy to help and make the change, since it's not a
breaking change, and there's no risk of collateral damage.
There's a nice side-effect that comes from the inconvenience too, which is
that the author now has more information from his customers about how his
library is being used, and can factor that into future thought/design.

Surely you can see this point right?
Going virtual is a one-way change.
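The asymmetry being argued over here can be shown concretely. In this C++ sketch (hypothetical names), sealing an already-virtual method is exactly the change that breaks existing overriders, while the reverse change cannot break anyone:

```cpp
#include <cassert>
#include <string>

class Widget {
public:
    virtual std::string name() const { return "widget"; }  // shipped as virtual
    virtual ~Widget() = default;
};

// A client who tapped into that flexibility:
class FancyWidget : public Widget {
public:
    std::string name() const override { return "fancy"; }
};

// If the author later changes name() to `virtual std::string name() const final`,
// FancyWidget::name() stops compiling: the virtual -> final direction breaks
// precisely the clients who overrode it. The opposite change (non-virtual ->
// virtual) cannot break an override, because no override could have existed.
std::string describe(const Widget& w) { return w.name(); }
```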

>> Situation: I have a closed source library I want to use. I test and find
>> that it doesn't meet our requirements for some trivial matter like
>> performance (super common, I assure you).
>> The author is not responsive, possibly because it would be a potentially
>> breaking change to all the other customers of the library, I've now
>> wasted a month of production time in discussions in an already tight
>> schedule, and I begin the process of re-inventing the wheel.
>> I've spent 10 years repeating this pattern. It will still be present
>> with final-by-default, but it will be MUCH WORSE with
>> virtual-by-default. I don't want to step backwards on this front.
>>
>
> Situation: I have a closed source library I want to use. I test and find that it doesn't meet our requirements for some trivial matter like the behavior of a few methods (super common, I assure you).
>
> The author is not responsive, possibly because it would be a potentially
> breaking change to all the other customers of the library, I've now wasted
> a month of production time in discussions in an already tight schedule, and
> I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present with
> virtual-by-default, but it will be MUCH WORSE with final-by-default. I
> don't want to step backwards on this front.
>
> Destroyed?


What? I don't really know what you're saying here, other than mocking me
and trivialising the issue.
This is a very real and long-term problem.

>> Even with C++ final-by-default, we've had to avoid libraries because C++
>> developers can be virtual-tastic sticking it on everything.
>>
>
> Oh, so now the default doesn't matter. The amount of self-destruction is high in this post.


No, you're twisting my words and subverting my point. I'm saying that
virtual-by-default will _make the problem much worse_. It's already enough
of a problem.
Where once there might have been one or two important methods that couldn't
be used inside a loop, now we can't even call 'thing.length' or
'entity.get', which appear completely benign but are virtual accessors.
This extends the problem into the realm of the most trivial of loops, and
the most basic interactions with the class in question.

The point of my comment is to demonstrate that it's a REAL problem that does happen, and under the virtual-by-default standard, it will become much worse.
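To make the "trivial loop" point concrete, here is a C++ sketch with hypothetical `Entity`/`length` names: called through a base reference, the compiler generally cannot assume `length()` is not overridden, so each iteration pays an indirect call through the vtable, whereas the non-virtual accessor can be inlined and its result hoisted out of the loop. The two loops compute identical results; only the dispatch differs.

```cpp
#include <cassert>
#include <cstddef>

class Entity {
    std::size_t n_ = 1000;
public:
    virtual std::size_t length() const { return n_; }  // indirect call via vtable
    std::size_t lengthFinal() const { return n_; }     // inlinable, hoistable
    virtual ~Entity() = default;
};

// The loop condition re-dispatches length() every iteration unless the
// compiler can devirtualize it; lengthFinal() imposes no such barrier.
std::size_t sumVirtual(const Entity& e) {
    std::size_t s = 0;
    for (std::size_t i = 0; i < e.length(); ++i) s += i;
    return s;
}

std::size_t sumFinal(const Entity& e) {
    std::size_t s = 0;
    for (std::size_t i = 0; i < e.lengthFinal(); ++i) s += i;
    return s;
}
```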

>> D will magnify this issue immensely with virtual-by-default.
>>
>
> It will also magnify the flexibility benefits.


And this (dubious) point alone is compelling enough to negate everything
I've presented?

Tell me honestly, when was the last time you were working with a C++ class,
and you wanted to override a method that the author didn't mark virtual?
Has that ever happened to you?
It's never happened to me in 15 years. So is there a real loss of
flexibility, or just a theoretical one?

>> At least in
>> C++, nobody ever writes virtual on trivial accessors.
>> virtual accessors/properties will likely eliminate many more libraries
>> on the spot for being used in high frequency situations.
>>
>
> I don't think a "high frequency situation" would use classes designed naively. Again, the kind of persona you are discussing are very weird chaps.


No, it wouldn't, but everyone needs to make use of 3rd-party code.
And even internal code is prone to forgetfulness and mistakes, as I've
said countless times, which cost time and money to find and fix.

>> Again, refer to Steven's pattern. Methods will almost always be virtual
>> in D (because the author didn't care), until someone flags the issue years later... and then can it realistically be changed? Is it too late? Conversely, if virtual needs to be added at a later time, there are no such nasty side effects. It is always safe.
>>
>
> Again:
>
> - changing a method final -> overridable is nonbreaking. YOU ARE RIGHT HERE.
>
> - changing a method overridable -> final will break PRECISELY code that was finding that design choice USEFUL. YOU SEEM TO BE MISSING THIS.
>

No, it won't break; the override wouldn't be there in the first place,
because the function wasn't virtual.
I realise I've eliminated a (potentially dangerous) application of a class,
but the author is more than welcome to use 'virtual:' if it's important to
them.

I also think that saying people might want to override something is purely
theoretical, and I've certainly never encountered a problem of this sort in
C++.
In my opinion, C++ users often tend to over-use virtual if anything, and I
expect that practice would continue unchanged.

And you've missed (or at least not addressed) why I actually think this is
positive.
Again, to repeat myself: I think this sort of code is far more likely to
be open source, and far more likely to contain templates (in which case
the source is available anyway); and failing those points, it's also of
some benefit for the user who wants to bend this object in an unexpected
direction to have some contact with the author. The author will surely
have some opinion on the new usage pattern, and will now know the library
is being used in this previously unexpected way, and can consider that
user base in the future.

Again, both are still possible. But which should be the DEFAULT? Which is a more dangerous default?

>>         And I argue the subjective opinion, that code can't possibly be
>>         correct
>>         if the author never considered how the API may be used outside his
>>         design premise, and can never test it.
>>
>>
>>     I think you are wrong in thinking traditional procedural testing
>>     methods should apply to OOP designs. I can see how that fails indeed.
>>
>>
>> Can you elaborate?
>> And can you convince me that an author of a class that can be
>> transformed/abused in any way that he may have never even considered,
>> can realistically reason about how to design his class well without
>> being explicit about virtuals?
>>
>
> I can try. You don't understand at least this aspect of OOP (honest affirmation, not intended to offend). If class A chooses to inherit class B, it shouldn't do so to reuse B, but to be reused by code that manipulates Bs. In a nutshell: "inherit not to reuse, but to be reused". I hope this link works: http://goo.gl/ntRrt


I understand the scripture, but I don't buy it outright. In practice,
people derive to 'reuse' just as often as (or even more often than) they
derive to be reused.
APIs are often designed to be used by deriving and implementing some little
thing. Java is probably the most guilty of this pattern I've ever seen; you
typically need to derive from a class to do something trivial like provide
a delegate.
I'm not suggesting it should be that way, just that it's often not that way
in practice.

And regardless, I don't see how the default virtual-ness interferes with the reuse of A in any way. Why do these principles require that EVERYTHING be virtual?
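For reference, the "reuse via composition" alternative to derive-to-reuse can be sketched as follows (class names invented for illustration):

```cpp
#include <cassert>
#include <string>

class B {
public:
    std::string greet() const { return "hello"; }
};

// A reuses B's implementation by holding an instance ("has-a"),
// rather than by deriving from it ("is-a"), so A never enters
// B's inheritance hierarchy and no virtual dispatch is involved.
class A {
    B b_;
public:
    std::string shout() const { return b_.greet() + "!"; }
};
```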

> (If all A wants is to reuse B, it just uses composition.)
>
> You should agree as a simple matter that there's no reasonable way one can design a software library that would be transformed, abused, and misused. Although class designers should definitely design to make good use easy and bad use difficult, they routinely are unable to predict all different ways in which clients would use the class, so designing with flexibility in mind is the safest route (unless concerns for performance overrides that). Your concern with performance overrides that for flexibility, and that's entirely fine. What I disagree with is that you believe what's best for everybody.


D usually has quite an obsession with correctness, so how can it be safe to
encourage use of classes in ways they were never designed or considered
for? Outside of the simplest of classes, I can't imagine any designer can
consider all possibilities; they will have had a very specific usage
pattern in mind. At best, your 'creative' application won't have been
tested.
As new usage scenarios develop, it's useful for the author to know about
them and consider them in the future.

But this isn't a rule, only a default (in this case, paranoid safety first, a typical pattern for D). A class that wants to offer the flexibility you desire can easily use 'virtual:', or if the author is sufficiently confident that any part can be safely extended, they're perfectly welcome to make everything virtual. There's no loss of possibility, just that the default would offer some more confidence that your usage of a given API is correct; you'll get a compile error if you use beyond the author's intent.

>> I've made the point before that the sorts of super-polymorphic classes
>> that might have mostly-virtuals are foundational classes, written once and used many times.
>>
>
> I don't know what a super-polymorphic class is, and google fails to list it: http://goo.gl/i53hS
>
>
>> These are not the classes that programmers sitting at their desk are
>> banging out day after day. This are not the common case. Such a carefully designed and engineered base class can afford a moment to type 'virtual:' at the top.
>>
>
> I won't believe this just because you said it (inventing terminology in the process), it doesn't rhyme with my experience, so do you have any factual evidence to back that up?


It's very frustrating working with proprietary code; I can't paste a class
diagram or anything, but I'm sure you've seen a class diagram before.
You understand that classes have a many:1 relationship with their base
class?
So logically, for every 1 day spent writing a base, there are 'many' days
working on specialisations. So which is the common case?


June 04, 2013
On 2013-06-04 09:12, Walter Bright wrote:

> There is another way.
>
> D can be made aware that it is building an executable (after all, that
> is why it invokes the linker). If you shove all the source code into the
> compiler in one command, for an executable, functions that are not
> overridden can be made final.

Will that work with static libraries?

-- 
/Jacob Carlborg
June 04, 2013
On 4 June 2013 17:46, Walter Bright <newshound2@digitalmars.com> wrote:

> On 6/4/2013 12:32 AM, Sean Cavanaugh wrote:
>
>> The problem isn't going to be in your own code, it will be in using everyone elses.
>>
>
> If you're forced to use someone else's code and are not allowed to change it in any way, then you're always going to have problems with badly written APIs.
>
> Even Manu mentioned that he's got problems with C++ libraries because of this, and C++ has non-virtual by default.
>

Indeed, I was just trying to illustrate that it is a real problem.
Surely you can see how a language where accessors and properties are
virtual by default will magnify the issue immensely, right?
Even the most trivial interactions with a class can't be used inside loops
anymore.