June 25, 2006
Jari-Matti Mäkelä wrote:
> Sean Kelly wrote:
>> Jari-Matti Mäkelä wrote:
>>> Damn, now all visibility / accessibility rules in D are totally broken.
>>>
>>> I really don't think the system is ever going to work the way it is
>>> documented now. The interface stuff and part of classes use the
>>> covariance
>>> rule in inheritance. It's also possible to contravariantly inherit a base
>>> class using class foo: private bar {} (a la C++). Then there are private
>>> attributes in classes and modules that don't work so well either. Hmmpf,
>>> hope someone has time to sort this out. We're counting on you, Walter :)
>> I think a careful programmer could make it work through extensive use of
>> 'final'.  However, it's obviously preferable to simply fix the bug :-)
> 
> True, a careful programmer doesn't need these protection mechanics at
> all. In fact we could write some l33t stuff in pure asm as well! But
> that's not my point. I really cannot believe there's only one bug
> preventing us from having a working compiler.
> 
> There's not only some base classes and derived classes. They can also be
> nested in multiple levels and then there's modules & packages that need
> a consistent protection mechanics system with all classes. (then there's
> also templates and other possible stuff implementing protection) I don't
> get it - those all should be pretty consistent on a very high level. Now
> it seems like DMD has pretty ugly ad-hoc solutions for different situations.
> 
> Then, to me it also seems to be totally braindead to mix both covariance
> and contravariance in inheritance semantics. Overriding in Java uses
> covariance (and that's the way it should work in D too, I think). Okay.
> Then there's public/protected/etc. inheritance from C++. Not only does
> it break possible elegant uses of interfaces a la Java, but it's also
> badly documented and implemented. In fact last time I checked it wasn't
> implemented at all. If it will be implemented some day, should we expect
> some kind of runtime exceptions when using these privately inherited
> classes covariantly? That works fine in C++ because C++ does not have
> interfaces, but IMO here it only adds confusion. Well, it might just be
> that I don't "get" it yet.
> 
> Another problem related to these mechanics is that DMD is not able to
> handle the whole module hierarchy when judging the visibility /
> protection rules. A simple diamond shaped import hierarchy breaks the
> system. I wonder how it works with diamond shaped inner class hierarchies.
> 

I think this can all be boiled down to 2 things:

- Since D has chosen its OOP semantics to be based on those of Java, it should follow that model consistently w.r.t. co/contra-variance (after all, it's seen pretty widespread use).

- Even more importantly, make the darn protection attributes consistent between structs, classes, modules and packages, no matter whether parts of them are implemented with templates or not!

- Dave
June 25, 2006
Dave wrote:

> Jari-Matti Mäkelä wrote:
>> Sean Kelly wrote:
>>> Jari-Matti Mäkelä wrote:
>>>> Damn, now all visibility / accessibility rules in D are totally broken.
>>>>
>>>> I really don't think the system is ever going to work the way it is
>>>> documented now. The interface stuff and part of classes use the
>>>> covariance
>>>> rule in inheritance. It's also possible to contravariantly inherit a
>>>> base class using class foo: private bar {} (a la C++). Then there are
>>>> private attributes in classes and modules that don't work so well
>>>> either. Hmmpf, hope someone has time to sort this out. We're counting
>>>> on you, Walter :)
>>> I think a careful programmer could make it work through extensive use of 'final'.  However, it's obviously preferable to simply fix the bug :-)
>> 
>> True, a careful programmer doesn't need these protection mechanics at all. In fact we could write some l33t stuff in pure asm as well! But that's not my point. I really cannot believe there's only one bug preventing us from having a working compiler.
>> 
>> There's not only some base classes and derived classes. They can also be nested in multiple levels and then there's modules & packages that need a consistent protection mechanics system with all classes. (then there's also templates and other possible stuff implementing protection) I don't get it - those all should be pretty consistent on a very high level. Now it seems like DMD has pretty ugly ad-hoc solutions for different situations.
>> 
>> Then, to me it also seems to be totally braindead to mix both covariance and contravariance in inheritance semantics. Overriding in Java uses covariance (and that's the way it should work in D too, I think). Okay. Then there's public/protected/etc. inheritance from C++. Not only does it break possible elegant uses of interfaces a la Java, but it's also badly documented and implemented. In fact last time I checked it wasn't implemented at all. If it will be implemented some day, should we expect some kind of runtime exceptions when using these privately inherited classes covariantly? That works fine in C++ because C++ does not have interfaces, but IMO here it only adds confusion. Well, it might just be that I don't "get" it yet.
>> 
>> Another problem related to these mechanics is that DMD is not able to handle the whole module hierarchy when judging the visibility / protection rules. A simple diamond shaped import hierarchy breaks the system. I wonder how it works with diamond shaped inner class hierarchies.
>> 
> 
> I think this can all be boiled down to 2 things:
> 
> - Since D has chosen its OOP semantics to be based on those of Java, it should follow that model consistently w.r.t. co/contra-variance (after all, it's seen pretty widespread use).
> 
> - Even more importantly, make the darn protection attributes consistent between structs, classes, modules and packages, no matter whether parts of them are implemented with templates or not!
> 
> - Dave

Actually, Walter has stated in the past that if Java and C++ do something differently, he'd choose the C++ way. Now that almost nothing works in a way that anyone understands, I think it might be time to start over with OOP semantics that are best for D, no matter how it is done in C++, Java or C#.

-- 
Lars Ivar Igesund
blog at http://larsivi.net
DSource & #D: larsivi
June 25, 2006
Bruno Medeiros wrote:
> kris wrote:
> 
>> Lars Ivar Igesund wrote:
>>
>>> kris wrote:
>>>
>>>
>>>> Used to be that overriding methods in subclasses had some logic to it.
>>>> Now? Well, who knows:
>>>>
>>>> 1) you can override a protected method and make it public. Doh!
> 
> But #1 is not a bug or incorrect, see below.
> 
>>
>> The original behaviour limited the exposure of an overridden method to be less than or equal to the exposure of the original. For example, protected could not be made public via an override. The compiler would give you an error if you attempted to do so. The compiler used to prevent you from intercepting a superclass method where the original intent (of the designer) was that said method would be for internal use only. For example, final, package, or private methods.
>>
> 
> If that was the original behavior, that behavior was broken. You see, when overriding a method, limiting the protection level is unsafe, whereas widening the protection level is safe. The protection level is a call-site contract (like the return type) and as such is only safe when overridden invariantly or *covariantly*.
> 

LOL ... you might choose to state "in the research-paper entitled zzzz it is noted that ....", or maybe even "language X does it this way". But alas, no.

I find it unfortunate that when reporting issues, it often gets turned into a "lovefest" over one particular point whilst vaguely dispatching those which clearly indicate things are broken. Almost as though debunking any part of a report is viewed as a grand opportunity to satiate some individual ego. Given that the apparent changes were never opened up for discussion in the first place, I suspect it would be foolish to partake in an open debate at this point -- concerning various aspects of one philosophy over another.

So let's stay on track here: what we have is a collection of buggy behaviour plus conflicting/confusing implementation rules. The end result is something pretty much guaranteed to turn people away from D -- heck, I'm one of D's strongest proponents, and continuing issues like this are driving me away. This simply needs fixing, and it needs to operate in an obvious, consistent, and dependable manner. Period.


> If you designed your classes around that original behavior, they're broken.

Indeed they are. They were designed in strict accordance with the written D spec. Sean notes that certain original parts of the spec have now somehow silently disappeared ... no mention of such a fundamental change was apparently ever announced or discussed.

June 25, 2006
Lars Ivar Igesund wrote:
> Actually, Walter has stated in the past that if Java and C++ do something differently, he'd choose the C++ way. Now that almost nothing works in a way that anyone understands, I think it might be time to start over with OOP semantics that are best for D, no matter how it is done in C++, Java or C#.

That would be the best way to go. The primary goal should be to follow D's own design guidelines; only after that should secondary goals be weighed, like the complexity of the implementation (of the compiler, not of application programs).

AFAIK the biggest problem with covariance is that derived classes reveal too much "internal" functionality to the rest of the world. C++ fixes this by hiding these members, and Java does it by allowing access to classes through different interfaces. IMHO the way Java does it is much better overall, since it does not generate runtime exceptions when used correctly. The drawback is that since the whole implementation of the class is left open, it requires careful design when implementing individual class-level components. Another option is to use has-a relationships where applicable.
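
For illustration, here is a rough sketch of the Java-style approach in D 1.0-era code (the names Reader/File/consume are made up for this example):

  import std.stdio;

  // The "role" that the outside world is allowed to see.
  interface Reader {
    char[] read();
  }

  // The full class reveals more, but clients coded against the
  // Reader role never touch the extra members.
  class File : Reader {
    char[] read() { return "data"; }
    void rewind() { } // implementation detail, hidden behind the role
  }

  void consume(Reader r) {
    writefln(r.read()); // only the Reader part of File is reachable here
  }

  void main() {
    consume(new File());
  }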

The way C++ does it is much stricter. It really denies all illegal use, but it needs to rely more on compile-time checks. The problem here is that the reference compiler does not support this functionality (yet, I think). Another problem is that it might require additional testing.

Well, in principle it's possible to support both approaches at the same time, but I prefer the former approach myself. The only problem is that interfaces in D cannot be easily used as "roles" without much explicit casting. xs0 mentioned in D.announce that it's possible to extend the functionality of interfaces without any extra overhead at all (but it would require more from the optimization logic of the compiler).
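
What I mean by "roles", roughly (a hypothetical example, with made-up names):

  interface Walker { void walk(); }
  interface Swimmer { void swim(); }

  class Duck : Walker, Swimmer {
    void walk() { }
    void swim() { }
  }

  void main() {
    Walker w = new Duck();
    // Moving from one role to the other needs an explicit dynamic
    // cast through the object; it yields null if the cast fails.
    Swimmer s = cast(Swimmer) w;
    if (s !is null)
      s.swim();
  }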

Ok, this might be a bit OT, but I still think these are all a bit related. If Walter does anything to "fix" these, I promise I will shut up, stop complaining and go write some useful programs. :)

-- 
Jari-Matti
June 25, 2006
Lars Ivar Igesund wrote:
> Dave wrote:
> 
>>>
>> I think this can all be boiled down to 2 things:
>>
>> - Since D has chosen its OOP semantics to be based on those of Java, it
>> should follow that model consistently w.r.t. co/contra-variance (after
>> all, it's seen pretty widespread use).
>>
>> - Even more importantly, make the darn protection attributes consistent
>> between structs, classes, modules and packages, no matter whether parts
>> of them are implemented with templates or not!
>>
>> - Dave
> 
> Actually, Walter has stated in the past that if Java and C++ do something
> differently, he'd choose the C++ way. Now that almost nothing works in a way

IMHO, that sounds like a good way to fall into the "corner case" trap - trying to merge C++ semantics with an OOP model more closely based on Java. I'd guess that Kris's original post is probably a result of this (either way - whether the rules were changed and/or the implementation is broken).

I'd generally agree about the 'C++ default', but rather than trying to please both camps, a common and readily implementable (working!) ground must be found, even if Java programmers come out of this better than C++ programmers.

If programmers and compiler writers are going to switch from C++ to D, I'd say for #1 above that what has been proven to work (and is most easily implementable [Java]) is most important to get v1 out the door. For #2, consistency is simply vital, even if it is not the particular way that C++ programmers are used to. Otherwise D is going to scare off all of the OOP purists, OOP newbies and compiler implementors in one shot.

> that anyone understands, I think it might be time to start over with OOP
> semantics that are best for D, no matter how it is done in C++, Java or C#.
> 
June 26, 2006
Just a slight correction :)

> xs0 mentioned in D.announce that it's possible to extend the
> functionality of interfaces without any extra overhead at all (but it
> would require more from the optimization logic of the compiler).

It's possible to eliminate (time) overhead for calls on the same reference beyond the first (like in a loop or whatever). The first call of an interface method would still be about twice as expensive* as a "regular" virtual call (now it's just a few percent slower).

On the other hand, there'd be no need for the implicit casting on function boundaries that happens now (currently, if you pass/return a class reference to a function expecting an interface reference, an implicit (runtime) cast happens; that would no longer be necessary).
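
A tiny example of the boundary cast I mean (illustrative names only):

  interface I { void f(); }

  class C : I {
    void f() { }
  }

  void takesInterface(I i) { i.f(); }

  void main() {
    C c = new C();
    // Here the class reference is implicitly adjusted to an
    // interface reference (it ends up pointing at C's vtable
    // for I) -- the cast that would no longer be needed.
    takesInterface(c);
  }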


xs0

*) that's just the speed of the call sequence, obviously the function itself executes at "regular" speed, so the difference is not that large in practice
June 26, 2006
xs0 wrote:
> Just a slight correction :)
> 
>> xs0 mentioned in D.announce that it's possible to extend the functionality of interfaces without any extra overhead at all (but it would require more from the optimization logic of the compiler).
> 
> It's possible to eliminate (time) overhead for calls on the same
> reference beyond the first (like in a loop or whatever). The first call
> of an interface method would still be about twice as expensive* as a
> "regular" virtual call (now it's just a few percent slower).

Right, forgot that. Sorry. I was already thinking about a situation where all this initialization could be done before control jumps to the first line of main(). But I guess that's not even possible, because classes can be dynamically allocated.

But I think it should be obvious that this overhead is pretty minimal even without any optimizations at all. Traditionally a programmer would inline expensive function calls one way or another, but now that would no longer be necessary. Better modularity, better software engineering. I still think it's a bit ridiculous to worry about an overhead of only a few opcodes - D is supposed to run on fast >= 32-bit architectures, for god's sake. :)
June 26, 2006
kris wrote:
> Bruno Medeiros wrote:
>> kris wrote:
>>
>>> Lars Ivar Igesund wrote:
>>>
>>>> kris wrote:
>>>>
>>>>
>>>>> Used to be that overriding methods in subclasses had some logic to it.
>>>>> Now? Well, who knows:
>>>>>
>>>>> 1) you can override a protected method and make it public. Doh!
>>
>> But #1 is not a bug or incorrect, see below.
>>
>>>
>>> The original behaviour limited the exposure of an overridden method to be less than or equal to the exposure of the original. For example, protected could not be made public via an override. The compiler would give you an error if you attempted to do so. The compiler used to prevent you from intercepting a superclass method where the original intent (of the designer) was that said method would be for internal use only. For example, final, package, or private methods.
>>>
>>
>> If that was the original behavior, that behavior was broken. You see, when overriding a method, limiting the protection level is unsafe, whereas widening the protection level is safe. The protection level is a call-site contract (like the return type) and as such is only safe when overridden invariantly or *covariantly*.
>>
> 
> LOL ... you might choose to state "in the research-paper entitled zzzz it is noted that ....", or maybe even "language X does it this way". But alas, no.
> 

Yes, I choose not to do that, and why is that a problem?


> I find it unfortunate that when reporting issues, it often gets turned into a "lovefest" over one particular point whilst vaguely dispatching those which clearly indicate things are broken. Almost as though debunking any part of a report is viewed as a grand opportunity to satiate some individual ego. Given that the apparent changes were never opened up for discussion in the first place, I suspect it would be foolish to partake in an open debate at this point -- concerning various aspects of one philosophy over another.
> 
> So let's stay on track here: what we have is a collection of buggy behaviour plus conflicting/confusing implementation rules. The end result is something pretty much guaranteed to turn people away from D -- heck, I'm one of D's strongest proponents, and continuing issues like this are driving me away. This simply needs fixing, and it needs to operate in an obvious, consistent, and dependable manner. Period.
> 

Yes, the things "which clearly indicate things are broken" are "vaguely dispatch[ed]"... isn't that natural? What else would we do about it? There's nothing to discuss since they are obviously broken, and since it is Walter and not us who has to do the fixing, what else would we do about it? This is not a rhetorical question; I would really like to know why you find it unfortunate that these things are "vaguely dispatch[ed]". I am not dismissing the importance of what needs to be fixed (and I'm sure #2 and #3 will be) because I'm silent about it.

And as for the things that are not so clearly broken (i.e., which may not be broken at all), yes, those are the ones that get "debunked". Because if one asks to fix something that is not broken, that adds noise to the information and reports Walter receives, and overall makes his job more difficult. Wouldn't you agree?

> 
>  > If you designed your classes around that original behavior, they're broken.
> 
> Indeed they are. They were designed in strict accordance with the written D spec. Sean notes that certain original parts of the spec have now somehow silently disappeared ... no mention of such a fundamental change was apparently ever announced or discussed.
> 

I meant your classes themselves, their design and conception, not your classes running on some implementation. I'll give an example of why that is. If the original behavior allowed protection level contravariance, then we could have the following:

  import std.stdio;

  class Foo {
    public void func() { writefln("Foo.func"); }
  }

  class FooBar : Foo {
    // contravariant override: public in Foo becomes private here
    override private void func() { writefln("private FooBar.func"); }
  }

  void main() {
    Foo foobar = new FooBar();
    foobar.func();  // FooBar.func is executed, despite being private
  }

And so that behaviour is broken.

-- 
Bruno Medeiros - CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
June 26, 2006
Bruno Medeiros wrote:
> kris wrote:
>>>> The original behaviour limited the exposure of an overridden method to be less than or equal to the exposure of the original. For example, protected could not be made public via an override. The compiler would give you an error if you attempted to do so. The compiler used to prevent you from intercepting a superclass method where the original intent (of the designer) was that said method would be for internal use only. For example, final, package, or private methods.
>>>>
>>>
>>> If that was the original behavior, that behavior was broken. You see, when overriding a method, limiting the protection level is unsafe, whereas widening the protection level is safe. The protection level is a call-site contract (like the return type) and as such is only safe when overridden invariantly or *covariantly*.
>>>
>>
>> LOL ... you might choose to state "in the research-paper entitled zzzz it is noted that ....", or maybe even "language X does it this way". But alas, no.
>>
> 
> Yes, I choose not to do that, and why is that a problem?

Because in most of your posts you make out as though you are the "one and true" beacon of knowledge. But you're most certainly not: you're a student. It might add some credence to your perspective if you were to rein in your ego for a moment. This is exactly the problem with the NG -- people can't even make a bug report without one of a small-but-vocal group using it to bolster their ego. You, for example :)

A bit of humility makes the NG a much more vibrant and useful place to be around. Alternatively, perhaps we should stop posting reports to the NG altogether.


>> I find it unfortunate that when reporting issues, it often gets turned into a "lovefest" over one particular point whilst vaguely dispatching those which clearly indicate things are broken. Almost as though debunking any part of a report is viewed as a grand opportunity to satiate some individual ego. Given that the apparent changes were never opened up for discussion in the first place, I suspect it would be foolish to partake in an open debate at this point -- concerning various aspects of one philosophy over another.
>>
>> So let's stay on track here: what we have is a collection of buggy behaviour plus conflicting/confusing implementation rules. The end result is something pretty much guaranteed to turn people away from D -- heck, I'm one of D's strongest proponents, and continuing issues like this are driving me away. This simply needs fixing, and it needs to operate in an obvious, consistent, and dependable manner. Period.
>>
> 
> Yes, the things "which clearly indicate things are broken" are "vaguely dispatch[ed]"... isn't that natural? What else would we do about it? There's nothing to discuss since they are obviously broken, and since it is Walter and not us who has to do the fixing, what else would we do about it? This is not a rhetorical question; I would really like to know why you find it unfortunate that these things are "vaguely dispatch[ed]". I am not dismissing the importance of what needs to be fixed (and I'm sure #2 and #3 will be) because I'm silent about it.
> 
> And as for the things that are not so clearly broken (i.e., which may not be broken at all), yes, those are the ones that get "debunked". Because if one asks to fix something that is not broken, that adds noise to the information and reports Walter receives, and overall makes his job more difficult. Wouldn't you agree?

As noted above, given that Walter was not interested in discussing in the first place, there's little point in discussing any of it now. The issue is reported -- you should just let it go.


>>  > If you designed your classes around that original behavior, they're broken.
>>
>> Indeed they are. They were designed in strict accordance with the written D spec. Sean notes that certain original parts of the spec have now somehow silently disappeared ... no mention of such a fundamental change was apparently ever announced or discussed.
>>
> 
> I meant your classes themselves, their design and conception, not your classes running on some implementation. 

Oh, the notable condescension was palpable the first time around; again, the code was written to the D spec. You can imply those who followed said spec are outright morons for doing so, but that makes no difference. The fact is that the D spec *was* clear, precise, and followed a well-trodden path in its exposed design. The mere fact that /you/ don't approve of some finer points hardly matters, and doesn't place anyone at fault who followed said spec.


> I'll give an example of why that is. If the original behavior allowed protection level contravariance, then we could have the following:

Indeed; and it may come as a shocking surprise that you're not the only one aware of some OOP fragilities. However, you paint a heavily lopsided picture since you're either blind to the other issues or choose to be sly about not mentioning them. So, though you're trying to force this into an NG discussion, Bruno, it's already done with until Walter asks for opinions.

Good luck with your exams.
June 26, 2006
Sean Kelly wrote:
> Bruno Medeiros wrote:
>> kris wrote:
>>>
>>> The original behaviour limited the exposure of an overridden method to be less than or equal to the exposure of the original. For example, protected could not be made public via an override. The compiler would give you an error if you attempted to do so. The compiler used to prevent you from intercepting a superclass method where the original intent (of the designer) was that said method would be for internal use only. For example, final, package, or private methods.
>>>
>>
>> If that was the original behavior, that behavior was broken. You see, when overriding a method, limiting the protection level is unsafe, whereas widening the protection level is safe. The protection level is a call-site contract (like the return type) and as such is only safe when overridden invariantly or *covariantly*.
>>
>> If you designed your classes around that original behavior, they're broken.
> 
> I'm not sure I agree.  However, if overriding behaves this way then it makes sense that aliasing should as well, and I would be surprised if aliasing ever behaved this way.  Perhaps this is one area where one should rely on programming style and not on the compiler?
> 
> 
> Sean

Not sure you agree with what, exactly? There is no debate as to which kinds of overriding are "protection safe" and which are not.
Still, as for what the language behavior should be, that's arguable (but I didn't comment on that).
For instance, one could allow covariant protection levels; that's the behavior of Java. But even though it is safe, I find it a bit odd in practice.
One could allow invariant protection levels only (that's the behavior of C#).
And even contravariant protection levels, although not "protection safe", are a behavior that can make a certain kind of sense. (Calling it broken was perhaps too strong.)
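
To make the first option concrete, a covariant (widening) override would look something like this -- a sketch only; I'm not claiming DMD currently accepts or rejects it:

  class Base {
    protected void step() { }
  }

  class Derived : Base {
    // protected -> public: widening is "protection safe", because
    // code holding a Base reference assumed at most protected access.
    // Java allows this; C# requires the level to stay invariant.
    public override void step() { }
  }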

-- 
Bruno Medeiros - CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D