February 17, 2012
On 02/17/2012 04:23 AM, Jonathan M Davis wrote:
> On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
>> Given:
>>
>>       class A { void foo() { } }
>>       class B : A { override pure void foo() { } }
>>
>> This works great, because B.foo is covariant with A.foo, meaning it can
>> "tighten", or place more restrictions, on foo. But:
>>
>>       class A { pure void foo() { } }
>>       class B : A { override void foo() { } }
>>
>> fails, because B.foo tries to loosen the requirements, and so is not
>> covariant.
>>
>> Where this gets annoying is when the qualifiers on the base class function
>> have to be repeated on all its overrides. I ran headlong into this when
>> experimenting with making the member functions of class Object pure.
>>
>> So it occurred to me that an overriding function could *inherit* the
>> qualifiers from the overridden function. The qualifiers of the overriding
>> function would be the "tightest" of its explicit qualifiers and its
>> overridden function qualifiers. It turns out that most functions are
>> naturally pure, so this greatly eases things and eliminates annoying
>> typing.
>>
>> I want to do this for @safe, pure, nothrow, and even const.
>>
>> I think it is semantically sound, as well. The overriding function body will
>> be semantically checked against this tightest set of qualifiers.
>>
>> What do you think?
>
> No. Absolutely not. I hate the fact that C++ does this with virtual. It makes
> it so that you have to constantly look at the base classes to figure out what's
> virtual and what isn't. It harms maintenance and code understandability. And
> now you want to do that with @safe, pure, nothrow, and const? Yuck.
>

Whether a function is virtual or not has far-reaching semantic consequences in C++ (overriding vs hiding). Whether a function is pure/nothrow/const/@safe does not, because those are just annotations that give some additional guarantees, which *ought* to be clear from what the function actually does. It is not as if anyone would look up the signature before using some method inside a pure function if what it does seems to be pure anyway.

> I can understand wanting to save some typing,

:o)

Seriously, the average programmer is exceedingly lazy. Any language feature that might reduce the annotation overhead is a plus. Annotations are for the compiler, not for people.

> but I really think that this
> harms code maintainability. It's the sort of thing that an IDE is good for. It
> does stuff like generate the function signatures for you or fill in the
> attributes that are required but are missing.

If this is implemented, an IDE can just as well fill in the attributes that are no longer required but missing, or directly display the effective interface of a class, so that is simply not a valid point. For those who like IDEs, having all annotations written out explicitly could simply become an IDE style warning.

> I grant you that many D developers don't use IDEs at this point (at least not for D) and that those
> sort of capabilities are likely to be in their infancy for the IDEs that we
> _do_ have, but I really think that this is the sort of thing that should be
> left up to the IDE. Inferring attributes like that is just going to harm code
> maintainability.

It makes refactoring a lot easier, which helps maintainability: the programmer can annotate some method with pure, hit compile, and immediately see all the non-pure overrides, if there are any, and fix them.
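
For concreteness, a minimal sketch of that workflow under the proposal (class and method names are made up):

class Base
{
    // Step 1: the maintainer decides this method should be pure and
    // annotates it here, once.
    pure int id() { return 1; }
}

class Derived : Base
{
    // Step 2: under the proposal this override silently inherits 'pure'
    // and its body is checked against it; if the body did anything
    // impure, the compiler would flag exactly this override for fixing.
    override int id() { return 2; }
}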

> It's bad enough that we end up with them not being marked on
> templates due to inference, but we _have_ to do it that way, because the
> attributes vary per instantiation. That is _not_ the case with class member
> functions.
>
> Please, do _not_ do this.
>
> - Jonathan M Davis

I think you are severely overstating the issues. What is the most harmful thing that might happen (except that the code gets less verbose)?


February 17, 2012
On 2/16/2012 7:54 PM, Jonathan M Davis wrote:
> On Thursday, February 16, 2012 19:41:00 Walter Bright wrote:
>> On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
>>> No. Absolutely not. I hate the fact that C++ does this with virtual. It
>>> makes it so that you have to constantly look at the base classes to
>>> figure out what's virtual and what isn't. It harms maintenance and code
>>> understandability. And now you want to do that with @safe, pure, nothrow,
>>> and const? Yuck.
>> I do not see how it harms maintainability. It does not break any existing
>> code. It makes it easier to convert a function hierarchy to nothrow, pure,
>> etc.
>
> It makes it harder to maintain the code using the derived classes, because you
> end up with a bunch of functions which aren't labeled with their attributes.
> You have to go and find all of the base classes and look at them to find which
> attributes are on their functions to know what the attributes of the functions
> of the derived classes actually are. It will make using all D classes harder.

I doubt one would ever need to dig through to see what the attributes are, because:

1. The user of the override will be using it via the base class function.

2. The compiler will tell you if it, for example, violates purity. There won't be any guesswork involved. Right now, the compiler will give you a covariant error.

3. It isn't different in concept from auto declarations and all the other type inference that goes on in D, including automatic inference of purity and safety.
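
For comparison, here is a minimal sketch of the inference D already performs for function templates (names are made up):

// Attributes of a function template are inferred per instantiation.
T twice(T)(T x) { return x + x; }

pure int fortyTwo()
{
    // Fine today: twice!int is inferred pure (and nothrow, @safe),
    // even though its declaration says nothing of the sort.
    return twice(21);
}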


> You should be able to look at a function and know whether it's pure, @safe,
> nothrow, or const without having to dig through documentation and/or code
> elsewhere to figure it out.
>
> Doing this would make the conversion to const easier but be harmful in the
> long run.

We should be encouraging people to use pure, @safe, etc. Not doing the inference makes it annoying to use, and so people don't bother. My experience poking through the druntime and phobos codebase is that the overwhelming majority of the functions are safe, const & pure, but they aren't marked that way.
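
A made-up example of the kind of function meant here: nothing in it touches mutable global state or does anything unsafe, so it could carry const, pure, nothrow and @safe, but in practice it is written bare:

class Point
{
    int x, y;

    // Naturally const, pure, nothrow and @safe, yet typically
    // declared without any of those annotations.
    int manhattan() { return (x < 0 ? -x : x) + (y < 0 ? -y : y); }
}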

February 17, 2012
On 02/17/2012 04:59 AM, bearophile wrote:
> Jonathan M Davis:
>
>> I hate the fact that C++ does this with virtual. It makes
>> it so that you have to constantly look at the base classes to figure out what's
>> virtual and what isn't. It harms maintenance and code understandability. And
>> now you want to do that with @safe, pure, nothrow, and const? Yuck.
>
> This is a problem.
>

It is not a problem at all. This can happen in C++:

struct S: T{
    void foo(){ ... }
};

int main(){
    T* x = new S();
    x->foo(); // what will this do? No way to know without looking up T, bug prone.
}

This is the worst-case scenario for D:

class S: T{
    override void foo(){ ... }
}

void bar()pure{
    T x = new S;
    x.foo(); // see below
}

'foo' sounds like a pure method name... Hit compile... Oh, it is not pure... It should be! Look up class T, patch in purity annotation, everything works - awesome!
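
For concreteness, this is roughly what the patched code would look like once the proposal is in place (T's definition is filled in here for illustration):

class T
{
    pure void foo() { }     // the annotation is added once, in the base class
}

class S : T
{
    override void foo() { } // inherits 'pure' from T.foo under the proposal
}

void bar() pure
{
    T x = new S;
    x.foo();                // fine: the call goes through the pure base signature
}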


The analogy is so broken it is not even funny.


> On one hand, I presume Walter is now converting Phobos all at once to fix const correctness, so he's writing tons of attributes, and he wants to speed up this boring work.
>
> On the other hand, fixing const correctness in Phobos is not a common operation; I think it needs to be done only once. Once one or two future DMD versions are out, programmers will not need to introduce a large amount of those annotations at once. So "fixing" D2 forever for an operation done only once seems risky, especially if future IDEs will be able to insert those annotations cheaply.
>
> So a possible solution is to wait for 2.059 or 2.060 before introducing this "Inheritance of purity" idea. I think at that point we'll be better able to judge how useful this feature is, once Phobos is already fully const corrected and there is no longer a need to fix a lot of code at once.
>
> Another idea is to activate this "Inheritance of purity" only if you compile with "-d" (allow deprecated features) for a few months and then remove it, to help port today's D2 code to const correctness in a more gradual way.
>
> Bye,
> bearophile

Are you really suggesting that making code const correct and the right methods pure etc. is not a common operation?
February 17, 2012
On 2/16/2012 8:51 PM, Timon Gehr wrote:
> It makes re-factoring a lot easier which helps maintainability: The programmer
> can annotate some method with pure, hit compile and he will immediately see all
> the non-pure overrides if there are any and may fix them.

Exactly.
February 17, 2012
On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis wrote:
> On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
>> Given:
>> 
>>      class A { void foo() { } }
>>      class B : A { override pure void foo() { } }
>> 
>> This works great, because B.foo is covariant with A.foo, meaning it can
>> "tighten", or place more restrictions, on foo. But:
>> 
>>      class A { pure void foo() { } }
>>      class B : A { override void foo() { } }
>> 
>> fails, because B.foo tries to loosen the requirements, and so is not
>> covariant.
>> 
>> Where this gets annoying is when the qualifiers on the base class function
>> have to be repeated on all its overrides. I ran headlong into this when
>> experimenting with making the member functions of class Object pure.
>> 
>> So it occurred to me that an overriding function could *inherit* the
>> qualifiers from the overridden function. The qualifiers of the overriding
>> function would be the "tightest" of its explicit qualifiers and its
>> overridden function qualifiers. It turns out that most functions are
>> naturally pure, so this greatly eases things and eliminates annoying
>> typing.
>> 
>> I want to do this for @safe, pure, nothrow, and even const.
>> 
>> I think it is semantically sound, as well. The overriding function body will
>> be semantically checked against this tightest set of qualifiers.
>> 
>> What do you think?
>
> No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with @safe, pure, nothrow, and const? Yuck.
>
> I can understand wanting to save some typing, but I really think that this harms code maintainability. It's the sort of thing that an IDE is good for. It does stuff like generate the function signatures for you or fill in the attributes that are required but are missing. I grant you that many D developers don't use IDEs at this point (at least not for D) and that those sort of capabilities are likely to be in their infancy for the IDEs that we _do_ have, but I really think that this is the sort of thing that should be left up to the IDE. Inferring attributes like that is just going to harm code maintainability. It's bad enough that we end up with them not being marked on templates due to inference, but we _have_ to do it that way, because the attributes vary per instantiation. That is _not_ the case with class member functions.
>
> Please, do _not_ do this.
>
> - Jonathan M Davis

In the situation where the IDE writes the attributes for you, said IDE will help you only when you write the code.

In the situation where the IDE tells you what the attributes are (through something like hovering over the function), it will help you no matter who wrote the code. It is also significantly easier to implement, particularly taking into consideration things like style, comments, etc.
February 17, 2012
On 2/16/2012 8:53 PM, Walter Bright wrote:
> 1. The user of the override will be using it via the base class function.
>
> 2. The compiler will tell you if it, for example, violates purity. There won't
> be any guesswork involved. Right now, the compiler will give you a covariant error.
>
> 3. It isn't different in concept than auto declarations and all the other type
> inference that goes in D, including automatic inference of purity and safety.

4. It's also much like how contracts get inherited.
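
For reference, a rough sketch of that analogy (made-up names; this shows the behavior the spec describes: in contracts are OR'ed and out contracts AND'ed across an override chain):

class Base
{
    int half(int x)
    in { assert(x >= 0); }
    out (r) { assert(r >= 0); }
    body { return x / 2; }
}

class Derived : Base
{
    // Nothing is restated here: the base's out contract still binds this
    // override, and any in contract added here could only loosen the
    // base's precondition, much like the proposed qualifier rule.
    override int half(int x) { return x / 2; }
}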
February 17, 2012
> What do you think?

Sounds good! Only, like Michel said, please make errors output helpful hints as well. I can see why David is concerned.
February 17, 2012
On 02/17/2012 06:12 AM, Walter Bright wrote:
> On 2/16/2012 8:53 PM, Walter Bright wrote:
>> 1. The user of the override will be using it via the base class function.
>>
>> 2. The compiler will tell you if it, for example, violates purity.
>> There won't
>> be any guesswork involved. Right now, the compiler will give you a
>> covariant error.
>>
>> 3. It isn't different in concept than auto declarations and all the
>> other type
>> inference that goes in D, including automatic inference of purity and
>> safety.
>
> 4. It's also much like how contracts get inherited.

This needs some love ;)
http://d.puremagic.com/issues/show_bug.cgi?id=6856
February 17, 2012
On 2/16/12 8:49 PM, Walter Bright wrote:
> Given:
>
> class A { void foo() { } }
> class B : A { override pure void foo() { } }
>
> This works great, because B.foo is covariant with A.foo, meaning it can
> "tighten", or place more restrictions, on foo. But:
>
> class A { pure void foo() { } }
> class B : A { override void foo() { } }
>
> fails, because B.foo tries to loosen the requirements, and so is not
> covariant.
>
> Where this gets annoying is when the qualifiers on the base class
> function have to be repeated on all its overrides. I ran headlong into
> this when experimenting with making the member functions of class Object
> pure.
>
> So it occurred to me that an overriding function could *inherit* the
> qualifiers from the overridden function. The qualifiers of the
> overriding function would be the "tightest" of its explicit qualifiers
> and its overridden function qualifiers. It turns out that most functions
> are naturally pure, so this greatly eases things and eliminates annoying
> typing.
>
> I want to do this for @safe, pure, nothrow, and even const.
>
> I think it is semantically sound, as well. The overriding function body
> will be semantically checked against this tightest set of qualifiers.
>
> What do you think?

I thought about this for a while and it seems to work well. The maintenance scenarios have already been discussed (people add or remove some attribute or qualifier) and I don't see ways in which things become inadvertently broken.

The const qualifier is a bit different because it allows overloading. Attention must be paid there so only the appropriate overload is overridden.
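
A small sketch of that subtlety (made-up names): a base class may overload a method on const, and today the explicit const on the override is what selects which overload is being overridden. With inference, the language needs a rule for which overload an unannotated override matches.

class Base
{
    int data()       { return 1; }   // mutable overload
    int data() const { return 2; }   // const overload
}

class Derived : Base
{
    alias Base.data data;            // keep the mutable overload reachable
    // The 'const' written here picks the base overload being overridden;
    // inference would have to make that choice instead.
    override int data() const { return 3; }
}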

Congratulations Walter for a great idea. Inference is definitely the way to go.


Andrei
February 17, 2012
On 2012-02-17 04:15, H. S. Teoh wrote:
> On Thu, Feb 16, 2012 at 06:49:40PM -0800, Walter Bright wrote:
> [...]
>> So it occurred to me that an overriding function could *inherit* the
>> qualifiers from the overridden function. The qualifiers of the
>> overriding function would be the "tightest" of its explicit qualifiers
>> and its overridden function qualifiers. It turns out that most
>> functions are naturally pure, so this greatly eases things and
>> eliminates annoying typing.
>
> I like this idea.
>
>
>> I want to do this for @safe, pure, nothrow, and even const.
>
> Excellent!
>
>
>> I think it is semantically sound, as well. The overriding function
>> body will be semantically checked against this tightest set of
>> qualifiers.
>>
>> What do you think?
>
> Semantically, it makes sense. And reducing typing is always good.
> (That's one of my pet peeves about Java: too much typing just to achieve
> something really simple. It feels like being forced to kill a mosquito
> with a laser-guided missile by specifying 3D coordinates accurate to 10
> decimal places.)
>
> The one disadvantage I can think of is that it will no longer be clear
> exactly what qualifiers are in effect just by looking at the function
> definition in a derived class. Which is not terrible, I suppose, but I
> can see how it might get annoying if you have to trace the overrides all
> the way up the inheritance hierarchy just to find out what qualifiers a
> function actually has.
>
> OTOH, if ddoc could automatically fill in the effective qualifiers, then
> this will be a non-problem. ;-)

And if ddoc could show the inheritance hierarchy as well.

-- 
/Jacob Carlborg