May 10, 2017
This is what FQNs are for. At least, it was before FQNs were broken, first by an incomplete "package.d" system and second by a goofy half-baked change to import rules.

FQNs need to be fixed. This DIP is just a questionable workaround for our borked FQNs, one that smacks of C++-style "no breakage at all costs" design philosophies being applied to both the D language and D libraries.
May 10, 2017
On 05/10/2017 09:51 PM, Nick Sabalausky (Abscissa) wrote:
> This is what FQNs are for. At least, it was before FQNs were broken,
> first by an incomplete "package.d" system and second by a goofy
> half-baked change to import rules.
>
> FQNs need to be fixed. This DIP is just a questionable workaround for our
> borked FQNs, one that smacks of C++-style "no breakage at all costs"
> design philosophies being applied to both the D language and D libraries.

I guess that's my overview. More specifically, what I mean is this:

D's system of fully-qualified names ("FQNs") was intended to address the matter of conflicting symbols: When a symbol name is ambiguous, instead of blindly choosing one, the compiler errors and forces the programmer to clarify.
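
To illustrate the basic mechanism (module and symbol names are made up; imagine each module in its own file):

// libfoo.d
module libfoo;
void process() { /* long-established functionality */ }

// libbar.d -- a newer release happens to add its own process()
module libbar;
void process() { /* newly added functionality */ }

// app.d
module app;
import libfoo;
import libbar;

void main()
{
    // process();        // Error: libfoo.process conflicts with libbar.process
    libfoo.process();    // the FQN states exactly which one is meant
}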

For this DIP to be convincing, I would need to see this DIP address the following:

1. Why would it be insufficient to simply rely on D's system of FQNs (assuming they were fixed from their current, admittedly half-broken, state)? Currently, FQNs are not addressed, or even mentioned at all, in the "Existing solutions" section.

2. Why having to disambiguate a symbol with an FQN when upgrading a library is a sufficiently large problem to justify a new language feature. (This would seem very difficult to justify, since with or without this DIP, the programmer will be informed by the compiler either way that they need to disambiguate which symbol they want).

Aside from FQNs, I have additional concerns:

3. Can we even expect a library's author to *reliably* know ahead of time that they're going to add a symbol with a particular name? It seems to me a library author would need a magic crystal ball to make effective use of this feature...

4. Unless...once they've developed a new version of their lib that's sufficiently close to release that they know that their API changes won't...umm...change any further, then they retroactively create an interim "transition" release which adds these "future" declarations...so that the programmer knows they need to adjust their code to disambiguate. But again, as in #2, the programmer will be told that *anyway* when compiling with the new version. So what's the point?

5. Once the programmer *is* informed they need to disambiguate a symbol (via this DIP or via a normal ambiguous symbol error), it's an inherently trivial (and inherently backwards-compatible) fix (which can't always be said of fixing removed-symbol deprecations - as this DIP tries to draw parallels to). So, unlike deprecated symbols, I fail to see what non-trivial benefit is to be gained by an "early notification" of a to-be-added symbol.

I think this DIP would have merit in a language that had a tendency for symbols to hijack each other. But D's module system already eliminates that, so AFAICT, the only thing this DIP is left "improving" is to allow lib authors, only under very select circumstances, to go out of their way to provide ahead-of-time notice of a guaranteed-trivial fix their users must make (at least once FQNs are actually fixed). Therefore, I find the whole idea extremely unconvincing.

May 10, 2017
On 05/10/2017 11:04 PM, Nick Sabalausky (Abscissa) wrote:
> On 05/10/2017 09:51 PM, Nick Sabalausky (Abscissa) wrote:
>> This is what FQNs are for. At least, it was before FQNs were broken,
>> first by an incomplete "package.d" system and second by a goofy
>> half-baked change to import rules.
>>
>> FQNs need to be fixed. This DIP is just a questionable workaround for our
>> borked FQNs, one that smacks of C++-style "no breakage at all costs"
>> design philosophies being applied to both the D language and D libraries.
>
> I guess that's my overview. More specifically, what I mean is this:
>

Ugh, frustrating... Seems like every time I try to clarify something I manage to make a jumbled mess of it. Lemme see one last time if I can distill that down to my basic counter-arguments:

1. Why are FQNs alone (assume they still worked like they're supposed to) not good enough? Needs to be addressed in DIP. Currently isn't.

2. The library user is already going to be informed they need to fix an ambiguity anyway, with or without this DIP.

3. Updating code to fix the ambiguity introduced by a new symbol is always trivial (or would be if FQNs were still working properly and hadn't become needlessly broken) and inherently backwards-compatible with the previous version of the lib.

4. Unlike deprecations, this only provides a heads-up for trivial matters that don't really need a heads-up notice:

Unlike when symbols are added to a lib, the fix in user-code for a deprecation *can* be non-trivial and *can* be non-backwards-compatible with the previous version of the lib, depending on the exact circumstances. Therefore, unlike added symbols, the "deprecation" feature for removed symbols is justified.

5. Unlike deprecation, this feature works contrary to the actual flow of development and basic predictability:

When a lib author wants to remove a symbol, they already know what the symbol is, what it's named, and that they have X or Y reason to remove it. But when a lib author wants to add a symbol, it's more speculative: They don't actually KNOW such details until some feature is actually written, implemented and just about ready for release. At which point it's a bit late, and awkward, to go putting in a "foo *will* be added".


May 11, 2017
On 04/25/2017 08:33 AM, Steven Schveighoffer wrote:
>
> In the general case, one year is too long. A couple compiler releases
> should be sufficient.
>
>>
>> * When the @future attribute is added, would one add it on a dummy
>> symbol or would one provide the implementation as well?
>
> dummy symbol. Think of it as @disable, but with warning output instead
> of error.
>

This is a pointless limitation. What is the benefit of requiring the author to *not* provide an implementation until the transition period is over? It runs counter to normal workflow.

Instead, why not just say "Here's a new function. But !!ZOMG!! what if somebody is already using a function by that name??!? They'd have to use FQNs to disambiguate! Gasp!!! We can't have that! So, fine, if it's that big of a deal, we'll just instruct the compiler to just NOT pick up this function unless it's specifically requested via FQN".

That sounds FAR better to me than "Here's a new function, but we gotta keep it hidden in a separate branch/version/etc and not let anyone use it until we waste a bunch of time making sure everyone's code is all updated and ready so that once we let people use it nobody will have to update their code with FQNs, because we can't have that, can we?"
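
A rough sketch of what I mean (the @new_symbol attribute and all names here are made up, obviously -- this is the hypothetical alternative, not anything in the DIP):

// lib.d -- version 2 of some library
module lib;

// Fully implemented and shipped right now. The only effect of the tag would
// be that unqualified lookup skips the symbol, so user code that already has
// its own frobnicate() keeps compiling, completely untouched.
@new_symbol void frobnicate() { /* the new functionality, usable today */ }

// app.d -- anyone who actually wants the new function just asks for it
// unambiguously:
//     import lib;
//     lib.frobnicate();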

Pardon me for saying so, and so bluntly, but honestly, this whole discussion is just stupid. It's full-on C++-grade anti-breakage hysteria. There are times when code breakage is a legitimate problem. This is not REMOTELY one of them.


May 11, 2017
On Thursday, 11 May 2017 at 00:04:52 UTC, Steven Schveighoffer wrote:
> I prefer the first one. The reason is simply because it doesn't require any new grammar. The override requirement is already a protection against changing base class. In this case, we have two possible outcomes:
>
> 1. The base class finally implements the method and removes future. In this case, the derived class continues to function as expected, overriding the new function.
>
> 2. The base class removes the method. In this case, the override now fails to compile. This is not as ideal, as this does not result in a version that will compile with two consecutive versions of the base. But there is a possible path for this too -- mark it as @deprecated @future :)
>
> -Steve

Sounds reasonable. I'll submit an update to the DIP.
May 11, 2017
On Thursday, 11 May 2017 at 03:46:55 UTC, Nick Sabalausky (Abscissa) wrote:
> 1. Why are FQNs alone (assume they still worked like they're supposed to) not good enough? Needs to be addressed in DIP. Currently isn't.

It is already addressed in the DIP. FQNs only help if they are used and current idiomatic D code tends to rely on unqualified imports/names.

> 2. The library user is already going to be informed they need to fix an ambiguity anyway, with or without this DIP.

Only if you consider "after compiler/library upgrade your project doesn't work anymore" a sufficient "informing" which we definitely don't.

> 3. Updating code to fix the ambiguity introduced by a new symbol is always trivial (or would be if FQNs were still working properly and hadn't become needlessly broken) and inherently backwards-compatible with the previous version of the lib.

A trivial compilation error fixup that takes 5 minutes to address in a single project takes up to one month to propagate across all our libraries and projects, in my experience. Actually fixing code is hardly ever the problem with breaking changes. It is the synchronization between developers and projects that makes it so painful.

And in the override case, there is no backwards-compatible solution available at all (see Steven's comment).


> Unlike when symbols are added to a lib, the fix in user-code for a deprecation *can* be non-trivial and *can* be non-backwards-compatible with the previous version of the lib, depending on the exact circumstances. Therefore, unlike added symbols, the "deprecation" feature for removed symbols is justified.

Please elaborate. User code fix is always either using FQN or renaming, what non-trivial case comes to your mind?
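
For reference, both shapes of that fix (names made up):

// Either qualify the call:
import lib;
lib.transmogrify();

// ...or rename the clashing symbol right at the import:
import lib : libTransmogrify = transmogrify;
libTransmogrify();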

> 5. Unlike deprecation, this feature works contrary to the actual flow of development and basic predictability:
>
> When a lib author wants to remove a symbol, they already know what the symbol is, what it's named, and that they have X or Y reason to remove it. But when a lib author wants to add a symbol, it's more speculative: They don't actually KNOW such details until some feature is actually written, implemented and just about ready for release. At which point it's a bit late, and awkward, to go putting in a "foo *will* be added".

You describe a typical library that doesn't follow SemVer and generally doesn't bother much about providing any upgrade stability. Naturally, such a library developer will ignore `@future` completely and keep following the same development patterns.

Not everyone is like that, though. This document (https://github.com/sociomantic-tsunami/neptune/blob/master/doc/library-maintainer.rst) explains the versioning/development model we use for all our D libraries, and within such a model a feature that is written in one major version can be added as `@future` in the previous major version at the same time.

And for the druntime object.d case, it is pretty much always worth the gain to delay merging an already-implemented addition for one release, putting a `@future` stub in the one before. There can never be any hurry, so there is no way to be "late".
May 11, 2017
On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote:
> On 04/25/2017 08:33 AM, Steven Schveighoffer wrote:
>>
>> In the general case, one year is too long. A couple compiler releases
>> should be sufficient.
>>
>>>
>>> * When the @future attribute is added, would one add it on a dummy
>>> symbol or would one provide the implementation as well?
>>
>> dummy symbol. Think of it as @disable, but with warning output instead
>> of error.
>>
>
> This is a pointless limitation. What is the benefit of requiring the
> author to *not* provide an implementation until the transition period is
> over? It runs counter to normal workflow.

The idea (I think) is to have a version of the library that functions *exactly* like the old version, but helpfully identifies where a future version will not function properly. This is like @deprecate. You don't put a @deprecate on a symbol and at the same time remove the symbol's implementation -- you leave it as it was, and just tag it so the warning shows up. That's step one.
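
Roughly the way `deprecated` already works today (symbol names made up):

// Step one changes nothing about behavior: the implementation stays exactly
// as it was, the compiler merely warns at every use site.
deprecated("use newThing instead") void oldThing() { /* unchanged body */ }

// Step two, releases later, is the actual removal.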

> Instead, why not just say "Here's a new function. But !!ZOMG!! what if
> somebody is already using a function by that name??!? They'd have to use
> FQNs to disambiguate! Gasp!!! We can't have that! So, fine, if it's that
> big of a deal, we'll just instruct the compiler to just NOT pick up this
> function unless it's specifically requested via FQN".

The point is not to break code without fair warning. This is the progression I have in mind:

In Library version 1 (LV1), the function doesn't exist.
In LV2, the function is marked as @future.
In LV3, the function is implemented and the @future tag is removed.

LV1 + user code version 1 (UCV1) -> works
  * library writer updates his version
LV2 + UCV1 -> works, but warns that it will not work in a future version.
  * user updates his code to mitigate the potential conflict
LV2 + UCV2 -> works, no warnings.
LV3 + UCV2 -> works as expected.

The important thing here is that the library writer gives fair warning that a breaking change is coming, giving the user time to update his code at his convenience. If he does so before the next version of the library comes out, then his code works for both the existing library version AND the new one without needing to rush a change through.
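
To sketch that with the DIP's proposed syntax (names made up, and I'm assuming a bodyless stub is what LV2 would ship):

// LV2 of the library: a tagged stub only, no implementation yet.
module lib;
@future void frobnicate();

// UCV1: the user's own function with the clashing name, plus an unqualified
// import of lib. Per the intent above, this still builds against LV2 but
// warns that lib.frobnicate is coming.
import lib;
void frobnicate() { /* user's existing code */ }

// UCV2: the user disambiguates at his convenience (rename, selective import,
// or FQNs at the call sites), so the build is warning-free against LV2 and
// keeps working once LV3 ships the real lib.frobnicate.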

> That sounds FAR better to me than "Here's a new function, but we gotta
> keep it hidden in a separate branch/version/etc and not let anyone use
> it until we waste a bunch of time making sure everyone's code is all
> updated and ready so that once we let people use it nobody will have to
> update their code with FQNs, because we can't have that, can we?"

It depends on both the situation and the critical nature of the symbol in question. I'd say the need for this tag is going to be very rare, but necessary when it is needed. I don't think there's a definitive methodology for deciding when it's needed and when it's not. Would be case-by-case.

> Pardon me for saying so, and so bluntly, but honestly, this whole
> discussion is just stupid. It's full-on C++-grade anti-breakage
> hysteria. There are times when code breakage is a legitimate problem.
> This is not REMOTELY one of them.

This is not anti-breakage. Code is going to break. It's just a warning that the breaking is coming.

-Steve
May 11, 2017
On 05/11/2017 07:19 AM, Steven Schveighoffer wrote:
> On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote:
>>
>> This is a pointless limitation. What is the benefit of requiring the
>> author to *not* provide an implementation until the transition period is
>> over? It runs counter to normal workflow.
>
> The idea (I think) is to have a version of the library that functions
> *exactly* like the old version, but helpfully identifies where a future
> version will not function properly. This is like @deprecate. You don't
> put a @deprecate on a symbol and at the same time remove the symbol's
> implementation -- you leave it as it was, and just tag it so the warning
> shows up. That's step one.
>

Yes, I'm aware that's the idea the author had in mind, but that still doesn't begin to address this:

What is the *benefit* of requiring the author to *not* provide an implementation until the transition period is over?

I maintain there is no benefit to that. Drawing a parallel to "how you do it with deprecated symbols" is not demonstrating a benefit. For that matter, I see the parallel with deprecated symbols as being "The deprecation tag goes with an implemented function. Symmetry would imply that a 'newly added' tag also goes on an implemented function." So the symmetry argument goes both ways. But regardless, what we *don't* usually do is develop functionality *after* first finalizing its name. That's just silly.

> The point is not to break code without fair warning. This is the
> progression I have in mind:
>
> In Library version 1 (LV1), the function doesn't exist.
> In LV2, [the lib author makes a guess that they're going to write a function with a particular name and the] function is marked as @future.
> In LV3, the function is implemented and the @future tag is removed.

Fixed step 2 for you.

And yes, that *is* the progression suggested by this DIP, but one of my key points is: that's a downright silly progression. This is better:

- In Library version 1 (LV1), the function doesn't exist.
- In LV2, the new function is marked as @new_symbol to prevent the (somehow) TERRIBLE HORRIBLE AWFUL consequence of the new symbol causing people to be required to toss in a FQN, but there's no reason to STOP people from actually using the new functionality if they request it unambiguously, now is there? No, there isn't.
- In LV3, the @new_symbol tag is removed.


> The important thing here is that the library writer gives fair warning
> that a breaking change is coming, giving the user time to update his
> code at his convenience.

Or, if the tag is added to the actual implementation, then there IS NO FREAKING BREAKING CHANGE until the @new_func or whatever tag is removed, but the library user is STILL given fair (albeit useless, imo) warning that it will be (kinda sorta) broken (with a downright trivial fix) in a followup release.


> I'd say the need for this tag is going to be very rare,

That's for certain.

> but necessary when it is needed.

I can't even begin to comprehend a situation where a heads-up about a mere "FQN needed here" qualifies as something remotely as strong as "necessary". Unless the scenario hinges on the current brokenness of FQNs, which seriously need to be fixed anyway.


> I don't think there's a definitive
> methodology for deciding when it's needed and when it's not. Would be
> case-by-case.

Sounds like useless cognitive bother for the library author for extremely minimal (at best) benefit to the library user. Doesn't sound like sufficient justification for a new language feature to me.

> This is not anti-breakage. Code is going to break. It's just a
> warning that the breaking is coming.
>

It's going out of the way to create and use a new language feature purely out of fear of a trivial breakage situation. Actual breakage or not, it's "all breakages are scary and we must bend over backwards because of them" paranoia, just the same.

May 11, 2017
On 05/11/2017 06:10 AM, Dicebot wrote:
> On Thursday, 11 May 2017 at 03:46:55 UTC, Nick Sabalausky (Abscissa) wrote:
>> 1. Why are FQNs alone (assume they still worked like they're supposed
>> to) not good enough? Needs to be addressed in DIP. Currently isn't.
>
> It is already addressed in the DIP. FQNs only help if they are used and
> current idiomatic D code tends to rely on unqualified imports/names.
>

I didn't see that. Certainly not in the "Existing solutions" section. It needs to be there.

But in any case, I'm not talking about the "existing solution" of projects *already* using FQNs for things, I'm talking about the "existing solution" of just letting a library user spend two seconds adding an FQN when they need to disambiguate.

>> 2. The library user is already going to be informed they need to fix
>> an ambiguity anyway, with or without this DIP.
>
> Only if you consider "after compiler/library upgrade your project
> doesn't work anymore" a sufficient "informing" which we definitely don't.

I definitely do. But even if you don't, see my "@new_func" alternate suggestion.

>
>> 3. Updating code to fix the ambiguity introduced by a new symbol is
>> always trivial (or would be if FQNs were still working properly and
>> hadn't become needlessly broken) and inherently backwards-compatible
>> with the previous version of the lib.
>
> A trivial compilation error fixup that takes 5 minutes to address in a
> single project takes up to one month to propagate across all our
> libraries and projects, in my experience. Actually fixing code is hardly
> ever the problem with breaking changes. It is the synchronization between
> developers and projects that makes it so painful.
>

This needs to go in the DIP.

> And in the override case, there is no backwards-compatible solution
> available at all (see Steven's comment).

This needs to be made explicit in the DIP. Currently, I see nothing in the DIP clarifying that FQNs cannot address the override case.

>> Unlike when symbols are added to a lib, the fix in user-code for a
>> deprecation *can* be non-trivial and *can* be non-backwards-compatible
>> with the previous version of the lib, depending on the exact
>> circumstances. Therefore, unlike added symbols, the "deprecation"
>> feature for removed symbols is justified.
>
> Please elaborate. User code fix is always either using FQN or renaming,
> what non-trivial case comes to your mind?
>

For *added* symbols, yes. Which is why I find this DIP to be of questionable value compared to "@deprecated".

That's what my quoted paragraph above is referring to: *removed* (i.e., deprecated) symbols. When a symbol is *removed*, the user code fix is NOT always guaranteed to be trivial. That's what justifies the existence of @deprecated. @future, OTOH, doesn't meet the same bar because, as you say, when a symbol is added, "User code fix is always either using FQN or renaming".

May 11, 2017
On 5/11/17 1:21 PM, Nick Sabalausky (Abscissa) wrote:
> On 05/11/2017 07:19 AM, Steven Schveighoffer wrote:
>> On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote:
>>>
>>> This is a pointless limitation. What is the benefit of requiring the
>>> author to *not* provide an implementation until the transition period is
>>> over? It runs counter to normal workflow.
>>
>> The idea (I think) is to have a version of the library that functions
>> *exactly* like the old version, but helpfully identifies where a future
>> version will not function properly. This is like @deprecate. You don't
>> put a @deprecate on a symbol and at the same time remove the symbol's
>> implementation -- you leave it as it was, and just tag it so the warning
>> shows up. That's step one.
>>
>
> Yes, I'm aware that's the idea the author had in mind, but that still
> doesn't begin to address this:
>
> What is the *benefit* of requiring the author to *not*
> provide an implementation until the transition period is over?

How does this work?

class Base
{
  void foo() @future { ... }
}

class Derived : Base
{
  void foo() { ... }
}

void bar(Base b) // could be instance of Derived
{
   // which one is called? Derived.foo may not have been intended for
   // the same purpose as Base.foo
   b.foo();
}

>> The point is not to break code without fair warning. This is the
>> progression I have in mind:
>>
>> In Library version 1 (LV1), the function doesn't exist.
>> In LV2, [the lib author makes a guess that they're going to write a
> function with a particular name and the] function is marked as @future.
>> In LV3, the function is implemented and the @future tag is removed.
>
> Fixed step 2 for you.

No, an implementation is in mind and tested. Just not available. You could even have the implementation commented out. In Phobos/Druntime, we wouldn't accept such a prospect without requiring a fleshing out of the details ahead of time.

If it makes sense to just add the symbol with an implementation, then I'd rather do that. Otherwise, we create a new way to overload/override, and suddenly things work very differently than people are used to. Suddenly templates start calling the wrong thing and code actually breaks before a change is actually made.

> And yes, that *is* the progression suggested by this DIP, but one of my
> key points is: that's a downright silly progression. This is better:
>
> - In Library version 1 (LV1), the function doesn't exist.
> - In LV2, the new function is marked as @new_symbol to prevent the
> (somehow) TERRIBLE HORRIBLE AWFUL consequence of the new symbol causing
> people to be required to toss in a FQN, but there's no reason to STOP
> people from actually using the new functionality if they request it
> unambiguously, now is there? No, there isn't.
> - In LV3, the @new_symbol tag is removed.

It's also possible to implement the symbol with a different temporary name, and use that name if you need it before it's ready.
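
Something like this (names made up; I'm assuming you'd pair the temporary name with the @future stub):

// LV2: the real code ships under a temporary name, next to the stub.
void frobnicateImpl() { /* the actual new functionality, usable right away */ }
@future void frobnicate();

// LV3: frobnicate() gets the real body, and frobnicateImpl can stick around
// as a deprecated alias for a release or two.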

I'm just more comfortable with a symbol that changes absolutely nothing about how a function can be called, but is a warning that something is coming than I am with a callable symbol that acts differently in terms of overloading and overriding.

I'll admit, I'm not the DIP author, and I don't know the intention of whether the implementation is allowed to be there or not.

>> The important thing here is that the library writer gives fair warning
>> that a breaking change is coming, giving the user time to update his
>> code at his convenience.
>
> Or, if the tag is added to the actual implementation, then there IS NO
> FREAKING BREAKING CHANGE until the @new_func or whatever tag is removed,
> but the library user is STILL given fair (albeit useless, imo) warning
> that it will be (kinda sorta) broken (with a downright trivial fix) in a
> followup release.

Not sure I agree there would be no breakage. The symbol is there, it can be called in some cases. This changes behavior without warning. I've had my share of is(typeof(trycallingThis())) blow up spectacularly in ways I didn't predict. To change what happens there is a bad idea IMO.
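
A generic example of the kind of introspection I mean (names made up):

import lib;

void process(T)(T x)
{
    // Picks a code path by probing whether a call compiles. The moment a
    // callable frobnicate becomes visible from lib, this branch can flip
    // silently -- no deprecation, no warning, just different behavior.
    static if (is(typeof(frobnicate(x))))
        frobnicate(x);
    else
        homegrownFrobnicate(x);
}

void homegrownFrobnicate(T)(T x) { /* the user's own fallback */ }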

-Steve