July 13, 2012
On 13/07/12 09:11, Jacob Carlborg wrote:
> On 2012-07-13 08:52, Adam Wilson wrote:
>
>> I hope Walter isn't against this, because I'm not seeing much community
>> disagreement with this...
>
> If he's not against it, I see no reason why this hasn't been done
> already.
>


It has. It's called D1.



July 13, 2012
On 13/07/2012 15:17, Don Clugston wrote:
> On 13/07/12 09:11, Jacob Carlborg wrote:
>> On 2012-07-13 08:52, Adam Wilson wrote:
>>
>>> I hope Walter isn't against this, because I'm not seeing much community
>>> disagreement with this...
>>
>> If he's not against it, I see no reason why this hasn't been done
>> already.
>>
>
>
> It has. It's called D1.
>

No.

D1 code is interleaved with D2, and you can choose which one you want when compiling. It means we still have to consider all the D1 stuff when doing any D2 evolution now.
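
To illustrate: the selection works much like D's own version blocks, chosen when the compiler itself is built. Here is a hypothetical sketch in D (the actual front end of the time used equivalent preprocessor guards in its C++ sources, and the DMDV2 identifier below is illustrative, not a real predefined version):

    // One source tree serving both language versions, selected at build time.
    version (DMDV2)
    {
        // D2-only path, e.g. handling const/immutable type qualifiers.
        enum hasTypeQualifiers = true;
    }
    else
    {
        // D1 path that must still be kept working alongside it.
        enum hasTypeQualifiers = false;
    }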
July 13, 2012
Jonathan M Davis:

> I think that we'll need to switch to a model like that eventually,

When D1 bugfixes stop?

Bye,
bearophile
July 13, 2012
On Friday, July 13, 2012 15:24:03 bearophile wrote:
> Jonathan M Davis:
> > I think that we'll need to switch to a model like that eventually,
> 
> When D1 bugfixes stop?

I don't see what the state of D1 has to do with this, other than that the closest D has ever come to this sort of model is that D1 was split off specifically so that those using it could continue to use it rather than having everything break when const was introduced.

I would think that if/when we switch is likely to be tied to when things stabilize much better and we no longer have to discuss things like removing four major functions from Object. We're past the point where D is so unstable that we're constantly reworking how things work without need, and the target feature set is essentially frozen, but we still make major changes once in a while in order to make the language work as designed. The proposed model works much better when what you already have is fully stable and you want new stuff to be introduced in a clean and stable manner, whereas we're still having to make large changes at least once in a while in order to make the stuff we already have work properly.

- Jonathan M Davis
July 13, 2012
On Thursday, July 12, 2012 18:49:16 deadalnix wrote:
> The system adopted in PHP works with a three-number version. The first number is used for major language changes (for instance, 4 > 5 implied passing objects by reference when they were copied before, and 5 > 6 switched the whole thing to Unicode).
> 
> The second number implies language changes, but either non-breaking ones or very specific, rarely used stuff. For instance, 5.2 > 5.3 added a GC, closures, and namespaces, which did not break code.
> 
> The last one is reserved for bug fixes. Several versions are maintained at the same time (much of the code base is shared, so bug fixes can be applied to several versions at once).
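
For concreteness, the quoted scheme maps onto a version triple like the following minimal D sketch (the struct and its members are illustrative, not an existing API):

    struct Version
    {
        uint major; // breaking language changes (e.g. PHP 4 -> 5)
        uint minor; // new but non-breaking features (e.g. PHP 5.2 -> 5.3)
        uint patch; // bug fixes only, backported to maintained branches

        int opCmp(const Version rhs) const
        {
            if (major != rhs.major) return major < rhs.major ? -1 : 1;
            if (minor != rhs.minor) return minor < rhs.minor ? -1 : 1;
            if (patch != rhs.patch) return patch < rhs.patch ? -1 : 1;
            return 0;
        }
    }

    unittest
    {
        assert(Version(5, 3, 2) > Version(5, 3, 1)); // newer bug-fix release
        assert(Version(5, 3, 0).major == Version(5, 2, 9).major); // same major
    }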

You know, the more I think about this, the less I think this idea buys us anything right now. This sort of versioning scheme is great when you need to maintain ABI compatibility, when you want to restrict new features to specific releases, and when you want to specify which kind of release is allowed to introduce breaking changes, but it doesn't really fit where D is right now.

We're not at a point where we're trying to maintain ABI compatibility, so that doesn't matter, but the major reason this wouldn't buy us much is that almost all breaking changes are not introduced by new features or major redesigns. They're introduced by bug fixes (either by the fixes themselves changing how things work, since the way they worked was broken - though code may have accidentally been relying on it - or because a regression was introduced as part of the fix). Once in a while, fleshing out a partially working feature breaks some stuff (generally causing regressions of some kind), but most of it is bug fixes, and if it's due to an incomplete feature being fleshed out, then fewer people are relying on it anyway. The few new features that we've added since TDPL have not really been breaking changes. They've added new functionality on top of what we already had.

In the few cases where we _do_ introduce breaking changes on purpose, we do so via a deprecation path. We try to inform people (sometimes badly) that a feature is going to be changed, removed, or replaced - that it's been scheduled for deprecation. Later, we deprecate it, and even later, we remove it. In the case of Phobos, this is fairly well laid out, with things generally being scheduled for deprecation for about 6 months and deprecated stuff sticking around for about 6 months before being removed. In the case of the compiler, it's less organized.

Features generally don't actually get deprecated until long after it's been decided that they'll be deprecated, and they stick around as deprecated for quite a while. New functionality which breaks code is introduced under -w (or in one case, with a whole new flag: -property) so that programmers have a chance to switch to it gradually rather than having their code break immediately with the next release. Later, such changes become part of the normal build in many cases, but that generally takes forever. And we don't even add new stuff like that very often, so even if you always compile with -w, it should be fairly rare that your code breaks with a new release due to something like that being added to -w. The last one that I can think of was disallowing implicit fallthrough in case statements.
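
For reference, that change looks like this in practice (a minimal sketch; once the check is in effect, a non-empty case body must end in an explicit jump such as break or goto case):

    import std.stdio;

    void classify(int n)
    {
        switch (n)
        {
            case 0:
                writeln("zero");
                goto case; // fallthrough must now be written out explicitly
            case 1:
                writeln("zero or one");
                break;     // omitting this is what used to fall through silently
            default:
                writeln("something else");
                break;
        }
    }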

So, in general, when stuff breaks, it's by accident or because how things worked before was broken and some code accidentally relied on the buggy behavior. Even removing opEquals, opCmp, toHash, and toString will be done in a way which minimizes (if not completely avoids) immediate breakage. People will need to change their code to work with the new scheme, but they won't have to do so immediately, because we'll find a way to introduce the changes such that they're phased in rather than immediately breaking everything.
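
As a rough illustration of what the const-correct scheme is aiming at, structs already support const-qualified versions of these operations. The sketch below shows the struct analogue only, not the actual Object redesign, whose final signatures were still being debated:

    struct Point
    {
        int x, y;

        // Const-qualified so that comparison and hashing work on const
        // values - the property the Object changes aim to give classes.
        bool opEquals(const Point rhs) const
        {
            return x == rhs.x && y == rhs.y;
        }

        size_t toHash() const nothrow @safe
        {
            return x * 31 + y;
        }
    }

    unittest
    {
        const a = Point(1, 2);
        const b = Point(1, 2);
        assert(a == b); // works because opEquals is callable on const values
    }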

All that being the case, I don't know what this proposal actually buys us. The very thing that causes the most breaking changes (bug fixes) is the thing that still occurs in every release.

- Jonathan M Davis
July 13, 2012
On Fri, 13 Jul 2012 09:58:22 -0700, Jonathan M Davis <jmdavisProg@gmx.com> wrote:

> On Thursday, July 12, 2012 18:49:16 deadalnix wrote:
>> The system adopted in PHP works with a three-number version. The first
>> number is used for major language changes (for instance, 4 > 5 implied
>> passing objects by reference when they were copied before, and 5 > 6
>> switched the whole thing to Unicode).
>>
>> The second number implies language changes, but either non-breaking
>> ones or very specific, rarely used stuff. For instance, 5.2 > 5.3 added
>> a GC, closures, and namespaces, which did not break code.
>>
>> The last one is reserved for bug fixes. Several versions are maintained
>> at the same time (much of the code base is shared, so bug fixes can be
>> applied to several versions at once).
>
> You know, the more I think about this, the less I think this idea buys
> us anything right now. This sort of versioning scheme is great when you
> need to maintain ABI compatibility, when you want to restrict new
> features to specific releases, and when you want to specify which kind
> of release is allowed to introduce breaking changes, but it doesn't
> really fit where D is right now.
>
> We're not at a point where we're trying to maintain ABI compatibility,
> so that doesn't matter, but the major reason this wouldn't buy us much
> is that almost all breaking changes are not introduced by new features
> or major redesigns. They're introduced by bug fixes (either by the
> fixes themselves changing how things work, since the way they worked
> was broken - though code may have accidentally been relying on it - or
> because a regression was introduced as part of the fix). Once in a
> while, fleshing out a partially working feature breaks some stuff
> (generally causing regressions of some kind), but most of it is bug
> fixes, and if it's due to an incomplete feature being fleshed out, then
> fewer people are relying on it anyway. The few new features that we've
> added since TDPL have not really been breaking changes. They've added
> new functionality on top of what we already had.
>
> In the few cases where we _do_ introduce breaking changes on purpose,
> we do so via a deprecation path. We try to inform people (sometimes
> badly) that a feature is going to be changed, removed, or replaced -
> that it's been scheduled for deprecation. Later, we deprecate it, and
> even later, we remove it. In the case of Phobos, this is fairly well
> laid out, with things generally being scheduled for deprecation for
> about 6 months and deprecated stuff sticking around for about 6 months
> before being removed. In the case of the compiler, it's less organized.
>
> Features generally don't actually get deprecated until long after it's
> been decided that they'll be deprecated, and they stick around as
> deprecated for quite a while. New functionality which breaks code is
> introduced under -w (or in one case, with a whole new flag: -property)
> so that programmers have a chance to switch to it gradually rather than
> having their code break immediately with the next release. Later, such
> changes become part of the normal build in many cases, but that
> generally takes forever. And we don't even add new stuff like that very
> often, so even if you always compile with -w, it should be fairly rare
> that your code breaks with a new release due to something like that
> being added to -w. The last one that I can think of was disallowing
> implicit fallthrough in case statements.
>
> So, in general, when stuff breaks, it's by accident or because how
> things worked before was broken and some code accidentally relied on
> the buggy behavior. Even removing opEquals, opCmp, toHash, and toString
> will be done in a way which minimizes (if not completely avoids)
> immediate breakage. People will need to change their code to work with
> the new scheme, but they won't have to do so immediately, because we'll
> find a way to introduce the changes such that they're phased in rather
> than immediately breaking everything.

And if we had a dev branch, we could have rolled Object const into it and let it break there without affecting stable.

We have no stable release because we only have one branch: dev. To have a stable release, you must first have a branch you consider stable.

Major changes are rolled INTO stable from dev once they become stable. Another term for stable is staging, if that language helps you understand the concept better.

Stable does NOT and NEVER will mean bug-free.

It means that we think this code has generally been well tested and works in most cases, and we promise not to break it with big changes. Note the last part: we promise not to break it with big changes (such as Object const). That's how you create a stable release: first you must promise not to break it severely. Currently we don't make that promise.

But just because we don't make that promise does not mean that we cannot or should not make it. That promise is highly valuable to the community at large.

> All that being the case, I don't know what this proposal actually buys us. The
> very thing that causes the most breaking changes (bug fixes) is the thing that
> still occurs in every release.
>
> - Jonathan M Davis


-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
July 13, 2012
On Thursday, 12 July 2012 at 16:49:17 UTC, deadalnix wrote:

> Such a system would also permit to drop all D1 stuff that are in current DMD because D1 vs D2 can be chosen at compile time on the same sources.

This is how DMD v2 was developed at the beginning; I bet the version 1 compiler still has the -v1 switch.

I'm with Jonathan though. I don't see much benefit. Yes, the system has great benefits, but we can't support it.

If we create a stable branch, we'd need to define what counts as "big" and decide on a support system. What do we do when the dev branch has another "big" change and we need a stable-dev? How do we promote the "big"-changed dev to stable without it being "unstable"? Eventually these happy people will get the unhappy news that they have to fix their code, and we probably don't have the resources to keep that up for years.

Maybe someone else can take on the task of merging bug fixes into their own branch. Yes, that's a bit of rework for them, but will that matter if the fixes merge cleanly?
July 13, 2012
On 13/07/2012 18:58, Jonathan M Davis wrote:
> So, in general, when stuff breaks, it's by accident or because how things
> worked before was broken and some code accidentally relied on the buggy
> behavior. Even removing opEquals, opCmp, toHash, and toString will be done
> in a way which minimizes (if not completely avoids) immediate breakage.
> People will need to change their code to work with the new scheme, but they
> won't have to do so immediately, because we'll find a way to introduce the
> changes such that they're phased in rather than immediately breaking
> everything.
>

Yeah, I know that. We don't change stuff just to change it; we change it because something is broken.

But come on, we are comparing with PHP here! PHP is so broken that it is hard to figure out what it does right to make it successful.

I'll tell you what: it is successful because you know you'll be able to run your piece of crappy code on top of a half-broken VM next year without problems.
July 14, 2012
"Roman D. Boiko" <rb@d-coding.com> writes:

> On Friday, 13 July 2012 at 06:52:25 UTC, Adam Wilson wrote:
>> I hope Walter isn't against this, because I'm not seeing much community disagreement with this...
>
> I would not be against having development and stable versions, but the price is not trivial: every pull request must be done in at least two branches, probably diverging significantly. And most benefits are already available: we have the git version and the last stable version (of course, the latter would be without the latest bug-fixes). That would mean slower progress in applying existing pull requests. (There are 100+ of those, aren't there?)

Speaking from personal experience maintaining some code in git, I believe this fear is unfounded.

Although code may and will diverge in such a model, you'll find that in
most cases, bugfixes will apply to both branches with little or no
change, and that git will be able to automatically handle most of those
differences with no issues (things like "the line numbers didn't match,
but the code did"). This is actually one of the major strengths of
git: merging code and patches to several branches is extremely
easy. While you will probably want to review what was merged, this
usually doesn't take a whole lot of time, and should be fairly
straightforward. And when you eventually do reach the point where
maintaining the divergent versions is taking much more of your time,
that's probably the point where you need to think about releasing the
next stable version.

-- 
The volume of a pizza of thickness a and radius z can be described by the following formula:

pi zz a
July 14, 2012
On 7/13/2012 9:58 AM, Jonathan M Davis wrote:
> All that being the case, I don't know what this proposal actually buys us.

I tend to agree.