December 17, 2012
On Sunday, 16 December 2012 at 23:18:20 UTC, Jonathan M Davis wrote:
> On Sunday, December 16, 2012 11:23:57 Andrei Alexandrescu wrote:
>> Right now we're using a tagging system for releases, implying releases
>> are just snapshots of a continuum. But what we need is e.g. to be able
>> to patch 2.065 to fix a bug for a client who can't upgrade right now.
>> That means one branch per release.
>
> Well, you can still do that. You just create a branch from the tag when you
> need a branch. You don't have to branch from the latest. You can branch from
> any point in the tree, and tags provide good markers for points to branch
> from if you need to.
>
> Then the official release is still clearly tagged, but you can have branches
> based off it when you need them. It also avoids some minor overhead when patching
> releases is rare. My main concern though would be that the actual release
> needs to be clearly marked separately from any patches, so it pretty much
> requires a tag regardless of what you do with branches.
>
> What I would have expected us to do (though I haven't read through most of
> this thread or the wiki yet, so I don't know what's being proposed) would be
> to branch when we do a beta, and that branch would ultimately be what we
> release from, with a tag on the commit which was the actual release. If
> further commits were made for specific clients after the release, then either
> you'd make a branch from that which was specific to that client, or you'd put
> them on the same branch, where they'd be after the tag for the release and
> wouldn't affect it.
>
> - Jonathan M Davis

Precisely. Thank you.
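[Editorial note: Jonathan's tag-then-branch approach can be sketched in a throwaway repository. The tag `v2.065` matches the release discussed above; the branch name `2.065-patches` and all commits are made up for illustration.]

```shell
# Demonstration in a throwaway repository; branch and commit names are made up.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "development"
git tag v2.065                            # the release is just a tag on the continuum
git commit -q --allow-empty -m "later development"
# A client stuck on 2.065 needs a fix: create the branch from the tag only now.
git checkout -q -b 2.065-patches v2.065
git commit -q --allow-empty -m "backported fix for client"
git log --oneline --decorate --all
```

The release stays clearly marked by the tag, and the patch branch is created on demand rather than up front.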
December 17, 2012
On Monday, 17 December 2012 at 17:31:45 UTC, foobar wrote:
> Huh?
> Both LLVM and KDE are developed on *subversion*, and as such their work-flows are not applicable. Not to mention that KDE is vastly different in concept and goals from a programming language.
>
> Subversion is conceptually very different from git, and its model imposes practical restrictions that are not relevant for git, mostly with regard to branches, merging, etc. - actions which are first-class and trivial to accomplish in git. This is analogous to designing highways based on the speed properties of bicycles.

Guess what, I know that. The post by SomeDude just claimed that release branches in general are impractical and not used by open source projects, which is wrong.

David
December 17, 2012
On Monday, 17 December 2012 at 17:45:12 UTC, David Nadlinger wrote:
> On Monday, 17 December 2012 at 17:31:45 UTC, foobar wrote:
>> Huh?
>> Both LLVM and KDE are developed on *subversion*, and as such their work-flows are not applicable. Not to mention that KDE is vastly different in concept and goals from a programming language.
>>
>> Subversion is conceptually very different from git, and its model imposes practical restrictions that are not relevant for git, mostly with regard to branches, merging, etc. - actions which are first-class and trivial to accomplish in git. This is analogous to designing highways based on the speed properties of bicycles.
>
> Guess what, I know that. The post by SomeDude just claimed that release branches in general are impractical and not used by open source projects, which is wrong.
>
> David

At least the first part of that sentence is correct - there are more practical work-flows, but they are more difficult to achieve in svn. The branch-per-release scheme in those projects is just a consequence of SVN's limitations; hence, open source projects _that use git_ don't need to follow this route.

Either way, this is completely irrelevant for our purpose.
The process should be designed with DVCS in mind since we already settled on this [very successful] model for D. We should avoid designing a work-flow based on other models and limited experience with git.
Suggestions such as implementing specific [shell?] scripts and having a branch per release bring no improvement over what we already have.

I personally transitioned my [previous] team from ancient systems (rcs, cvs, proprietary in-house crap) to git. It requires not only memorizing a few new command-line commands but also grokking a different model. Those who do use git to great effect and greatly increase their efficiency; others simply have a "ci" script that calls "git commit" and gain nothing.
At the moment we may use git commands, but really we are still developing on what is mostly a subversion model. Walter used to accept patches, and those were simply replaced by pull requests. There hasn't been the change in mental model required to really benefit from a decentralized system such as git. This is what the process discussion is ultimately meant to fix.
December 17, 2012
On Monday, 17 December 2012 at 18:14:54 UTC, foobar wrote:
> At the moment we may use git commands, but really we are still developing on what is mostly a subversion model. Walter used to accept patches, and those were simply replaced by pull requests. There hasn't been the change in mental model required to really benefit from a decentralized system such as git. This is what the process discussion is ultimately meant to fix.

I think you've made a very good point. Making the most of the way decentralized systems work, versus a centralized one, will require an adjustment in the way people think. If we choose to follow the centralized model, nothing will be gained by using a decentralized service. However, centralization has its place: there's an obvious need to merge decentralized branches into a common centralized branch, so that there's a common branch to use for testing and performing releases, etc.

I find it's great to debate ideas in here first, not the talk page, but any conclusions or interesting points of view should be posted to the wiki talk page so that they are not lost. IMO this one should go in the wiki, if only to remind people that we have the flexibility of the decentralized model to take advantage of.

--rt
December 17, 2012
On Monday, 17 December 2012 at 21:03:04 UTC, Rob T wrote:
> On Monday, 17 December 2012 at 18:14:54 UTC, foobar wrote:
>> At the moment we may use git commands, but really we are still developing on what is mostly a subversion model. Walter used to accept patches, and those were simply replaced by pull requests. There hasn't been the change in mental model required to really benefit from a decentralized system such as git. This is what the process discussion is ultimately meant to fix.
>
> I think you've made a very good point. Making the most of the way decentralized systems work, versus a centralized one, will require an adjustment in the way people think. If we choose to follow the centralized model, nothing will be gained by using a decentralized service. However, centralization has its place: there's an obvious need to merge decentralized branches into a common centralized branch, so that there's a common branch to use for testing and performing releases, etc.
>
> I find it's great to debate ideas in here first, not the talk page, but any conclusions or interesting points of view should be posted to the wiki talk page so that they are not lost. IMO this one should go in the wiki, if only to remind people that we have the flexibility of the decentralized model to take advantage of.
>
> --rt

DVCS is not about centralized vs. non-centralized. This is a common misunderstanding, which I too had when I started using git.
The actual difference is a client-server topology (CVS, SVN, etc.) vs. a P2P or perhaps "free-form" topology. By making all users equal, each with their own full copy of the repository, git and similar systems made the topology an aspect of the human work-flow instead of part of the technical design & implementation.

This gives you the freedom to have whatever topology you want - a star, a circle, whatever. For instance, Linux is developed with a "web of trust": the topology represents trust relationships. Thus, all the people Linus pulls from directly are "core" developers he personally trusts. They in turn trust other developers, and so on and so forth. Linus's version is the semi-official release of the Linux kernel, but it is not the only release. For instance, Linux distributions can have their own repositories, and Google maintains its own repository for the Android fork. So in fact, there are *multiple* repositories that represent graph roots in this "web of trust" topology.

What about D?
The current GitHub repository owned by Walter and the core devs (the GitHub organization) is the official repository. *But*, we could also treat other compilers as roots. Moreover, there's no requirement for developers to go through the "main" GitHub repository to share and sync. E.g., Walter can pull directly from Don's repository *without* going through a formal branch on GitHub. This in fact should be *the default workflow* for internal collaboration, to reduce clutter and facilitate better organization.
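[Editorial note: a minimal sketch of that direct pull, using two local throwaway repositories standing in for Don's and Walter's clones. All names, including the branch `frontend-fix`, are hypothetical.]

```shell
# Two throwaway repos simulating peer-to-peer collaboration; no hub involved.
set -e
work=$(mktemp -d)
git init -q "$work/don"
cd "$work/don"
git config user.email don@example.com
git config user.name don
git commit -q --allow-empty -m "shared base"
# Walter's clone shares history with Don's.
git clone -q "$work/don" "$work/walter"
# Don commits a fix on a topic branch in his own repository.
git checkout -q -b frontend-fix
git commit -q --allow-empty -m "fix front-end bug"
# Walter pulls straight from Don, without any formal branch on the hub.
cd "$work/walter"
git config user.email walter@example.com
git config user.name walter
git remote add don "$work/don"
git fetch -q don
git merge -q don/frontend-fix
git log --oneline
```

The hub repository never sees the intermediate branch; only the integrated result would eventually be pushed there.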

This is why I'm arguing fiercely against having any sort of official alpha stage. There is no need to standardize this, and it only mixes "private", developer-only code and builds with builds aimed at end-users (those would be us, people writing D code and compiling with DMD). If you look at SVN servers, you often find an endless list of developer folders, "test" branches, "experimental" branches, etc.

As a user, it is highly annoying to have to figure out which branches are meant for user consumption (releases, betas, previews of a new feature) and which are just dev X's private place to conduct experiments. This is purely a result of the client-server topology imposed by the svn architecture.

The official process should only standardize and focus on the points where integration is required. Everything else is much better served by being left alone. On the other hand, I think that we need to add more focus on the pre/post stages of the actual coding. This means planning and setting priorities (aka roadmap, milestones, etc.) as well as reducing regressions. I saw a post suggesting an automated build-bot that will run the test suite and build nightlies. What about DIPs - how do they integrate into the work-flow?
I think Andrei talked about the DIPs before, but I don't think it was discussed as part of this thread.
December 17, 2012
On Monday, 17 December 2012 at 22:02:45 UTC, foobar wrote:
> On Monday, 17 December 2012 at 21:03:04 UTC, Rob T wrote:
>> On Monday, 17 December 2012 at 18:14:54 UTC, foobar wrote:
>>> At the moment we may use git commands, but really we are still developing on what is mostly a subversion model. Walter used to accept patches, and those were simply replaced by pull requests. There hasn't been the change in mental model required to really benefit from a decentralized system such as git. This is what the process discussion is ultimately meant to fix.
>>
>> I think you've made a very good point. Making the most of the way decentralized systems work, versus a centralized one, will require an adjustment in the way people think. If we choose to follow the centralized model, nothing will be gained by using a decentralized service. However, centralization has its place: there's an obvious need to merge decentralized branches into a common centralized branch, so that there's a common branch to use for testing and performing releases, etc.
>>
>> I find it's great to debate ideas in here first, not the talk page, but any conclusions or interesting points of view should be posted to the wiki talk page so that they are not lost. IMO this one should go in the wiki, if only to remind people that we have the flexibility of the decentralized model to take advantage of.
>>
>> --rt
>
> DVCS is not about centralized vs. non-centralized. This is a common misunderstanding, which I too had when I started using git.
> The actual difference is a client-server topology (CVS, SVN, etc.) vs. a P2P or perhaps "free-form" topology. By making all users equal, each with their own full copy of the repository, git and similar systems made the topology an aspect of the human work-flow instead of part of the technical design & implementation.
>
> This gives you the freedom to have whatever topology you want - a star, a circle, whatever. For instance, Linux is developed with a "web of trust": the topology represents trust relationships. Thus, all the people Linus pulls from directly are "core" developers he personally trusts. They in turn trust other developers, and so on and so forth. Linus's version is the semi-official release of the Linux kernel, but it is not the only release. For instance, Linux distributions can have their own repositories, and Google maintains its own repository for the Android fork. So in fact, there are *multiple* repositories that represent graph roots in this "web of trust" topology.
>
> What about D?
> The current GitHub repository owned by Walter and the core devs (the GitHub organization) is the official repository. *But*, we could also treat other compilers as roots. Moreover, there's no requirement for developers to go through the "main" GitHub repository to share and sync. E.g., Walter can pull directly from Don's repository *without* going through a formal branch on GitHub. This in fact should be *the default workflow* for internal collaboration, to reduce clutter and facilitate better organization.
>
> This is why I'm arguing fiercely against having any sort of official alpha stage. There is no need to standardize this, and it only mixes "private", developer-only code and builds with builds aimed at end-users (those would be us, people writing D code and compiling with DMD). If you look at SVN servers, you often find an endless list of developer folders, "test" branches, "experimental" branches, etc.
>
> As a user, it is highly annoying to have to figure out which branches are meant for user consumption (releases, betas, previews of a new feature) and which are just dev X's private place to conduct experiments. This is purely a result of the client-server topology imposed by the svn architecture.
>
> The official process should only standardize and focus on the points where integration is required. Everything else is much better served by being left alone. On the other hand, I think that we need to add more focus on the pre/post stages of the actual coding. This means planning and setting priorities (aka roadmap, milestones, etc.) as well as reducing regressions. I saw a post suggesting an automated build-bot that will run the test suite and build nightlies. What about DIPs - how do they integrate into the work-flow?
> I think Andrei talked about the DIPs before, but I don't think it was discussed as part of this thread.

I forgot to explain that the multiple-roots idea is applicable to D as well, in that we have three fully working compilers - LDC, GDC and DMD. They currently share the same front-end - GDC and LDC merge in DMD's code. This situation is also sub-optimal: it requires manual work and has redundancies in the process. E.g., what if the LDC guys fix a bug in the "shared" front-end? Should this be integrated back into DMD? What if Walter comes up with a different fix?
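[Editorial note: one way such a fix could flow back is a cherry-pick across remotes. This sketch uses throwaway local repositories standing in for the DMD and LDC front-end trees; all names and commits are hypothetical, not the projects' actual layout.]

```shell
# Simulate an upstream (dmd) and a fork (ldc); pick one fix back upstream.
set -e
work=$(mktemp -d)
git init -q -b master "$work/dmd"     # -b needs git >= 2.28
cd "$work/dmd"
git config user.email walter@example.com
git config user.name walter
git commit -q --allow-empty -m "front-end base"
# The fork starts from the shared front-end history.
git clone -q "$work/dmd" "$work/ldc"
cd "$work/ldc"
git config user.email ldc@example.com
git config user.name ldcdev
git commit -q --allow-empty -m "fix shared front-end bug"
# Upstream takes exactly that commit, keeping its own history otherwise.
cd "$work/dmd"
git remote add ldc "$work/ldc"
git fetch -q ldc master
git cherry-pick --allow-empty FETCH_HEAD
```

Because both trees share history, git can also tell the maintainers when two independent fixes for the same bug would conflict, which is exactly the "what if Walter comes up with a different fix" case.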
December 17, 2012
On Monday, 17 December 2012 at 22:10:12 UTC, foobar wrote:
> I forgot to explain that the multiple-roots idea is applicable to D as well, in that we have three fully working compilers - LDC, GDC and DMD. They currently share the same front-end - GDC and LDC merge in DMD's code. This situation is also sub-optimal: it requires manual work and has redundancies in the process. E.g., what if the LDC guys fix a bug in the "shared" front-end? Should this be integrated back into DMD? What if Walter comes up with a different fix?

It will take me some time to read and digest all of the points you've made, so with that in mind I have some reading and thinking to do. But I am aware that LDC, GDC, and possibly some others are poorly supported through whatever process we have going on at the moment, and you are raising another perfectly valid point that should be listed in the wiki page (if you have not done so already).

For example, one of our goals should be that the process includes specifics intended to support the various compilers built on the DMD front-end that are out there.

For supporting the various front-ends better, there's of course much more that can be done than adjusting the process, but as you know that's off topic and for later consideration. However, some of it may have to be considered in order to fully understand what will eventually have to be done, which may have an effect on the process we're conjuring up.

It would be great if the LDC and GDC guys could toss in their thoughts on this subject, with respect to what the process must do in order to support them better.

--rt
December 18, 2012
On Monday, 17 December 2012 at 17:45:12 UTC, David Nadlinger wrote:
> On Monday, 17 December 2012 at 17:31:45 UTC, foobar wrote:
>> Huh?
>> Both LLVM and KDE are developed on *subversion* and as such their work-flows are not applicable. Not to mention that KDE is vastly different in concept and goals than a programming language.
>>
>> Subversion is conceptually very different from git and its model imposes practical restrictions that are not relevant for git, mostly with regards to branches, merging, etc. Actions which are first class and trivial to accomplish in git. This is analogous to designing highways based on the speed properties of bicycles.
>
> Guess what, I know that. The post by SomeDude just claimed that release branches in general are impractical and not used by open source projects, which is wrong.
>
> David

I was actually correct in saying that it can only work with very few releases; indeed, for KDE there are at most 2 releases per year, and usually fewer.
For instance, 4.10 is scheduled to span from October 2012 to July 2013, with numerous patches and corrective releases in between. So it's roughly the same release process as the one branch per year I advocated.

And it's not the same at all as creating one branch every month.
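[Editorial note: the branch-per-year scheme described above can be sketched as one long-lived release branch carrying corrective releases as tags, while trunk moves on. Version numbers and commits here are made up for illustration.]

```shell
# Throwaway repo: a yearly release branch with point-release tags.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master                 # -b needs git >= 2.28
git config user.email rel@example.com
git config user.name releaser
git commit -q --allow-empty -m "feature work"
git checkout -q -b 4.10               # long-lived release branch for the year
git tag v4.10.0                       # initial release
git commit -q --allow-empty -m "corrective fix"
git tag v4.10.1                       # corrective release on the same branch
git checkout -q master                # trunk development continues meanwhile
git commit -q --allow-empty -m "next feature work"
git tag --list "v4.10*"
```

One branch absorbs a whole series of corrective releases, which is much less churn than opening a fresh branch every month.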
December 18, 2012
On 12/18/2012 09:48 PM, SomeDude wrote:
> And it's not the same at all as creating one branch every month.

But why would you do that?  The general discussion seems to have been around the idea of 1 or 2 stable releases per year, 1 every 3 months max.

December 18, 2012
On Tuesday, 18 December 2012 at 20:55:34 UTC, Joseph Rushton Wakeling wrote:
> On 12/18/2012 09:48 PM, SomeDude wrote:
>> And it's not the same at all as creating one branch every month.
>
> But why would you do that?  The general discussion seems to have been around the idea of 1 or 2 stable releases per year, 1 every 3 months max.

That's what I understood from Andrei's post:

> Just one tidbit of information: I talked to Walter and we want to build into the process the ability to modify any particular release. (One possibility is to do so as part of paid support for large corporate users.) That means there needs to be one branch per release.
>
> Andrei

Maybe I misunderstood him, I don't know.