GDC CI
September 05
This thread is a continuation of the conversation "GDC 10.2.1 Released" in the Announce group here [1]:

For reference:

>  1. Is Cirrus CI good enough to build gdc?  And if so, look into adding
>     Windows, MacOSX, and FreeBSD platforms to the pipeline.
>
>>
>> What does "good enough" mean ?
>>
>
> It means, can Cirrus CI actually build gdc and run through the testsuite without being killed by the pipeline?
>
> Travis CI for instance is rubbish, because:
> - Hardware is really slow.
> - Kills jobs that take longer than 50 minutes.
> - Kills jobs if a 3GB memory limit is exceeded.
> - Kills jobs that don't print anything for more than 10 minutes.
> - Truncates logs to first 2000 lines.
>
>  [...]
>
>  3. Use Docker+QEMU to have containers doing CI for other architectures, can
>     build images for Alpine and Debian on amd64, arm32v7, arm64v8, i386,
>     mips64le, ppc64le, and s390x.


What I learned so far is that Cirrus CI lets you configure the time after which jobs are killed; it can be increased from the default of 60 minutes.
Memory for the container/VM can be configured as well. However, since open source projects run on the community cluster, scheduling of such jobs is prioritized by resource requirements.

This looks promising.

Further, they write that if the build directory contains a Dockerfile, an attempt is made to use it.

Therefore a good approach seems to be to start with a Docker container, which can later also be adapted for 3).
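As a rough sketch, the relevant settings might look like this (the keys are from Cirrus CI's documented schema; the values, file path, and script name are assumptions on my part):

```yaml
# .cirrus.yml -- hypothetical sketch, values are guesses
task:
  name: build-gdc
  timeout_in: 120m             # raise the default 60-minute kill timer
  container:
    dockerfile: ci/Dockerfile  # Cirrus builds this image before running the task
    cpu: 8
    memory: 16G
  build_script:
    - ./ci-build.sh            # hypothetical build script
```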


[1] https://forum.dlang.org/thread/wjyttivhbklzujwjrups@forum.dlang.org
September 05
On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:

> This thread is a continuation of the conversation "GDC 10.2.1 Released" in the Announce group here [1]:
> 

To answer your other question:
>> We use https://github.com/D-Programming-GDC/gcc for CI, but commits will go to the GCC SVN first, so GCC SVN or snapshot tarballs is the recommended way to get the latest GDC.
>
> Is this information still up to date ?
> 
> There's a semaphore folder. I suppose that's the one currently used with Semaphore CI. Is there something else ?

That information is probably quite obsolete: As GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.

The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the toplevel folder, as that folder is under GCC's control. For CI which allows you to keep the configuration out of the repositories, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated.

So that's why the old approach required merging a commit which includes the CI configuration. Maybe a better way is to automatically generate a new commit including CI configuration for each commit to be tested. This could probably be done with buildkite? Then trigger new build jobs for that auto-generated commit. The main difficulty there is integrating this into a somewhat nice workflow / interface.

-- 
Johannes
September 05
On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
> On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:
>
>> This thread is a continuation of the conversation "GDC 10.2.1 Released" in the Announce group here [1]:
>> 
>
> To answer your other question:
>>> We use https://github.com/D-Programming-GDC/gcc for CI, but commits will go to the GCC SVN first, so GCC SVN or snapshot tarballs is the recommended way to get the latest GDC.
>>
>> Is this information still up to date ?
>> 
>> There's a semaphore folder. I suppose that's the one currently used with Semaphore CI. Is there something else ?
>
> That information is probably quite obsolete: As GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.
>
> The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the toplevel folder, as that folder is under GCC's control. For CI which allows you to keep the configuration out of the repositories, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated.
>
> So that's why the old approach required merging a commit which includes the CI configuration. Maybe a better way is to automatically generate a new commit including CI configuration for each commit to be tested. This could probably be done with buildkite? Then trigger new build jobs for that auto-generated commit. The main difficulty there is integrating this into a somewhat nice workflow / interface.

Please forgive my confusion.

There are 2 repositories, upstream GCC and GitHub/D-Programming-GDC/gcc.
The former isn't hosted on GitHub but on gnu.org.
The latter is necessary for CI, because reasons, and is a mirror of the upstream git repository.
Development is done in the upstream repository.
Because of that we can't put our CI configs into the project root.
Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location.
Otherwise CI could use the upstream repo directly, unless the service only works with projects hosted on GitHub - Cirrus CI for instance.

Is that correct ?

How's upstream GCC doing CI ?
September 05
On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
> On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
>> On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:
>>
>>> [...]
>>
>> To answer your other question:
>>> [...]
>>
>> That information is probably quite obsolete: As GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.
>>
>> The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the toplevel folder, as that folder is under GCC's control. For CI which allows you to keep the configuration out of the repositories, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated.
>>
>> So that's why the old approach required merging a commit which includes the CI configuration. Maybe a better way is to automatically generate a new commit including CI configuration for each commit to be tested. This could probably be done with buildkite? Then trigger new build jobs for that auto-generated commit. The main difficulty there is integrating this into a somewhat nice workflow / interface.
>
> Please forgive my confusion.
>
> There are 2 repositories, upstream GCC and GitHub/D-Programming-GDC/gcc.
> The former isn't hosted on GitHub but on gnu.org.
> The latter is necessary for CI, because reasons, and is a mirror of the upstream git repository.
> Development is done in the upstream repository.
> Because of that we can't put our CI configs into the project root.
> Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location.
> But it could be done with the upstream repo otherwise, unless the CI service only works with projects hosted on GitHub - Cirrus CI for instance.
>
> Is that correct ?
>

That sounds about right.

The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers.  The CI logic would have to live in a separate repository.  For convenience, this would be on GitHub.


> How's upstream GCC doing CI ?

They aren't.  Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (e.g. https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)
September 06
On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
> On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
>> On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
>>> [...]
>>
>> Please forgive my confusion.
>>
>> There are 2 repositories, upstream GCC and GitHub/D-Programming-GDC/gcc.
>> The former isn't hosted on GitHub but on gnu.org.
>> The latter is necessary for CI, because reasons, and is a mirror of the upstream git repository.
>> Development is done in the upstream repository.
>> Because of that we can't put our CI configs into the project root.
>> Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location.
>> But it could be done with the upstream repo otherwise, unless the CI service only works with projects hosted on GitHub - Cirrus CI for instance.
>>
>> Is that correct ?
>>
>
> That sounds about right.
>
> The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers.  The CI logic would have to live in a separate repository.  For convenience, this would be on GitHub.

Periodic builds sound like what Cirrus CI calls cron builds.
But if the repository needs to be forked for CI, the result is effectively periodic as well, since commits are only merged in periodically.

Currently I'm looking into building a Docker container which can run a GDC build. Because Cirrus CI supports Dockerfiles directly, and every other CI seems to run its tasks/jobs inside a Docker container, this seems like a viable approach, and it can be extended with the ARM targets mentioned in your item number 3.
September 07
On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
> On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
>> On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
>>>
>>> The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the toplevel folder, as that folder is under GCC's control. For CI which allows you to keep the configuration out of the repositories, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated.
>>>
>> How's upstream GCC doing CI ?
>
> They aren't.  Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (i.e: https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)

So when it comes to CI, there are two/three use cases that need to be considered.

1. Testing changes to D or libphobos prior to committing to gcc mainline/branch.

2. Testing the mainline (master) branch, either periodically, every commit, or after a specific commit (such as the daily bump).

3. Testing the release branches of gcc (releases/gcc-9, releases/gcc-10, ...).

I am least bothered by having (1) covered.  I have enough faith that people who send patches have at least done some level of due diligence in testing their changes prior to submitting.  So I think the focus should only be on the frequent testing of mainline, and infrequent testing of release branches.

If Cirrus has built-in periodic scheduling (without the need for config files, or hooks added to the git repository), and you can point it to GCC's git (or the GitHub git-mirror/gcc) then that could be fine.  CI scripts still need to live in a separate repository pulled in with the build.
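A minimal sketch of that scheme (repository layout, image, and build commands are hypothetical; the `only_if`/`$CIRRUS_CRON` mechanism is what Cirrus documents for cron-triggered tasks) could look like:

```yaml
# .cirrus.yml in a small, separate CI repository -- sketch only
task:
  name: nightly-mainline
  only_if: $CIRRUS_CRON == 'nightly'   # run only from the cron schedule set in the UI
  container:
    image: debian:bullseye-slim        # plus the baseline build dependencies
  clone_script:
    # check out GCC mainline instead of the CI repository itself
    - git clone --depth 1 https://gcc.gnu.org/git/gcc.git gcc
  build_script:
    - mkdir build && cd build && ../gcc/configure --enable-languages=d && make -j"$(nproc)"
```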

Iain.
September 07
On Sunday, 6 September 2020 at 21:52:04 UTC, wjoe wrote:
> On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
>> On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
>>> On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
>>>> [...]
>>>
>>> Please forgive my confusion.
>>>
>>> There are 2 repositories, upstream GCC and GitHub/D-Programming-GDC/gcc.
>>> The former isn't hosted on GitHub but on gnu.org.
>>> The latter is necessary for CI, because reasons, and is a mirror of the upstream git repository.
>>> Development is done in the upstream repository.
>>> Because of that we can't put our CI configs into the project root.
>>> Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location.
>>> But it could be done with the upstream repo otherwise, unless the CI service only works with projects hosted on GitHub - Cirrus CI for instance.
>>>
>>> Is that correct ?
>>>
>>
>> That sounds about right.
>>
>> The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers.  The CI logic would have to live in a separate repository.  For convenience, this would be on GitHub.
>
> Periodic builds sound like what Cirrus CI calls cron builds.
> But if the repository needs to be forked for CI it's kind of periodic as well if the commits are only merged in periodically.
>
> Currently I'm looking into building a docker container which can run a GDC build. Because Cirrus CI supports Dockerfile/s directly and every other CI seems to run its tasks/jobs inside of a docker container this seems like a viable approach and can be extended with the ARM targets mentioned in your item number 3.

In case it saves some work...

Baseline dependencies for Debian/Ubuntu are:

autogen autoconf automake bison dejagnu flex libcurl4-gnutls-dev libgmp-dev libisl-dev libmpc-dev libmpfr-dev make patch tzdata xz-utils binutils libc6-dev gcc g++

Baseline dependencies for Alpine are:

autoconf automake bison curl-dev dejagnu flex gmp-dev isl-dev make mpc1-dev mpfr-dev patch tzdata xz binutils musl-dev gcc g++
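For a Debian-based image, those packages would translate into a Dockerfile along these lines (the base image and layout are my assumption):

```dockerfile
# Hypothetical build-environment image for GDC CI
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        autogen autoconf automake bison dejagnu flex libcurl4-gnutls-dev \
        libgmp-dev libisl-dev libmpc-dev libmpfr-dev make patch tzdata \
        xz-utils binutils libc6-dev gcc g++ \
    && rm -rf /var/lib/apt/lists/*
```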

Iain.
September 07
On Monday, 7 September 2020 at 09:14:08 UTC, Iain Buclaw wrote:
> On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
>> On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
>>> On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau wrote:
>>>>
>>>> The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the toplevel folder, as that folder is under GCC's control. For CI which allows you to keep the configuration out of the repositories, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated.
>>>>
>>> How's upstream GCC doing CI ?
>>
>> They aren't.  Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (i.e: https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)
>
> So when it comes to CI, there are two/three use cases that need to be considered.
>
> 1. Testing changes to D or libphobos prior to committing to gcc mainline/branch.
>
> 2. Testing the mainline (master) branch, either periodically, every commit, or after a specific commit (such as the daily bump).
>
> 3. Testing the release branches of gcc (releases/gcc-9, releases/gcc-10, ...).
>
> I am least bothered by having [1] covered.  I have enough faith that people who send patches have at least done some level of due diligence of testing their changes prior to submitting.  So I think the focus should only be on the frequent testing of mainline, and infrequent testing of release branches.
>
> If Cirrus has built-in periodic scheduling (without the need for config files, or hooks added to the git repository), and you can point it to GCC's git (or the GitHub git-mirror/gcc) then that could be fine.  CI scripts still need to live in a separate repository pulled in with the build.
>
> Iain.

Cirrus CI currently only supports GitHub projects, so the Dockerfile needs to be hosted there. But nothing says you can't clone an arbitrary repository via a setup script.

Options I can think of are:
A) A Dockerfile for each case in (1.,) 2. and 3., or
B) A docker container which provides the environment to build GCC and a (Cirrus) CI config which defines the tasks to cover (1.,) 2. and 3.

A) sounds like a lot of duplication, so I'm in favor of B).

Cirrus CI also provides a Docker Builder VM which can build and publish docker containers.
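Option B might then boil down to one shared image plus a task per branch, e.g. (the image name and build commands are hypothetical; the branch names are GCC's real ones):

```yaml
# Sketch of option B: one prebuilt environment image, one task per branch
mainline_task:
  container:
    image: gdc-build-env:latest   # hypothetical image published via the Docker Builder VM
  clone_script:
    - git clone --depth 1 -b master https://github.com/gcc-mirror/gcc.git gcc
  build_script:
    - mkdir build && cd build && ../gcc/configure --enable-languages=d && make -j"$(nproc)"

gcc10_task:
  container:
    image: gdc-build-env:latest
  clone_script:
    - git clone --depth 1 -b releases/gcc-10 https://github.com/gcc-mirror/gcc.git gcc
  build_script:
    - mkdir build && cd build && ../gcc/configure --enable-languages=d && make -j"$(nproc)"
```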
September 07
On Monday, 7 September 2020 at 09:20:21 UTC, Iain Buclaw wrote:
> On Sunday, 6 September 2020 at 21:52:04 UTC, wjoe wrote:
>> [...]
>
> In case it saves some work...
>
> Baseline dependencies for Debian/Ubuntu are:
>
> autogen autoconf automake bison dejagnu flex libcurl4-gnutls-dev libgmp-dev libisl-dev libmpc-dev libmpfr-dev make patch tzdata xz-utils binutils libc6-dev gcc g++
>
> Baseline dependencies for Alpine are:
>
> autoconf automake bison curl-dev dejagnu flex gmp-dev isl-dev make mpc1-dev mpfr-dev patch tzdata xz binutils musl-dev gcc g++
>
> Iain.

Yes, it does, thanks :)
September 08
On Monday, 7 September 2020 at 10:41:50 UTC, wjoe wrote:
>
> Options I can think of are:
> A) A Dockerfile for each case in (1.,) 2. and 3., or
> B) A docker container which provides the environment to build GCC and a (Cirrus) CI config which defines the tasks to cover (1.,) 2. and 3.


Small update.
Option A) isn't possible with Cirrus CI because the check-out phase takes about 54 minutes on average and building the container another 6 minutes.
The task is then terminated at the 60-minute mark (the default timeout) before it even starts the GCC configure phase.

The environment is 4GB RAM and dual CPU with 16 threads.