
D.gnu - GDC CI

reply wjoe <invalid example.com> writes:
This thread is a continuation of the conversation "GDC 10.2.1 
Released" in the Announce group here [1]:

For reference:

  1. Is Cirrus CI good enough to build gdc?  And if so, look into adding
     Windows, MacOSX, and FreeBSD platforms to the pipeline.

 What does "good enough" mean?
It means, can Cirrus CI actually build gdc and run through the testsuite without being killed by the pipeline? Travis CI for instance is rubbish, because:
- Hardware is really slow.
- Kills jobs that take longer than 50 minutes.
- Kills jobs if a 3GB memory limit is exceeded.
- Kills jobs that don't print anything for more than 10 minutes.
- Truncates logs to first 2000 lines.
[...]
  3. Use Docker+QEMU to have containers doing CI for other architectures, can build
     images for Alpine and Debian on amd64, arm32v7, arm64v8, i386, mips64le,
     ppc64le, and s390x.
What I've learned so far is that Cirrus CI lets you configure the time after which jobs are killed; it can be increased from the default of 60 minutes. Memory for the container/VM can be configured as well; however, since open source projects run on the community cluster, scheduling of such jobs is prioritized by resource requirements. This looks promising. Further, the documentation says that if the build directory contains a Dockerfile, Cirrus will attempt to use it. A good approach therefore seems to be to start with a Docker container, which can later also be adapted for 3).

[1] https://forum.dlang.org/thread/wjyttivhbklzujwjrups forum.dlang.org
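For illustration, a minimal sketch of such a Cirrus configuration under those assumptions (the resource values and the build step name are placeholders, not measured requirements):

  # .cirrus.yml -- minimal sketch; values are illustrative only
  task:
    name: gdc-build
    timeout_in: 120m            # the default 60 minute limit can be raised per task
    container:
      dockerfile: Dockerfile    # Cirrus builds the environment from this file
      cpu: 4
      memory: 8G
    build_script: ./build-and-test.sh   # placeholder for the actual build steps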
Sep 05 2020
parent reply Johannes Pfau <nospam example.com> writes:
On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:

 This thread is a continuation of the conversation "GDC 10.2.1 Released"
 in the Announce group here [1]:
 
To answer your other question:
 We use https://github.com/D-Programming-GDC/gcc for CI, but 
 commits will go to the GCC SVN first, so GCC SVN or snapshot 
 tarballs is the recommended way to get the latest GDC.
Is this information still up to date? There's a semaphore folder. I suppose that's the one currently used with Semaphore CI. Is there something else?
That information is probably quite obsolete: as GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.

The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the top-level folder, as that folder is under GCC's control. For CI services that allow you to keep the configuration out of the repository, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated. That's why the old approach required merging a commit which includes the CI configuration.

Maybe a better way is to automatically generate a new commit including the CI configuration for each commit to be tested, and then trigger new build jobs for that auto-generated commit. This could probably be done with Buildkite? The main difficulty there is integrating this into a somewhat nice workflow/interface.

-- Johannes
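A rough sketch of how such an auto-generated commit could be produced (illustrative only; the remote, branch, and config-repo names are placeholders, not an existing script):

  #!/bin/sh
  # Sketch: graft the CI configuration onto the commit to be tested and push
  # the result to a throwaway branch that the CI service watches.
  set -e
  git clone https://github.com/D-Programming-GDC/gcc gcc && cd gcc
  cp ../ci-config/.cirrus.yml .             # CI config kept in a separate repo
  git add .cirrus.yml
  git commit -m "CI: test $(git rev-parse --short HEAD)"
  git push --force origin HEAD:ci-testing   # branch the CI service builds from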
Sep 05 2020
parent reply wjoe <invalid example.com> writes:
On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
wrote:
 On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:

 This thread is a continuation of the conversation "GDC 10.2.1 
 Released" in the Announce group here [1]:
 
To answer your other question:
 We use https://github.com/D-Programming-GDC/gcc for CI, but 
 commits will go to the GCC SVN first, so GCC SVN or snapshot 
 tarballs is the recommended way to get the latest GDC.
Is this information still up to date? There's a semaphore folder. I suppose that's the one currently used with Semaphore CI. Is there something else?
That information is probably quite obsolete: as GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.

The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the top-level folder, as that folder is under GCC's control. For CI services that allow you to keep the configuration out of the repository, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated. That's why the old approach required merging a commit which includes the CI configuration.

Maybe a better way is to automatically generate a new commit including the CI configuration for each commit to be tested, and then trigger new build jobs for that auto-generated commit. This could probably be done with Buildkite? The main difficulty there is integrating this into a somewhat nice workflow/interface.
Please forgive my confusion. There are two repositories: upstream GCC and GitHub/D-Programming-GDC/gcc. The former isn't hosted on GitHub but on gnu.org. The latter is needed for CI, for the reasons you explained, and is a mirror of the upstream git repository. Development is done in the upstream repository, and because of that we can't put our CI configs into the project root. Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location. Otherwise it could be done with the upstream repo directly, unless the CI service only works with projects hosted on GitHub - Cirrus CI, for instance. Is that correct?

How's upstream GCC doing CI?
Sep 05 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
 wrote:
 On Sat, 05 Sep 2020 10:04:30 +0000, wjoe wrote:

 [...]
To answer your other question:
 [...]
That information is probably quite obsolete: as GCC upstream uses git now, it might be possible to simplify the overall process. That process never really worked out and was quite complicated anyway, as it required committers to locally merge the commit containing the .semaphore configuration files before pushing to GitHub. In hindsight, it was probably a bad idea.

The main difficulty in setting up CI for GDC is that we can't simply put CI configuration files in the top-level folder, as that folder is under GCC's control. For CI services that allow you to keep the configuration out of the repository, this is not a problem. But for those requiring certain files in the top-level folder, it's more complicated. That's why the old approach required merging a commit which includes the CI configuration.

Maybe a better way is to automatically generate a new commit including the CI configuration for each commit to be tested, and then trigger new build jobs for that auto-generated commit. This could probably be done with Buildkite? The main difficulty there is integrating this into a somewhat nice workflow/interface.
Please forgive my confusion. There are two repositories: upstream GCC and GitHub/D-Programming-GDC/gcc. The former isn't hosted on GitHub but on gnu.org. The latter is needed for CI, for the reasons you explained, and is a mirror of the upstream git repository. Development is done in the upstream repository, and because of that we can't put our CI configs into the project root. Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location. Otherwise it could be done with the upstream repo directly, unless the CI service only works with projects hosted on GitHub - Cirrus CI, for instance. Is that correct?
That sounds about right. The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers. The CI logic would have to live in a separate repository. For convenience, this would be on GitHub.
 How's upstream GCC doing CI ?
They aren't. Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (i.e: https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)
Sep 05 2020
next sibling parent reply wjoe <invalid example.com> writes:
On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
 On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
 wrote:
 [...]
Please forgive my confusion. There are two repositories: upstream GCC and GitHub/D-Programming-GDC/gcc. The former isn't hosted on GitHub but on gnu.org. The latter is needed for CI, for the reasons you explained, and is a mirror of the upstream git repository. Development is done in the upstream repository, and because of that we can't put our CI configs into the project root. Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location. Otherwise it could be done with the upstream repo directly, unless the CI service only works with projects hosted on GitHub - Cirrus CI, for instance. Is that correct?
That sounds about right. The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers. The CI logic would have to live in a separate repository. For convenience, this would be on GitHub.
Periodic builds sound like what Cirrus CI calls cron builds. But if the repository needs to be forked for CI, it's effectively periodic as well, since the commits are only merged in periodically.

Currently I'm looking into building a Docker container which can run a GDC build. Because Cirrus CI supports Dockerfiles directly, and every other CI seems to run its tasks/jobs inside a Docker container anyway, this seems like a viable approach and can later be extended with the ARM targets mentioned in your item number 3.
Sep 06 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Sunday, 6 September 2020 at 21:52:04 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw 
 wrote:
 On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
 wrote:
 [...]
Please forgive my confusion. There are two repositories: upstream GCC and GitHub/D-Programming-GDC/gcc. The former isn't hosted on GitHub but on gnu.org. The latter is needed for CI, for the reasons you explained, and is a mirror of the upstream git repository. Development is done in the upstream repository, and because of that we can't put our CI configs into the project root. Thus the GitHub mirror is required for those CI providers that don't support a custom configuration location. Otherwise it could be done with the upstream repo directly, unless the CI service only works with projects hosted on GitHub - Cirrus CI, for instance. Is that correct?
That sounds about right. The only way you'd be able to test the upstream GCC repository directly is by doing periodic builds, rather than builds based off triggers. The CI logic would have to live in a separate repository. For convenience, this would be on GitHub.
Periodic builds sound like what Cirrus CI calls cron builds. But if the repository needs to be forked for CI, it's effectively periodic as well, since the commits are only merged in periodically.

Currently I'm looking into building a Docker container which can run a GDC build. Because Cirrus CI supports Dockerfiles directly, and every other CI seems to run its tasks/jobs inside a Docker container anyway, this seems like a viable approach and can later be extended with the ARM targets mentioned in your item number 3.
In case it saves some work...

Baseline dependencies for Debian/Ubuntu are:

 autogen autoconf automake bison dejagnu flex libcurl4-gnutls-dev libgmp-dev
 libisl-dev libmpc-dev libmpfr-dev make patch tzdata xz-utils binutils
 libc6-dev gcc g++

Baseline dependencies for Alpine are:

 autoconf automake bison curl-dev dejagnu flex gmp-dev isl-dev make mpc1-dev
 mpfr-dev patch tzdata xz binutils musl-dev gcc g++

Iain.
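For illustration, the Debian/Ubuntu list above maps onto a Dockerfile roughly like this (the base image and the extra git package are assumptions, not the exact container used in this thread):

  # Sketch: build environment for GDC CI on Debian, package list as above
  FROM debian:buster
  RUN apt-get update && apt-get install -y --no-install-recommends \
        git autogen autoconf automake bison dejagnu flex \
        libcurl4-gnutls-dev libgmp-dev libisl-dev libmpc-dev libmpfr-dev \
        make patch tzdata xz-utils binutils libc6-dev gcc g++ \
      && rm -rf /var/lib/apt/lists/*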
Sep 07 2020
parent wjoe <invalid example.com> writes:
On Monday, 7 September 2020 at 09:20:21 UTC, Iain Buclaw wrote:
 On Sunday, 6 September 2020 at 21:52:04 UTC, wjoe wrote:
 [...]
In case it saves some work...

Baseline dependencies for Debian/Ubuntu are:

 autogen autoconf automake bison dejagnu flex libcurl4-gnutls-dev libgmp-dev
 libisl-dev libmpc-dev libmpfr-dev make patch tzdata xz-utils binutils
 libc6-dev gcc g++

Baseline dependencies for Alpine are:

 autoconf automake bison curl-dev dejagnu flex gmp-dev isl-dev make mpc1-dev
 mpfr-dev patch tzdata xz binutils musl-dev gcc g++

Iain.
Yes, it does, thanks :)
Sep 07 2020
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw wrote:
 On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
 wrote:
 The main difficulty in setting up CI for GDC is that we can't 
 simply put CI configuration files in the toplevel folder, as 
 that folder is under GCC's control. For CI which allows you 
 to keep the configuration out of the repositories, this is 
 not a problem. But for those requiring certain files in the 
 top-level folder, it's more complicated.
How's upstream GCC doing CI ?
They aren't. Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (i.e: https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)
So when it comes to CI, there are two or three use cases that need to be considered.

1. Testing changes to D or libphobos prior to committing to gcc mainline/branch.
2. Testing the mainline (master) branch, either periodically, on every commit, or after a specific commit (such as the daily bump).
3. Testing the release branches of gcc (releases/gcc-9, releases/gcc-10, ...).

I am least bothered by having [1] covered. I have enough faith that people who send patches have done at least some level of due diligence in testing their changes prior to submitting. So I think the focus should only be on frequent testing of mainline, and infrequent testing of release branches.

If Cirrus has built-in periodic scheduling (without the need for config files or hooks added to the git repository), and you can point it at GCC's git (or the GitHub git mirror of gcc), then that could be fine. CI scripts still need to live in a separate repository pulled in with the build.

Iain.
Sep 07 2020
parent reply wjoe <invalid example.com> writes:
On Monday, 7 September 2020 at 09:14:08 UTC, Iain Buclaw wrote:
 On Saturday, 5 September 2020 at 21:14:28 UTC, Iain Buclaw 
 wrote:
 On Saturday, 5 September 2020 at 11:23:09 UTC, wjoe wrote:
 On Saturday, 5 September 2020 at 10:25:28 UTC, Johannes Pfau 
 wrote:
 The main difficulty in setting up CI for GDC is that we 
 can't simply put CI configuration files in the toplevel 
 folder, as that folder is under GCC's control. For CI which 
 allows you to keep the configuration out of the 
 repositories, this is not a problem. But for those requiring 
 certain files in the top-level folder, it's more complicated.
How's upstream GCC doing CI ?
They aren't. Or rather, other people are building every so often, or have their own scripts that build every single commit, and then post test results on the mailing list (i.e: https://gcc.gnu.org/pipermail/gcc-testresults/2020-September/thread.html)
So when it comes to CI, there are two or three use cases that need to be considered.

1. Testing changes to D or libphobos prior to committing to gcc mainline/branch.
2. Testing the mainline (master) branch, either periodically, on every commit, or after a specific commit (such as the daily bump).
3. Testing the release branches of gcc (releases/gcc-9, releases/gcc-10, ...).

I am least bothered by having [1] covered. I have enough faith that people who send patches have done at least some level of due diligence in testing their changes prior to submitting. So I think the focus should only be on frequent testing of mainline, and infrequent testing of release branches.

If Cirrus has built-in periodic scheduling (without the need for config files or hooks added to the git repository), and you can point it at GCC's git (or the GitHub git mirror of gcc), then that could be fine. CI scripts still need to live in a separate repository pulled in with the build.

Iain.
Cirrus CI currently only supports GitHub projects, therefore the Dockerfile needs to be hosted there. But nowhere does it say that you can't clone an arbitrary repository via a setup script.

Options I can think of are:
A) A Dockerfile for each case in (1.,) 2. and 3., or
B) A Docker container which provides the environment to build GCC, plus a (Cirrus) CI config which defines the tasks to cover (1.,) 2. and 3.

A) sounds like a lot of duplication, so I'm in favor of B). Cirrus CI also provides a Docker Builder VM which can build and publish Docker containers.
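A sketch of how option B) could use that Docker Builder (image name and tag are placeholders; pushing would additionally require registry credentials):

  # Sketch: build the CI image inside Cirrus' Docker Builder VM
  docker_builder:
    name: build-ci-container
    build_script: docker build --tag example/gdc-build-env:latest .
    # a push step would follow here once registry credentials are configured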
Sep 07 2020
parent reply wjoe <invalid example.com> writes:
On Monday, 7 September 2020 at 10:41:50 UTC, wjoe wrote:
 Options I can think of are:
 A) A Dockerfile for each case in (1.,) 2. and 3., or
 B) A docker container which provides the environment to build 
 GCC and a (Cirrus) CI config which defines the tasks to cover 
 (1.,) 2. and 3.
Small update. Option A) isn't possible with Cirrus CI, because the check-out phase takes about 54 minutes on average and building the container about 6 minutes. The task is then terminated at the 60 minute mark (the default timeout), before it even starts the GCC configuration phase. The environment is 4GB RAM and a dual CPU with 16 threads.
Sep 08 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Tuesday, 8 September 2020 at 13:50:26 UTC, wjoe wrote:
 On Monday, 7 September 2020 at 10:41:50 UTC, wjoe wrote:
 Options I can think of are:
 A) A Dockerfile for each case in (1.,) 2. and 3., or
 B) A docker container which provides the environment to build 
 GCC and a (Cirrus) CI config which defines the tasks to cover 
 (1.,) 2. and 3.
Small update. Option A) isn't possible with Cirrus CI because the check-out phase takes about 54 minutes on average and building the container about 6 minutes. Then the task is terminated on the 60 minutes mark (the default timeout) before it even starts the GCC configuration phase. The environment is 4GB RAM and dual CPU with 16 threads.
Is it just doing a plain git clone? Can you control whether checkout is done using --single-branch or --depth?
Sep 08 2020
parent reply wjoe <invalid example.com> writes:
On Tuesday, 8 September 2020 at 14:18:10 UTC, Iain Buclaw wrote:
 On Tuesday, 8 September 2020 at 13:50:26 UTC, wjoe wrote:
 On Monday, 7 September 2020 at 10:41:50 UTC, wjoe wrote:
 Options I can think of are:
 A) A Dockerfile for each case in (1.,) 2. and 3., or
 B) A docker container which provides the environment to build 
 GCC and a (Cirrus) CI config which defines the tasks to cover 
 (1.,) 2. and 3.
Small update. Option A) isn't possible with Cirrus CI because the check-out phase takes about 54 minutes on average and building the container about 6 minutes. Then the task is terminated on the 60 minutes mark (the default timeout) before it even starts the GCC configuration phase. The environment is 4GB RAM and dual CPU with 16 threads.
It is just doing a plain git clone? Can you control whether checkout is done using --single-branch or --depth?
This update was about a build via the zero-configuration feature, by just providing a Dockerfile, simply to see if it works; it didn't. I didn't find a git clone command in the log, so I couldn't really tell.

A lot of things, such as RAM, timeout, custom git checkouts, a custom command to build a Docker container, environment variables, etc., can be configured in a Cirrus configuration file. So what I'm doing now is making a .cirrus.yml with a custom checkout (a shallow clone with --depth=1 of the master-ci branch should do the trick), using the container I made for building gcc. Then I'll adapt the buildci script for use with Cirrus CI. The dependency installation, for instance, isn't necessary when using a container that already provides all of those. I'm not sure if Johannes was referring to that script when he said it was a bad idea in hindsight. If so, it's not a problem to define the necessary steps in the Cirrus config.
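A sketch of such a custom checkout in .cirrus.yml (the repository URL and image name are illustrative; the actual clone command used is quoted further down this thread):

  # Sketch: override Cirrus' default clone with a shallow, single-branch checkout
  task:
    container:
      image: example/gdc-build-env:latest   # prebuilt image with the build deps
    clone_script: |
      git clone --branch=master-ci --depth=1 \
        https://github.com/D-Programming-GDC/gcc $CIRRUS_WORKING_DIR
    setup_script: ./buildci.sh setup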
Sep 08 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Tuesday, 8 September 2020 at 16:44:39 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 14:18:10 UTC, Iain Buclaw wrote:
 On Tuesday, 8 September 2020 at 13:50:26 UTC, wjoe wrote:
 On Monday, 7 September 2020 at 10:41:50 UTC, wjoe wrote:
 Options I can think of are:
 A) A Dockerfile for each case in (1.,) 2. and 3., or
 B) A docker container which provides the environment to 
 build GCC and a (Cirrus) CI config which defines the tasks 
 to cover (1.,) 2. and 3.
Small update. Option A) isn't possible with Cirrus CI because the check-out phase takes about 54 minutes on average and building the container about 6 minutes. Then the task is terminated on the 60 minutes mark (the default timeout) before it even starts the GCC configuration phase. The environment is 4GB RAM and dual CPU with 16 threads.
It is just doing a plain git clone? Can you control whether checkout is done using --single-branch or --depth?
This update was about a build via the zero-configuration feature, by just providing a Dockerfile, simply to see if it works; it didn't. I didn't find a git clone command in the log, so I couldn't really tell.

A lot of things, such as RAM, timeout, custom git checkouts, a custom command to build a Docker container, environment variables, etc., can be configured in a Cirrus configuration file. So what I'm doing now is making a .cirrus.yml with a custom checkout (a shallow clone with --depth=1 of the master-ci branch should do the trick), using the container I made for building gcc. Then I'll adapt the buildci script for use with Cirrus CI. The dependency installation, for instance, isn't necessary when using a container that already provides all of those. I'm not sure if Johannes was referring to that script when he said it was a bad idea in hindsight. If so, it's not a problem to define the necessary steps in the Cirrus config.
Well, the CI script in the repo [1] should be used as a baseline, if not in its entirety, as it's been used enough to work in many environments, whether building a native or cross compiler. At the very least, you just need to set up the environment() using whatever variables Cirrus provides and it'll just go and run.

I think I do have a copy somewhere that adds support for running it locally without any dependencies on a specific CI.

[1] https://github.com/D-Programming-GDC/gcc/blob/master-ci/buildci.sh
Sep 08 2020
parent reply wjoe <invalid example.com> writes:
On Tuesday, 8 September 2020 at 20:03:03 UTC, Iain Buclaw wrote:
 On Tuesday, 8 September 2020 at 16:44:39 UTC, wjoe wrote:
 [...]
Well, the CI script in the repo [1] should be used as a baseline, if not in its entirety, as it's been used enough to work in many environments, whether building a native or cross compiler. At the very least, you just need to set up the environment() using whatever variables Cirrus provides and it'll just go and run.

I think I do have a copy somewhere that adds support for running it locally without any dependencies on a specific CI.

[1] https://github.com/D-Programming-GDC/gcc/blob/master-ci/buildci.sh
Yes, that's the one I was talking about. I'll use that. Thanks.
Sep 08 2020
parent reply wjoe <invalid example.com> writes:
On Tuesday, 8 September 2020 at 20:12:47 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 20:03:03 UTC, Iain Buclaw wrote:
 On Tuesday, 8 September 2020 at 16:44:39 UTC, wjoe wrote:
 [...]
Well, the CI script in the repo [1] should be used as a baseline, if not in its entirety, as it's been used enough to work in many environments, whether building a native or cross compiler. At the very least, you just need to set up the environment() using whatever variables Cirrus provides and it'll just go and run. [...]
Except there's a problem with the installation of gcc-9. Apparently there's no Debian release providing a gcc-9 package. The build of the container fails in step 4 on apt-get update with:
 Step 3/5 : RUN add-apt-repository -y ppa:ubuntu-toolchain-r/test
  ---> Running in dedb85ee0ddd
 gpg: keybox '/tmp/tmpcn_fbb85/pubring.gpg' created
 gpg: /tmp/tmpcn_fbb85/trustdb.gpg: trustdb created
 gpg: key 1E9377A2BA9EF27F: public key "Launchpad Toolchain 
 builds" imported
 gpg: Total number processed: 1
 gpg:               imported: 1
 Warning: apt-key output should not be parsed (stdout is not a 
 terminal)
 gpg: no valid OpenPGP data found.
 Removing intermediate container dedb85ee0ddd
  ---> 290665f0cc40
 Step 4/5 : RUN apt-get update     && apt-get install -y git 
 autogen autoconf automake bison dejagnu     flex 
 libcurl4-gnutls-dev libgmp-dev libisl-dev libmpc-dev     
 libmpfr-dev make patch tzdata xz-utils binutils libc6-dev gcc-9 
 g++-9     sudo curl     && rm -rf /var/lib/apt/lists/*
  ---> Running in fcc6340c33b0
 Hit:1 http://deb.debian.org/debian buster InRelease
 Hit:2 http://deb.debian.org/debian buster-updates InRelease
 Ign:3 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu 
 groovy InRelease
 Hit:4 http://security.debian.org/debian-security buster/updates 
 InRelease
 Err:5 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu 
 groovy Release
   404  Not Found [IP: 91.189.95.83 80]
 Reading package lists...
 E: The repository 
 'http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu groovy 
 Release' does not have a Release file.
 The command '/bin/sh -c apt-get update     && apt-get install 
 -y git autogen autoconf automake bison dejagnu     flex 
 libcurl4-gnutls-dev libgmp-dev libisl-dev libmpc-dev     
 libmpfr-dev make patch tzdata xz-utils binutils libc6-dev gcc-9 
 g++-9     sudo curl     && rm -rf /var/lib/apt/lists/*' 
 returned a non-zero code: 100
When I instead let buildci.sh set up the repositories, it fails like so:
 ./buildci.sh setup
 gpg: keybox '/tmp/tmpc4p6pqmr/pubring.gpg' created
 gpg: /tmp/tmpc4p6pqmr/trustdb.gpg: trustdb created
 gpg: key 1E9377A2BA9EF27F: public key "Launchpad Toolchain 
 builds" imported
 gpg: Total number processed: 1
 gpg:               imported: 1
 Warning: apt-key output should not be parsed (stdout is not a 
 terminal)
 gpg: no valid OpenPGP data found.
 E: The repository 
 'http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu groovy 
 Release' does not have a Release file.
 E: Unable to locate package gcc-9
 E: Unable to locate package g++-9
 E: Couldn't find any package by regex 'g++-9'
 E: Unable to locate package gdc-9
 E: Unable to locate package pxz
This bug seems to be related: https://bugs.launchpad.net/ubuntu/+source/wget/+bug/994097

Is there a strict dependency on Debian?
Sep 08 2020
next sibling parent reply wjoe <invalid example.com> writes:
On Tuesday, 8 September 2020 at 23:48:58 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 20:12:47 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 20:03:03 UTC, Iain Buclaw 
 wrote:
 On Tuesday, 8 September 2020 at 16:44:39 UTC, wjoe wrote:
 [...]
Well the ci script in the repo [1] should be used as a baseline, if not in its entirety as it's been used on enough to work in many environments, whether building a native or cross compiler. At the very least, you just need to set-up the environment() using whatever variables Cirrus provides and it'll just go and run. [...]
Except there's a problem with the installation of gcc-9. Apparently there's no Debian release providing a gcc-9 package.
This issue is resolved.
Sep 09 2020
parent wjoe <invalid example.com> writes:
Small update.

I managed to install gcc-9 and g++-9 from the Ubuntu toolchain PPA in the Docker container [1] and it now builds successfully.
I instructed Cirrus CI to clone the repository like so:
 git clone --branch=master-ci --depth=1 
 https://github.com/W-joe/gcc /tmp/cirrus-ci-build
and it reaches the configure stage in the buildci [2] script, however it fails like so:

 ./buildci.sh setup
 [curl download progress output omitted]
 checking build system type... x86_64-pc-linux-gnu
 checking host system type... x86_64-pc-linux-gnu
 checking target system type... x86_64-pc-linux-gnu
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether ln works... yes
 checking whether ln -s works... yes
 checking for a sed that does not truncate output... /bin/sed
 checking for gawk... no
 checking for mawk... mawk
 checking for libatomic support... yes
 checking for libvtv support... yes
 checking for libhsail-rt support... yes
 checking for libphobos support... yes
 checking for x86_64-linux-gnu-gcc... gcc-9
 checking whether the C compiler works... yes
 checking for C compiler default output file name... a.out
 checking for suffix of executables...
 checking whether we are cross compiling... no
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc-9 accepts -g... yes
 checking for gcc-9 option to accept ISO C89... none needed
 checking whether we are using the GNU C++ compiler... no
 checking whether g++-9 accepts -g... no
 checking whether g++ accepts -static-libstdc++ -static-libgcc... no
 checking for x86_64-linux-gnu-gnatbind... no
 checking for gnatbind... no
 checking for x86_64-linux-gnu-gnatmake... no
 checking for gnatmake... no
 checking whether compiler driver understands Ada... no
 checking how to compare bootstrapped objects... cmp --ignore-initial=16 $$f1 $$f2
 checking whether g++-9 supports C++11 features by default... no
 checking whether g++-9 supports C++11 features with -std=gnu++11... no
 checking whether g++-9 supports C++11 features with -std=gnu++0x... no
 checking whether g++-9 supports C++11 features with -std=c++11... no
 checking whether g++-9 supports C++11 features with +std=c++11... no
 checking whether g++-9 supports C++11 features with -h std=c++11... no
 checking whether g++-9 supports C++11 features with -std=c++0x... no
 checking whether g++-9 supports C++11 features with +std=c++0x... no
 checking whether g++-9 supports C++11 features with -h std=c++0x... no
 configure: error: *** A compiler with support for C++11 
 language features is required.
It says that it's not using the GNU C++ compiler. Why is that? Also, shouldn't g++-9 support C++11? I suspect that maybe the compiler wasn't properly installed in the container?

[1] https://github.com/w-joe/gcc/blob/master-ci/Dockerfile
[2] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
Sep 09 2020
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Tuesday, 8 September 2020 at 23:48:58 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 20:12:47 UTC, wjoe wrote:
 [...]
Except there's a problem with the installation of gcc-9. Apparently there's no Debian release providing a gcc-9 package. The build of the container fails in step 4 on apt-get update with:
 [...]
When I let buildci.sh install the repositories instead it fails like so:
 [...]
This bug seems to be related https://bugs.launchpad.net/ubuntu/+source/wget/+bug/994097 Is there a strict dependency on Debian?
Doesn't seem related, though the PPA is meant for Ubuntu distributions, not Debian. The problem might lie in incompatibilities between the two.
Sep 09 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 9 September 2020 at 11:22:14 UTC, Iain Buclaw wrote:
 On Tuesday, 8 September 2020 at 23:48:58 UTC, wjoe wrote:
 On Tuesday, 8 September 2020 at 20:12:47 UTC, wjoe wrote:
 [...]
Except there's a problem with the installation of gcc-9. Apparently there's no Debian release providing a gcc-9 package. The build of the container fails in step 4 on apt-get update with:
 [...]
When I let buildci.sh install the repositories instead it fails like so:
 [...]
This bug seems to be related https://bugs.launchpad.net/ubuntu/+source/wget/+bug/994097 Is there a strict dependency on Debian?
Doesn't seem related, though the PPA is meant for Ubuntu distributions, not Debian. The problem might be in incompatibilities between the two.
No, it's not. The issue seems to be that add-apt-repository adds a source for the current Ubuntu version, which is groovy, and that doesn't provide a Release file, or some such. It works when I take the PPA for Ubuntu bionic.

Anyway, I managed to install gcc-9 and g++-9 from the Ubuntu toolchain PPA in the Docker container [1] and it builds successfully. I instructed Cirrus CI to clone the repository like so:
 git clone --branch=master-ci --depth=1 
 https://github.com/W-joe/gcc /tmp/cirrus-ci-build
and it reaches the configure stage in the buildci [2] script, however it fails. It says that it's not using the GNU C++ compiler. Why is that? Also, shouldn't g++-9 support C++11? I suspect that maybe the compiler wasn't properly installed in the container?

This is the log:

 ./buildci.sh setup
 [curl download progress output omitted]
 checking build system type... x86_64-pc-linux-gnu
 checking host system type... x86_64-pc-linux-gnu
 checking target system type... x86_64-pc-linux-gnu
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether ln works... yes
 checking whether ln -s works... yes
 checking for a sed that does not truncate output... /bin/sed
 checking for gawk... no
 checking for mawk... mawk
 checking for libatomic support... yes
 checking for libvtv support... yes
 checking for libhsail-rt support... yes
 checking for libphobos support... yes
 checking for x86_64-linux-gnu-gcc... gcc-9
 checking whether the C compiler works... yes
 checking for C compiler default output file name... a.out
 checking for suffix of executables...
 checking whether we are cross compiling... no
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc-9 accepts -g... yes
 checking for gcc-9 option to accept ISO C89... none needed
 checking whether we are using the GNU C++ compiler... no
 checking whether g++-9 accepts -g... no
 checking whether g++ accepts -static-libstdc++ -static-libgcc... no
 checking for x86_64-linux-gnu-gnatbind... no
 checking for gnatbind... no
 checking for x86_64-linux-gnu-gnatmake... no
 checking for gnatmake... no
 checking whether compiler driver understands Ada... no
 checking how to compare bootstrapped objects... cmp --ignore-initial=16 $$f1 $$f2
 checking whether g++-9 supports C++11 features by default... no
 checking whether g++-9 supports C++11 features with -std=gnu++11... no
 checking whether g++-9 supports C++11 features with -std=gnu++0x... no
 checking whether g++-9 supports C++11 features with -std=c++11... no
 checking whether g++-9 supports C++11 features with +std=c++11... no
 checking whether g++-9 supports C++11 features with -h std=c++11... no
 checking whether g++-9 supports C++11 features with -std=c++0x... no
 checking whether g++-9 supports C++11 features with +std=c++0x... no
 checking whether g++-9 supports C++11 features with -h std=c++0x... no
 configure: error: *** A compiler with support for C++11 language features is required.

[1] https://github.com/w-joe/gcc/blob/master-ci/Dockerfile
[2] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
Sep 09 2020
parent reply kinke <noone nowhere.com> writes:
On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly installed in 
 the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Sep 09 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 9 September 2020 at 12:13:32 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly installed in 
 the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Ahh good catch! It works now! Thanks :)
Sep 09 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 9 September 2020 at 12:37:37 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:13:32 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly installed 
 in the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Ahh good catch! It works now! Thanks :)
The build as well as the unittests finished successfully. The entire run took close to 70 minutes. This was a Linux container with 4 CPUs and 10G RAM.

Which files should be kept once the task has completed, and what should happen with them? On success I could add a package task.

Next up is to test whether Cirrus CI can handle the remaining platforms, since the limits are lower than for Linux containers. The unit tests almost hit the 9G RAM mark, and the Mac is a single-core VM on the community cluster.

Cron jobs aren't specified in the configuration file but in the Cirrus app settings on GitHub.
Sep 09 2020
next sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:37:37 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:13:32 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly installed 
 in the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Ahh good catch! It works now! Thanks :)
The build as well as the unittests finished successfully. The entire run took close to 70 minutes. This was a linux container with 4 CPUs and 10G RAM.
Sounds about right. There are a couple heavy modules that instantiate tens of thousands of functions when building phobos unittests.
 Which files should be kept once the task completed and what 
 should happen with them ?
 On success I could add a package task.
There's 'make install'. I probably wouldn't prune anything copied during that recipe, as you'll lose integration with C, C++ and LTO compilers if any of those components are missing.
 Next up is to test if Cirrus CI can handle the remaining 
 platforms since the limits are lower than for linux containers. 
 Unit tests almost hit the 9G RAM mark. Mac is a single core VM 
 on community cluster.
You'll need to implement DSO handling on Darwin; there's a little bit of compiler support code, and the rest is in the library. I've got a patch somewhere with maybe 90% of the work done. From what I recall, there was some weirdness with how dynamic loading works.
Sep 09 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 9 September 2020 at 23:13:38 UTC, Iain Buclaw wrote:
 On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:37:37 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:13:32 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly installed 
 in the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Ahh good catch! It works now! Thanks :)
The build as well as the unittests finished successfully. The entire run took close to 70 minutes. This was a linux container with 4 CPUs and 10G RAM.
Sounds about right. There are a couple heavy modules that instantiate tens of thousands of functions when building phobos unittests.
 Which files should be kept once the task completed and what 
 should happen with them ?
 On success I could add a package task.
There's 'make install'. I probably wouldn't prune anything copied during that recipe, as you'll lose integration with C, C++ and LTO compilers if any of those components are missing.
I would create a prefix for make install, install to that location, tar that folder, and keep the tarball.

Also, on failure it would probably be a good idea to preserve the logs? The docs mention that it's possible to define GitHub actions, e.g. to email stuff. Would that be useful?
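A sketch of that packaging idea (this stages the install with DESTDIR instead of installing into the container's /usr; the paths are placeholders and $build_host is assumed to be set by the surrounding CI script):

  # Sketch: stage 'make install' into a scratch directory and tar that up
  cd build || exit 1
  make install DESTDIR="$PWD/../install-root" || exit 1
  tar -C ../install-root -cJf "../gdc-${build_host}.txz" . || exit 1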
Sep 11 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Friday, 11 September 2020 at 12:33:02 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 23:13:38 UTC, Iain Buclaw 
 wrote:
 On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:37:37 UTC, wjoe wrote:
 On Wednesday, 9 September 2020 at 12:13:32 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 11:33:22 UTC, wjoe wrote:
 I suspect that maybe the compiler wasn't properly 
 installed in the container ?
Maybe just a typo in your Dockerfile? You're installing `g++9`, but the package name is `g++-9`.
Ahh good catch! It works now! Thanks :)
The build as well as the unittests finished successfully. The entire run took close to 70 minutes. This was a linux container with 4 CPUs and 10G RAM.
Sounds about right. There are a couple heavy modules that instantiate tens of thousands of functions when building phobos unittests.
 Which files should be kept once the task completed and what 
 should happen with them ?
 On success I could add a package task.
There's 'make install'. I probably wouldn't prune anything copied during that recipe, as you'll lose integration with C, C++ and LTO compilers if any of those components are missing.
I would create a prefix for make install, install to that location, tar that folder and keep the tarball.
Yes, that works, IIRC the same was done for the historical binary tarballs.
 Also, on failure it would probably be a good idea to preserve 
 the logs ?
If stdout/stderr is accessible after the build, then the relevant logs can just be cat'd. For the testsuite, these would be ./gcc/testsuite/gdc/gdc.log and ./x86_64-pc-linux-gnu/libphobos/testsuite/libphobos.sum (substitute the target where necessary).
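In a Cirrus task this could be hooked up roughly as follows (a sketch; the artifact names and glob patterns are assumptions):

  # Sketch: keep the testsuite logs around when a task fails
  task:
    name: gdc-testsuite
    # ... build and test steps ...
    on_failure:
      gdc_log_artifacts:
        path: "**/gcc/testsuite/gdc/gdc.log"
      libphobos_sum_artifacts:
        path: "**/libphobos/testsuite/libphobos.sum"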
 The docs mention that it's possible to define GitHub actions, 
 e.g. to email stuff. Would that be useful ?
In the event of failed builds, or builds where the success/fail status changed, perhaps. Having emails for every build would just be noise.
Sep 12 2020
parent reply wjoe <invalid example.com> writes:
I've added the above tasks and they are reported to have been 
completed successfully in the Cirrus summary.
However, on a closer look I can spot multiple failures or 
files/directories which can't be found.
- Some 12 failed tests as well as 10 unresolved test cases in the 
unittest step.
- The build package step failed with: cd ./libcc1: no such file 
or directory.
Sep 15 2020
next sibling parent reply wjoe <invalid example.com> writes:
On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such file 
 or directory.
This is the package function for reference:

 build_package()
 {
     cd build || exit 1
     make install || exit 1
     tar -cJf gdc-${build_host}.txz /usr || exit 1
 }

Links for your convenience.
[1] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
[2] https://github.com/w-joe/gcc/blob/master-ci/.cirrus.yml
Sep 15 2020
next sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Tuesday, 15 September 2020 at 16:05:53 UTC, wjoe wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such 
 file or directory.
This is the package function for reference:

 build_package()
 {
     cd build || exit 1
     make install || exit 1
     tar -cJf gdc-${build_host}.txz /usr || exit 1
 }

Links for your convenience.
[1] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
[2] https://github.com/w-joe/gcc/blob/master-ci/.cirrus.yml
Note the comments in the build function: only the dependencies of the C++ and D libraries are built. So you'll need to do `make all` in order to cover anything that was deliberately skipped.
Sep 15 2020
parent reply wjoe <invalid example.com> writes:
On Tuesday, 15 September 2020 at 18:49:48 UTC, Iain Buclaw wrote:
 On Tuesday, 15 September 2020 at 16:05:53 UTC, wjoe wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such 
 file or directory.
This is the package function for reference:

 build_package()
 {
     cd build || exit 1
     make install || exit 1
     tar -cJf gdc-${build_host}.txz /usr || exit 1
 }

Links for your convenience.
[1] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
[2] https://github.com/w-joe/gcc/blob/master-ci/.cirrus.yml
Note the comments in the build function, only the dependencies of the C++ and D libraries are built. So you'll need to do `make all` in order to cover anything that was deliberately skipped.
Does that mean that building with build_bootstrap="enabled" is insufficient? I exported that variable in the Cirrus CI configuration and set it to 'disabled' for the GDC and Unittest tasks and to 'enabled' for the Package task.

Packaging in the Unittest task unfortunately didn't work, because building it with bootstrap enabled exceeded the 2h time limit. So now there are 3 tasks:
- Build GDC testsuite
- Build Unittests
- Build Package

At the moment the Package task depends on the Unittest task, but GDC and Unittest run in parallel. I guess best practice would be to make Unittest depend on GDC to go easy on resources; however, that would blow up the build cycle to something over 3.5h.

The stats right now are:
- The Docker container builds in less than 5 minutes. It's automatically cached by Cirrus CI, i.e. this only applies if the Dockerfile changed.
- The GDC testsuite takes about 45 minutes with build_bootstrap=disabled.
- The unittests take a little more than 1 hour with build_bootstrap=disabled.
- The Package task took 1:45h with build_bootstrap=enabled. That was 1:15h for building and a little less than half an hour for the package step (make install && tar).

For a total of 2:46h.
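For reference, a rough sketch of how that dependency is expressed in a Cirrus config (task names mirror the list above; the elided steps are placeholders):

  # Sketch: run the package task only after the unittest task has passed
  unittest_task:
    # ... build and unittest steps ...
  package_task:
    depends_on:
      - unittest
    # ... make install and tar steps ...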
Sep 16 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 09:33:05 UTC, wjoe wrote:
 On Tuesday, 15 September 2020 at 18:49:48 UTC, Iain Buclaw 
 wrote:
 On Tuesday, 15 September 2020 at 16:05:53 UTC, wjoe wrote:
 [...]
Note the comments in the build function, only the dependencies of the C++ and D libraries are built. So you'll need to do `make all` in order to cover anything that was deliberately skipped.
Does that mean that building with build_bootstrap="enabled" is insufficient ? Because I exported that variable in the Cirrus CI configuration and set it to 'disabled' for GDC and Unittest tasks and to 'enabled' for the Package task.
build_bootstrap=enabled should be OK. Even better, it configures with --enable-checking=release.
 Packaging in the Unittest task didn't work unfortunately 
 because building that with bootstrap enabled exceeded the 2h 
 time limit.

 So now there are 3 tasks.
 - Build GDC testsuite
 - Build Unittests
 - Build Package task

 At the moment I made the Package task depend on the Unittest 
 task, but GDC and Unittest run in parallel.
 I guess best practices would ask to make Unittest depend on GDC 
 to go easy on resources.
 However, that would blow up the build cycle to something over 
 3.5h.
All tasks could be run in parallel, as there are no dependencies between them. The Package task, though, I'd imagine should be reserved for the release branches, so it's only run infrequently.
Sep 16 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 10:23:16 UTC, Iain Buclaw 
wrote:
 On Wednesday, 16 September 2020 at 09:33:05 UTC, wjoe wrote:
[...]
build_bootstrap=enabled should be OK. Even better, it configures with --enable-checking=release.
[...]
All tasks could be run in parallel, as there are no dependencies between them. The Package task, though, I'd imagine should be reserved for the release branches, so it's only run infrequently.
Configuration-wise that's not a problem at all. My reasoning for the dependency was that there's no point in making a release tarball if the Unittests task fails.
Sep 16 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 10:47:01 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 10:23:16 UTC, Iain Buclaw 
 wrote:
 On Wednesday, 16 September 2020 at 09:33:05 UTC, wjoe wrote:
[...]
build_bootstrap=enabled should be OK. Even better, it configures with --enable-checking=release.
[...]
All tasks could be run in parallel, as there are no dependencies between them. The Package task, though, I'd imagine should be reserved for the release branches, so it's only run infrequently.
Configuration wise that's not a problem at all. My reasoning for the dependency was that there's no point in making a release tar ball if the Unittests task fails.
It's not really the end of the world if a test fails. While tagged releases should ideally all pass, some failures can occur that are not our fault (e.g. x32 has a libc bug that causes some syscalls to fail and trigger asserts in a couple of libphobos tests). For non-tagged builds, I imagine we'd just be replacing the previously built tarball based on the given branch it was built off, so if something really is broken, in the worst case we just wait until a fix goes in and retrigger CI. Or, if some downstream is affected, we'd have some sort of versioning in place (such as syncthing) to do a quick restore.
Sep 16 2020
parent wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 21:25:19 UTC, Iain Buclaw 
wrote:
 On Wednesday, 16 September 2020 at 10:47:01 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 10:23:16 UTC, Iain Buclaw 
 wrote:
 [...]
Configuration wise that's not a problem at all. My reasoning for the dependency was that there's no point in making a release tar ball if the Unittests task fails.
It's not really the end of the world if a test fails. While tagged releases should ideally all pass, some failures can occur that are not our fault (i.e: x32 has a libc bug that causes some syscalls to fail and trigger asserts in a couple libphobos tests). For non-tagged builds, I imagine we'd just be replacing the previously built tarball based on the given branch it was built off, so if something really is broken, in the worst case we just wait until a fix goes in and retrigger CI. Or if some downstream is affected, we'd have some sort of versioning in place (such as syncthing) to do a quick restore.
I see. So I'll just remove the dependency and let all the tasks run in parallel. It may still be delayed due to scheduling, but time-wise it won't be worse than with a dependency on Unittest.
Sep 16 2020
prev sibling parent reply Seb <seb wilzba.ch> writes:
On Tuesday, 15 September 2020 at 16:05:53 UTC, wjoe wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such 
 file or directory.
This is the package function for reference:

 build_package()
 {
     cd build || exit 1
     make install || exit 1
     tar -cJf gdc-${build_host}.txz /usr || exit 1
 }

Links for your convenience.
[1] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
[2] https://github.com/w-joe/gcc/blob/master-ci/.cirrus.yml
That's great work and progress!

I just wanted to add a side note: if you manage to build tarballs of the binaries, I think a lot of people would greatly appreciate it if they were made available (e.g. this can be done directly on GitHub via "Releases"). For example, see https://github.com/dlang/installer/pull/251, https://forum.dlang.org/thread/xktompypwvaabwebnjol forum.dlang.org, or https://forum.dlang.org/thread/bnkbldsifjhsseswiceq forum.dlang.org .

If the download links are in the official install.sh script, then they will auto-magically be available on Travis CI and others. I'm happy to help with getting such releases shipped.
Sep 15 2020
next sibling parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 00:49:52 UTC, Seb wrote:
 On Tuesday, 15 September 2020 at 16:05:53 UTC, wjoe wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such 
 file or directory.
This is the package function for reference:

 build_package()
 {
     cd build || exit 1
     make install || exit 1
     tar -cJf gdc-${build_host}.txz /usr || exit 1
 }

Links for your convenience.
[1] https://github.com/w-joe/gcc/blob/master-ci/buildci.sh
[2] https://github.com/w-joe/gcc/blob/master-ci/.cirrus.yml
That's great work and progress!

I just wanted to add a side note: if you manage to build tarballs of the binaries, I think a lot of people would greatly appreciate it if they were made available (e.g. this can be done directly on GitHub via "Releases"). For example, see https://github.com/dlang/installer/pull/251, https://forum.dlang.org/thread/xktompypwvaabwebnjol forum.dlang.org, or https://forum.dlang.org/thread/bnkbldsifjhsseswiceq forum.dlang.org .
Thank you for your kind words :)

I do, but I didn't preserve it in the last run. I've just added the Artifacts step to the Cirrus configuration. "Artifacts" is the terminology they use for making files from the build environment available for download after the task has completed.

The way it's being done right now is that 'make install' installs to the /usr prefix. After that, a tarball of this prefix is created (via tar cJf gdc-triplet.txz /usr). I'm not sure if that's suitable as a release as-is, because tar omits the root /, so the result will be extracted as usr/.

There isn't a lot of time budget left in that task, but it should be possible to run some more scripts. If the time limit won't suffice, it should be possible to cache /usr and move the tarball script into a new task.

Also, all of that is Linux-only at the moment. I've created a matrix for the Dockerfiles, so all platforms that can run bash should be easy to add - simply add the Dockerfile to the matrix and it-should-work(TM).
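As an illustration, such an artifacts step could look roughly like this in .cirrus.yml (task layout and tarball pattern are placeholders based on the description above):

  # Sketch: expose the generated tarball as a downloadable Cirrus artifact
  package_task:
    # ... build and 'make install' steps producing the gdc-<triplet>.txz tarball ...
    tarball_artifacts:
      path: "gdc-*.txz"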
 If the download links are in the official install.sh script, 
 then it will auto-magically be available on Travis CI and 
 others.

 I'm happy to help with getting such releases shipped.
I couldn't find the install script in my 1 minute search in the download section. Could I inconvenience you to copy/paste the link, please ? Thanks :)
Sep 16 2020
next sibling parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 I couldn't find the install script in my 1 minute search in the 
 download section. Could I inconvenience you to copy/paste the 
 link, please ?
 Thanks :)
Found it. It was just one more click away :)
Sep 16 2020
parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 09:57:21 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 I couldn't find the install script in my 1 minute search in 
 the download section. Could I inconvenience you to copy/paste 
 the link, please ?
 Thanks :)
Found it. It was just one more click away :)
It should be pulling it from the gdc site. If we need the capacity, I have no problem with ordering a storage box to host all downloads (it would be nice to move the existing tarballs off the tiny VM anyway).
Sep 16 2020
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 The way it's being done right now is that 'make install' 
 installs to the /usr prefix. After that a tarball of this 
 prefix is created (via tar cJf gdc-triplet.txz /usr). I'm not 
 sure if that's suitable as a release as is because tar omits 
 the root / so the result will be extracted as usr/
 There isn't a lot of time budget left in that task but it 
 should be possible to run some more scripts.
 If the time limit won't suffice it should be possible to cache 
 /usr and move the tar ball script into a new task.
If it follows the convention of the existing packages, it should be fine. E.g. tar extracts gdc into 'x86_64-unknown-linux-gnu/bin/gdc'.
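Something along these lines might do it (untested, and the dest/ and pkg/ staging paths are just placeholders, not what buildci.sh actually uses):

# Stage the install instead of writing into the container's /usr, then
# re-root it under the target triplet before creating the tarball.
make -C build DESTDIR="${PWD}/dest" install || exit 1
mkdir -p "pkg/${build_host}"
cp -a dest/usr/. "pkg/${build_host}/" || exit 1
tar -C pkg -cJf "gdc-${build_host}.txz" "${build_host}" || exit 1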
Sep 16 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 10:42:27 UTC, Iain Buclaw 
wrote:
 On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 The way it's being done right now is that 'make install' 
 installs to the /usr prefix. After that a tarball of this 
 prefix is created (via tar cJf gdc-triplet.txz /usr). I'm not 
 sure if that's suitable as a release as is because tar omits 
 the root / so the result will be extracted as usr/
 There isn't a lot of time budget left in that task but it 
 should be possible to run some more scripts.
 If the time limit won't suffice it should be possible to cache 
 /usr and move the tar ball script into a new task.
If it follows the convention of the existing packages, it should be fine. e.g: tar extracts gdc into 'x86_64-unknown-linux-gnu/bin/gdc'
The tar ball is 443MiB. That's because it includes half the docker container :)

The buildci script [1] uses a hard-coded --prefix=/usr and lib-dirs=/usr/lib. Is there a particular reason for that? Or, rather, could I just change it or introduce a variable prefix in order to be able to use an isolated directory?

[1] https://github.com/W-joe/gcc/blob/master-ci/buildci.sh#L274-L280
Sep 16 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 12:50:57 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 10:42:27 UTC, Iain Buclaw 
 wrote:
 On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 The way it's being done right now is that 'make install' 
 installs to the /usr prefix. After that a tarball of this 
 prefix is created (via tar cJf gdc-triplet.txz /usr). I'm not 
 sure if that's suitable as a release as is because tar omits 
 the root / so the result will be extracted as usr/
 There isn't a lot of time budget left in that task but it 
 should be possible to run some more scripts.
 If the time limit won't suffice it should be possible to 
 cache /usr and move the tar ball script into a new task.
If it follows the convention of the existing packages, it should be fine. e.g: tar extracts gdc into 'x86_64-unknown-linux-gnu/bin/gdc'
The tar ball is 443MiB. That's because it includes half the docker container :)
You should be able to get yourself down to about ~30MB. ;-) I'll see if there's any miscellaneous tweaks you can do, but the most obvious one is `strip --strip-debug`.
 The buildci script [1] uses hard coded --prefix=/usr and 
 lib-dirs=/usr/lib.
 Is there a particular reason for that ?
 Or, rather, could I just change it or introduce a variable 
 prefix in order to be able to use an isolated directory ?

 [1] 
 https://github.com/W-joe/gcc/blob/master-ci/buildci.sh#L274-L280
IIRC, that top line just matches a Debian/Ubuntu-built gcc (in the hope that no weirdness would happen when running the testsuite). Seems reasonable to break it out into a variable that can be overridden by the CI.

Just looking at an old binary, the builder used `--prefix=/home/build/share/cache/install/x86_64-unknown-linux-gnu`. Not saying that you should do the same, but the last part being the target triplet is the key.

...

It may only be a marginal gain, but I find that --disable-libstdcxx-pch helps with speeding up incremental builds (a long time is spent compiling headers in libstdc++).
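For example, roughly (untested; GCC_PREFIX is just a placeholder name, the paths are illustrative, and the flag list is trimmed down, not the script's actual configure line):

# Default to /usr, but let the CI override it with an isolated,
# triplet-named install directory.
GCC_PREFIX="${GCC_PREFIX:-/usr}"
../gcc/configure \
    --prefix="${GCC_PREFIX}" \
    --libdir="${GCC_PREFIX}/lib" \
    --libexecdir="${GCC_PREFIX}/lib" \
    --enable-languages=c,c++,d \
    --disable-libstdcxx-pch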
Sep 16 2020
next sibling parent wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 21:05:27 UTC, Iain Buclaw 
wrote:
 On Wednesday, 16 September 2020 at 12:50:57 UTC, wjoe wrote:
[...]
You should be able to get yourself down to about ~30MB. ;-) I'll see if there's any miscellaneous tweaks you can do, but the most obvious one is `strip --strip-debug`. [...]
I guess the /usr prefix makes sense if it's built for the .deb package.
 [...]
A triplet prefix it is :)
 [...]
I think this may come in handy.
Sep 16 2020
prev sibling next sibling parent wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 21:05:27 UTC, Iain Buclaw 
wrote:
 You should be able to get yourself down to about ~30MB. ;-)

 I'll see if there's any miscellaneous tweaks you can do, but 
 the most obvious one is `strip --strip-debug`.
`make install-strip` should do the trick.

Also, the final installation docs [1] advise against stripping the GNAT runtime, as this would break certain features of the debugger that depend on this debugging information (catching Ada exceptions, for instance). Is that relevant?

[1] https://gcc.gnu.org/install/finalinstall.html
Sep 16 2020
prev sibling parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 21:05:27 UTC, Iain Buclaw 
wrote:
 On Wednesday, 16 September 2020 at 12:50:57 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 10:42:27 UTC, Iain Buclaw 
 wrote:
 On Wednesday, 16 September 2020 at 09:55:22 UTC, wjoe wrote:
 The way it's being done right now is that 'make install' 
 installs to the /usr prefix. After that a tarball of this 
 prefix is created (via tar cJf gdc-triplet.txz /usr). I'm 
 not sure if that's suitable as a release as is because tar 
 omits the root / so the result will be extracted as usr/
 There isn't a lot of time budget left in that task but it 
 should be possible to run some more scripts.
 If the time limit won't suffice it should be possible to 
 cache /usr and move the tar ball script into a new task.
If it follows the convention of the existing packages, it should be fine. e.g: tar extracts gdc into 'x86_64-unknown-linux-gnu/bin/gdc'
The tar ball is 443MiB. That's because it includes half the docker container :)
You should be able to get yourself down to about ~30MB. ;-) I'll see if there's any miscellaneous tweaks you can do, but the most obvious one is `strip --strip-debug`.
 The buildci script [1] uses hard coded --prefix=/usr and 
 lib-dirs=/usr/lib.
 Is there a particular reason for that ?
 Or, rather, could I just change it or introduce a variable 
 prefix in order to be able to use an isolated directory ?

 [1] 
 https://github.com/W-joe/gcc/blob/master-ci/buildci.sh#L274-L280
IIRC, that top line just matches Debian/Ubuntu built gcc (in the hope that no weirdness would happen when running testsuite). Seems reasonable to break it out into a variable that can be overridden by the CI. Just looking at an old binary, the builder used `--prefix=/home/build/share/cache/install/x86_64-unknown-linux-gnu`. Not saying that you should do the same, but the last part being the target triplet is the key. ... It may only be a marginal gain, but I find that --disable-libstdcxx-pch helps with speeding up incremental builds (a long time is spent compiling headers in libstdc++).
Installation and creation of the tarball now work as expected. It contains a folder named after the build_target triplet and includes the libstdcxx and libphobos headers/sources as well as the static/shared libs, the binaries, and the man pages.

It took 1:54 to build and weighs 316MB, but it isn't stripped yet. I'm still waiting on the results of make install-strip and of disabling the pre-compiled headers.

Once those are done I'll set up a GitHub action to upload the tarball to the repository releases. Maybe it would be a good idea to tag the directory name with a version ID, or to make a version directory there and install into that?
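Something like this might work for the upload step, whether run from an action or straight from the CI job (untested sketch using the GitHub CLI; it assumes gh and a token with release permissions are available, and the tag name is made up):

# Create the release if it doesn't exist yet, then attach the tarball;
# --clobber replaces an asset of the same name on re-runs.
TAG="ci-${build_host}-$(date +%Y%m%d)"
gh release create "${TAG}" --notes "CI build ${TAG}" || true
gh release upload "${TAG}" "gdc-${build_host}.txz" --clobber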
Sep 17 2020
next sibling parent wjoe <invalid example.com> writes:
On Thursday, 17 September 2020 at 14:15:34 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 21:05:27 UTC, Iain Buclaw
 [...]
Installation and creation of the tar ball now works as expected. It contains a folder named after the build_target triplet and includes the libstdcxx, libphobos headers/sources as well as the static/shared libs, the binaries and man pages. It took 1:54 to build and weighs 316MB but it isn't stripped yet. Still waiting on the results of make install-strip and with pre-compiled headers disabled. Once those are done I'll setup a github action to upload the tar ball to the repository releases. Maybe it would be a good idea to tag the directory name with a version ID or to make a version directory there and install into that ?
make install-strip cuts the time for the Package task by 20 minutes (~4 minutes, down from ~24). The tarball now weighs 54MiB, but that's still nowhere close to 30.
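Maybe something like this could close the gap (just an idea, assuming the staged pkg/<triplet> layout sketched earlier in the thread; whether the docs and the static archives' debug info should stay in the release tarball is an open question):

# Drop generated docs and strip debug info from the static archives, then
# recompress; XZ_OPT=-9 asks tar's xz filter for maximum compression.
rm -rf "pkg/${build_host}/share/info" "pkg/${build_host}/share/doc"
find "pkg/${build_host}" -name '*.a' -exec strip --strip-debug {} +
XZ_OPT=-9 tar -C pkg -cJf "gdc-${build_host}.txz" "${build_host}"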
Sep 17 2020
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Thursday, 17 September 2020 at 14:15:34 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 21:05:27 UTC, Iain Buclaw 
 wrote:
 On Wednesday, 16 September 2020 at 12:50:57 UTC, wjoe wrote:
[...]
You should be able to get yourself down to about ~30MB. ;-) I'll see if there's any miscellaneous tweaks you can do, but the most obvious one is `strip --strip-debug`.
 [...]
IIRC, that top line just matches Debian/Ubuntu built gcc (in the hope that no weirdness would happen when running testsuite). Seems reasonable to break it out into a variable that can be overridden by the CI. Just looking at an old binary, the builder used `--prefix=/home/build/share/cache/install/x86_64-unknown-linux-gnu`. Not saying that you should do the same, but the last part being the target triplet is the key. ... It may only be a marginal gain, but I find that --disable-libstdcxx-pch helps with speeding up incremental builds (a long time is spent compiling headers in libstdc++).
Installation and creation of the tar ball now works as expected. It contains a folder named after the build_target triplet and includes the libstdcxx, libphobos headers/sources as well as the static/shared libs, the binaries and man pages. It took 1:54 to build and weighs 316MB but it isn't stripped yet. Still waiting on the results of make install-strip and with pre-compiled headers disabled. Once those are done I'll setup a github action to upload the tar ball to the repository releases. Maybe it would be a good idea to tag the directory name with a version ID or to make a version directory there and install into that ?
I can provide a server for you to upload to. Git tags for versioning might be interesting for uploads, but they're not going to be binaries that will stay around for too long.

This is a bastardized version of the git gcc-descr alias:

git describe --all --abbrev=40 --match 'basepoints/gcc-[0-9]*' origin/master | sed -n 's,^\(tags/\)\?basepoints/gcc-,r,p'

That should get you a tag like r11-3179-g5de41c886207a3a0ff1f44ce0a5a644e9d9a17f8
Sep 18 2020
parent reply wjoe <invalid example.com> writes:
On Saturday, 19 September 2020 at 00:29:37 UTC, Iain Buclaw wrote:
 [...]

 I can provide a server for you to upload to.  Git tags for
Great.
 versioning might be interesting for uploads, but they're not 
 going to be binaries that will stay around for too long.

 This is a bastardized version of the git gcc-descr alias.

 git describe --all --abbrev=40 --match 'basepoints/gcc-[0-9]*' 
 origin/master | sed -n 's,^\(tags/\)\?basepoints/gcc-,r,p'

 That should get you a tag like 
 r11-3179-g5de41c886207a3a0ff1f44ce0a5a644e9d9a17f8
Thanks.
Sep 19 2020
parent wjoe <invalid example.com> writes:
On Saturday, 19 September 2020 at 09:53:29 UTC, wjoe wrote:
 This is a bastardized version of the git gcc-descr alias.

 git describe --all --abbrev=40 --match 'basepoints/gcc-[0-9]*' 
 origin/master | sed -n 's,^\(tags/\)\?basepoints/gcc-,r,p'

 That should get you a tag like 
 r11-3179-g5de41c886207a3a0ff1f44ce0a5a644e9d9a17f8
Thanks.
This fails with:
 fatal: No names found, cannot describe anything.
This is probably because of a shallow clone, and maybe also because there aren't any tags. Would this tag be 'basepoints/gcc-10'?
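A possible workaround might be to deepen the clone and fetch the basepoints tags before running the describe pipeline (untested here, and it assumes the CI checkout really is shallow):

# Deepen the shallow CI checkout and pull in the basepoints tags so that
# git describe has something to match against.
git fetch --unshallow origin
git fetch origin 'refs/tags/basepoints/*:refs/tags/basepoints/*'
git describe --all --abbrev=40 --match 'basepoints/gcc-[0-9]*' origin/master |
    sed -n 's,^\(tags/\)\?basepoints/gcc-,r,p'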
Sep 19 2020
prev sibling parent wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 00:49:52 UTC, Seb wrote:
 If you manage to build tarballs of the binaries, I think a lot 
 of people would greatly appreciate if they are made available 
 (e.g. can be done directly on GitHub via "Releases").
 For example, see https://github.com/dlang/installer/pull/251, 
 https://forum.dlang.org/thread/xktompypwvaabwebnjol forum.dlang.org, or
https://forum.dlang.org/thread/bnkbldsifjhsseswiceq forum.dlang.org .
Cirrus CI doesn't support publishing releases to the GitHub repository directly (at least not at the moment), but they provide an example script of how that can be achieved. I think such a publish script is even better, because it makes it possible to push the downloads to external mirrors as well.
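A rough sketch of such a publish step could look like this (not Cirrus's own example script; REPO and RELEASE_ID are placeholders the real script would have to fill in, e.g. from the Cirrus environment or a prior API call, and GITHUB_TOKEN would be an encrypted variable):

# Upload the tarball as an asset of an existing GitHub release.
REPO="w-joe/gcc"        # placeholder repository
RELEASE_ID="12345678"   # placeholder release id
curl -sS -X POST \
    -H "Authorization: token ${GITHUB_TOKEN}" \
    -H "Content-Type: application/x-xz" \
    --data-binary @"gdc-${build_host}.txz" \
    "https://uploads.github.com/repos/${REPO}/releases/${RELEASE_ID}/assets?name=gdc-${build_host}.txz"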
Sep 16 2020
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 I've added the above tasks and they are reported to have been 
 completed successfully in the Cirrus summary.
 However, on a closer look I can spot multiple failures or 
 files/directories which can't be found.
 - Some 12 failed tests as well as 10 unresolved test cases in 
 the unittest step.
 - The build package step failed with: cd ./libcc1: no such file 
 or directory.
I had a look at the most recent build, and they are all timeout failures. This has happened locally on some rare occasions. It is related to building the phobos unittests with optimizations enabled.

The default timeout value is 300, maybe it should be increased to 600. It should be as simple as adding:

global tool_timeout
set tool_timeout 600

inside this proc:
https://github.com/D-Programming-GDC/gcc/blob/8177cfa01e10aabb29bc8496657ff0b847e9ceb9/libphobos/testsuite/lib/libphobos.exp#L96-L100
Sep 16 2020
parent reply wjoe <invalid example.com> writes:
On Wednesday, 16 September 2020 at 10:57:46 UTC, Iain Buclaw 
wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 [...]
I had a look at the most recent build, and they are all timeout failures. This has happened locally on some rare occasions. It is related to building the phobos unittests with optimizations enabled.

The default timeout value is 300, maybe it should be increased to 600. It should be as simple as adding:

global tool_timeout
set tool_timeout 600

inside this proc:
https://github.com/D-Programming-GDC/gcc/blob/8177cfa01e10aabb29bc8496657ff0b847e9ceb9/libphobos/testsuite/lib/libphobos.exp#L96-L100
Done. Is it supposed not to error out on timeout failures?
Sep 16 2020
parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 16 September 2020 at 11:17:41 UTC, wjoe wrote:
 On Wednesday, 16 September 2020 at 10:57:46 UTC, Iain Buclaw 
 wrote:
 On Tuesday, 15 September 2020 at 16:00:56 UTC, wjoe wrote:
 [...]
I had a look at the most recent build, and they are all timeout failures. This has happened locally on some rare occasions. It is related to building the phobos unittests with optimizations enabled.

The default timeout value is 300, maybe it should be increased to 600. It should be as simple as adding:

global tool_timeout
set tool_timeout 600

inside this proc:
https://github.com/D-Programming-GDC/gcc/blob/8177cfa01e10aabb29bc8496657ff0b847e9ceb9/libphobos/testsuite/lib/libphobos.exp#L96-L100
Done. Is it supposed not to error out on timeout failures?
The testsuite runs until completion; it doesn't exit after the first failure. The buildci script can be tweaked to return failure as part of the final step if there are any FAIL results in the summary log.
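Something as simple as this in the final step might do (the path to the .sum file is a guess at the build layout, not what the script currently uses):

# Fail the CI step if the DejaGnu summary recorded any test failures.
if grep -q '^FAIL: ' build/*/libphobos/testsuite/libphobos.sum; then
    echo "FAIL results found in the testsuite summary log"
    exit 1
fi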
Sep 16 2020
prev sibling parent reply kinke <noone nowhere.com> writes:
On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 Mac is a single core VM on community cluster.
Nope, it's a dual-core VM with hyperthreading and so 4 logical cores.
Sep 10 2020
parent reply wjoe <invalid example.com> writes:
On Thursday, 10 September 2020 at 09:30:15 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 Mac is a single core VM on community cluster.
Nope, it's a dual-core VM with hyperthreading and so 4 logical cores.
All the better. For whatever reason, the way I understood it, it's a VM with 1 physical core and hyperthreading = 2 logical CPUs :)

It's still a rather meager setup, but here's hoping it will be enough to complete the task within the 2-hour time limit.
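If the build doesn't already scale its -j level, something like this should at least use whatever the VM offers (just a sketch; how buildci.sh actually sets its make flags isn't shown here):

# Use all logical CPUs the VM reports; getconf works on both the Linux
# containers and the macOS VM.
NPROC="$(getconf _NPROCESSORS_ONLN)"
make -j"${NPROC}"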
Sep 11 2020
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Friday, 11 September 2020 at 12:19:52 UTC, wjoe wrote:
 On Thursday, 10 September 2020 at 09:30:15 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 Mac is a single core VM on community cluster.
Nope, it's a dual-core VM with hyperthreading and so 4 logical cores.
All the better. For whatever reason the way I got it it's a VM with 1 physical core and hyperthreading = 2 logical CPUs :) It's still sort of a meager setup. But there's hoping that it will be enough to complete the task within the 2 hours time limit.
You can set build_bootstrap="disable" within the buildci script so it only builds the compiler in one step.
Sep 11 2020
parent wjoe <invalid example.com> writes:
On Friday, 11 September 2020 at 12:30:11 UTC, Iain Buclaw wrote:
 On Friday, 11 September 2020 at 12:19:52 UTC, wjoe wrote:
 On Thursday, 10 September 2020 at 09:30:15 UTC, kinke wrote:
 On Wednesday, 9 September 2020 at 18:32:07 UTC, wjoe wrote:
 Mac is a single core VM on community cluster.
Nope, it's a dual-core VM with hyperthreading and so 4 logical cores.
All the better. For whatever reason the way I got it it's a VM with 1 physical core and hyperthreading = 2 logical CPUs :) It's still sort of a meager setup. But there's hoping that it will be enough to complete the task within the 2 hours time limit.
You can set build_bootstrap="disable" within the buildci script so it only builds the compiler in one step.
I'll keep that in mind. Thanks.
Sep 11 2020