
D.gnu - [Testing] Using Travis CI (or better alternative) for master/branch/PR

reply "Iain Buclaw" <ibuclaw gdcproject.org> writes:
As the autotester seems to be broken indefinitely for the time 
being, I've been playing around with Travis for builds.

https://travis-ci.org/ibuclaw/GDC/branches

A couple of show stoppers I've been running into:
- Time to build, run testsuite, run unittests exceeds quota (50 
minutes)
- Memory consumption exceeds quota (claims to be a hard 3GB)

This is interesting: to speed up builds, it might seem logical to 
increase the number of parallel jobs. In fact, this conflicts 
directly with the memory consumption quota, meaning that the 
build needs to be carefully split up to run at different parallel 
levels depending on the memory used.
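For what it's worth, the split could be sketched as a tiny helper that derives a safe -j level from the memory quota and an assumed per-job cost. The MB figures below are illustrative guesses, not measurements from the actual GDC build:

```shell
#!/bin/sh
# Pick a parallel level that keeps estimated memory use under the CI quota.
# Usage: pick_jobs <quota_mb> <per_job_mb> [max_jobs]
pick_jobs() {
    quota_mb=$1; per_job_mb=$2; max_jobs=${3:-4}
    jobs=$((quota_mb / per_job_mb))
    # Clamp to at least one job, and never exceed the CPU count.
    [ "$jobs" -lt 1 ] && jobs=1
    [ "$jobs" -gt "$max_jobs" ] && jobs=$max_jobs
    echo "$jobs"
}

QUOTA_MB=3072  # Travis' hard 3GB limit
echo "light sources: -j$(pick_jobs $QUOTA_MB 256)"   # full parallelism
echo "heavy sources: -j$(pick_jobs $QUOTA_MB 2048)"  # effectively serial
```

The build would then invoke make twice, once per group: the cheap objects at the high -j level, and the known memory-hungry sources at the low one.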

What seems to be a total blocker is that I seem to be getting 
inconsistent results (in relation to out-of-memory errors) 
depending on which host the build is running on.

So, I'd be willing to hear of alternatives:

https://semaphoreci.com - However, CPUs given are 2, and time to 
build and run tests is limited to 60 minutes.
https://drone.io - However, time to build and run tests cannot 
exceed 15 minutes.
https://codeship.com - However, it does not appear to support C++ 
builds.


Iain.
Jun 22 2015
next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Mon, 22 Jun 2015 08:56:53 +0000
schrieb "Iain Buclaw" <ibuclaw gdcproject.org>:

 As the autotester seems to be broken indefinitely for the time 
 being, I've been playing around with Travis for builds.
 
 https://travis-ci.org/ibuclaw/GDC/branches
 
 A couple of show stoppers I've been running into:
 - Time to build, run testsuite, run unittests exceeds quota (50 
 minutes)
 - Memory consumption exceeds quota (claims to be a hard 3GB)
 
 This is interesting, to speed up builds it might be considered 
 logical to increase the number of parallel jobs, infact this 
 conflicts directly with the memory consumption quota, meaning 
 that the build needs to be carefully split up to run at different 
 parallel levels depending on the memory used.
 
 What seems to be a total blocker is that I seem to be getting 
 inconsistent results (in relation to out-of-memory errors) 
 depending on which host the build is running on.
Maybe drop them a mail asking whether they would consider upgrading the quotas for GDC.

Regarding inconsistent results: if it's related to the build environment, using docker might help?
 
 Iain.
BTW: I'd also like to run build tests for arm and mingw in the future. We could simply test whether we can build the x86_64=>mingw/arm cross-compilers.

I'm currently preparing the last changes for crosstool-ng, and once that's done I'll publish the docker container used to build the gdcproject.org downloads. Then building the cross-compilers will be as simple as:

docker run -v docker-shared:/home/build/shared -t gdc/build-gdc \
    /usr/bin/build-gdc build \
    --toolchain=x86_64-linux-gnu/gcc-snapshot/x86_64-w64-mingw32
Jun 22 2015
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 22 June 2015 at 19:58, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Mon, 22 Jun 2015 08:56:53 +0000
 schrieb "Iain Buclaw" <ibuclaw gdcproject.org>:

 As the autotester seems to be broken indefinitely for the time
 being, I've been playing around with Travis for builds.

 https://travis-ci.org/ibuclaw/GDC/branches

 A couple of show stoppers I've been running into:
 - Time to build, run testsuite, run unittests exceeds quota (50
 minutes)
 - Memory consumption exceeds quota (claims to be a hard 3GB)

 This is interesting, to speed up builds it might be considered
 logical to increase the number of parallel jobs, infact this
 conflicts directly with the memory consumption quota, meaning
 that the build needs to be carefully split up to run at different
 parallel levels depending on the memory used.

 What seems to be a total blocker is that I seem to be getting
 inconsistent results (in relation to out-of-memory errors)
 depending on which host the build is running on.
Maybe drop them a mail if they would consider upgrading the quotas for GDC. Regarding inconsistent results: If it's related to the build environment using docker might help?
I suspect that would require money. https://travis-ci.com/plans (their production build servers allow up to 2 hours).

The environment should be clean upon each new build. The problem, I suspect, depends on the current load of the host server where the VM / sandbox is running. If it is already under high load (or not), this may drastically change the order in which sources are compiled in parallel.

What I should do is split the testsuite and unittests into two different environments; this will give us a better chance of completing in the short timeframe.

On the topic of keeping memory down: what I'd like to do eventually is integrate at least all glue structures into the GCC GC. It might also be possible to free all front-end-allocated memory once we've finished codegen. This is something that will need to be investigated (add it to our todo list?)
 BTW: I'd also like to run build-tests for arm and mingw in the future.
 We could simply test if we can build the x86_64=>mingw/arm
 cross-compilers.

 I'm currently preparing the last changes for crosstool-ng and once
 that's done I'll publish the docker container to build the
 gdcproject.org downloads. Then building the cross compilers will be as
 simple as:

 docker run -v docker-shared:/home/build/shared -t
 gdc/build-gdc /usr/bin/build-gdc build
 --toolchain=x86_64-linux-gnu/gcc-snapshot/x86_64-w64-mingw32
Cool. I suspect you will still need to apply the memory hack for ld though, as I have done.

Iain.
Jun 22 2015
parent Johannes Pfau <nospam example.com> writes:
Am Mon, 22 Jun 2015 20:37:40 +0200
schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 On 22 June 2015 at 19:58, Johannes Pfau via D.gnu
 <d.gnu puremagic.com> wrote:
 Am Mon, 22 Jun 2015 08:56:53 +0000
 schrieb "Iain Buclaw" <ibuclaw gdcproject.org>:

 As the autotester seems to be broken indefinitely for the time
 being, I've been playing around with Travis for builds.

 https://travis-ci.org/ibuclaw/GDC/branches

 A couple of show stoppers I've been running into:
 - Time to build, run testsuite, run unittests exceeds quota (50
 minutes)
 - Memory consumption exceeds quota (claims to be a hard 3GB)

 This is interesting, to speed up builds it might be considered
 logical to increase the number of parallel jobs, infact this
 conflicts directly with the memory consumption quota, meaning
 that the build needs to be carefully split up to run at different
 parallel levels depending on the memory used.

 What seems to be a total blocker is that I seem to be getting
 inconsistent results (in relation to out-of-memory errors)
 depending on which host the build is running on.
Maybe drop them a mail if they would consider upgrading the quotas for GDC. Regarding inconsistent results: If it's related to the build environment using docker might help?
I suspect that would require money. https://travis-ci.com/plans (Their production build servers allow up to 2 hours)
I don't know. It sounds like these plans are meant for commercial users. They probably only have limits for OSS projects to avoid abuse (like running a bitcoin miner or something), so I could imagine they might lift the limits on a case-by-case basis. OTOH, if Semaphore works fine, that's even better!
Jun 23 2015
prev sibling parent reply "Marko Anastasov" <marko renderedtext.com> writes:
On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time 
 to build and run tests is limited to 60 minutes.
Hi Iain,

Semaphore cofounder here. The first point is correct; however, the 60-minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine.

I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)

Cheers,
Marko
Jun 23 2015
next sibling parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 10:15, Marko Anastasov via D.gnu <d.gnu puremagic.com> wrote:
 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time to build
 and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko,

Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.

1. What kind of storage and memory quotas are in place on the build environment?

Total storage needed for building GCC/GDC can be anywhere up to 4GB, depending on the multilib/multiarch configuration, enabled/disabled features, or other languages. As I am only building specific parts of GCC, I wouldn't expect the size to ever exceed this.

There are also at least 3 "very big" sources in the GCC backend build and at least 2 modules in the GDC library build (many more than that when building unittests) that, if built in parallel, easily exceed the 3GB limit on Travis CI with their combined memory usage. In fact, depending on the GCC version, a single source file and/or module on its own may exceed the memory limit!

https://travis-ci.org/ibuclaw/GDC/builds/67799174#L9590

2. Is the absence of any logging also taken into consideration when timing out builds?

For instance, the D testsuite run can take anywhere up to 15 minutes depending on the number of parallel jobs, and is silent for the entire duration except in the event that a test fails. On Travis, I've had to work around their 10-minute silence limitation by turning up the verbosity, but that comes with its own set of problems with excessive logging.

3. How are caches in Semaphore stored? Is there a size limit for files in cache? Do they expire?

I'd prefer not to download the GCC tarballs from mirrorservice at the start of each build, as they are approaching 100MB in size. On release branches, a new minor version of GCC comes out once every 4 months or so, and in development a snapshot is released every week, though on our side I only bump the snapshot version every month or so. When investigating Travis, the cache turned out to be effectively useless when dealing with tarballs of this size, as it all goes into S3 storage!

Adding in logic to test whether a given tarball is in cache is simple enough on my end. Removing old tarballs after I no longer care about them can be done also, unless your cache servers already take care of that.

Thanks,
Iain.
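The cache check described above could look something like this sketch. The cache path, mirror URL, and tarball name are assumptions for illustration, not the actual build configuration:

```shell
#!/bin/sh
# Download a GCC tarball only on a cache miss, then extract from the cache.
# CACHE_DIR and MIRROR are placeholder defaults, overridable per CI service.
CACHE_DIR=${CACHE_DIR:-"$HOME/.build-cache"}
MIRROR=${MIRROR:-"https://www.mirrorservice.org/sites/sourceware.org/pub/gcc/releases"}

fetch_gcc() {
    version=$1
    tarball="gcc-$version.tar.bz2"
    mkdir -p "$CACHE_DIR"
    if [ -f "$CACHE_DIR/$tarball" ]; then
        echo "cache hit: $tarball"
    else
        echo "cache miss: downloading $tarball"
        curl -fsSL -o "$CACHE_DIR/$tarball" "$MIRROR/gcc-$version/$tarball"
    fi
    # Extraction always happens from the cached copy.
    tar -xf "$CACHE_DIR/$tarball"
}
```

Pruning old tarballs could then be as simple as deleting any cached file whose version no longer matches the one the branch expects.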
Jun 23 2015
prev sibling next sibling parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 12:31, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 10:15, Marko Anastasov via D.gnu <d.gnu puremagic.com>
wrote:
 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time to build
 and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko, Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.
OK, I went ahead and tried it out anyway, and was surprised to find that everything went smoothly on the first (proper) build! So I send out my kudos to Marko on the ease of use (once I got around how the interface works).

https://semaphoreci.com/ibuclaw/gdc

Total time is 36 minutes using -j2, which is *significantly faster* than Travis CI, so I am happy with that. I will also give -j4 a go to see if we can get any improvement over that, and I'm going to try running our unittester with -j2 to see if I hit an OOM-related error.

Some things to note:

- I could not see a way to set environment variables on a per-branch basis, but maybe I just didn't look hard enough.

- Our command list in the build process is quite complex; we should probably put parts of it into a dedicated build script. Rather than having to worry about which HOST or BUILD version of GCC we are fetching, and trying to force that logic into the Semaphore project settings, we could just run some ./semaphoreci-setup.sh to get the correct version of GCC our branch supports.

Iain.
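Such a setup script might, assuming each branch maps to one supported GCC release, look roughly like the sketch below. The branch names and version numbers are illustrative guesses; `BRANCH_NAME` is the environment variable Semaphore exposes for the branch under test:

```shell
#!/bin/sh
# semaphoreci-setup.sh (sketch): derive the GCC version a branch supports,
# so the per-branch logic lives in the repository, not the CI settings.
gcc_version_for_branch() {
    case $1 in
        gcc-4.8*) echo "4.8.4" ;;
        gcc-4.9*) echo "4.9.2" ;;
        gcc-5*)   echo "5.1.0" ;;
        *)        echo "snapshot" ;;  # master tracks the weekly snapshot
    esac
}

echo "building against gcc-$(gcc_version_for_branch "${BRANCH_NAME:-master}")"
```

The CI project settings then shrink to a single `./semaphoreci-setup.sh` invocation, and every branch carries its own toolchain knowledge.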
Jun 23 2015
parent reply "Marko Anastasov" <marko renderedtext.com> writes:
On Tuesday, 23 June 2015 at 14:57:55 UTC, Iain Buclaw wrote:
 On 23 June 2015 at 12:31, Iain Buclaw <ibuclaw gdcproject.org> 
 wrote:
 On 23 June 2015 at 10:15, Marko Anastasov via D.gnu 
 <d.gnu puremagic.com> wrote:
 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and 
 time to build and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko, Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.
OK, I went ahead and tried it out anyway, and was surprised to find that everything went smoothly on the first (proper) build! So I send out my kudos to Marko on the ease of use (once I got around how the interface works).
I’m very happy to hear that — thanks! I’ll reply to your earlier questions below.
 1. What kind of storage and memory quotas are in-place on the 
 build environment?
4GB of storage is fine. Currently we have a ~4GB soft limit on RAM; going over it would notify us, and we’d discuss it with you directly.
 2. Is the absence of any logging also taken into consideration 
 when timing out builds?
No.
 3. How are caches in Semaphore stored?  Is there a size limit 
 for files in cache?  Do they expire?
You can consider them to be on the same local network as the machine running your build. There’s a special `.semaphore-cache` directory which you can use to store arbitrary files:

https://semaphoreci.com/docs/caching-between-builds.html

The cache hit rate is not yet 99%, but it’s our goal to reach it.

Cheers,
Marko
Jun 23 2015
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 18:40, Marko Anastasov via D.gnu <d.gnu puremagic.com> wrote:

 On Tuesday, 23 June 2015 at 14:57:55 UTC, Iain Buclaw wrote:

 On 23 June 2015 at 12:31, Iain Buclaw <ibuclaw gdcproject.org> wrote:

 On 23 June 2015 at 10:15, Marko Anastasov via D.gnu <d.gnu puremagic.com> wrote:

 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:

 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time to
 build and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko, Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.
OK, I went ahead and tried it out anyway, and was surprised to find that everything went smoothly on the first (proper) build! So I send out my kudos to Marko on the ease of use (once I got around how the interface works).
I’m very happy to hear that — thanks! I’ll reply to your earlier questions below.

 1. What kind of storage and memory quotas are in place on the build
 environment?
4GB of storage is fine. Currently we have a ~4GB soft limit on RAM; going over it would notify us, and we’d discuss it with you directly.
OK, good to know. Let's hope that the current parallel settings don't trigger the soft limit too often then.

 3. How are caches in Semaphore stored? Is there a size limit for files in
 cache? Do they expire?
You can consider them to be on the same local network as the machine running your build. There’s a special `.semaphore-cache` directory which you can use to store arbitrary files:

 https://semaphoreci.com/docs/caching-between-builds.html

 The cache hit rate is not yet 99%, but it’s our goal to reach it.
So it is possible to manage the `.semaphore-cache` directory in a scriptable way then? I'm thinking in terms of checking whether a tarball exists and either downloading or extracting it based on the outcome.

I see there is an `Expire Dependency Cache` button for the build; I assume this cleans up the `.semaphore-cache` directory for us.

Iain
Jun 23 2015
parent reply "Marko Anastasov" <marko renderedtext.com> writes:
On Tuesday, 23 June 2015 at 17:48:23 UTC, Iain Buclaw wrote:
 So it is possible to manage the `.semaphore-cache` directory in 
 a scriptable way then?  I'm thinking in terms of checking 
 whether a tarball exists either downloading or extracting based 
 on the outcome.
Yes, the directory is always present and you can write a script based on the presence or absence of its content. Note that Semaphore build commands can be Bash commands too. Some people tend to encapsulate things in a .sh file as things get more complex.
 I see that is an `Expire Dependency Cache` button for the 
 build, I assume this cleans up the `.semaphore-cache` directory 
 for us.
Yes.
Jun 23 2015
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 20:05, Marko Anastasov via D.gnu <d.gnu puremagic.com>
wrote:

 On Tuesday, 23 June 2015 at 17:48:23 UTC, Iain Buclaw wrote:

 So it is possible to manage the `.semaphore-cache` directory in a
 scriptable way then?  I'm thinking in terms of checking whether a tarball
 exists either downloading or extracting based on the outcome.
Yes, the directory is always present and you can write a script based on the presence or absence of its content. Note that Semaphore build commands can be Bash commands too. Some people tend to encapsulate things in a .sh file as things get more complex.
 I see that is an `Expire Dependency Cache` button for the build, I assume
 this cleans up the `.semaphore-cache` directory for us.
Yes.
OK, thanks. I've added in some logic using the exposed `BRANCH_NAME` environment variable, and I'm now saving tarballs to cache. I've just triggered a build for gcc-5, gcc-4.9, and gcc-4.8 on my own repository. I'll re-trigger one last build for master to verify that the tarballs are kept locally between builds, and then I promise I'll stop "hammering" your build servers. :-)

Johannes, I've created a GDC team on semaphoreci and will add you to it, along with the build configuration I've set up so far. I imagine this will be rolled out tomorrow. Let's see how this goes...

Iain.
Jun 23 2015
parent reply Johannes Pfau <nospam example.com> writes:
Am Tue, 23 Jun 2015 20:46:41 +0200
schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

  Johannes, I've created a GDC team on semaphoreci, will add you to it
 along with the build configuration I've set-up so far.  I imagine
 this will be rolled out tomorrow.  Let's see how this goes...
Sounds great! My semaphoreci account is jpf91.
Jun 23 2015
parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 24 June 2015 at 08:32, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Tue, 23 Jun 2015 20:46:41 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

  Johannes, I've created a GDC team on semaphoreci, will add you to it
 along with the build configuration I've set-up so far.  I imagine
 this will be rolled out tomorrow.  Let's see how this goes...
Sounds great! My semaphoreci account is jpf91.
Added you to Team GDC (I kept the organization structure and names identical to our conventions on github). And we are building!

https://semaphoreci.com/d-programming-gdc/gdc

Regards,
Iain
Jun 24 2015
prev sibling next sibling parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 16:57, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 12:31, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 10:15, Marko Anastasov via D.gnu <d.gnu puremagic.com>
wrote:
 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time to build
 and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko, Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.
OK, I went ahead and tried it out anyway, and was surprised to find that everything went smoothly on the first (proper) build! So I send out my kudos to Marko on the ease of use (once I got around how the interface works). https://semaphoreci.com/ibuclaw/gdc Total time is 36 minutes using -j2 - this is *significantly faster* than Travis CI, so I am happy with that. Will also give -j4 a go to see if we can get any improvement over that, also going to try run our unittester with -j2 to see if I hit an OOM related error.
Build went down to 25 minutes. I'm going to try one last test running the unittester with -j4, and then I think we can start considering rolling this out to the main GDC project.

Iain
Jun 23 2015
prev sibling parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 23 June 2015 at 17:28, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 16:57, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 12:31, Iain Buclaw <ibuclaw gdcproject.org> wrote:
 On 23 June 2015 at 10:15, Marko Anastasov via D.gnu <d.gnu puremagic.com>
wrote:
 On Monday, 22 June 2015 at 08:56:54 UTC, Iain Buclaw wrote:
 So, I'd be willing to hear of alternatives:

 https://semaphoreci.com  - However CPU's given are 2, and time to build
 and run tests is limited to 60 minutes.
Hi Iain, Semaphore cofounder here. The first point is correct, however the 60 minute limit applies to single build commands, not the entire build. If you can compose your build of n commands, each < 60 minutes, it'll be fine. I invite you to give Semaphore a try. We'd love your feedback. And I'm here for any questions you may have. :)
Hi Marko, Thanks for the update. Yes, each command step is easily done in 20 minutes. I just have a few questions which I couldn't find answers for in the documentation.
OK, I went ahead and tried it out anyway, and was surprised to find that everything went smoothly on the first (proper) build! So I send out my kudos to Marko on the ease of use (once I got around how the interface works). https://semaphoreci.com/ibuclaw/gdc Total time is 36 minutes using -j2 - this is *significantly faster* than Travis CI, so I am happy with that. Will also give -j4 a go to see if we can get any improvement over that, also going to try run our unittester with -j2 to see if I hit an OOM related error.
Build went down to 25 minutes, going to try one last test running the unittester with -j4, and I think we can start considering rolling this out to the main GDC project.
Builds succeeded, but I witnessed a 2-minute slowdown, so I'm reverting back to -j2 in the unittester.
Jun 23 2015