Because Bazel manages the complete dependency tree, it becomes incredibly easy to integrate Bazel with CI systems. For the most part, the entire Bazel CI build/testing system can be reduced down to a couple of Bazel commands.
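Concretely, those commands look something like this (the `//src/...` target pattern is an assumption based on the `src` directory mentioned below; run them from the workspace root):

```shell
# Build every target under src, then run every test under src.
bazel build //src/...
bazel test //src/...
```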
That’s it! Those two commands run all tests and build all targets under src, which is the primary goal of any CI system.
We can do better, however, by actually running these commands on a CI system. I chose Travis CI, as it is free for open-source projects and simple to set up. I’ve augmented my bazel_examples project over at https://github.com/curtismuntz/bazel_examples.
Docker CI Solution Within Travis
I’ve decided to maintain a Bazel Dockerfile in order to satisfy my reproducible-build requirements. The container installs some basic compiler components and then uses the bazelisk project in order to have a run-time-selectable version of Bazel. This is super useful in CI systems, as you can increment the version of the build tool and automatically verify that the entire build still works. The container can be found on Docker Hub here.
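A minimal sketch of such a Dockerfile might look like the following. The base image, package list, and bazelisk version here are assumptions for illustration, not the actual container:

```dockerfile
# Hypothetical sketch of a Bazel CI container; versions and packages are assumptions.
FROM ubuntu:22.04

# Basic compiler components and tools the build needs.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential git python3 curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Install bazelisk as the `bazel` binary. At run time it downloads whichever
# Bazel version is pinned by .bazelversion or the USE_BAZEL_VERSION env var,
# which is what makes the build-tool version selectable per run.
RUN curl -fsSL -o /usr/local/bin/bazel \
    https://github.com/bazelbuild/bazelisk/releases/download/v1.19.0/bazelisk-linux-amd64 \
    && chmod +x /usr/local/bin/bazel
```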
I headed over to https://travis-ci.org/ in order to set up a public repository for Travis. Once set up, I added the following
.travis.yml file to the workspace root, and Travis started an automated build running the CI script.
sudo is needed in order to run Docker commands on a Travis worker instance.
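A minimal .travis.yml along these lines might look like the following (the script path is a hypothetical placeholder, not the path from my repository):

```yaml
language: minimal
sudo: required        # needed to run Docker commands on the worker
services:
  - docker
script:
  - ./ci/build.sh     # hypothetical path to the CI script
```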
The CI Script
My script simply spins up a Docker container and execs the Bazel commands to test and compile for both the x86 and Raspberry Pi toolchains. The CI container is given a specific bazelrc file that enables sandboxed compilation and testing, adds verbose failures, and sets the Bazel server to start up in batch mode.
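That bazelrc might contain something like the following; the exact flag selection is an assumption, but each flag shown is a real Bazel option (note that `test` inherits `build` options):

```
# Run the Bazel server in batch mode: exit after each command instead of
# keeping a resident server, which suits ephemeral CI workers.
startup --batch

# Run compile and test actions inside the sandbox.
build --spawn_strategy=sandboxed

# Print the full command line for any failing action.
build --verbose_failures
```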
With set -e, bash exits if any subcommand fails. This is necessary to stop the whole build in the event of a failure in one of the Docker execs.
Finally, I placed an exit trap so that this script stops the running Bazel container in the event of a failure.
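Putting those pieces together, the script might look roughly like this. The image name, container name, bazelrc path, and target patterns are all assumptions; for illustration the sketch defaults to a dry run that only prints the docker commands, and setting DOCKER=docker makes it execute them for real:

```shell
#!/usr/bin/env bash
# Hypothetical CI script sketch; names and targets are assumptions.
set -e  # exit immediately if any command fails

# Dry run by default so the sketch can be exercised without a Docker daemon;
# run with DOCKER=docker to actually execute the commands.
DOCKER="${DOCKER:-echo docker}"

CONTAINER="bazel_ci"

cleanup() {
  # Exit trap: stop the container even if a build step failed.
  $DOCKER stop "$CONTAINER" >/dev/null 2>&1 || true
}
trap cleanup EXIT

# Start a long-lived container with the workspace mounted.
$DOCKER run -d --name "$CONTAINER" -v "$PWD:/workspace" -w /workspace \
  curtismuntz/bazel sleep infinity

# Test and build for the host, then cross-compile for the Raspberry Pi.
$DOCKER exec "$CONTAINER" bazel --bazelrc=ci/bazelrc test //src/...
$DOCKER exec "$CONTAINER" bazel --bazelrc=ci/bazelrc build //deploy/...
$DOCKER exec "$CONTAINER" bazel --bazelrc=ci/bazelrc build --config=armv7hf //deploy/...
```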
This CI script is incredibly simple, and the build times reflect this. Without caching set up, the script as implemented builds all targets under //deploy for both the x86 and armv7hf architectures from scratch on every run. On this small project it’s not that big of a deal, but on a large repository these build times can be very long.
If I wanted to be a little less wasteful, I could use Bazel’s example CI script, found here. That script is a bit more methodical: it checks which files have changed and uses bazel query to determine which targets need to be rebuilt, then builds and tests only those targets.
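The core of that approach can be sketched as follows; the branch name and the file-to-label handling are simplified assumptions (the real script does a more careful conversion of file paths to Bazel labels):

```shell
# Hypothetical sketch: test only targets affected by changed files.
CHANGED_FILES=$(git diff --name-only origin/master...HEAD)

# Ask Bazel which targets transitively depend on the changed files.
TARGETS=$(bazel query --keep_going \
  "rdeps(//..., set($(echo "$CHANGED_FILES" | tr '\n' ' ')))" || true)

# Build and test only the affected targets, if any.
if [ -n "$TARGETS" ]; then
  echo "$TARGETS" | xargs bazel test
fi
```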