Because Bazel manages the complete dependency tree, it is remarkably easy to integrate with CI systems. For the most part, the entire Bazel CI build and test setup can be reduced to a couple of Bazel commands.
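For this workspace, those would be something like:

```bash
bazel build //src/...
bazel test //src/...
```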
That’s it! Those two commands run all tests and build all targets under src, which is pretty much the primary goal of any CI system.
We can do better, however, by actually running these commands on a real CI system. I chose to work with Travis CI, as it is free for open source projects. I was inspired by a previous Bazel/Travis CI solution that needed some updating from Bazel 0.3 to 0.11, but I was having issues with it due to the spotty uptime of the Debian package download servers. So I decided to package Bazel within a container that could then be used as the builder.
I’ve decided to maintain a Bazel Dockerfile within this repository due to the uptime issues mentioned above. Running within a container also gives me the flexibility to pin the version of Bazel to run (up to and including master) and to add system dependencies should I need to.
The image is heavily inspired by insready’s Bazel container but also installs my system dependencies (currently just python). It is available on Dockerhub.
I headed over to https://travis-ci.org/ to set up the public repository with Travis. Once set up, I added the following .travis.yml file to the workspace root, and Travis started an automated build running the CI script at tools/ci/run_ci_tests.sh.
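A minimal sketch of that file, assuming only Docker and the CI script are needed:

```yaml
sudo: required

services:
  - docker

script:
  - ./tools/ci/run_ci_tests.sh
```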
Note that sudo is needed in order to run Docker commands on a Travis worker instance.
The CI Script
My script simply spins up a Docker container and execs the Bazel commands to test and compile for both the x86 and raspberry pi toolchains. The CI container is given a dedicated bazelrc file that enables sandboxed compilation and testing, adds verbose failures, and starts the Bazel server in batch mode.
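A sketch of that bazelrc (the file name and exact flag spellings here are a best guess for Bazel of this vintage):

```
# ci.bazelrc (name assumed)
startup --batch
build --spawn_strategy=sandboxed
build --genrule_strategy=sandboxed
build --verbose_failures
# 'test' inherits 'build' options, so tests run sandboxed with verbose failures too
```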
By using set -e, bash exits as soon as any command fails. This is necessary to stop the whole build when one of the docker exec calls fails.
Finally, I placed an exit trap so that the script stops the running Bazel container even when it bails out on a failure.
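Putting those pieces together, the script looks roughly like this (the image, container, and bazelrc names are placeholders of mine; the crosstool flags are the ones used later in this post):

```bash
#!/bin/bash
set -e  # abort the whole build as soon as any command fails

# Spin up the builder container with the workspace mounted
docker run -d --name bazel_builder \
    -v "${PWD}:/workspace" -w /workspace \
    example/bazel:latest tail -f /dev/null

# Stop the running Bazel container on exit, even after a failure
trap 'docker stop bazel_builder' EXIT

# Test and build for x86, then cross compile for the raspberry pi toolchain
docker exec bazel_builder bazel --bazelrc=tools/ci/ci.bazelrc test //src/...
docker exec bazel_builder bazel --bazelrc=tools/ci/ci.bazelrc build //src/...
docker exec bazel_builder bazel --bazelrc=tools/ci/ci.bazelrc build \
    --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi //src/...
```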
There is a pretty glaring circular dependency that I’ve introduced into this repository: any time I update one of my Dockerfiles, both Dockerhub and Travis spawn builds. The Dockerhub-maintained image will not be available to Travis immediately, so technically that Travis build runs on an out-of-date container image.
I could move these files out of this repository to mitigate the container image issue, but I felt that the benefits of the monorepo concept outweigh it, especially for an example Bazel project.
I recently decided to take a look at resin.io. Being able to run containers on embedded devices vastly improves our ability to manage and maintain them.
I was wondering if it was possible to use the power of bazel to deploy fast, correct containers to the array of embedded devices I have lying around. It turns out you can, though there are currently a few limitations. The setup consists of:
- bazel rules for creating hermetic docker images for my software
- a bazel crosstool defined for armv7 (the raspberry pi 2 cpu)
- a resin_deploy script for automating the build/deploy to resin
Hello world software
I’ll start with a simple hello world cpp project to demonstrate functionality. It can be run as follows:
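Assuming the binary lives at //src/hello_world (a placeholder path):

```bash
bazel run //src/hello_world:hello_cpp
```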
Bazel building local docker images
Now I can use bazel to run this hello world software within a docker image. Bazel has built-in rules for creating docker images, allowing us to build a docker image and load it into the docker runtime. The stock bazel cc_image() and py_image() rules are sufficient for running on an amd64 platform, but not for running my projects on arm devices, as the underlying base image for these rules was not built for the arm architecture. For this reason, I have to use the container_image() rule. This rule requires a base field, which is effectively the same as the FROM line in a standard Dockerfile.
I’ve created a build target that produces this docker image. It is trivial to have it depend on any cpp/python projects defined under the bazel workspace so that they are included in the docker image.
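A sketch of that target (the rules_docker load path and the repository/target names here are assumptions):

```python
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
    name = "hello_cpp_image",
    # base plays the role of FROM in a Dockerfile
    base = "@amd64_docker_base//image",  # name assumed
    # pull in the cpp binary from elsewhere in the workspace
    files = ["//src/hello_world:hello_cpp"],
    entrypoint = ["/hello_cpp"],
)
```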
Loading and running these images can be accomplished with:
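```bash
bazel run deploy:hello_cpp_image
docker run bazel/deploy:hello_cpp_image
```

The bazel run step builds the image and loads it into the local docker engine; rules_docker tags loaded images as bazel/<package>:<target>, which is where the bazel/deploy:hello_cpp_image tag comes from.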
Cross compiling docker images
Dockerhub-maintained bazel base images
To create a target that can be run either locally or cross compiled for embedded devices, I’ve created two docker base images that are maintained by Dockerhub. These images serve as the base image for any cpp/python projects that I wish to create under this paradigm.
The amd64 image is based on Debian, and the raspberry_pi2 image is based on resin’s resin/armv7hf-debian:stretch image. Resin’s image allows us to emulate an armv7hf environment and prep a Dockerfile to be run on the pi. Any additional software the system needs is installed by install_prereqs.sh, which both Dockerfiles run at build time. This keeps the two platforms in sync and reduces the number of files I have to maintain. I have set up Dockerhub to auto build these projects, so any committed change automatically updates the Dockerhub images’ latest tag.
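A sketch of the raspberry_pi2 Dockerfile (the exact contents and file layout here are a guess):

```dockerfile
FROM resin/armv7hf-debian:stretch

# Shared with the amd64 Dockerfile to keep both platforms in sync
COPY install_prereqs.sh /tmp/install_prereqs.sh
RUN /tmp/install_prereqs.sh
```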
Cross compiling for the raspberry pi
The first step required for this project was to set up a cross compilation toolchain within bazel, which I covered in a previous blog post.
Next, in the BUILD file I simply add a config_setting to detect the cpu type and a select statement on the container_image() base attribute to toggle between docker image bases.
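A sketch (the config_setting and base repository names are my placeholders; the cpu value matches the command below):

```python
config_setting(
    name = "rpi",
    values = {"cpu": "rpi"},
)

container_image(
    name = "hello_cpp_image",
    # pick the base image that matches the target cpu
    base = select({
        ":rpi": "@rpi_docker_base//image",
        "//conditions:default": "@amd64_docker_base//image",
    }),
    files = ["//src/hello_world:hello_cpp"],
    entrypoint = ["/hello_cpp"],
)
```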
Now, when I run bazel run --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi deploy:hello_cpp_image, bazel will pick up the cpu setting, compile with the crosstool definition, select the proper container base image, and load the produced image into docker.
Deploying to an IoT device using resin.io
Resin’s CLI allows us to deploy local docker images to a resin app by running resin deploy HelloImage bazel/deploy:hello_cpp_image.
Remembering all the steps to get my cpp project compiled and deployed can be tricky, so I wrote a script to automate the process.
Running the helper script compiles the hello cpp image with optimized compiler parameters, loads it into the docker engine, and then deploys it up to resin.io. Now I can scale my hello world to as many raspberry pis as I want!
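A sketch of the helper, with -c opt standing in for the optimized compiler parameters (that flag choice is my assumption; the two commands are the ones shown above):

```bash
#!/bin/bash
set -e

# Cross compile the image for the pi with optimizations and load it into docker
bazel run -c opt \
    --crosstool_top=//compilers/arm_compiler:toolchain \
    --cpu=rpi \
    deploy:hello_cpp_image

# Push the locally loaded image up to the resin application
resin deploy HelloImage bazel/deploy:hello_cpp_image
```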
Bazel’s pkg_tar() rule does not currently package the runtime deps along with the binary. This is most evident when trying to package python binaries: the hello world python package fails to run in a docker container when executed as the bazel-built binary.
There is currently work under way to fix this limitation. In the meantime, I’ve pointed my entrypoint at python3 hello.py instead of the bazel-built binary.
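That workaround would look something like this in the image target (the names and paths here are placeholders):

```python
container_image(
    name = "hello_py_image",
    base = "@amd64_docker_base//image",  # name assumed
    # ship the script itself rather than relying on the packaged py_binary
    files = ["hello.py"],
    # workaround: invoke the interpreter directly
    entrypoint = ["python3", "/hello.py"],
)
```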
Resin currently only allows one dockerfile to be run on each device at a time. Multi container support is one of their most requested features, but for now I can get away with running multiple processes inside a container through the use of projects such as supervisord, Monit, or even docker-in-docker. See the update at the end of this post.
As far as I can tell, resin deploy only works against the cloud. If resin had a quicker way to deploy locally, say to a device on my own network, it would allow for rapid development and go a long way toward making this workflow fully functional. It sounds like they are working on this feature: https://github.com/resin-io/resin-cli/issues/613.
Update: Resin has released multicontainer support since this was posted. I am working on a follow-up post to investigate multicontainer support with bazel.