I decided to take a look at resin.io recently. Being able to run containers on embedded devices vastly improves our ability to manage and maintain them.

I was wondering if it was possible to use the power of Bazel to deploy fast, correct containers to the array of embedded devices that I have lying around. It turns out you can, but there are currently a few limitations.

I’ve set up a project over at https://github.com/curtismuntz/bazel_examples that has all the necessary components:

  • simple hello world cpp project
  • dockerhub maintained bazel base images
  • bazel rules for creating hermetic docker images for my software
  • bazel crosstool defined for armv7 (raspberry pi2 cpu)
  • resin_deploy script for automating the build/deploy to resin

Hello world software

I’ll start with a simple hello world cpp project to demonstrate functionality. It can be run as follows:

~/projects/bazel_examples$ bazel build //src/cpp:hello
INFO: Analysed target //src/cpp:hello (4 packages loaded).
INFO: Found 1 target...
Target //src/cpp:hello up-to-date:
  bazel-bin/src/cpp/hello
INFO: Elapsed time: 1.348s, Critical Path: 0.44s
INFO: Build completed successfully, 7 total actions

~/projects/bazel_examples$ bazel-bin/src/cpp/hello
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5
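
The target behind this is an ordinary cc_binary. The real source lives in the repository; a minimal sketch of what src/cpp/BUILD presumably contains (the source file name is illustrative):

cc_binary(
	name = "hello",
	srcs = ["hello.cc"],
)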

Bazel building local docker images

Now I can use Bazel to run this hello world software within a docker image. Bazel has built-in rules for creating docker images, allowing us to build a docker image and load it into the docker runtime. The stock cc_image() and py_image() rules are sufficient for running on an amd64 platform, but not for running my projects on arm devices, as the underlying base image for those rules was not built for the arm architecture. For this reason, I have to use the container_image() rule. This rule requires a base field, which is effectively the same as the FROM line in a standard Dockerfile.

I’ve created a build target that creates this docker image. It is trivial to depend on any cpp/python projects that I have defined under the bazel workspace such that they are included in the docker image.

load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
	name = "hello_cpp_image",
	base = "@amd64_docker_base//image",
	cmd = ["/opt/hello"],
	mode = "777",
	stamp = True,
	tars = ["//src/cpp:hello_cpp_tar"],
)
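
The //src/cpp:hello_cpp_tar dependency referenced in tars stages the compiled binary under /opt, so that /opt/hello exists inside the image. A minimal sketch of what that pkg_tar target presumably looks like in src/cpp/BUILD (attribute values are illustrative):

load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")

pkg_tar(
	name = "hello_cpp_tar",
	srcs = [":hello"],
	package_dir = "/opt",
)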

Loading and running these images can be accomplished with:

~/projects/bazel_examples$ bazel run deploy:hello_cpp_image
~/projects/bazel_examples$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
bazel/deploy                hello_cpp_image     1c6c37b331b0        48 years ago        472MB

~/projects/bazel_examples$ docker run --rm -it bazel/deploy:hello_cpp_image
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5

Cross compiling docker images

Dockerhub maintained bazel base images

In order to create a target that can be run either locally or cross compiled for embedded devices, I’ve created two docker base images that are maintained on Dockerhub. These images serve as the base image for any cpp/python projects that I wish to create under this paradigm.

~/projects/bazel_examples$ tree -L 1 docker/base_images
base_images/
├── Dockerfile.amd64_docker_base
├── Dockerfile.raspberrypi2_docker_base
└── install_prereqs.sh

The amd64 image is based on Debian, and the raspberry_pi2 image is based on resin’s resin/armv7hf-debian:stretch image. Resin’s image allows us to emulate an armv7hf environment and prepare a Dockerfile to be run on the pi. Any additional software needed by the system is installed in the install_prereqs.sh file, which is run by both Dockerfiles at build time. This keeps both platforms in sync and reduces the number of files that I have to maintain. I have set up Dockerhub to auto-build these projects so that any committed change automatically updates the Dockerhub images with the latest tag.
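
On the Bazel side, these Dockerhub images are made available to the container_image() rules through container_pull repositories in the WORKSPACE. A rough sketch of what that looks like (the registry and repository names are illustrative, not the exact ones used in the repo):

load("@io_bazel_rules_docker//container:container.bzl", "container_pull")

container_pull(
	name = "amd64_docker_base",
	registry = "index.docker.io",
	repository = "curtismuntz/amd64_docker_base",  # illustrative repository name
	tag = "latest",
)

container_pull(
	name = "rpi_docker_base",
	registry = "index.docker.io",
	repository = "curtismuntz/raspberrypi2_docker_base",  # illustrative repository name
	tag = "latest",
)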

Cross compiling for the raspberry pi

The first step required for this project was to set up a cross compilation toolchain within bazel. This was covered in a previous blog post here.

Next, in the BUILD file I simply add a config_setting rule to detect the cpu type and a select statement in the container_image() rule to toggle between docker base images.

load("@io_bazel_rules_docker//container:container.bzl", "container_image")

config_setting(
	name = "rpi",
	values = {"cpu": "rpi"},
)

container_image(
	name = "hello_cpp_image",
	base = select({
		":rpi": "@rpi_docker_base//image",
		"conditions:default": "@amd64_docker_base//image",
	}),
	cmd = ["/opt/hello"],
	mode = "777",
	stamp = True,
	tars = ["//src/cpp:hello_cpp_tar"],
)

Now, when I run bazel run --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi deploy:hello_cpp_image, Bazel will detect the cpu type, compile with the crosstool definition, select the proper container base image, and load the produced image into docker.
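
To save some typing, those flags could be grouped behind a --config shortcut in a .bazelrc. This isn’t part of the example repo, just a sketch of a convenience:

# .bazelrc (hypothetical convenience, not in the repo)
build:rpi --crosstool_top=//compilers/arm_compiler:toolchain
build:rpi --cpu=rpi

With that in place, the same build becomes bazel run --config=rpi deploy:hello_cpp_image.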

Deploying to an IoT device using resin.io

Resin’s CLI allows us to deploy local docker images to a resin app by running resin deploy HelloImage bazel/deploy:hello_cpp_image.

Remembering all the steps to get my cpp project compiled and deployed can be tricky, so I wrote a script to automate the process.

~/projects/bazel_examples$ tools/resin_deploy --help
Usage: tools/resin_deploy [-t <string>] [-a <string>] [-r <string>]
  -t: bazel target (path/to/project:target)
  -a: architecture (must be listed within tools/arm_compiler/CROSSTOOL)
  -r: resin appname (must exist on resin)
  -b: additional build options for bazel (optional)

Running the helper script compiles the hello cpp image with optimized compiler parameters, loads it into the docker engine, and then deploys it up to resin.io. Now I can scale my hello world to as many Raspberry Pis as I want!

~/projects/bazel_examples$ tools/resin_deploy -t deploy:hello_cpp_image -a rpi -r HelloImage -b "-c opt"
.........
BUILDING deploy:hello_cpp_image -c opt for rpi and deploying to HelloImage


INFO: Analysed target //deploy:hello_cpp_image (31 packages loaded).
INFO: Found 1 target...
Target //deploy:hello_cpp_image up-to-date:
  bazel-bin/deploy/hello_cpp_image-layer.tar
INFO: Elapsed time: 102.462s, Critical Path: 96.18s
INFO: Build completed successfully, 53 total actions

INFO: Running command line: bazel-bin/deploy/hello_cpp_image
Loaded image ID: sha256:723defcc2ea9161cb353b66065acebc5fd1935718fcf08798a835408a13a3826
Tagging 723defcc2ea9161cb353b66065acebc5fd1935718fcf08798a835408a13a3826 as bazel/deploy:hello_cpp_image
[Info]    Initializing deploy...
[Info]    Deploying [========================] 100% eta 0s           
[Info]    Remote: Accepting image for application helloimage, owner: REDACTED
[Success] Successfully deployed image: 4d31d957463b4381d8102a51ef7eda6158285cec

Current Limitations

Bazel

Bazel’s pkg_tar() rule does not currently package the runtime deps along with the binary. This is most evident when trying to package python binaries. The hello world python package fails to run in a docker container when executed as the Bazel-built binary:

bash-4.3# python3 hello
Traceback (most recent call last):
  File "hello", line 172, in <module>
	Main()
  File "hello", line 111, in Main
	module_space = FindModuleSpace()
  File "hello", line 86, in FindModuleSpace
	raise AssertionError('Cannot find .runfiles directory for %s' % sys.argv[0])
AssertionError: Cannot find .runfiles directory for hello

There is currently work under way to fix this limitation. In the meantime, I’ve directed my entrypoint to use python3 hello.py instead of the bazel binary.
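
Concretely, that workaround just means pointing the container_image() cmd at the interpreter and the script inside the image rather than at the Bazel-generated stub, roughly like this (the target and path names are illustrative):

container_image(
	name = "hello_py_image",
	base = "@amd64_docker_base//image",
	cmd = [
		"python3",
		"/opt/hello.py",
	],
	tars = ["//src/py:hello_py_tar"],
)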

Resin

Resin currently only allows one Dockerfile to be run on each device at a time. Multi-container support is one of their most requested features, but for now I can get away with running multiple processes inside a container through the use of projects such as supervisord, Monit, or even docker-in-docker. See the update below.

As far as I can tell, resin deploy only deploys to the cloud. If resin had a quicker way to deploy locally, say directly to a device on my network, it would allow for rapid development and go a long way toward making the workflow fully functional. It sounds like they are working on this feature: https://github.com/resin-io/resin-cli/issues/613.

Update

Resin has released multi-container support since this was posted. I am working on a follow-up post to investigate multi-container support with Bazel.