
Video Editing Via CLI

If you’re like me and have zero video editing experience and primarily work on Linux, learning open source video editing tools can be frustrating. I’m slowly learning Blender for this purpose, but recently I’ve taken to using the CLI for most of my video editing needs. I’m not doing anything visually demanding and I stick to basic operations, so the CLI works just fine for me. Plus, all of these commands are automatable.

I’ve included some examples of basic operations below. All you need to install is ffmpeg.

Cutting long videos into smaller clips

Cut long_video.mkv into smaller_clip.mkv from start timestamp 01:07:11 to end timestamp 01:08:13:

ffmpeg -ss 01:07:11 -to 01:08:13 -i long_video.mkv -c copy smaller_clip.mkv
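
Note that -c copy performs a stream copy and can only cut at keyframes, so the clip may start slightly before the requested timestamp. If you need a frame-accurate cut and can tolerate the re-encode time, drop -c copy and specify codecs instead (a sketch; tune codecs and quality to taste):

ffmpeg -ss 01:07:11 -to 01:08:13 -i long_video.mkv -c:v libx264 -c:a aac smaller_clip.mkv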

Concatenating multiple videos together

ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mkv

This reads mylist.txt to determine which files to concatenate and in which order. mylist.txt requires the format:

file 'clip1.mkv'
file 'clip2.mkv'
file 'funny_video.mkv'

If you don’t care about specific ordering, or if you’re disciplined enough to label your clips numerically (shell globs expand in sorted order), you can generate this list automatically with a bash script like so:

#!/bin/bash
set -eux
file_list="file_list.txt"
if [[ -f "${file_list}" ]]; then
  rm "${file_list}"
fi

for fname in *.mkv; do
  echo "file '${fname}'" >> "${file_list}"
done
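
The generated file_list.txt can then be fed straight into the concat command from above:

ffmpeg -f concat -safe 0 -i file_list.txt -c copy combined.mkv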

Combining directory of photos into a timelapse

Things like GoPros can automatically combine their photos into timelapses, but I often find myself over-sampling photos and wanting to re-adjust the time range, fps, or duration of the timelapse video. If you shoot timelapses on an interval timer, ffmpeg gives us most of the tunables:

ffmpeg -r 30 -pattern_type glob -i "*.JPG" -s 1920x1440 -vcodec libx264 output.mp4

Note that the -s flag scales the images used. My camera shoots at 6240x4160, but I downscale the images (to 1920x1440 in the command above). Assuming your camera tags files with the appropriate metadata, the file command can show you information about an image, including its resolution.

▶ file DSCF0062.JPG
DSCF0062.JPG: JPEG image data, Exif standard: [TIFF image data, little-endian, direntries=13, manufacturer=FUJIFILM, model=X-T4, orientation=upper-right, xresolution=186, yresolution=194, resolutionunit=2, software=Digital Camera X-T4 Ver1.20, datetime=2021:08:23 16:53:52], baseline, precision 8, 6240x4160, components 3
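
As a rule of thumb when tuning: at -r 30, every 30 photos become one second of video, so 1800 photos yield a one minute timelapse. Adjusting -r (or pruning the input photos) trades duration against smoothness.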

Huzzah32 Battery Monitoring via ESPHome

I’ve been setting up my house with a ton of sensors and have been having fun with Home Assistant recently. I typically prefer Power over Ethernet (PoE) devices such as the Olimex ESP32-POE-ISO, but I wanted to experiment with a smaller form factor WiFi solution in the event that I need a wireless approach. I found the Adafruit Huzzah32, which is effectively an Adafruit ESP32 Feather with some nice modifications:

We packed everything you love about Feathers: built in USB-to-Serial converter, automatic bootloader reset, Lithium Ion/Polymer charger, and all the GPIO brought out so you can use it with any of our Feather Wings.

I want to use these devices for all sorts of purposes, but I also want them to report their battery voltage back to Home Assistant so that I can recharge them when they get low. There is no built-in ESPHome battery monitor, but ESPHome can read ADC voltages directly.

I found this post from cuddletech that explained how to convert the ADC readings into voltage. In short:

When you read the ADC you’ll get a value like 2339. The ADC value is a 12-bit number, so the maximum value is 4095 (counting from 0). To convert the ADC integer value to a real voltage you’ll need to divide it by the maximum value of 4095, then double it (note above that Adafruit halves the voltage), then multiply that by the reference voltage of the ESP32 which is 3.3V and then finally, multiply that again by the ADC Reference Voltage of 1100mV.
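
Working that example through: 2339 / 4095 ≈ 0.571, doubled gives 1.142, times 3.3 gives 3.770, and times 1.1 gives roughly 4.15 V, a believable reading for a freshly charged LiPo cell.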

I was having problems with this calculation: my readings kept coming out above 7 volts, so I decided to dive in and see what was wrong.

The code on that page references mongoose-os adc. It turns out that its implementation of the ADC assumes a default attenuation value of 11 dB for pin 35:

// ... (truncated)
// From https://github.com/mongoose-os-libs/adc/blob/b1d3bf6312d4c624314b6ca1dee1d4e722fe8417/src/esp32/esp32_adc.c#L40
static struct esp32_adc_channel_info s_chans[8] = {
    {.pin = 36, .ch = ADC1_CHANNEL_0, .atten = ADC_ATTEN_DB_11},
    {.pin = 37, .ch = ADC1_CHANNEL_1, .atten = ADC_ATTEN_DB_11},
    {.pin = 38, .ch = ADC1_CHANNEL_2, .atten = ADC_ATTEN_DB_11},
    {.pin = 39, .ch = ADC1_CHANNEL_3, .atten = ADC_ATTEN_DB_11},
    {.pin = 32, .ch = ADC1_CHANNEL_4, .atten = ADC_ATTEN_DB_11},
    {.pin = 33, .ch = ADC1_CHANNEL_5, .atten = ADC_ATTEN_DB_11},
    {.pin = 34, .ch = ADC1_CHANNEL_6, .atten = ADC_ATTEN_DB_11},
    {.pin = 35, .ch = ADC1_CHANNEL_7, .atten = ADC_ATTEN_DB_11},
};
// ... (truncated)

This means that we have to set the attenuation properly in ESPHome’s adc sensor in order to read the correct range of voltages:

// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L22
#ifdef ARDUINO_ARCH_ESP32
  analogSetPinAttenuation(this->pin_, this->attenuation_);
#endif

Note that ESPHome automatically divides by 4095 and multiplies by the attenuation max value for us, so we’ll have to compensate in order to use cuddletech’s calculation.

// ... (truncated)

// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L61
float value_v = analogRead(this->pin_) / 4095.0f;  // NOLINT

// ... (truncated)

// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L73
case ADC_11db:
 value_v *= 3.9;
 break;

// ... (truncated)
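
In other words, the x that ESPHome hands to its filters at 11 dB attenuation is (raw / 4095) * 3.9, so dividing x by 3.9 recovers the raw / 4095 term that cuddletech’s formula starts from.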

Putting this all together, we can use the following ESPHome yaml configuration to define the calculation for battery voltage:

esphome:
  name: huzzah32_example
  platform: ESP32
  # No built in huzzah32 board, but it seems identical to the featheresp32.
  board: featheresp32

wifi:
  ssid: "myssid"
  password: "supersecretpassword"

# Enable logging
logger:

# Enable Home Assistant API
api:

ota:

status_led:
  pin: LED

sensor:
# Documentation: https://esphome.io/components/sensor/adc.html
- platform: adc

  # https://learn.adafruit.com/adafruit-huzzah32-esp32-feather/power-management
  pin: A13

  name: "ESP32 Battery Voltage"

  update_interval: 10s

  # See https://murt.is/articles/2021-02/huzzah32-battery-monitoring-esphome.md
  attenuation: 11db

  # Calculation based on https://cuddletech.com/?p=1030, modified to account for
  # ESPHome's internal math
  # (https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L59).
  # (x / 3.9) should be the adc measurement converted to Volts.
  filters:
    - lambda: return (x / 3.9) * 2 * 3.3 * 1.1;

And huzzah! We now have mathematically correct battery voltages streaming into Home Assistant from the Huzzah32 board.

Home Assistant Battery Page

I’ll make a follow-up post on calibrating the ADC readings and attempting to decrease power usage.

Auto configure Nvidia Jetson WiFi

I just got a Jetson Nano and figured I’d experiment with it. Step one was getting cross compiling working; now it’s time to get WiFi working.

WiFi has been a serious pain point in my experience working on the TX1/TX2 because of Nvidia’s choice of WiFi chipsets. The built-in Broadcom chip requires some modprobe hacking if you want dynamically configurable WiFi. Luckily, with the release of the Nano, Nvidia went back to the design model that I loved about the TK1: let the user install their own WiFi chipset.

I’ve long maintained that the best dynamically configurable WiFi solution is what I call the “chromecast” experience (it’s where I personally first saw it). The device hosts an access point with a captive portal; the user logs onto the portal and enters SSID and password information. The device reboots once configured, and re-enters the host mode access point if the configuration failed. Luckily, I know of a nice open source project that offers this exact functionality: https://github.com/balena-io/wifi-connect

I’ve pushed a docker container for aarch64 targets that auto configures WiFi if there is no active WiFi connection upon container start. Theoretically, this same container can run on the TX1 and TX2, but Nvidia’s choice of WiFi chipsets within those SOMs has proven tricky to get working with this approach. Check the README.md file on the github page for how it can be run, but as a quick summary:

docker run --rm -it \
      --privileged \
      --network=host \
      --name=wifi-connect \
      -v /run/dbus:/var/run/dbus \
      murtis/jetson-wifi-connect

I recommend using the Nvidia recommended Intel 8265 PCI card, as for some reason I can only see 2.4 GHz WiFi SSIDs using the (also recommended) Edimax USB dongle.

Check out the github repo for more information:

https://github.com/curtismuntz/jetson-nano-wifi-connect

Nvidia Jetson Bazel Crosstool

If you are developing C++ code for use on any of the Nvidia Jetson product line, and you can use bazel to build your code, feel free to try out my bazel cross compile toolchain definition. I’m not a compiler expert, so use with caution and send PRs to fix my mistakes!

It is trivial to include the crosstool in your bazel project. Simply add the following to your WORKSPACE file:

http_archive(
    name = "murtis_bazel_compilers",
    url = "https://github.com/curtismuntz/bazel_compilers/archive/v0.3.0.tar.gz",
    strip_prefix = "bazel_compilers-0.3.0",
    sha256 = "4eeda87667cb235a83a67aeb2a3fdbe83f372c9693a313c22e84192e6a2f356b",
)

load("@murtis_bazel_compilers//compilers:dependencies.bzl", "cross_compiler_dependencies")

cross_compiler_dependencies()

Next, in your .bazelrc file:

build:aarch64 --crosstool_top=@murtis_bazel_compilers//compilers/arm_compiler:toolchain
build:aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:aarch64 --cpu=aarch64-linux-gnu --compiler=gcc
build:aarch64 --spawn_strategy=standalone

You can then build with:

~/projects/my_project$ bazel build --config=aarch64 //src:whatever_the_target

The resultant executable can then be run on any of the Nvidia Jetson product line.
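
As a quick smoke test, assuming ssh access to the device (the hostname, user, and target name here are illustrative):

scp bazel-bin/src/whatever_the_target nvidia@jetson.local:~/
ssh nvidia@jetson.local ./whatever_the_target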

You can find the code here: https://github.com/curtismuntz/bazel_compilers

Note that my repository also supports compiling for the RaspberryPi product line as well.

Witness: An API callable webcam project

In a previous blog post I mentioned how easy it is to build containers that you can then deploy to the balena.io backend. In this post, I’ll present a more fully formed example of my end-to-end workflow.

I work a lot in robotics. Sometimes testing robots outside of simulators can result in hardware-damaging incidents. Debugging these incidents is a combination of digging through log files and relying on witness accounts. Witnesses are imperfect: they can be biased, and they may not have seen the whole incident.

Typically, witness cameras are used to combat this problem. It becomes part of the operational practice to always start a camera recording when you start a test. The problem with recording every test is that, if it’s not automated, you’ll eventually forget to do it. This led me to develop the concept of an unbiased robotic witness: an API callable webcam that scales to however many witnesses I want to have.

I’ve created the witness project: https://github.com/curtismuntz/witness.

Features

Witness is capable of running a camera with configurable filenames via the witness API in the following modes:

  • Photo capture
  • Video capture
  • Timelapse
  • Security monitor

With this project complete, I can now integrate the API calls into my usual robotics testing procedures. Any time I’m about to test a new feature, I can tell all my witnesses to start a recording session with the same filename (the software prepends the hostname to the filename). Then, I can download all the witness viewpoints when I’m done with the test. This automates the process of synchronizing tests with witness accounts and ensures a video is always recorded.

I’ve included some simple scripts under //witness/client/endpoint_scripts and a simple gui that demonstrate the capability of the project, but for the full feature set, take a look at the protobuf definition of the witness service.

Deploying as an IoT Project

I want to take this device into the real world, and I can’t be expected to place laptops everywhere that I want a witness. In order to scale this to as many devices as possible, I again used the infrastructure provided by balena.io in order to deploy and manage my fleet of witnesses. Because Balena now supports multiple concurrent docker containers, I’ve defined a docker-compose.yml file in order to bring up my witness service along with some other nice services. I’ve chosen to use Balena’s wifi-connect project to make the device as portable as I am.

Looking at the compose script, my balena deployment can be categorized into a couple chunks. Any service marked with the image: tag will be pulled from docker hub, and any service marked with the build: tag will be built via docker-compose. Because I’ve listed the witness service as an image, I’ll need a way to build and deploy that container to docker hub.

Previous blog posts have covered the way that I cross compile containers. In this case, my container is simply pushed to docker hub via:

bazel run //witness/deploy:push_witness_armv7 --config=armv7hf

That single command pulls and compiles all of the necessary code with its dependencies, produces an armv7 compatible docker container as output, and pushes said container to docker hub. Once this is complete, I can deploy the newly built container to all my witnesses simultaneously via:

git push balena master

I chose to run the project on the balena fin, but naturally it can run on most raspberry pi versions with minimal modifications.

Sample outputs

Here are some sample outputs (converted to gifs) from the witness project.

Here I used witness to record a test run of a robot arm.

bazel run witness/client/endpoint_scripts:start_recording

Robot arm test

Here I set up the witness in the monitor mode to keep track of the neighborhood wildlife.

bazel run witness/client/endpoint_scripts:start_monitor

Set up the monitor to watch over the driveway

Here I set up the witness to record a timelapse of my sourdough starter rising.

bazel run witness/client/endpoint_scripts:start_timelapse

Sourdough starter

Travis CI Tooling for Bazel

Because Bazel manages the complete dependency tree, it becomes incredibly easy to integrate Bazel with CI systems. For the most part, an entire Bazel CI build/test pipeline can be reduced to a couple of Bazel commands.

bazel test //src/...
bazel build //src/...

That’s it! Those two commands will run all tests and build all targets under src, which is pretty much the primary goal of any CI system.

We can do better, however… by actually running these commands on a CI system. I chose to work with Travis-CI, as it is free for open source projects and super simple to set up. I’ve augmented my bazel_examples project over at https://github.com/curtismuntz/bazel_examples.

Docker CI Solution Within Travis

I’ve decided to maintain a Bazel Dockerfile in order to satisfy my reproducible build requirements. My container installs some basic compiler components and then uses the bazelisk project in order to have a run-time selectable version of bazel. This is super useful in CI systems, as you can bump the version of the build tool and automatically verify that the entire build still works. The container can be found on Dockerhub here.
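
Pinning the bazel version then comes down to bazelisk’s normal version selection: a .bazelversion file in the workspace, or the USE_BAZEL_VERSION environment variable. As a quick local sanity check (assuming the image is used the same way the CI script below uses it):

docker run --rm -v "$PWD":/opt/src murtis/bazel bazel version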

.travis.yml

I headed over to https://travis-ci.org/ in order to set up a public repository for Travis. Once set up, I added the following .travis.yml file to the workspace root, and Travis started an automated build running the CI script under tools/ci/run_ci_tests.sh.

sudo: required
language: cpp
os: linux
dist: xenial
services:
  - docker
install: skip
script:
  - tools/ci/run_ci_tests.sh

Note that sudo is needed in order to run Docker commands on a Travis worker instance.

The CI Script

My script simply spins up a Docker container and execs the Bazel commands to compile for both the x86 and raspberry pi toolchains. The CI container is given a specific bazelrc file that enables sandboxed compilation and testing, adds verbose failures, and sets the Bazel server to start up in batch mode.

By using set -e, bash exits if any command fails. This is necessary to stop the whole build in the event of a failure in one of the docker execs.

Finally, I placed an exit trap so that this script stops the running Bazel container in the event of a failure.

#!/bin/bash
set -e

function stop {
  docker stop travis_build
}
trap stop EXIT

TARGETS="//src/...
         //deploy/...
        "
CONFIG="--bazelrc=tools/ci/bazelrc_travis"

OPTS="-c opt"

docker run -it --rm -d \
  --name travis_build \
  -v "$PWD":/opt/src \
  murtis/bazel \
  /bin/bash

docker exec travis_build bazel $CONFIG build $OPTS $TARGETS
docker exec travis_build bazel $CONFIG build $OPTS $TARGETS --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi

Limitations

This CI script is incredibly simple, and the build times reflect that. Without caching set up, the script as implemented builds all targets under //src and //deploy for both the x86 and armv7hf architectures. On a small project that’s not a big deal, but on a large repository these build times can get very long.

If I wanted to be smarter about it, I could use bazel’s example ci script, found here. That script is a bit more methodical: it checks which files have changed and uses bazel query to determine which targets need to be rebuilt, then only builds/tests those targets.
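
As a rough sketch of that approach (not the linked script itself), the core idea is to map each changed file to the label that owns it and ask bazel query for the reverse dependencies:

#!/bin/bash
# Hypothetical sketch of incremental CI; the //src/... universe is illustrative.
set -e

labels=""
for f in $(git diff --name-only origin/master...HEAD); do
  # bazel query resolves a source file path to its label (skip files bazel doesn't own).
  if l=$(bazel query "$f" 2>/dev/null); then
    labels="${labels} ${l}"
  fi
done

if [[ -n "${labels}" ]]; then
  # Find every test target that transitively depends on a changed file.
  tests=$(bazel query "kind(test, rdeps(//src/..., set(${labels})))")
  if [[ -n "${tests}" ]]; then
    bazel test ${tests}
  fi
fi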

Bazel and Resin

I decided to take a look at resin.io recently. Being able to run containers on embedded devices vastly improves our ability to manage and maintain them.

I was wondering if it was possible to use the power of bazel to deploy fast, correct containers to the array of embedded devices I have laying around. It turns out you can, but there are currently a few limitations.

I’ve set up a project over at https://github.com/curtismuntz/bazel_examples that has all the necessary components:

  • simple hello world cpp project
  • dockerhub maintained bazel base images
  • bazel rules for creating hermetic docker images for my software
  • bazel crosstool defined for armv7 (raspberry pi2 cpu)
  • resin_deploy script for automating the build/deploy to resin

Hello world software

I’ll start with a simple hello world cpp project to demonstrate functionality. It can be run as follows:

~/projects/bazel_examples$ bazel build //src/cpp:hello
INFO: Analysed target //src/cpp:hello (4 packages loaded).
INFO: Found 1 target...
Target //src/cpp:hello up-to-date:
  bazel-bin/src/cpp/hello
INFO: Elapsed time: 1.348s, Critical Path: 0.44s
INFO: Build completed successfully, 7 total actions

~/projects/bazel_examples$ bazel-bin/src/cpp/hello
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5

Bazel building local docker images

Now I can use bazel to run this hello world software within a docker image. Bazel has built in rules for creating docker images, allowing us to build and mount a docker image into the docker runtime. The stock bazel cc_image() and py_image() rules are sufficient for running on an amd64 platform, but not for running my projects on arm devices, as the underlying base image for these rules was not built for the arm architecture. For this reason, I have to use the container_image() rule. This rule requires a base field, which is effectively the same as the FROM line in a standard dockerfile.

I’ve created a build target that creates this docker image. It is trivial to depend on any cpp/python projects that I have defined under the bazel workspace such that they are included in the docker image.

load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
	name = "hello_cpp_image",
	base = "@amd64_docker_base//image",
	cmd = ["/opt/hello"],
	mode = "777",
	stamp = True,
	tars = ["//src/cpp:hello_cpp_tar"],
)

Loading and running these images can be accomplished with:

~/projects/bazel_examples$ bazel run deploy:hello_cpp_image
~/projects/bazel_examples$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
bazel/deploy                hello_cpp_image     1c6c37b331b0        48 years ago        472MB

~/projects/bazel_examples$ docker run --rm -it bazel/deploy:hello_cpp_image
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5

Cross compiling docker images

Dockerhub maintained bazel base images

In order to create a target that can be run either locally or cross compiled for embedded devices, I’ve created two docker base images that are maintained by dockerhub. These images serve as the base image for any cpp/python projects that I wish to create under this paradigm.

~/projects/bazel_examples$ tree -L 1 docker/base_images
base_images/
├── Dockerfile.amd64_docker_base
├── Dockerfile.raspberrypi2_docker_base
└── install_prereqs.sh

The amd64 image is based on Debian, and the raspberry_pi2 image is based on resin’s resin/armv7hf-debian:stretch image. Resin’s image allows us to emulate an armv7hf environment and prep a dockerfile to be run on the pi. Any additional software the system needs is installed by the install_prereqs.sh file, which is run by both dockerfiles at build time. This helps keep both platforms in sync and reduces the number of files that I have to maintain. I have set up dockerhub to auto build these projects so that any committed changes automatically update the dockerhub images with the latest tag.
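
Dockerhub’s auto builds boil down to roughly the following (the tag names here are illustrative):

cd docker/base_images
docker build -f Dockerfile.amd64_docker_base -t example/amd64_docker_base .
docker build -f Dockerfile.raspberrypi2_docker_base -t example/raspberrypi2_docker_base .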

Cross compiling for the raspberry pi

The first step required for this project was to set up a cross compilation toolchain within bazel. This was covered in a previous blog post here.

Next, in the BUILD file I simply add a config_setting field to detect cpu type and a select statement in the container_image() rule to toggle between docker image bases.

load("@io_bazel_rules_docker//container:container.bzl", "container_image")

config_setting(
	name = "rpi",
	values = {"cpu": "rpi"},
)

container_image(
	name = "hello_cpp_image",
	base = select({
		":rpi": "@rpi_docker_base//image",
		"conditions:default": "@amd64_docker_base//image",
	}),
	cmd = ["/opt/hello"],
	mode = "777",
	stamp = True,
	tars = ["//src/cpp:hello_cpp_tar"],
)

Now, when I run bazel run --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi deploy:hello_cpp_image, bazel will detect the cpu type, compile with the crosstool definition, select the proper container base image, and load the produced container into docker.

Deploying to an IoT device using resin.io

Resin’s cli allows us to deploy local docker images to a resin app by running resin deploy HelloImage bazel/deploy:hello_cpp_image.

Remembering all the steps to get my cpp project compiled and deployed can be tricky, so I wrote a script to automate the process.

~/projects/bazel_examples$ tools/resin_deploy --help
Usage: tools/resin_deploy [-t <string>] [-a <string>] [-r <string>]
  -t: bazel target (path/to/project:target)
  -a: architecture (must be listed within tools/arm_compiler/CROSSTOOL)
  -r: resin appname (must exist on resin)
  -b: additional build options for bazel (optional)

Running the helper script will compile the hello cpp image with optimized compiler parameters, load it into the docker engine, and then deploy it up to resin.io. Now I can scale my hello world to as many raspberry pis as I want!

~/projects/bazel_examples$ tools/resin_deploy -t deploy:hello_cpp_image -a rpi -r HelloImage -b "-c opt"
.........
BUILDING deploy:hello_cpp_image -c opt for rpi and deploying to HelloImage


INFO: Analysed target //deploy:hello_cpp_image (31 packages loaded).
INFO: Found 1 target...
Target //deploy:hello_cpp_image up-to-date:
  bazel-bin/deploy/hello_cpp_image-layer.tar
INFO: Elapsed time: 102.462s, Critical Path: 96.18s
INFO: Build completed successfully, 53 total actions

INFO: Running command line: bazel-bin/deploy/hello_cpp_image
Loaded image ID: sha256:723defcc2ea9161cb353b66065acebc5fd1935718fcf08798a835408a13a3826
Tagging 723defcc2ea9161cb353b66065acebc5fd1935718fcf08798a835408a13a3826 as bazel/deploy:hello_cpp_image
[Info]    Initializing deploy...
[Info]    Deploying [========================] 100% eta 0s           
[Info]    Remote: Accepting image for application helloimage, owner: REDACTED
[Success] Successfully deployed image: 4d31d957463b4381d8102a51ef7eda6158285cec

Current Limitations

Bazel

Bazel’s pkg_tar() rule does not currently package the runtime deps along with the binary. This is most evident when trying to package python binaries. The hello world python package fails to run in a docker container when executed as the bazel built binary:

bash-4.3# python3 hello
Traceback (most recent call last):
  File "hello", line 172, in <module>
	Main()
  File "hello", line 111, in Main
	module_space = FindModuleSpace()
  File "hello", line 86, in FindModuleSpace
	raise AssertionError('Cannot find .runfiles directory for %s' % sys.argv[0])
AssertionError: Cannot find .runfiles directory for hello

There is currently work under way to fix this limitation. In the meantime, I’ve directed my entrypoint to use python3 hello.py instead of the bazel binary.

Resin

Resin currently only allows one container to run on each device at a time. Multi container support is one of their most requested features, but for now I can get away with running multiple processes inside a container through the use of projects such as supervisord, Monit, or even docker-in-docker. See the update below.

As far as I can tell, resin deploy only works through the cloud. If resin had a way to deploy locally in a quicker fashion, say to a local device, allowing for rapid development, it would go a long way toward making this fully functional. It sounds like they are working on this feature: https://github.com/resin-io/resin-cli/issues/613.

Update

Resin has released multicontainer support since this was posted. I am working on a follow-up post investigating multicontainer with bazel.

RaspberryPi Bazel Crosstool

I wanted to test the bazel crosstool tutorial by running a bazel built executable on a raspberry pi. First, I set up and built a simple hello world cpp project:

~/projects/bazel_examples$ tree -L 1 src/cpp   
cpp
├── BUILD
├── hello_lib.cpp
├── hello_lib.h
└── main.cpp
~/projects/bazel_examples$ bazel build //src/cpp:hello
INFO: Analysed target //src/cpp:hello (4 packages loaded).
INFO: Found 1 target...
Target //src/cpp:hello up-to-date:
  bazel-bin/src/cpp/hello
INFO: Elapsed time: 1.348s, Critical Path: 0.44s
INFO: Build completed successfully, 7 total actions
~/projects/bazel_examples$ bazel-bin/src/cpp/hello
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5

Following the guide in the bazel wiki and modifying the bazel example project, I was able to cross compile and run my hello world example on a raspberry pi 2.

~/projects/bazel_examples$ bazel build --crosstool_top=//tools/arm_compiler:toolchain --cpu=rpi  //src/cpp:hello
INFO: Analysed target //src/cpp:hello (0 packages loaded).
INFO: Found 1 target...
Target //src/cpp:hello up-to-date:
  bazel-bin/src/cpp/hello
INFO: Elapsed time: 7.164s, Critical Path: 6.97s
INFO: Build completed successfully, 3 total actions
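
Getting the binary onto the pi is a single scp (assuming the pi is reachable at raspberrypi.local):

scp bazel-bin/src/cpp/hello pi@raspberrypi.local:~/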

On the pi, the binary executes nominally:

pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
pi@raspberrypi:~ $ ./hello
Hello world! 0
Hello world! 1
Hello world! 2
Hello world! 3
Hello world! 4
Hello world! 5

Code available here:

https://github.com/curtismuntz/bazel_examples

Update December 2018

I have relocated the bazel crosstool logic into its own repository, allowing for trivial inclusion in your projects. It can now be found here:

https://github.com/curtismuntz/bazel_compilers

Simply add to your WORKSPACE file:

http_archive(
    name = "murtis_bazel_compilers",
    url = "https://github.com/curtismuntz/bazel_compilers/archive/v0.3.0.tar.gz",
    strip_prefix = "bazel_compilers-0.3.0",
    sha256 = "4eeda87667cb235a83a67aeb2a3fdbe83f372c9693a313c22e84192e6a2f356b",
)

load("@murtis_bazel_compilers//compilers:dependencies.bzl", "cross_compiler_dependencies")

cross_compiler_dependencies()

Note that the repository may have a newer version than this post references. Next, in your .bazelrc file:

build --compiler=compiler

build:armv7hf --crosstool_top=@murtis_bazel_compilers//compilers/arm_compiler:toolchain
build:armv7hf --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:armv7hf --cpu=armeabi-v7a --compiler=gcc
build:armv7hf --spawn_strategy=standalone

You can then build with:

~/projects/bazel_examples$ bazel build --config=armv7hf  //src/cpp:hello

Update May 2019

I have bumped the bazel crosstool support up to the new bazel 0.25 crosstool syntax, and I have added a crosstool definition for aarch64. As of right now I’m using a Linaro 5.3.1 toolchain, but I intend to update to a newer Linaro version soon.