In a previous blog post I mentioned how easy it is to build containers that you can then deploy to the balena.io backend. In this post, I'll present a more fully formed example of my end-to-end workflow.
I work a lot in robotics. Testing robots outside of simulators can sometimes result in hardware-damaging incidents. Debugging these incidents is a combination of digging through log files and relying on witness accounts. Witnesses are imperfect: they can be biased, or they may not have seen the whole incident.
Typically, witness cameras are used to combat this problem: it becomes part of the operational practice to always start a camera recording when you start a test. The problem with recording every test is that if it isn't automated, eventually you'll forget to do it. This led me to develop the concept of an unbiased robotic witness: an API-callable webcam that scales to however many witnesses I want to have.
I've created the witness project: https://github.com/curtismuntz/witness.
Witness can run a camera, with configurable filenames, via the witness API in the following modes:
- Photo capture
- Video capture
- Security monitor
With this project complete, I can now integrate the API calls into my usual robotics testing procedures. Any time I'm about to test a new feature, I can tell all of my witnesses to start a recording session with the same filename (the software prepends the hostname to the filename). When the test is done, I can download all of the witness viewpoints. This automates synchronizing tests with witness accounts, and it ensures a video is always recorded.
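As a sketch of what this orchestration could look like from a test script, here's a minimal example. The client class and the `start_recording` RPC name are assumptions for illustration; the real calls are defined by the project's protobuf service definition.

```python
import socket


class WitnessClient:
    """Hypothetical wrapper around one witness device's RPC stub."""

    def __init__(self, stub, hostname=None):
        self.stub = stub
        # Each witness prepends its own hostname, so recordings of the
        # same test share a common filename suffix across devices.
        self.hostname = hostname or socket.gethostname()

    def start_recording(self, filename):
        tagged = f"{self.hostname}_{filename}"
        self.stub.start_recording(tagged)  # assumed RPC name
        return tagged


def start_all(witnesses, test_name):
    """Kick off a recording session on every witness with one filename."""
    return [w.start_recording(test_name) for w in witnesses]
```

Because every device tags its own hostname onto the shared test name, downloading and lining up the viewpoints afterward is just a filename match.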
I've included some simple scripts under //witness/client/endpoint_scripts and a simple GUI that demonstrate the capability of the project, but for the full feature set, take a look at the protobuf definition of the witness service.
Deploying as an IoT Project
I want to take this device into the real world, and I can't be expected to place laptops everywhere I want a witness. To scale this to as many devices as possible, I again used the infrastructure provided by balena.io to deploy and manage my fleet of witnesses. Because balena now supports multiple concurrent Docker containers, I've defined a docker-compose.yml file to bring up my witness service along with some other useful services. I've chosen to use balena's wifi-connect project to make the device as portable as I am.
Looking at the compose script, my balena deployment can be broken into a couple of chunks. Any service marked with the image: tag will be pulled from Docker Hub, and any service marked with the build: tag will be built via docker-compose. Because I've listed the witness service as an image, I need a way to build and deploy that container to Docker Hub.
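To make the image/build distinction concrete, here's a minimal sketch of what such a compose file might look like. The service names, image tag, and port are illustrative assumptions, not the project's actual docker-compose.yml:

```yaml
version: '2'
services:
  witness:
    # Pulled pre-built from Docker Hub (image: tag)
    image: curtismuntz/witness:latest
    privileged: true        # camera access
    ports:
      - "50051:50051"       # assumed API port
  wifi-connect:
    # Built on the balena builders from a local directory (build: tag)
    build: ./wifi-connect
    network_mode: host
    privileged: true
```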
Previous blog posts have covered the way that I cross-compile containers. In this case, my container is simply pushed to Docker Hub via:
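The exact invocation depends on the build setup from those earlier posts; assuming a Bazel-style push target (the target name here is an assumption), it would look something like:

```shell
# Hypothetical push target: builds the armv7 image and pushes it
# to Docker Hub in a single step.
bazel run //witness:push_witness_image
```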
That single command pulls and compiles all of the necessary code along with its dependencies, produces an armv7-compatible Docker container, and pushes that container to Docker Hub. Once this is complete, I can deploy the newly built container to all of my witnesses simultaneously via:
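With the image on Docker Hub, the fleet deploy is a single balena CLI invocation; assuming an application named `mywitnesses` (the name is illustrative), something like:

```shell
# Push the compose project to balena, which rolls the new release
# out to every device in the fleet.
balena push mywitnesses
```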
I chose to run the project on the balena Fin, but it can naturally be run on most Raspberry Pi versions with minimal modifications.
Here are some nice photos of the fin running witness.
When showing up at a location without access to Ethernet, I can use balena's wifi-connect project to connect to the local WiFi.
Here I set up the witness in monitor mode to keep track of the neighborhood wildlife.
Here I set up the witness to record a timelapse of my sourdough starter rising.