If you’re like me, with zero video editing experience and primarily working on Linux, learning open source video editing tools can be frustrating. I’m slowly learning Blender for this purpose, but recently I’ve taken to using the CLI for most of my video editing needs. I’m not doing anything visually demanding and I stick to basic operations, so the CLI works just fine for me. Plus, all of these commands are easy to automate.
I’ve included some examples of basic operations below. All you need to install is ffmpeg.
Cutting long videos into smaller clips
Cut long_video.mkv into smaller_clip.mkv from start timestamp 01:07:11 to end timestamp 01:08:13:
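A minimal sketch of that cut (stream copy is fast but snaps to keyframes; drop -c copy if you need frame-accurate cuts):
ffmpeg -i long_video.mkv -ss 01:07:11 -to 01:08:13 -c copy smaller_clip.mkv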
If you don’t care about specific ordering, or if you’re disciplined enough to name your clips numerically, you can automatically create this file list (which feeds ffmpeg’s concat demuxer, shown after the script) using a bash script like so:
#!/bin/bash
set -eux
file_list="file_list.txt"
if [[ -f "${file_list}" ]]; then
rm "${file_list}"
fi
for fname in *.mkv; do
echo "file '${fname}'" >> "${file_list}"
done
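With file_list.txt in hand, the clips can be joined using ffmpeg’s concat demuxer; a sketch (the output filename is just an example):
ffmpeg -f concat -safe 0 -i file_list.txt -c copy combined.mkv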
Combining a directory of photos into a timelapse
Things like GoPros can automatically combine their photos into timelapses, but I often find myself over-sampling photos and wanting to re-adjust the time range, fps, or duration of the timelapse video. If you shoot timelapses on an interval timer, ffmpeg gives us most of the tunables:
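As a sketch, assuming sequentially named JPGs (the framerate and filenames here are examples; adjust the glob to change the time range):
ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' -s 3840x2160 -c:v libx264 -pix_fmt yuv420p timelapse.mp4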
Note that the -s flag scales the images used. My camera shoots at 6240x4160, but I downscale the images to 4K. Assuming your camera tags files with the appropriate metadata, file can show you information about an image, including its resolution.
I’ve been setting up my house with a ton of sensors and have been having fun with Home Assistant recently. I typically prefer Power over Ethernet (PoE) devices such as the Olimex ESP32-POE-ISO, but I wanted to experiment with a smaller form factor WiFi solution in the event that I need a wireless approach. I found the Adafruit Huzzah32, which is effectively an Adafruit ESP32 Feather with some nice modifications:
We packed everything you love about Feathers: built in USB-to-Serial converter, automatic bootloader reset, Lithium Ion/Polymer charger, and all the GPIO brought out so you can use it with any of our Feather Wings.
I want to use these devices for a variety of purposes, but I also want to report the battery voltage back to Home Assistant so that I can recharge them when they get low. There is no built-in ESPHome battery monitor, but ESPHome does allow reading ADC voltages directly.
I found this post from cuddletech that explained how to convert the ADC readings into voltage. In short:
When you read the ADC you’ll get a value like 2339. The ADC value is a 12-bit number, so the maximum value is 4095 (counting from 0). To convert the ADC integer value to a real voltage you’ll need to divide it by the maximum value of 4095, then double it (note above that Adafruit halves the voltage), then multiply that by the reference voltage of the ESP32, which is 3.3V, and then finally multiply that again by the ADC Reference Voltage of 1100mV.
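To make that concrete, here is the arithmetic for the example reading (my own sanity check, not from the original post):
echo "2339 / 4095 * 2 * 3.3 * 1.1" | bc -l
# prints roughly 4.1468, i.e. about 4.15 V for an ADC reading of 2339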
I was having problems with this calculation: my readings kept coming out above 7 volts, so I decided to dive in and see what was wrong.
The code on that page references mongoose-os adc. It turns out that the ADC implementation assumes a default attenuation value of 11 dB for pin 35:
// ... (truncated)
// From https://github.com/mongoose-os-libs/adc/blob/b1d3bf6312d4c624314b6ca1dee1d4e722fe8417/src/esp32/esp32_adc.c#L40
static struct esp32_adc_channel_info s_chans[8] = {
    {.pin = 36, .ch = ADC1_CHANNEL_0, .atten = ADC_ATTEN_DB_11},
    {.pin = 37, .ch = ADC1_CHANNEL_1, .atten = ADC_ATTEN_DB_11},
    {.pin = 38, .ch = ADC1_CHANNEL_2, .atten = ADC_ATTEN_DB_11},
    {.pin = 39, .ch = ADC1_CHANNEL_3, .atten = ADC_ATTEN_DB_11},
    {.pin = 32, .ch = ADC1_CHANNEL_4, .atten = ADC_ATTEN_DB_11},
    {.pin = 33, .ch = ADC1_CHANNEL_5, .atten = ADC_ATTEN_DB_11},
    {.pin = 34, .ch = ADC1_CHANNEL_6, .atten = ADC_ATTEN_DB_11},
    {.pin = 35, .ch = ADC1_CHANNEL_7, .atten = ADC_ATTEN_DB_11},
};
// ... (truncated)
This means that we have to set the attenuation properly in ESPHome’s adc sensor in order to read the correct range of voltages:
// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L22
#ifdef ARDUINO_ARCH_ESP32
  analogSetPinAttenuation(this->pin_, this->attenuation_);
#endif
Note that ESPHome automatically divides by 4095 and multiplies by the attenuation max value for us, so we’ll have to compensate in order to use cuddletech’s calculation.
// ... (truncated)
// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L61
float value_v = analogRead(this->pin_) / 4095.0f;  // NOLINT
// ... (truncated)
// From https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L73
case ADC_11db:
  value_v *= 3.9;
  break;
// ... (truncated)
Putting this all together, we can use the following ESPHome yaml configuration to define the calculation for battery voltage:
esphome:
  name: huzzah32_example
  platform: ESP32
  # No built in huzzah32 board, but it seems identical to the featheresp32.
  board: featheresp32

wifi:
  ssid: "myssid"
  password: "supersecretpassword"

# Enable logging
logger:

# Enable Home Assistant API
api:

ota:

status_led:
  pin: LED

sensor:
  # Documentation: https://esphome.io/components/sensor/adc.html
  - platform: adc
    # https://learn.adafruit.com/adafruit-huzzah32-esp32-feather/power-management
    pin: A13
    name: "ESP32 Battery Voltage"
    update_interval: 10s
    # See https://murt.is/articles/2021-02/huzzah32-battery-monitoring-esphome.md
    attenuation: 11db
    # Calculation based on https://cuddletech.com/?p=1030, modified to account for
    # ESPHome's internal math
    # (https://github.com/esphome/esphome/blob/410fad3b41640b76c7f902fb4656d0b1c2598681/esphome/components/adc/adc_sensor.cpp#L59).
    # (x / 3.9) should be the adc measurement converted to Volts.
    filters:
      - lambda: return (x / 3.9) * 2 * 3.3 * 1.1;
And huzzah! We now have mathematically correct battery voltages streaming into HomeAssistant from the Huzzah32 board.
I’ll make a follow up post on calibrating the ADC readings and attempting to decrease power usage.
I just got a Jetson Nano in and figured I’d experiment with it. Step one was getting cross compiling working; now it’s time to get WiFi working.
WiFi has been a serious pain point in my experience working on the TX1/TX2 because of Nvidia’s choice of WiFi chipsets. The built-in Broadcom chip requires some modprobe hacking if you want dynamically configurable WiFi. Luckily, with the release of the Nano, Nvidia went back to the same design model that I loved about the TK1: let the user install their own WiFi chipset.
I’ve long maintained that the best dynamically configurable wifi solution is what I call the “chromecast” experience (it’s where I personally first saw it). The device will host an access point with a captive portal, and the user logs onto the portal and enters SSID and password information. The device reboots once configured, and will re-enter the host mode access point if the configuration failed. Luckily I know of a nice open source project that offers this exact functionality: https://github.com/balena-io/wifi-connect
I’ve pushed a docker container for aarch64 targets that auto configures wifi if there is no active wifi connection when the container starts. Theoretically, this same container can run on the TX1 and TX2, but Nvidia’s choice of wifi chipsets within the SOM has proven tricky to get working with this approach. Check the README.md file on the github page for how it can be run, but as a quick summary, you can run it via:
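Roughly something like the following (the image name is a placeholder; wifi-connect needs host networking and access to the host’s D-Bus so it can talk to NetworkManager, hence the mounts):
docker run --rm -it \
  --privileged \
  --network host \
  -v /var/run/dbus:/var/run/dbus \
  <your-dockerhub-user>/jetson-wifi-connect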
I recommend using the Nvidia-recommended Intel 8265 PCI card, as for some reason I can only see 2.4 GHz WiFi SSIDs using the also-recommended Edimax USB dongle.
If you are developing C++ code for use on any of the Nvidia Jetson product line, and you can use bazel to build your code, feel free to try out my bazel cross compile toolchain definition. I’m not a compiler expert, so use with caution and send PRs to fix my mistakes!
It is trivial to include the crosstool in your bazel project. Simply add the following to your WORKSPACE file:
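The exact repository name, URL, and commit come from the project’s README; the snippet below is only a hedged sketch of the shape of the WORKSPACE entry:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Placeholder name, URL, and commit -- substitute the values from the README.
git_repository(
    name = "jetson_crosstool",
    remote = "https://github.com/curtismuntz/<toolchain-repo>.git",
    commit = "<pinned-commit>",
)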
Next, in your .bazelrc file:
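Again a sketch; the config name is mine, and the crosstool label should match whatever the toolchain repository exposes:
# Hypothetical .bazelrc config for cross compiling to the Jetson's aarch64 CPU.
build:jetson --crosstool_top=@jetson_crosstool//compilers/arm_compiler:toolchain
build:jetson --cpu=aarch64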
You can then build with:
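Assuming the config name sketched above, a build is just:
bazel build --config=jetson //your/target:here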
The resultant executable can then be run on any of the Nvidia Jetson product line.
In a previous blog post I mentioned how easy it is to build containers that you can then deploy to the balena.io backend. In this post, I’ll present a more fully formed example of my end-to-end workflow.
I work a lot in robotics. Sometimes testing robots outside of simulators can result in hardware damaging incidents. Debugging these incidents is a combination of digging through log files and relying on witness accounts. Witnesses are imperfect: they can be biased, or they may not have seen the whole incident.
Typically, witness cameras are used to combat this problem. It becomes part of the operational practice to always start a camera recording when you start a test. The problem with recording every test is that, if it’s not automated, you’ll eventually forget to do it. This led me to develop the concept of an unbiased robotic witness. The idea is to have an API-callable webcam that scales to however many witnesses I want to have.
Witness is capable of running a camera with configurable filenames via the witness API in the following modes:
Photo capture
Video capture
Timelapse
Security monitor
With this project complete, I can now integrate the API calls into my usual robotics testing procedures. Any time I’m about to test a new feature, I can tell all my witnesses to start a recording session with the same filename (the software prepends the hostname to the filename). Then, I can download all the witness viewpoints when I’m done with the test. This automates the process of synchronizing tests with witness accounts, and ensures a video is always recorded.
I’ve included some simple scripts under //witness/client/endpoint_scripts and a simple gui that demonstrate the capability of the project, but for the full feature set, take a look at the protobuf definition of the witness service.
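For example, kicking off a synchronized recording across a few witnesses could look something like the following (the script name and flags are hypothetical; the real entry points live in the endpoint_scripts directory and the protobuf definition):
#!/bin/bash
# Hypothetical example: start a recording session on every witness before a test run.
WITNESSES=("witness-01.local" "witness-02.local" "witness-03.local")
SESSION_NAME="arm_test_$(date +%Y%m%d_%H%M%S)"
for host in "${WITNESSES[@]}"; do
  # start_recording.py stands in for whichever endpoint script issues the start-recording call.
  python3 witness/client/endpoint_scripts/start_recording.py --server "${host}" --filename "${SESSION_NAME}"
done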
Deploying as an IoT Project
I want to take this device into the real world, and I can’t be expected to place laptops everywhere that I want a witness. In order to scale this to as many devices as possible, I again used the infrastructure provided by balena.io in order to deploy and manage my fleet of witnesses. Because Balena now supports multiple concurrent docker containers, I’ve defined a docker-compose.yml file in order to bring up my witness service along with some other nice services. I’ve chosen to use Balena’s wifi-connect project to make the device as portable as I am.
Looking at the compose script, my balena deployment can be categorized into a couple chunks. Any service marked with the image: tag will be pulled from docker hub, and any service marked with the build: tag will be built via docker-compose. Because I’ve listed the witness service as an image, I’ll need a way to build and deploy that container to docker hub.
Previous blog posts have covered the way that I cross compile containers. In this case, my container is simply pushed to docker hub via:
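I won’t reproduce the exact target name here, but with bazel’s container push rules it boils down to a single bazel run; the flags and target below are placeholders:
bazel run -c opt --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi //witness:push_witness_image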
That single command pulls and compiles all of the necessary code with its dependencies, produces an armv7-compatible docker container, and pushes said container to docker hub. Once this is complete, I can deploy the newly built container to all my witnesses simultaneously via:
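which, with the balena CLI, is something like the following (the application name is an example):
balena deploy myWitnessFleet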
I chose to run the project on the balena fin, but naturally it can be run on most Raspberry Pi versions with minimal modifications.
Sample outputs
Here are some sample outputs (converted to gifs) from the witness project.
Here I used witness to record a test run of a robot arm.
Here I set up the witness in the monitor mode to keep track of the neighborhood wildlife.
Here I set up the witness to record a timelapse of my sourdough starter rising.
Because Bazel manages the complete dependency tree, it becomes incredibly easy to integrate Bazel with CI systems. For the most part, the entire Bazel CI build/testing system can be reduced down to a couple of Bazel commands.
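For a layout like mine, those commands are essentially:
bazel test //src/...
bazel build //src/...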
That’s it! Those two commands will run all tests and build all targets under src, which is pretty much the primary goal of any CI system.
We can do better, however… by actually running these commands on a CI system. I chose to work with Travis-CI, as it is free for open source projects and super simple to set up. I’ve augmented my bazel_examples project over at https://github.com/curtismuntz/bazel_examples.
Docker CI Solution Within Travis
I’ve decided to maintain a Bazel Dockerfile in order to satisfy my reproducible build requirements. My container installs some basic compiler components and then uses the bazelisk project in order to have a run-time selectable version of bazel. This is super useful in CI systems, as you can increment the version of the build tool and automatically verify that the entire build still works. The container can be found on Dockerhub here.
.travis.yml
I headed over to https://travis-ci.org/ in order to set up a public repository for Travis. Once set up, I added the following .travis.yml file to the workspace root, and Travis started an automated build running the CI script under tools/ci/run_ci_tests.sh.
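The file itself doesn’t need much; a sketch of what mine amounts to, assuming Travis’ docker service and the script path mentioned above:
language: minimal
sudo: required
services:
  - docker
script:
  - ./tools/ci/run_ci_tests.sh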
Note that sudo is needed in order to run Docker commands on a Travis worker instance.
The CI Script
My script simply spins up a Docker image and execs the Bazel commands to test and compile for both x86 and the raspberry pi toolchains. The CI container is given a specific bazelrc file that enables sandboxed compilation and testing, adds in verbose failures, and sets the Bazel server to start up in batch mode.
By using set -e, bash exits if any subcommand has a failure. This is necessary to stop the whole build in the event of a failure with one of the Docker execs.
Finally, I placed an exit trap so that this script stops the running Bazel container in the event of a failure.
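Putting those pieces together, the script is roughly the following; the image name, bazelrc path, and target patterns are illustrative:
#!/bin/bash
# A sketch of tools/ci/run_ci_tests.sh; names here are illustrative, not the repo's exact contents.
set -e

# Stop the bazel container if anything below fails (or when the script exits).
cleanup() { docker stop bazel_ci >/dev/null 2>&1 || true; }
trap cleanup EXIT

# Spin up the bazel build container in the background with the workspace mounted.
docker run -d --rm --name bazel_ci \
  -v "$(pwd)":/workspace -w /workspace \
  <dockerhub-user>/bazel tail -f /dev/null

# Test and build everything for x86.
docker exec bazel_ci bazel --bazelrc=tools/ci/ci.bazelrc test //src/...
docker exec bazel_ci bazel --bazelrc=tools/ci/ci.bazelrc build //src/... //deploy/...

# Cross compile for the raspberry pi toolchain.
docker exec bazel_ci bazel --bazelrc=tools/ci/ci.bazelrc build \
  --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi //src/... //deploy/...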
Limitations
This CI script is incredibly simple, and the build times reflect this. Without setting up caching, the script as implemented will build all targets under //src and //deploy for both x86 and armv7hf architectures. On this small project, that’s not a big deal, but on a large repository these build times can be very long.
If I wanted to be a little less verbose, I could use bazel’s example ci script, found here. This script is a bit more methodical, where it checks to see what files have changed and uses bazel query to determine what targets need to be rebuilt, then only builds/tests those targets.
I decided to take a look at resin.io recently. Being able to run containers on embedded devices vastly improves our ability to manage and maintain them.
I was wondering if it was possible to use the power of bazel to deploy fast, correct containers to my array of embedded devices that I have laying around. Turns out, you can, but there are a few limitations currently.
bazel rules for creating hermetic docker images for my software
bazel crosstool defined for armv7 (raspberry pi2 cpu)
resin_deploy script for automating the build/deploy to resin
Hello world software
I’ll start with a simple hello world cpp project to demonstrate functionality. It can be run as follows:
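Something along these lines (the target label is illustrative):
bazel run //src/hello_world:hello_cpp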
Bazel building local docker images
Now I can use bazel to run this hello world software within a docker image. Bazel has built-in rules for creating docker images, allowing us to build and mount a docker image into the docker runtime. The stock bazel cc_image() and py_image() rules are sufficient for an amd64 platform, but not for running my projects on arm devices, as the underlying base image for these rules was not built for the arm architecture. For this reason, I have to use the container_image() rule. This rule requires a base field, which is effectively the same as the FROM field in a standard dockerfile.
I’ve created a build target that creates this docker image. It is trivial to depend on any cpp/python projects that I have defined under the bazel workspace such that they are included in the docker image.
Loading and running these images can be accomplished with:
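Running the image target loads it into the local docker daemon under the bazel/ prefix, so this roughly amounts to (matching the target names used later in this post):
bazel run //deploy:hello_cpp_image
docker run --rm bazel/deploy:hello_cpp_image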
Cross compiling docker images
Dockerhub maintained bazel base images
In order to create a target that can be run either locally or cross compiled for embedded devices, I’ve created two docker base images that will be maintained by dockerhub. These images will be the base image for any cpp/python projects that I wish to create under this paradigm.
The amd64 image is based on Debian, and the raspberry_pi2 image is based on resin’s resin/armv7hf-debian:stretch image. Resin’s image allows us to emulate an armv7hf environment and prep a dockerfile to be run on the pi. Any additional software needed by the system is included in the install_prereqs.sh file, which is run by both dockerfiles on build. This helps keep both platforms in sync and reduces the number of files that I have to maintain. I have set up dockerhub to auto build these projects so that any committed changes automatically update the dockerhub images with the latest tag.
Cross compiling for the raspberry pi
The first step required for this project was to set up a cross compilation toolchain within bazel. This was covered in a previous blog post here.
Next, in the BUILD file I simply add a config_setting field to detect cpu type and a select statement in the container_image() rule to toggle between docker image bases.
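A sketch of that BUILD logic (the base repository names and attribute values here are illustrative; the bases are whatever the two dockerhub images above are pulled in as):
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

config_setting(
    name = "armv7hf",
    values = {"cpu": "rpi"},
)

container_image(
    name = "hello_cpp_image",
    base = select({
        ":armv7hf": "@raspberry_pi2_base//image",
        "//conditions:default": "@amd64_base//image",
    }),
    entrypoint = ["/hello_cpp"],
    files = ["//src/hello_world:hello_cpp"],
)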
Now, when I run bazel run --crosstool_top=//compilers/arm_compiler:toolchain --cpu=rpi deploy:hello_cpp_image, bazel will detect the cpu type, compile with the crosstool definition, select the proper container base image, and load the produced container into docker.
Deploying to an IoT device using resin.io
Resin’s cli allows us to deploy local docker images to a resin app by running resin deploy HelloImage bazel/deploy:hello_cpp_image.
Remembering all the steps to get my cpp project compiled and deployed can be tricky, so I wrote a script to automate the process.
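It’s essentially the bazel run and resin deploy commands from above wrapped into one script; a sketch:
#!/bin/bash
# Sketch of the resin_deploy helper: cross compile, load into docker, then deploy to resin.io.
set -e

APP_NAME="${1:-HelloImage}"

# Cross compile the image with optimized compiler parameters and load it into the local docker engine.
bazel run -c opt \
  --crosstool_top=//compilers/arm_compiler:toolchain \
  --cpu=rpi \
  //deploy:hello_cpp_image

# Deploy the locally loaded image up to the resin.io application.
resin deploy "${APP_NAME}" bazel/deploy:hello_cpp_image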
Running the helper script will compile the hello cpp image with optimized compiler parameters, load into the docker engine, and then deploy up to resin.io. Now I can scale my hello world to as many raspberry pi’s as I want!
Current Limitations
Bazel
Bazel’s pkg_tar() rule does not currently package the runtime deps along with the binary. This is most evident when trying to package python binaries: the hello world python package fails to run in a docker container when executed as the bazel built binary, because its runtime deps are missing from the image.
There is currently work under way to fix this limitation. In the meantime, I’ve directed my entrypoint to use python3 hello.py instead of the bazel binary.
Resin
Resin currently only allows one dockerfile to be run on each device at a time. Multi container support is one of their most requested features, but for now, I can get away with running multiple processes inside a container through the use of projects such as supervisord, Monit, or even running docker-in-docker. See update.
As far as I can tell, resin deploy only works against the cloud. If resin had a way to deploy locally in a quicker fashion, say to a local device, allowing for rapid development, it would go a long way toward making it fully functional. It sounds like they are working on this feature: https://github.com/resin-io/resin-cli/issues/613.
Update
Resin has released multicontainer support since this was posted. I am working on a follow up post to investigate multicontainer with bazel.
I set up a simple hello world cpp project so I could follow the bazel crosstool tutorial and run a bazel-built executable on a raspberry pi. First, I set up and built a simple project:
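A hypothetical minimal layout (the original project’s file and target names may differ):
# BUILD
cc_binary(
    name = "hello",
    srcs = ["hello.cc"],
)
where hello.cc is just a standard hello world main(), built with bazel build //:hello.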
Following the guide in the bazel wiki and modifying the bazel example project, I was able to cross compile and run my hello world example on a raspberry pi 2.
Copying the file to the pi shows that it executes nominally:
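That amounts to an scp and a remote run; the hostname and binary path here are examples:
scp bazel-bin/hello pi@raspberrypi.local:
ssh pi@raspberrypi.local ./hello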
Note that the repository may have a newer version than this post references. Next, in your .bazelrc file:
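A hedged example of what goes there (the config name is mine; point the crosstool label at wherever the toolchain lives in your workspace):
build:rpi --crosstool_top=//compilers/arm_compiler:toolchain
build:rpi --cpu=rpi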
You can then build with:
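For example, with the config sketched above:
bazel build --config=rpi //:hello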
Update May 2019
I have bumped the crosstool support up to the new bazel 0.25 crosstool syntax, and I have added a crosstool definition for aarch64. Right now I’m using a Linaro 5.3.1 toolchain, but I intend to update this to a newer Linaro version soon.