I've been inspired by the development of hyperlapse recording techniques over the past few years, and I've been itching to apply them to capture beautiful wilderness scenes while backpacking. Here's one of the first runs I captured this June at Ediza Lake in California under the watchful gaze of the Minarets:
There are several constraints that make a hyperlapse more difficult when backpacking, but the most obvious one is technique. Many existing hyperlapse strategies rely on using large, stable platforms like sidewalks or roads, but the backcountry is filled with rugged terrain. The current approach I'm using is an inverted hanging dolly design like this:
Several professional cable cams use a similar design, but they place a large drive motor and heavy battery packs out near the camera, requiring stronger guide lines. Since I'm carrying everything in my backpack, I need the rig to stay as light as possible.
I'm using a 5mm Dyneema rope as a stationary guide line because it's strong, lightweight, and has a very low stretch factor. (In contrast, 550 cord would be a poor choice because of how much it stretches.) This line is strung between two anchor points, such as trees or rocks, and routed twice in parallel about 12" apart to build a nice planar surface. It's held taut using simple ratchet straps and aluminum channeling.
Next, I used a 12" square aluminum plate as a makeshift cheeseplate, hanging it from the guide line on four pulleys so it could freely move between the two anchor points. To move the cheeseplate at a constant speed along the guide line, I used a thin 1mm Dyneema rope connected to a very slow 2 RPM motor. I connected this motor to a simple controller and a 2100mAh LiPo battery pack, along with a voltage monitor for safety. LiPo batteries deserve lots of respect, even more so in the middle of a California drought.
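A 2 RPM motor winding a thin drive rope sets the dolly's travel speed directly. As a quick sanity check, here's the arithmetic as a minimal sketch; the spool diameter below is an assumption for illustration, since the actual spool size isn't given above:

```python
import math

# Linear dolly speed from the drive motor. The 2-inch (0.0508 m)
# spool diameter is a hypothetical value, not from the build notes.
rpm = 2.0
spool_diameter_m = 0.0508  # assumed

circumference = math.pi * spool_diameter_m  # rope taken up per revolution
speed_m_per_min = rpm * circumference       # meters of travel per minute
print(f"dolly speed: {speed_m_per_min:.3f} m/min")
```

At these assumed numbers the dolly crawls along at roughly a third of a meter per minute, which is the kind of pace you want when each exposure is seconds apart.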
Finally, I used a standard tripod head to hang a Canon 5D Mark III upside down from the cheeseplate. For the hyperlapse above, I used the excellent NIKKOR 14-24mm f/2.8G ED Lens with an adapter, and Magic Lantern with the built-in Intervalometer and Auto ETTR modules.
Here's a behind-the-scenes video showing the entire rig in action:
And here's the equipment list, along with carried weight:
| Item | Weight (lbs) | Price |
|---|---|---|
| 5mm Dyneema guide rope, 100 meters | 3.21 | $188 |
| 1mm Dyneema drive rope, 500 meters | 0.22 | $26 |
| Pulleys | 2.00 | $44 |
| Aluminum cheeseplate | 0.53 | $15 |
| Tripod head | 1.14 | $17 |
| Aluminum channeling | 0.60 | $20 |
| Ratchet straps | 2.04 | $16 |
| 2 RPM motor, mount, controller | 0.86 | $110 |
| 2100mAh LiPo battery | 0.53 | $25 |
| Misc hardware (washers, bolts, etc.) | 0.25 | $30 |
| **Total** | **11.38** | **$491** |
The core rig clocks in under 11.5 pounds, plus another 4.5 pounds for camera and lens gear which varies based on taste. Overall, 16 pounds is manageable if you go ultralight on other parts of your pack, or have someone to share the load with, like I did. (My brother Pat helped design and carry parts of the rig.)
So in summary, it's possible to build a backpacking hyperlapse rig for a very reasonable price, especially considering that smaller time-lapse rigs sell for double that.
The initial footage above was captured in early June, but it took over two months of spare weekends to produce the final results. First, I started with the raw footage and tried doing naïve alignment based on the stationary mountain range:
That looks aligned, but it's still pretty bumpy. Barrel distortion correction to the rescue! But doh, I used a Nikon lens on a Canon body, and there's no existing calibration data for that combination. Even if I borrowed the lens again, I had no EXIF data from the lens to know what focal length I had used in the field.
I had no choice but to derive the lens correction equation constants by hand. The key insight was realizing that the post-correction Euclidean distance between two static points should remain constant between frames. Working backwards from hand-picked points, I brute-forced the search space, looking for values that minimized the standard deviation of that distance across all frames. I finally came up with my magic constants:
r_corr = 0.09198·r³ + 0.00275·r²
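The brute-force search can be sketched in a few lines of numpy. This is a minimal reconstruction of the idea, not the author's actual script: the landmark coordinates below are synthetic placeholders, and the search grid bounds are assumptions chosen only to make the sketch runnable.

```python
import itertools
import numpy as np

def undistort(pts, a, b):
    """Apply a radial correction of the form r_new = r + a*r**3 + b*r**2
    to points in normalized image coordinates (origin at image center)."""
    r = np.linalg.norm(pts, axis=-1, keepdims=True)
    scale = 1.0 + a * r**2 + b * r  # multiplying r by this gives r + a*r^3 + b*r^2
    return pts * scale

# Hand-picked positions of the same two static landmarks across frames.
# Shape: (num_frames, 2 points, xy). Synthetic numbers for illustration.
frames = np.array([
    [[0.61, 0.20], [-0.55, 0.18]],
    [[0.60, 0.21], [-0.56, 0.19]],
    [[0.62, 0.19], [-0.54, 0.17]],
])

# Grid-search (a, b), scoring by how constant the point-to-point
# distance stays across frames after correction.
best = None
for a, b in itertools.product(np.linspace(0, 0.2, 41), np.linspace(0, 0.01, 21)):
    corrected = undistort(frames, a, b)
    dists = np.linalg.norm(corrected[:, 0] - corrected[:, 1], axis=-1)
    score = dists.std()  # lower = distance more constant across frames
    if best is None or score < best[0]:
        best = (score, a, b)

print("best (a, b):", best[1], best[2])
```

With real hand-picked points from many frames, the pair of constants minimizing that standard deviation is the calibration you couldn't get from EXIF.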
And hey, that's looking much better:
Next, instead of manually aligning hundreds of frames, I used those known points along with a convolution approach to automatically derive all the other alignment data. The final step was putting together a rawtherapee processing template, processing all the frames, and running a cropping pass and one last alignment pass. You've probably already seen the final result above.
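The automatic alignment step can be illustrated with FFT-based correlation. The sketch below uses phase correlation, a common frequency-domain variant of the convolution approach described above (not necessarily the exact method used here), on a synthetic frame pair:

```python
import numpy as np

def find_shift(ref, img):
    """Estimate the integer (dy, dx) translation aligning img to ref
    using phase correlation (FFT-based cross-correlation)."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-8)).real  # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image back to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Synthetic demo: shift a random "frame" by (3, -5) and recover it.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(find_shift(ref, img))  # → (3, -5)
```

In practice you'd run this (or OpenCV's equivalent, `cv2.phaseCorrelate`) on each frame against a reference, then feed the recovered offsets into the cropping and final alignment passes.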
The entire end-to-end processing chain was done with open-source software: rawtherapee for processing, some OpenCV, numpy, and scipy for alignment work, and ImageMagick and libav for conversions. Thanks to all those projects for making this possible!