Jeff Sharkey

Building a Hyperlapse rig for backpacking

I’ve been inspired by the development of hyperlapse recording techniques over the past few years, and I’ve been itching to apply them to capture beautiful wilderness scenes while backpacking. Here’s one of the first runs I captured this June at Ediza Lake in California under the watchful gaze of the Minarets:

There are several constraints that make a hyperlapse more difficult when backpacking, but the most obvious one is technique. Many existing hyperlapse strategies rely on using large, stable platforms like sidewalks or roads, but the backcountry is filled with rugged terrain. The current approach I’m using is an inverted hanging dolly design like this:

Several professional cable cams use a similar design, but they place a large drive motor and heavy battery packs out near the camera, requiring stronger guide lines. Since I’m carrying everything in my backpack, I need the rig to stay as light as possible.

I’m using a 5mm Dyneema rope as a stationary guide line because it’s strong, lightweight, and has a very low stretch factor. (In contrast, 550 cord would be a poor choice because of how much it stretches.) This line is strung between two anchor points, such as trees or rocks, and routed twice in parallel about 12” apart to build a nice planar surface. It’s held taut using simple ratchet straps and aluminum channeling.

Next, I used a 12” square aluminum plate as a makeshift cheeseplate, hanging it from the guide line on four pulleys so it could freely move between the two anchor points. To move the cheeseplate at a constant speed along the guide line, I used a thin 1mm Dyneema rope connected to a very slow 2 RPM motor. I connected this motor to a simple controller and a 2100mAh LiPo battery pack, along with a voltage monitor for safety. LiPo batteries deserve lots of respect, even more so in the middle of a California drought.
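For a rough feel of the travel speed: with a 2 RPM motor, the dolly speed depends only on the circumference of the drive spool. The spool diameter below is an assumption for illustration; it isn't listed in the build:

```python
import math

# Assumed drive spool diameter; the actual spool size isn't listed above.
rpm = 2
spool_diameter_m = 0.05

# Each revolution takes up one circumference of the 1mm drive rope.
speed_m_per_min = rpm * math.pi * spool_diameter_m
print("dolly speed: %.2f m/min" % speed_m_per_min)
```

So a 5cm spool would crawl the cheeseplate along at roughly a third of a meter per minute, which is the kind of glacial pace a hyperlapse wants.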

Finally, I used a standard tripod head to hang a Canon 5D Mark III upside down from the cheeseplate. For the hyperlapse above, I used the excellent NIKKOR 14-24mm f/2.8G ED Lens with an adapter, and Magic Lantern with the built-in Intervalometer and Auto ETTR modules.

Here’s a behind-the-scenes video showing the entire rig in action:

And here’s the equipment list, along with carried weight:

Item Weight (lbs) Price
5mm Dyneema guide rope, 100 meters 3.21 $188
1mm Dyneema drive rope, 500 meters 0.22 $26
Pulleys 2 $44
Aluminum cheeseplate 0.53 $15
Tripod head 1.14 $17
Aluminum channeling 0.6 $20
Ratchet straps 2.04 $16
2 RPM motor, mount, controller 0.86 $110
2100mAh LiPo battery 0.53 $25
Misc hardware (washers, bolts, etc) 0.25 $30
Total 11.38 $491

The core rig clocks in at under 11.5 pounds, plus another 4.5 pounds for camera and lens gear, which varies based on taste. Overall, 16 pounds is manageable if you go ultralight on other parts of your pack, or have someone to share the load with, like I did. (My brother Pat helped design and carry parts of the rig.)

So in summary, it’s possible to build a backpacking hyperlapse rig for a very reasonable price, considering that smaller time lapse rigs are double that price.

Post-production work

The initial footage above was captured in early June, but it took over two months of spare weekends to produce the final results. First, I started with the raw footage and tried doing naïve alignment based on the stationary mountain range:

That looks aligned, but it’s still pretty bumpy. Barrel distortion correction to the rescue! But doh, I used a Nikon lens on a Canon body, and there’s no existing calibration data for that combination. Even if I borrowed the lens again, I had no EXIF data from the lens to know what focal length I had used in the field.

I had no other choice but to derive the lens correction equation constants by hand. The key insight was to realize that the post-correction Euclidean distance between two static points should remain constant between frames. Working backwards from hand-picked points, I brute-forced the search space, looking for values that minimized the standard deviation across all frames. I finally came up with my magic constants:

rcorr = 0.09198·r³ + 0.00275·r²
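The brute-force search itself can be sketched like this. The point pairs below are made up for illustration, and the sketch assumes the corrected radius is r plus the correction term above:

```python
import math
import statistics

# Hand-picked pairs of static points per frame (normalized image coords,
# centered on the lens axis). These sample values are made up for illustration.
frames = [
    ((0.10, 0.20), (0.60, 0.30)),
    ((0.11, 0.21), (0.61, 0.31)),
    ((0.09, 0.19), (0.59, 0.29)),
]

def undistort(p, k3, k2):
    """Push a point outward by the correction term k3*r^3 + k2*r^2."""
    x, y = p
    r = math.hypot(x, y)
    if r == 0:
        return p
    scale = (r + k3 * r**3 + k2 * r**2) / r
    return (x * scale, y * scale)

def spread(k3, k2):
    """Stddev of the post-correction distance between the two static
    points across frames; the right constants keep that distance flat."""
    dists = [math.dist(undistort(a, k3, k2), undistort(b, k3, k2))
             for a, b in frames]
    return statistics.pstdev(dists)

# Brute-force the (k3, k2) search space for the smallest spread.
best = min(((k3 / 1e5, k2 / 1e5)
            for k3 in range(0, 20000, 100)
            for k2 in range(0, 1000, 25)),
           key=lambda ks: spread(*ks))
```

With real hand-picked points across hundreds of frames, the minimum of `spread` lands on the magic constants.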

And hey, that’s looking much better:

Next, instead of manually aligning hundreds of frames, I used those known points along with a convolution approach to automatically derive all the other alignment data. The final step was putting together a rawtherapee processing template, processing all the frames, then a cropping pass and one final alignment pass. You’ve probably already seen the final result above.
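The core of that correlation-style alignment step can be sketched as a brute-force patch search. This is a minimal pure-Python sketch (the real pipeline used OpenCV/numpy, and `best_offset` is an illustrative name):

```python
def best_offset(patch, image):
    """Slide `patch` over `image` (both 2-D lists of floats) and return
    the (row, col) offset minimizing sum-of-squared differences; a
    brute-force sketch of the correlation-based alignment step."""
    ph, pw = len(patch), len(patch[0])
    ih, iw = len(image), len(image[0])
    best_err, best_rc = None, (0, 0)
    for r in range(ih - ph + 1):
        for c in range(iw - pw + 1):
            err = sum((image[r + i][c + j] - patch[i][j]) ** 2
                      for i in range(ph) for j in range(pw))
            if best_err is None or err < best_err:
                best_err, best_rc = err, (r, c)
    return best_rc
```

Take a patch around a known static point in one frame, search for it in the next frame, and the offset between matches gives you the per-frame alignment shift.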

The entire end-to-end processing chain was done with open-source software: rawtherapee for processing, some OpenCV, numpy, and scipy for alignment work, and ImageMagick and libav for conversions. Thanks to all those projects for making this possible!

Deploying a pure-IPsec PKI VPN server for Android devices

Android offers built-in support for a handful of VPN configurations, including PPTP, L2TP/IPsec, and starting in ICS, pure-IPsec (without requiring L2TP).

Both pre-shared key (PSK) and public-key infrastructure (PKI) configurations are supported, but today we’ll be focusing on “IPsec Xauth RSA,” which uses PKI to connect. With good key management hygiene, certificates are much more secure than PSK, since you only need to share public keys and can keep private keys secret.

Today we’re going to turn an off-the-shelf Debian server into a pure-IPsec certificate-based server that our Android device can connect to. We’ll be cooking up our own certificates from scratch, and using racoon to handle key exchange and SA management. I’m assuming that our server has a static, publicly routable IPv4 address.

First we’ll start by installing IPsec tools and racoon:

# apt-get install ipsec-tools racoon

Generating PKI certificates

Next we’ll generate the certificates needed to drive our PKI configuration. This includes a new certificate authority (CA), a server certificate, and a client certificate. To make the configuration easier, you might want to edit some of the defaults in /etc/ssl/openssl.cnf:

countryName_default = US
stateOrProvinceName_default = California
0.organizationName_default = Setec

And let’s generate our certificates over near racoon:

$ mkdir /etc/racoon/certs
$ chmod 700 /etc/racoon/certs
$ cd /etc/racoon/certs

First let’s create our CA:

$ openssl req -new -x509 -extensions v3_ca -out myca.crt -keyout myca.key -days 3650

You can hit “enter” through most of the prompts, but be sure to provide good passwords and a unique Common Name for each certificate. Next let’s generate our server certificate and sign it with our CA:

$ openssl req -new -keyout myserver.key -out myserver.csr -days 3650
$ openssl x509 -req -in myserver.csr -CA myca.crt -CAkey myca.key -CAcreateserial -out myserver.crt

Next, let’s decrypt the server private key so that racoon can access it:

$ chmod 600 myserver.key
$ openssl rsa -in myserver.key -out myserver.key

And finally let’s generate a client certificate for our phone and sign it.

$ openssl req -new -keyout myphone.key -out myphone.csr -days 3650
$ openssl x509 -req -in myphone.csr -CA myca.crt -CAkey myca.key -CAcreateserial -out myphone.crt

While we’re working with certificates, let’s export our client public and private keys, along with our CA, into a PKCS #12 file, which can be easily imported by Android devices:

$ openssl pkcs12 -export -in myphone.crt -inkey myphone.key -certfile myca.crt -name myphone -out myphone.p12

I’d strongly recommend protecting it with an export password, since we’ll be pushing it to the SD card later, which is world-readable on most Android devices. (Unless you’ve enabled Settings > Developer options > Protect USB storage.)

Configuring server

Now that our certificates are ready, we can configure /etc/racoon/racoon.conf:

path certificate "/etc/racoon/certs";

timer {
	# NOTE: varies between carriers
	natt_keepalive 45 sec;
}

listen {
}

remote anonymous {
	exchange_mode aggressive,main;
	my_identifier asn1dn;

	certificate_type x509 "myserver.crt" "myserver.key";
	ca_type x509 "myca.crt";
	peers_certfile x509 "myphone.crt";

	passive on;
	proposal_check strict;
	generate_policy on;
	nat_traversal force;

	proposal {
		encryption_algorithm aes256;
		hash_algorithm sha1;
		authentication_method xauth_rsa_server;
		dh_group modp1024;
	}
}

sainfo anonymous {
	encryption_algorithm aes256;
	authentication_algorithm hmac_sha1;
	compression_algorithm deflate;
}

log info;

mode_cfg {
	auth_source system;
	conf_source local;
	accounting system;
}
This is a fairly typical configuration, but there are a few things worth noting:

First, we’ve carefully chosen our natt_keepalive value, which is the frequency at which our server sends UDP keepalive packets. When our client connects through a NAT, the NAT allocates a public-facing UDP port to receive packets from our server. If no packets are received within a specific timeout, the NAT reclaims that port for allocation to other clients.

So we have a tradeoff: if our keepalive is too short, we waste battery by sending unnecessary keepalive packets; if it’s too long, the port will be reclaimed by the NAT, disconnecting us. To help figure out the best tradeoff, I wrote a tool to empirically derive UDP NAT timeouts, and found these values for popular carrier networks:

Network UDP NAT timeout
Verizon 4G LTE 60 sec
T-Mobile HSDPA 90 sec
AT&T HSDPA 120 sec

(When setting the natt_keepalive value for a T-Mobile device, I halved the timeout to give plenty of headroom, which explains the 45 sec value above.)
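The measurement idea boils down to probing with increasing idle delays. Here's a sketch, assuming a cooperating echo server (hypothetical) on the public side that replies after the requested delay; `mapping_alive` and `find_timeout` are illustrative names, not the actual tool:

```python
import socket

def mapping_alive(server, port, delay, timeout=5.0):
    """Send a probe through the NAT and wait for the reply; the far-side
    server is assumed to reply after `delay` seconds. If the reply makes
    it back, the NAT's UDP port mapping survived at least that long."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(delay + timeout)
    try:
        sock.sendto(("probe %d" % delay).encode(), (server, port))
        sock.recvfrom(1024)
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()

def find_timeout(server, port, delays):
    """Walk increasing delays until replies stop making it back."""
    longest = 0
    for delay in delays:
        if not mapping_alive(server, port, delay):
            break
        longest = delay
    return longest
```

Running `find_timeout` with delays like `[30, 60, 90, 120, 180]` brackets the NAT's reclaim interval for a given carrier.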

Second, it’s important to note that we’ve strictly limited the acceptable algorithms and key sizes for both IKE phases 1 and 2 to the strongest that ICS supports. Based on NIST recommendations, AES-256 should be strong enough to protect data beyond 2030, but 1024-bit asymmetric keys and SHA-1 hashes aren’t nearly as robust. If you’re building Android yourself, you could include stronger Diffie-Hellman groups and hashing algorithms.

Next, let’s add a NAT on the server so our Android device can reach the Internet when connected. We need to enable IPv4 forwarding, and create a Source NAT for all non-ESP traffic leaving the server:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING ! -p esp -o eth0 -j SNAT --to-source

And let’s create a local user account for racoon to authenticate against:

# useradd -s /sbin/nologin setec
# passwd setec

Finally, let’s restart racoon to pick up our config changes:

# /etc/init.d/racoon restart

Configuring phone

Now that our server is ready, we can configure our Android device. First let’s push the myphone.p12 bundle we created earlier:

adb push myphone.p12 /sdcard

Then we can import the bundle using Settings > Security > Install from storage. You should confirm that it shows the client key, client certificate, and CA certificate we packed earlier.

Next let’s configure the VPN client in Settings > More > VPN. Our VPN type is “IPsec Xauth RSA”, and we’ll use the client and CA certificates we just installed. You’ll also want to configure a trusted DNS server under advanced options. (Otherwise Android will use the DNS server obtained from the local network, which could live in a non-routable private network.)

Finally, we can connect to the VPN with the username and password we defined earlier.

Final notes

Configuring a certificate-based IPsec VPN is complex, and error messages along the way can be cryptic and frustrating, but hopefully this guide is enough to help you get a VPN server running.

Along the way, I found the L2TP/IPsec Gentoo wiki guide to be helpful, including commands for generating Android-compatible certificates, and details on configuring SNAT. Also an excellent summary of key length recommendations, citing references from NIST and others.

If you’re interested in lower-level details, the IPsec HOWTO has sections on kernel configuration and racoon, and generating X.509 certificates.

Android SurfaceFlinger tricks for fun and profit

Some Android phones are now shipping with OLED displays, such as the Nexus One, the Droid Incredible, and the Samsung Galaxy. Organic LED displays have separate pixel elements for each color channel (red, green, and blue), and each channel has a different efficiency.

Take, for example, the Nexus One. If powering only the red pixels at full intensity draws a current “i”, then powering all green pixels draws “1.5i”, and all blue pixels “2i”. (These ratios are derived from empirical measurements, and don’t hold in all cases.) Also, it’s worth noting that OLED displays don’t have backlights like LCD, meaning that darker colors draw less power.

If you could power only the red pixels you could save quite a bit of power.

So I started poking around SurfaceFlinger, the low-level window compositor on Android. I brushed off my OpenGL skills, and after a few hours I had a simple proof-of-concept. A couple hours later I had several filters between red-only and full-color:

I plugged the phone into an industrial power meter which takes very accurate current measurements, and started looking at the power needed for various color modes:

Baseline (mA) Default (mA) Red-only (mA) Green-only (mA) Blue-only (mA) Amber (mA) Salmon (mA)
Launcher 86.4 148.4 40.0 58.4 86.5 64.3 66.3
Browser 86.4 344.5 96.7 145.2 194.0 156.5 148.7
Maps 86.4 286.5 95.1 131.9 156.7 139.0 132.9
Settings 86.4 41.0 14.3 19.1 19.9 20.3 20.5
Email 86.4 337.1 94.4 142.4 187.1 153.6 146.2
Gallery 86.4 140.4 78.6 83.1 90.1 90.3 87.8

Average % of Default, OLED-only 35% 46% 56% 49% 48%
Best-case % of Default, including overall system baseline 42% 54% 65% 56% 55%

All measurements taken in airplane mode with GPS disabled. “Baseline” is the current used when showing Launcher with a SurfaceFlinger mask causing all pixels to be rendered black. (That is, everything along the pipeline was being exercised except the actual OLED pixels.)
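The summary percentages can be reproduced from the table. A sketch of the arithmetic, assuming "OLED-only" means the ratio of filtered to default current per screen, and "best case" means the lowest such ratio once the system baseline is added back:

```python
# Currents in mA, copied from the table above.
baseline = 86.4
default = {"Launcher": 148.4, "Browser": 344.5, "Maps": 286.5,
           "Settings": 41.0, "Email": 337.1, "Gallery": 140.4}
red = {"Launcher": 40.0, "Browser": 96.7, "Maps": 95.1,
       "Settings": 14.3, "Email": 94.4, "Gallery": 78.6}

# "Average % of Default, OLED-only": mean of red/default across screens.
oled_only = sum(red[k] / default[k] for k in default) / len(default)

# "Best-case % of Default, including baseline": add the system baseline
# back to both sides, then take the best (lowest) ratio across screens.
best_case = min((red[k] + baseline) / (default[k] + baseline) for k in default)

print("OLED-only average: %.0f%%" % (oled_only * 100))
print("Best case incl. baseline: %.0f%%" % (best_case * 100))
```

The same computation over the other filter columns yields the Green-only through Salmon percentages.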

Filtering to show only red pixels requires just 35% of the original OLED panel current, on average. Adding back the baseline current, the best case overall is about 42% of the original system current, effectively doubling the battery life. Also, showing only red pixels doubles as an awesome night vision mode, perfect for astronomy. 🙂

If you’d like some other colors added back in, the amber and salmon filters can help, while still offering about 56% of the original system current. It’s also worth noting that the Nexus One OLED display uses a PenTile pixel layout, giving it twice as many directly-addressable green pixels as red and blue. Thus the Green-only filter results in the visually sharpest text.

(The video above is also available on YouTube.)

The actual SurfaceFlinger patch is straightforward, mostly living in LayerBase::drawWithOpenGL(). There is another patch to Development that adds options in Dev Tools for controlling it at runtime. It also reads from the sf.render_effect system property at boot.

The patches are contributed to the AOSP under the Apache 2.0 license, and cleanly apply to the freshly released froyo branch. Feel free to integrate them into your own tree to experiment, but note that you’ll need SurfaceFlinger to render in OpenGL mode, which might not be possible without specific hardware drivers.

The phone isn’t any less responsive when using these filters, but visually it can take time for your eyes to adjust. It’s more of a geeky hack, but hopefully you’ll find it useful.

iTunes DACP pairing hash is broken!

Last year I reverse engineered the iTunes DACP protocol and wrote an Android client that allowed you to remote control your iTunes desktop player from any Android device. (The code is open-sourced here, but I haven’t had the time to update it for quite a while now.)

You might remember that there was a mysterious MD5 hash floating around the pairing process. Specifically, when you enter a pin code on the desktop iTunes client, it combines that code with the MDNS Pair value and hashes them. It then asks the device “does this match what you expected?” Because I wrote the Android client, I would naïvely always answer “yep, they match.”

Well, yesterday Michael Paul Bailey figured out the mystery behind that MD5 hash. 🙂 I had tried brute-forcing various combinations of the pairing data, but never succeeded. It turns out that it boils down to just concatenating them together, with each pin code digit followed by a null character.

He posted the full C++ code over on his blog, and I boiled it down to some spiffy Python here:

import StringIO, md5

pair = "4EA92B4292701F31"
passcode = "8222"
expected = "BEFF520F8280591AC0BBCB83B468FAA5"

# Concatenate the pairing value with each pin digit followed by a null byte
merged = StringIO.StringIO()
merged.write(pair)
for c in passcode:
    merged.write(c)
    merged.write("\x00")

found = md5.new(merged.getvalue()).hexdigest()

print "expected =", expected
print "found    =", found.upper()

So what does this mean? Previously, we could write DACP clients easily because they could always return “yep, the MD5 matches” without even checking it. (This is why you could use any 4-digit pin code you wanted.)

Now, with this algorithm, we can do more than just check pin codes–we can write DACP servers that can pair with the original iPhone/iPod Remote app. For example, last year I wrote a DACP server for Amarok, but never got around to releasing it because the pairing process was very ugly. Now I need to find some time to polish and release it. 🙂

Google I/O Schedule App

Tomorrow at Google I/O I’ll be presenting some tips on how developers can save battery life when writing Android apps. I’m really stoked about all the stuff going on at I/O this year. 🙂

Late last week, Virgil Dobjanschi, who you might remember from ADC1, threw out the idea of writing an Android app for Google I/O that would have all the session details. Both of us brainstormed on Friday and came up with a quick design, and hacked through the holiday weekend to get an app working.

We just released the app on Market a few minutes ago, so go check it out! It lets you do all sorts of things, like star sessions you’re interested in, and search across the entire full-text abstracts for all sessions. Plus it includes handy maps to help you find the right rooms, and links directly to Google Moderator for the sessions using it.

The source for the whole app is also available on Google Code, so check that out if you’re looking for a peek behind-the-scenes. Some of the code is still a bit rough, but there’s some good example code in there. 🙂 So grab the app before you head down to Moscone West tomorrow morning, and have a great couple of days at I/O!

Forecast widget for Android 1.5 (with source!)

Over the past few months I’ve been working on the new AppWidget framework that was released as part of the Android 1.5 SDK. I wanted to write a really in-depth widget and share it, so I decided to write a forecast widget.

It offers multiple configurations (both a 2×1 and tiny 1×1), and updates four times daily with the latest forecasts. You can also insert multiple widgets to keep track of the weather in different locations. And tapping the widget brings up a detailed forecast for the next few days. Here are some quick screenshots:

First, here’s the source code under an Apache 2.0 license. And you can grab an APK here.

Now, let’s talk about some of the details. We’re storing widget configuration and cached forecast data in a ContentProvider called ForecastProvider. Pretty simple stuff, and it offers two handy Uris: pick all forecasts for a widget using /appwidgets/[id]/forecasts, and only the most-current forecast using /appwidgets/[id]/forecast_at/[time]. These come in handy later for the update process and details dialog. Also, WebserviceHelper performs most of the backend work of parsing XML returned from the API, and stuffing the forecasts and alerts into the ContentProvider.

As for updates, we maintain our own schedule that wakes us up each day at 5:50AM, 11:50AM, 5:50PM, and 11:50PM. These times were mostly chosen to be equally spread out across the day, and to be ready before your 6AM alarm goes off. Notice I said your 6AM alarm, lol. The math also works out if you set your alarm for noon. 😉
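The schedule logic is simple enough to sketch. This is Python for brevity (the actual app is Java), and `next_update` is an illustrative name:

```python
from datetime import datetime, time, timedelta

# The four daily wakeups described above.
UPDATE_TIMES = [time(5, 50), time(11, 50), time(17, 50), time(23, 50)]

def next_update(now):
    """Return the next scheduled wakeup strictly after `now`."""
    for t in UPDATE_TIMES:
        candidate = datetime.combine(now.date(), t)
        if candidate > now:
            return candidate
    # Past the last slot today; wrap to tomorrow's first slot.
    return datetime.combine(now.date() + timedelta(days=1), UPDATE_TIMES[0])
```

On Android this would feed an AlarmManager alarm rather than a loop, but the slot selection is the same.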

Similar to the Wiktionary example, we handle our updates in UpdateService because we’re probably doing API queries that could expose our AppWidgetProvider to the ANR timeout. However, we handle things slightly differently because we’re pushing unique updates for each widget. We push updates like this:

UpdateService.requestUpdate(new int[] { mAppWidgetId });
startService(new Intent(this, UpdateService.class));

Which is actually doing a synchronization dance behind the scenes to make sure we only ever have a single thread running updates. Multiple updates are just added to an internal queue. Once the update thread has cleared its work queue, the thread and Service terminate gracefully. This is very similar to how the Calendar widget handles its updates. During the update passes, we’re also a good citizen and only update forecasts if the data is more than 3 hours old. (We’re doing our part to keep the boot process spiffy and fast.)
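That synchronization dance can be sketched like this, in Python for brevity; the class and method names are illustrative, not the actual app's Java code:

```python
import queue
import threading

class UpdateWorker:
    """Single-worker update queue: requests enqueue work, and at most one
    worker thread drains the queue, stopping when it runs dry."""

    def __init__(self, update_fn):
        self._update_fn = update_fn
        self._queue = queue.Queue()
        self._lock = threading.Lock()
        self._running = False
        self._thread = None

    def request_update(self, widget_ids):
        for wid in widget_ids:
            self._queue.put(wid)
        # Only ever spin up a single worker; extra requests just enqueue.
        with self._lock:
            if not self._running:
                self._running = True
                self._thread = threading.Thread(target=self._run)
                self._thread.start()

    def _run(self):
        while True:
            with self._lock:
                if self._queue.empty():
                    # Work queue drained: terminate gracefully, like the
                    # Service stopping itself.
                    self._running = False
                    return
            self._update_fn(self._queue.get())

    def join(self):
        if self._thread is not None:
            self._thread.join()
```

The flag check and the empty check both happen under the same lock, so a request arriving just as the worker drains either gets picked up by the current thread or spawns a fresh one.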

The one final thing to mention is that when we’re building the PendingIntents for when you click on widgets, we’re using the ContentProvider Uri for the data field. This is because PendingIntents don’t keep unique extras bundles, and the details dialog needs a way of telling which widget was selected.

This is a pretty complex example, and it exercises most everything you’ll encounter when writing a widget. Not to mention it actually works and provides useful information! The data comes from the National Weather Service NDFD API, which offers awesome data under a public domain license. And the forecast icons came from the Tango Desktop Project, also under a public domain license.

Modifying the Android logcat stream for full-color debugging

I’ve been keeping busy writing all sorts of fun stuff lately, but a few weeks ago I was really fighting with Android’s logcat debugging stream.  It dumps out tons of useful information, but it’s easy to get lost in the flood of text.

So I whipped up a quick Python script that reformats the logcat output into a colorful stream that is much easier to visually follow.

One feature I really like is that it allocates unique colors for each “tag” used.  This makes it really easy to visually separate dozens of tags into their source apps, and makes it easy to pick your app out in the crowd.  This was inspired by the irssi nickcolor script, and the best way to explain is using an example:

There, isn’t that better? Just pipe your adb logcat output through the Python script to get started, or you can run the script directly to invoke adb:

$ adb logcat | ~/
$ ~/

To keep things simple, it assumes you’re using an ANSI-compatible terminal (most xterms are fine), and it uses a quick hack to detect your column width for wrapping.
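The per-tag color trick boils down to a few lines. A minimal sketch (the regex assumes logcat's "brief" output format, and `color_for`/`colorize` are illustrative names, not the script's actual functions):

```python
import re

# ANSI foreground color codes to cycle through, one per tag.
COLORS = [31, 32, 33, 34, 35, 36]
assigned = {}

def color_for(tag):
    # Hand out a stable color per tag, cycling when we run out,
    # like the irssi nickcolor trick.
    if tag not in assigned:
        assigned[tag] = COLORS[len(assigned) % len(COLORS)]
    return assigned[tag]

# Matches logcat "brief" lines like: D/MyApp(  123): some message
LINE = re.compile(r"^([VDIWE])/([^(]+)\(\s*(\d+)\): (.*)$")

def colorize(line):
    m = LINE.match(line)
    if not m:
        return line
    level, tag, pid, msg = m.groups()
    tag = tag.strip()
    return "%s/\033[%dm%-16s\033[0m %s" % (level, color_for(tag), tag, msg)
```

Because the tag-to-color mapping is built in arrival order and never changes during a session, the same app keeps the same color for the whole debugging run.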

It only took about 30 minutes to write up, but it’s already saved me more than that in lost debugging time.  Feel free to use it yourself, and improve on it–here’s the source code released under an Apache 2.0 license.

OilCan: Greasemonkey on steroids for Android

So I’ve been rushing to wrap up some Android side projects, and I’d like to get them out there before I start my new job tomorrow. OilCan is Greasemonkey on steroids for Android. It lets you customize any website by inserting JavaScript to change the website and help it reach into the Android world using intents.

Vimeo is being a bit odd with videos that don’t have sound. Start playing the video, then drag the seek bar slightly to bring it back to normal speed. Or let it play through the video fast and press play again to watch at normal speed.

Using intents to call other Android apps is really powerful, and opens the door to all sorts of web-based apps. For example, you can make a JavaScript call to scan a barcode, pick a contact, or launch into Maps or other Android apps. You really have to peek at this video to get an idea of what it does:

There is an OilCan site with more details about the Userscript format and security model. Check out the source dump for OilCan, or grab a ready-to-run OilCan APK.

Greasemonkey scripts are known for customizing websites to your personal tastes, and this can really help when working on a small screen. One of the scripts in the video above trims away extra columns and margins on Wikipedia pages, giving it more screen real estate.

There are thousands of GreaseMonkey scripts, like one that puts favicons into Google Reader, or one that wraps Google search results into two columns.

OilCan is different from the efforts of PhoneGap, which focuses on providing GPS, vibration, and accelerometer access to webapps.

GroupHome: organize your Android apps into groups

So I’ve been rushing to wrap up some Android side projects, and I’d like to get them out there before I start my new job tomorrow.

GroupHome is an app that organizes all the apps you’ve installed on your phone. It automatically groups together apps using the categories shown in Android Market. The “all apps” drawer on my homescreen has become pretty cluttered, and this grouping approach helps you find apps faster.

Oh, and one feature I really like is that you can long-press on an app to uninstall it or view its details.

I wrote GroupHome in about 3 days last week, so it’s still a bit rushed and still rough around the edges. The three remaining things are full-text search, remembering expanded/collapsed groups on close, and moving the static JSON category string to a server.

Check out the source dump for GroupHome, or grab a ready-to-run GroupHome APK.

Leveraging the Android Emulator

It sounds like preorders for the T-Mobile G1 have been flying off the shelves even before it’s available in stores. Also, it’s been rumored that the phone will only be sold in 3G areas.

Added together, these facts mean it might be hard for developers to get their hands on devices, especially if the G1 becomes a hot holiday item. Geeks are known for their superhuman ability to stand in line for hours on end, but this might not be enough. 🙂 If you don’t get your hands on a device, it’s important that you leverage the emulator to best reflect an actual G1 experience:

Showing things in actual size on your screen. I’ve seen several apps with touch targets that would be almost impossible to trigger in real life. Bigger targets are always better, and your users will thank you. To give you an idea, the home screen icons are 48px square, and default list items are 64px high. Anything below 48px is going to be pretty hard for fingers to hit. (Side note: you should be using device independent pixels, or dip, instead of raw pixel values–they will automatically scale to future devices with different DPIs.)

The phone dimensions are 115mm tall by 55mm wide. However, the emulator shows up double that size on my Linux desktop and 24″ monitor. This makes it hard to judge how finger-friendly your interface is. Thankfully the emulator provides a nice switch to solve this:

./emulator -scale 0.5

This makes the emulator 50% of its original size on my desktop, which is about perfect compared to a physical ruler. You might have to play with that value to make it appear right. Remember that the phone screen is much higher DPI (dots-per-inch) than your monitor, so scaling might make things harder to read on the emulator. I find it’s best to develop at the default scale, then launch it scaled down for finger testing.
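The right -scale value can be estimated from the two pixel densities involved. A sketch with illustrative numbers (the ~180 dpi handset and ~90 dpi monitor figures are assumptions, not measurements):

```python
def emulator_scale(device_dpi, monitor_dpi):
    """Scale factor so one device pixel spans the right number of
    monitor pixels for actual-size display."""
    return monitor_dpi / device_dpi

# Illustrative numbers: a ~180 dpi handset screen shown on a ~90 dpi
# desktop monitor wants roughly the 0.5 scale used above.
print(emulator_scale(180, 90))
```

Measure your monitor's actual dpi (pixel width divided by physical width in inches) and plug it in, then fine-tune against a ruler.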

Getting an actual G1 emulator skin. The default emulator skin is getting very old after almost a year of staring at it. 🙂 About a week ago, T-Mobile released an interesting Flash-based G1 emulator. It’s cute, but you can’t install your apps on it.

So earlier today I created a new emulator skin using the background from that Flash player. Just copy the G1 folder into your tools/lib/images/skins/ folder and launch using the command line below. To flip the keyboard in/out press Numpad 7 or Ctrl+F12. This simple scenery change can really help boost motivation:

./emulator -skin G1
