Jeff Sharkey



Thesis in Six Weeks

So the past 6 weeks of my life have been almost non-existent. They give advice to new graduate students saying you should start writing your thesis early, usually about 6 months before you graduate. Somehow I kept putting it off with excuses of “just one more test dataset” and “let’s make the code even faster.” What I ended up with was the most intense 6 weeks of my life to date. When I wasn’t writing or studying for my oral exam, I was sleeping or eating. Even during classes, all I could think about was my thesis.

During this time, I jumped into my code just long enough to set up new datasets, then left it to run for a few days while I went back to writing. Thankfully I had access to two dual-processor Intel Xeon E5345 machines (sixteen 2.33GHz cores in total) with 16GB of RAM each. As I frantically ran different parameter settings, I was extremely thankful for all the time I had spent optimizing the codebase. In the end I probably ran those boxes for about 6 days straight, which works out to about 3 months of single-CPU time.

Looking back, I think I saved a lot of time by carefully crafting everything I put into my thesis. This caused writing to be tedious and very slow, but it made for solid drafts. Over the 6 weeks I also saw the value of keeping a regular sleeping and eating schedule. You really need to pace yourself for the long haul–it’s not like a class project that you can skip a night of sleep to complete.

To keep from going completely insane during the process, I started getting serious about coffee. Until then, I’d only had the occasional cup of office coffee, never on a regular basis. On the advice of a few friends, I picked up some beans, a grinder, and a French press, all for about $35. While writing my thesis, coffee turned into a sort of comforting ritual. Anyway, I digress. To summarize the things I took away, I learned that pacing yourself is key to large projects, and that an hourly cron’ed rsync of your thesis to seven servers on three different continents can really help you sleep soundly.

At some point I actually finished a solid second draft of my thesis, and then was faced with preparing and giving my thesis defense. From what I remember it went well, besides losing my voice for the rest of the night. My oral exam the next morning went well too. So at this point I’m pretty much a Master of Science; I just need to finish the one class I’m taking this semester. :)

If you’re interested in my thesis research, we put together a nice poster back in January for a conference. In short, we’re using artificial intelligence to design radio networks. If you’re really interested, you could watch my thesis defense or read my second thesis draft linked below.

I can’t believe it’s all over so quickly. I suppose that’s how academia works–steady progress over time with short bursts of intense activity around deadlines. Like I said earlier, this has been the most intense 6 weeks of my life to date. I wasn’t really overwhelmed at any point, and to some extent it was a blast. I wouldn’t mind doing it again sometime soon.



Android TabHost in the M5 SDK

So about a week ago I wrote up a quick example for TabHost. Then yesterday Google released a new version of the Android SDK, which changed quite a bit. In the process, TabHost is no longer deprecated (yay!), but its interface has changed significantly, so I spent about an hour reworking my earlier example.

Here is a quick run-down of what applies to TabHost: all of our id’s change to android:id’s, TabWidget’s width needs to be fill_parent, and its height needs to be about 65px. The top padding of the content FrameLayout also needs to be about 65px.
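
In layout terms, the earlier example only needs a few tweaks. Here is a rough sketch showing just the attributes mentioned above; everything else carries over from the earlier layout:

<TabHost
    android:id="@+id/tabs"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >

    <TabWidget
        android:id="@android:id/tabs"
        android:layout_width="fill_parent"
        android:layout_height="65px"
        />

    <FrameLayout
        android:id="@android:id/tabcontent"
        android:layout_width="fill_parent"
        android:layout_height="200px"
        android:paddingTop="65px"
        >
        <!-- tab content containers go here, just like before -->
    </FrameLayout>

</TabHost>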

The Java interface to TabHost also changed, letting us put both text and icons in the tabs:

setContentView(R.layout.tabs);

TabHost tabs = (TabHost)this.findViewById(R.id.tabs);
tabs.setup();

TabHost.TabSpec one = tabs.newTabSpec("one");
one.setContent(R.id.content1);
one.setIndicator("labelone", this.getResources().getDrawable(R.drawable.gohome));
tabs.addTab(one);

TabHost.TabSpec two = tabs.newTabSpec("two");
two.setContent(R.id.content2);
two.setIndicator("labeltwo");
tabs.addTab(two);

tabs.setCurrentTab(0);

And here’s a peek at what the resulting output looks like:

One nice feature is that the tabs are now clickable instead of just keypad-only. That’s about it, so have fun with TabHost in the new SDK. :) The home icon above came from the Tango Icon Library.



Using Android TabHost

Update: the code below needs some slight changes to work with the new M5 SDK.

So over the past few weeks I’ve jumped into Google’s Android platform. It’s a blast and very well designed, but there are still some rough edges. One of those rough spots is getting a tab or paging control to work. The API documentation talks about a TabHost widget, but it has been marked as deprecated. (Already!? The API only came out a few months ago, lol.) It refers to a phantom “views/Tabs2.java” example which never shipped. There is also talk of a PageTurner widget, but no examples exist for that either.

Because I badly needed a tab-like control, I brute-forced my way through getting a TabHost working. Hopefully this will save other developers some time. Remember that the TabHost widget is marked as deprecated in this version of the API; however, someone mentioned that TabHost might be making a comeback. In either case, here is a quick example of TabHost in action:

First let’s create an XML layout for our example, just a simple LinearLayout with a TabHost widget inside. It’s important to notice that the TabHost must contain both a TabWidget and a FrameLayout with specific id’s in order to work.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >

    <TabHost
        id="@+id/tabs"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        >

        <TabWidget
            id="@android:id/tabs"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            />

        <FrameLayout
            id="@android:id/tabcontent"
            android:layout_width="fill_parent"
            android:layout_height="200px"
            android:paddingTop="30px"
            >

            <LinearLayout
                id="@+id/content1"
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:background="#ff99ccff"
                >

                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="tab item uno :)"
                    />

            </LinearLayout>

            <LinearLayout
                id="@+id/content2"
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:background="#ffffcc99"
                >

                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="tab item dos :/"
                    />

                <Button
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="tabhost needs"
                    />

                <Button
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="to be upgraded ;)"
                    />

            </LinearLayout>

        </FrameLayout>

    </TabHost>

</LinearLayout>

Inside the FrameLayout we can put our tab contents, which in this case are two LinearLayouts with different contents. These, of course, could be pulled by id and filled with dynamic content as needed. If I remember correctly, it was important that some sort of tab-content container actually exist as a child of the FrameLayout for the tabs to work.
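
As a quick aside on that dynamic-content point, pulling one of the containers and adding a view at runtime might look roughly like this (a sketch only; the text is just a placeholder):

// Grab one of the tab-content containers and add a view to it at runtime.
LinearLayout content1 = (LinearLayout) this.findViewById(R.id.content1);

TextView dynamic = new TextView(this);
dynamic.setText("added at runtime");
content1.addView(dynamic);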

Next, let’s jump over to the Java code and build up the tab example.

setContentView(R.layout.tabs);

TabHost tabs = (TabHost)this.findViewById(R.id.tabs);
tabs.setup();

tabs.addTab("one", R.id.content1, "labelone");
tabs.addTab("two", R.id.content2, "labeltwo");

Nothing too fancy, just pull the TabHost, initialize it, and let it know about the tabs we have. Now let’s give the example a spin on the emulator:

This is the first real look we’ve had at the TabHost widget, and it looks okay. Mouse clicking doesn’t switch tabs, so you need to use the keypad’s left/right arrow keys to navigate. The up/down keys work as expected on a tab with buttons, so not much to complain about. :)



Laser Graffiti on Buildings

Over a weekend back in August I wrote a C++ program that allows people to draw graffiti on any surface. The person draws using a normal red laser pointer, and a MiniDV camera then detects the red dot. The C++ program maps the laser dot from camera coordinates into OpenGL screen coordinates using a simple homography. The computer projects its screen image back onto the surface, offering a persistent view of what the user has drawn.

The system is self-calibrating on startup–it uses four green squares to generate the homography. They disappear once calibration is complete. We then use OpenGL textures to render a chalk-like pen, following the laser pointer when it’s turned on. The system can also detect crude gestures to change ink color or clear the screen.
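
The real program is C++, but the heart of the mapping step is just a 3×3 homography applied to each detected laser dot. Here is an illustrative sketch (in Java, to match the other code on this page; the class and method names are made up for the example):

// Map a laser dot from camera pixels into screen pixels using the 3x3
// homography estimated during the green-square calibration.
public class Homography {
    private final double[][] h; // 3x3 matrix from calibration

    public Homography(double[][] h) {
        this.h = h;
    }

    // Returns {sx, sy} in screen coordinates for a dot at (cx, cy) in camera coordinates.
    public double[] map(double cx, double cy) {
        double x = h[0][0] * cx + h[0][1] * cy + h[0][2];
        double y = h[1][0] * cx + h[1][1] * cy + h[1][2];
        double w = h[2][0] * cx + h[2][1] * cy + h[2][2];
        return new double[] { x / w, y / w }; // perspective divide
    }
}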

Using a typical VGA projector, we filled the side of a seven-story building at night. Above are some pictures of things we drew. Because the system recognized gestures, we could easily draw using the laser pointer from a few blocks away from the actual equipment. We used a Panasonic MiniDV camcorder connected over FireWire to an Intel Core 2 2.4GHz box running Gentoo Linux. The computer was connected to an Epson 2,400-lumen projector through a VGA cable and pointed toward the building.

Future work includes moving to an ffmpeg-based image processing solution, instead of using pipes to pass raw image data. Gestures need work so they are detected more reliably. Some effort should also be put into the laser pointer detection, as its thresholds can vary widely under different lighting conditions.



Radio propagation using libprop

As part of my thesis work, I’ve been writing algorithms to design wireless radio networks. One important question we needed to answer was whether a given pair of locations could communicate using a set of radio equipment. This question is pretty hard to answer because there is so much uncertainty in the real world. If we make some simplifying assumptions, we can boil the problem down to a handful of calculations:

First we need to check if the link is line-of-sight. That is, can the two towers physically see each other, or is there a mountain in the way? We’re using high-resolution 10-meter elevation data from USGS to walk along a path between the towers, checking for obstructions as we go. If nothing blocks the path, the link might be possible.
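
As a sketch of that walk-the-path check (in Java for illustration; elevationAt() stands in for a lookup into the USGS elevation grid, and Earth curvature and Fresnel clearance are ignored here):

// Sample the straight sight line between two towers and reject the link if
// the terrain ever rises above it. Heavily simplified.
static boolean isLineOfSight(double x1, double y1, double towerHeight1,
                             double x2, double y2, double towerHeight2,
                             int samples) {
    double startElev = elevationAt(x1, y1) + towerHeight1;
    double endElev = elevationAt(x2, y2) + towerHeight2;
    for (int i = 1; i < samples; i++) {
        double t = (double) i / samples;
        double sightLine = startElev + t * (endElev - startElev);
        double ground = elevationAt(x1 + t * (x2 - x1), y1 + t * (y2 - y1));
        if (ground > sightLine) {
            return false; // terrain blocks the path
        }
    }
    return true;
}

// Placeholder for a lookup into the 10-meter USGS elevation data.
static double elevationAt(double x, double y) {
    return 0.0;
}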

Next we need to check our link budget. If the radio isn’t powerful enough, the signal will be too weak by the time it reaches the other tower. This boils down to a neat equation:

transmitter power (dBm) + antenna gain (dB) - free-space loss (dB) - Fresnel loss (dB) - vegetation loss (dB)

From our equipment, we know the transmitter power and antenna specifications. From the distance and terrain, we can calculate the free-space, Fresnel, and vegetation losses. Then we compare this budget against the receiver sensitivity of the destination tower. If there is enough signal power left, the link is possible.
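
In code, the budget check is just additions and a comparison. A minimal sketch with illustrative names (not libprop’s actual API), assuming the gains and losses have already been computed in dB:

// Add up the link budget and compare it against the receiver sensitivity.
static boolean hasEnoughSignal(double txPowerDbm, double antennaGainDb,
                               double freeSpaceLossDb, double fresnelLossDb,
                               double vegetationLossDb, double rxSensitivityDbm) {
    double receivedDbm = txPowerDbm + antennaGainDb
            - freeSpaceLossDb - fresnelLossDb - vegetationLossDb;
    return receivedDbm >= rxSensitivityDbm;
}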

This is just a quick summary of what I’ve spent months writing and testing. As part of my fellowship here, I’m allowed to open-source the code I’ve written for my thesis. So if you’re interested in a fast C++ library that can do propagation testing, libprop is the way to go. It can easily read USGS terrain data, and there are some quick examples.

I’ve also written an awesome little mashup that overlays propagation images in Google Maps. It’s an instant “What if we put a tower here?” tool. Other propagation tools on the market are a few thousand bucks, and they don’t even offer anything like this. :) Check it out and let me know what you think.



Reclaim screen space by using Two-column Google

Okay, so here at school we have these huge Dell LCD monitors that are just gorgeous. However, it’s annoying every time I do a Google search because over half of the screen space is just wasted with whitespace. :( Yeah, it’s because Google decided to be friendly to everyone still running at 640×480. But I don’t want to deal with that! I hate having to scroll down only a few inches to get to the page-navigation links.

So I pulled out my long-lost friend Greasemonkey, an excellent Firefox extension that essentially lets you change any website on the fly. I wrote a quick script that wraps the lower half of every Google search result page onto a second column on the right-hand side of the screen. Our screen-waste problem is solved, yay! It does this by adding a two-column table to the page, and then splitting the Google search results equally into the two columns. The results read like newspaper columns (vertically first, then horizontally).

So yeah, check it out, and let me know if you find it useful. :)



Skiing Bridger

Last week I woke up on a Saturday and we had perfect weather for skiing here in Bozeman. It was overcast all day, with a good chance for a few inches of fresh snow. I grabbed my stuff and went over to Bridger Bowl for the day.

It was my first time skiing out here, and it was definitely very different from Minnesota. If I could describe it in one word, it would be “powder.” Skiing in powder is hard work!! The entire mountain has dozens of runs, most of which are either blue or black. The blue runs here are more like the blacks I’m used to back home. :)

Anyway, I found out there were over 2,000 people out on the entire hill, but it didn’t seem like that at all. It’s just such a large hill that everyone ends up being really spread out. So most of the lift lines were fairly short. The weather isn’t the greatest this weekend, but I’m keeping my eyes open for good skiing weather on the weekends.



Thanksgiving and Christmas

A couple of us from Chi Alpha got together and carpooled back to Minnesota for Thanksgiving. Along the way we dropped off Leanne somewhere in North Dakota, and ran into an evil McDonalds. It must have been something weird with their neon lights, because their sign wasn’t its usual nice red. :)

On Thanksgiving night, we went to sit outside the Best Buy in Green Bay, Wisconsin all night for the Black Friday sales. We got there around 11PM, and there were already about 70 people in line. We ended up sitting about halfway down the side of the building. Most people said they were trying to grab a cheap laptop. By around 5AM there were easily over 300 people in line, stretching around the back of the store.

Anyway, this year they had some nice deals on LCDs, and I needed one badly. I picked up both the 19″ Samsung and a 22″ Westinghouse, because I wasn’t sure which one would work best for my desktop. It turns out the 22″ was overkill, so I stuck with the 19″ for my desktop. For a few months I’ve been using the 22″ for a media center downstairs, but I just sold it to a friend.

Well, I also flew home for Christmas. Instead of the rushed couple of days around Thanksgiving, this time I came home for two weeks. I was still really wound up from the semester. I probably would have kept myself working full-time if I had stayed in Bozeman, so I’m glad I went back. While I was back we got to go skiing as a family over at White Cap, which was great because I hadn’t been skiing for almost a year.



More Snow

Well, we finally got more snow! Last night around 2AM it started snowing like crazy, and we ended up with about 3-4 inches on the ground by this morning. So now it’s going to be awfully cold over the next week, barely getting above freezing if we’re lucky.

Remember that webcam I set up last week? I’ve been archiving it, and I wrote a nice Linux script that runs each day and makes a time-lapse video of the entire day. Here’s what happened today with the snow:

Not much happened, oh well. Anyway, right now it encodes to MPEG-4, and each second of video covers about 15 minutes of real time. I’m using a PHP script I wrote, along with mencoder (MPEG-4) and ffdshow (Flash), to encode the video and then archive the day in a zip file.



Live Webcam

Over the past few days it’s been cold and rainy and snowy out here. Yesterday we woke up to about two inches of snow (yay!), but most of it’s gone now.

Anyway, back in July I ran across an eBay auction for a cheap webcam, and picked it up with the idea I would start a live webcam when I finally had high-speed Internet. Well, here it is! :) This is the view out over Michael Grove looking west. It should update every 10 seconds, and is also being archived so I can make some cool time-lapse video later. Here are some interesting links for the project:

Right now there is a dedicated Windows 2000 computer that captures and archives images from the webcam. It uploads to an Ubuntu Apache server, which is then port-forwarded through my Qwest DSL modem to the world.
