Cutting the Cord: Free Whole-house DVR without Cable TV

March 9th, 2013

Buh Bye, Time Warner Cable!  Hello, cleaner, crisper OTA HDTV!


After years of marginal support for CableCard-based DVRs (a Moxi and a TivoHD), incessant rate increases, and interminable waits on hold for support, I finally decided to do away with my $100/month cable bill.

Of course, it’s not as simple as just canceling the service.  Thanks to the aforementioned Moxi and Tivo, the DW and I are now very accustomed to DVRing our favorite shows (watching live TV is so 48 seconds ago,) so we needed the same capability if we were to get our programming over the air.  The solution would also need to have a high WAF (wife acceptance factor,) meaning it is easy to navigate and use.


The Benefits

  1. Free!!! After NRE (non-recurring engineering, or setup) costs
  2. Depending on number of TVs supported, NRE is recouped within months, saving $100/mo
  3. Whole-house DVR (or at least to every TV with an AppleTV connected)
  4. Consistency–all TVs now provide same look and feel with respect to DVR usage
  5. No more monthly support calls when the tuning adapter(s) goes out!!!

The Setup

My solution is a combination of the following hardware:

(Street price of each piece of hardware, with the exception of the iMac, is less than $100, so for me setup costs were less than $400.)

Along with the following software:

  • Elgato EyeTV v3.6 (DVR Software)
  • mc2xml v1.2 (for guide data)
  • iTunes (for sharing DVR recordings)
  • A couple of custom AppleScripts (for EyeTV and iTunes cleanup and organization)

(The EyeTV software I’d purchased long ago, but even now it is less than $80 at Elgato’s website.)
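With roughly $400 in one-time hardware costs against $100/month in cable savings, the break-even arithmetic is short:

```shell
# months to recoup the one-time setup (NRE) cost
setup_cost=400      # hardware outlay from above, in dollars
monthly_savings=100 # cancelled cable bill, dollars per month
echo $(( setup_cost / monthly_savings ))   # prints: 4
```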

Wiring Up TVs and HDHomeRun

Nothing too complicated here–not even worth taking pictures.

  1. Run a coax from the OTA antenna into the drop amp input.
  2. From the drop amp outputs, run a coax into each of the HDHomeRun inputs.
  3. From the drop amp outputs, run a coax to each TV that will tune OTA signals (for live TV.)
  4. Connect a network patch cable from each HDHomeRun to your router/switch.
  5. Apply power to the drop amp and the HDHomeRun units.
  6. Connect an AppleTV to each TV, connect network patch cable (or use wifi) to each AppleTV

Mac Setup

Follow installation instructions for the individual software packages.

  1. The iMac must be on the same network as the HDHomeRun units.
  2. Install EyeTV software
  3. Run EyeTV:
    1. Choose EyeTV | EyeTV Setup Assistant
    2. Follow wizard to detect HDHomeRun units and scan for channels
    3. Skip ‘TV Guide’ or TitanTV account setup steps
  4. Plug Turbo.264 HD into a USB port
  5. Install mc2xml
  6. Configure mc2xml for use with EyeTV (a good reference can be found here.)
  7. Go back to EyeTV and set the EPG setting for all channels to XMLTV
  8. In iTunes, turn on Home Sharing (File | Home Sharing | Turn On Home Sharing)

AppleTV Setup

The AppleTVs must be on the same network as the iMac. Turn on Home Sharing (Settings | Computers | Turn On Home Sharing) using the same account used in iTunes above.


Automate Guide Data Download

First, a script to simplify things:

#!/bin/bash
cd /Users/girls/guide

# open EyeTV with file
open -a EyeTV /Users/girls/guide/xmltv.xml

(I used my girls’ account, ‘/Users/girls’, on the iMac since this is the account which is setup for autologin on the iMac and the account most used.)

And a simple cron job, as mentioned in the reference above, to automate things:

00 20 7,14,21,28 * * /Users/girls/guide/

This job will download new guide data on the 7th, 14th, 21st, and 28th of each month. A daily download is not necessary since the guide data is generally available two weeks in advance; updating weekly means every other update could fail and we would still have a full week of guide data on hand.

/Library/Application Support/EyeTV/Scripts/TriggeredScripts/ExportDone.scpt

EyeTV allows users to extend its functionality through the use of event handlers.

These event handlers are implemented in AppleScript and are invoked by name, e.g. ExportDone, if a script with the event name is found in the folder /Library/Application Support/EyeTV/Scripts/TriggeredScripts.

To make the recorded shows easier to navigate on AppleTV, the following script adds the date of recording to the name of the show in iTunes, e.g. ‘Elementary (1/4)’ in the image above.

It also resets the genre of the show in iTunes to ‘EyeTV’. This genre will be used later for maintaining the iTunes library.

-- ExportDone.scpt
on ExportDone(recordingID)
  set myid to recordingID as integer

  tell application "EyeTV"
    set theRec to recording id myid

    -- gather some info from EyeTV about the recording that just finished
    set origdur to get the actual duration of theRec
    set myshortname to get the title of theRec
    set episodeID to get the episode of theRec
    set thisdate to get the start time of theRec
    set mm to (month of thisdate) as integer
    set dd to day of thisdate

    -- add date to name for iTunes
    set itunesname to myshortname & " (" & mm & "/" & dd & ")"
  end tell

  -- wait a while to make sure iTunes has imported the exported recording
  delay 30

  tell application "iTunes"
    -- all EyeTV exports go to playlist 'EyeTV'
    tell playlist "EyeTV"
      set theShows to tracks whose name is myshortname
      if (count of theShows) = 0 then
        set theShows to tracks whose episode ID is episodeID
      end if
    end tell
    -- guard: taking 'first item' of an empty list would raise an error
    if (count of theShows) > 0 then
      set a_show to (the first item of theShows)
      -- change genre so we can find it later
      set genre of a_show to "EyeTV"

      set video kind of a_show to TV show
      set show of a_show to myshortname
      set name of a_show to itunesname
    end if
  end tell
  log myshortname & " exported to iTunes"
  display dialog (myshortname & " exported to iTunes") giving up after 10
end ExportDone
-- ExportDone.scpt

Automator Script — iTunesCleanup

Without some maintenance the iTunes library would grow continually.

I decided to automate the maintenance by deleting nightly news shows daily, and other shows after thirty days.

While this could be done with a cron job, I chose to use an Automator event in iCal (this will be easier for those who are Terminal averse.)

In addition to cleaning up iTunes this script also deletes the recording from EyeTV since, once they are exported, I no longer need them in the EyeTV archive.

Before deleting from EyeTV a comparison is made between the exported duration in iTunes and the recorded duration in EyeTV–if the difference is too great then the export was probably not successful so the recording is not deleted in case we want to re-export by hand.

I’ve configured the following script to run daily at 6am in iCal:

on run
  my cleanup_iTunes()
end run

to cleanup_iTunes()
  log "cleanup_iTunes()"

  set tracksToDelete to {}
  set now to current date

  tell application "iTunes"
    -- EyeTV exported videos automatically get added to playlist "EyeTV"
    set shows to (every track of playlist named "EyeTV")

    repeat with a_show in shows
      set showName to (name of a_show)

      -- look at recordings (we know them because we set the genre)
      if genre of a_show is "EyeTV" then
        -- delete older recordings, news daily, otherwise 30 days old

        -- we record NBC news, local and national, so regex match to get news shows
        set regexscript to "echo \"" & showName & "\" | awk /^NBC.*News.*$/"
        tell current application to set news to (do shell script regexscript)

        if length of news is greater than 0 then
          -- delete news daily
          set theDate to (now - 18 * hours)
        else
          -- delete everything else older than 30 days
          set theDate to (now - 30 * days)
        end if

        -- add to delete list if old enough
        set toDel to ((date added of a_show) is less than theDate)

        if toDel then
          -- cache files to delete so we don't alter library while iterating
          set tracksToDelete to tracksToDelete & (get database ID of a_show)
        end if
      end if
    end repeat
  end tell

  if tracksToDelete is not {} then
    log "tracksToDelete -- " & tracksToDelete as string
    set filesToDelete to {}

    tell application "iTunes"
      set myLib to playlist 1
      repeat with theID in tracksToDelete
        set toDel to (first track of myLib whose database ID is theID)
        if (class of toDel) is file track then
          set filesToDelete to filesToDelete & (location of toDel)
        end if
        delete toDel
      end repeat
    end tell

    tell application "Finder"
      repeat with theFile in filesToDelete
        my delete_the_file(theFile)
      end repeat
    end tell
  end if

  --delete EyeTV recordings if exported
  my delete_exported_recordings()

  -- quit iTunes to force library sync on a regular basis
  tell application "iTunes" to quit

  delay 60

  tell application "iTunes"
    -- just do something here so we know iTunes is running
    -- and ready for Home Sharing connections from the AppleTV
    set shows to (every track of playlist named "EyeTV")
  end tell
end cleanup_iTunes

to delete_exported_recordings()
  log "delete_exported_recordings"
  set recs to {}
  tell application "EyeTV"
    repeat with a_rec in recordings
      set thisdate to get the start time of a_rec
      set mm to (month of thisdate) as integer
      set dd to day of thisdate
      set itunesname to (title of a_rec) & " (" & mm & "/" & dd & ")"
      set recs to recs & [[(title of a_rec), itunesname, actual duration of a_rec]]
    end repeat
  end tell
  repeat with a_title in recs
    set myshortname to get item 1 of a_title
    log "  checking export " & myshortname
    set exportdur to get_duration_of_show(get item 2 of a_title)
    set origdur to get item 3 of a_title
    if origdur > exportdur then
      set thediff to (origdur - exportdur)
    else
      set thediff to (exportdur - origdur)
    end if
    if thediff < origdur * (0.15) then
      my delete_recording(myshortname)
    end if
  end repeat
end delete_exported_recordings

on delete_recording(shortname)
  log "delete eyetv recording -- " & shortname
  tell application "EyeTV"
    repeat with a_rec in recordings
      if title of a_rec is equal to shortname then
        delete recording id (unique ID of a_rec)
        log "-Recording " & shortname & " deleted"
        exit repeat
      end if
    end repeat
  end tell
end delete_recording

on get_duration_of_show(show_name)
  set exportdur to 0
  tell application "iTunes"
    tell playlist "EyeTV"
      try
        set theShows to tracks whose name is show_name
        if (count of theShows) > 0 then
          set exportdur to (duration of first item of theShows)
        end if
      end try
    end tell
  end tell
  log "Exported duration " & exportdur
  return exportdur
end get_duration_of_show

to delete_the_file(floc)
  log "Attempt to delete file " & POSIX path of (floc as string)
  try
    do shell script "rm -f " & quoted form of POSIX path of (floc as string)
  on error
    log "Done. However, the file could not be deleted."
  end try
end delete_the_file
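The awk filter that cleanup_iTunes builds to spot news shows can be sanity-checked from a shell; an awk pattern with no action simply prints matching lines:

```shell
# same pattern the AppleScript assembles: ^NBC.*News.*$
match_news() { echo "$1" | awk '/^NBC.*News.*$/'; }

match_news "NBC Nightly News"   # prints: NBC Nightly News
match_news "Elementary"         # prints nothing
```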

Everyday Use

Using ‘Smart Guides’ in EyeTV makes it simple to set up a ‘Season Pass’ for any show. In the smart guide options you can also configure the recordings to export automatically to iTunes. When they do, the ExportDone script is invoked on completion of the export which puts the date on the show in iTunes.

Using AppleTV to connect to the computer hosting iTunes, it is simple to traverse the TV shows in iTunes. Selecting the ‘TV Shows’ category from the shared computer and navigating back to the top menu puts the TV shows in the top row on the AppleTV (see image above,) with the most recent recordings on the left and the date of the recording in the title (again, see ‘Elementary (1/4)’ above.) From that top row, pressing ‘play’ plays the most recent recording of a show, while selecting the show presents the list of episodes when more than one exists.


As I complete this post, our new whole-house DVR system has been up and running for two months. There were a few hiccups along the way–the scripts above have been revised a few times–but at this point all the rough edges appear to have been eliminated and the WAF is high.

Mission Accomplished!

Six Month Followup

After having this solution up for a little more than six months, I thought it might be useful to share some updates.

Some of the processes and scripts have been updated. Some of the updates are bug fixes, others are for reducing the load on my aging iMac when transcoding the recordings.

MC2XML Updates

After rereading the mc2xml install instructions, I noticed an EyeTV “clear EPG database” command that I’d missed the first time around. Instead of performing this step manually, I decided to add it to the script:

#!/bin/bash
cd /Users/girls/guide

# clear epg db (AppleScript must be run through osascript from a shell script)
osascript -e 'tell application "EyeTV" to clear EPG database'

# open EyeTV with file
open -a EyeTV /Users/girls/guide/xmltv.xml

Serialization of Transcoding

I’ve moved away from the auto-export (ExportDone.scpt) and now serialize the transcoding through a couple of other scripts. I had issues with transcoding taking days when two or three shows were being exported at once, so I needed a way to serialize the process and avoid overloading the iMac. After a bit of trial and error I came up with this multi-stage solution.

First, I use the EyeTV RecordingDone hook to invoke a shell script, RecordingDone.scpt:

property SHELL_SCRIPT_SUFFIX : " >> /Users/girls/Documents/TriggeredScripts/eyetv_script.log 2>&1 "

on RecordingDone(recordingID)
  my logger("RecordingDone id: " & recordingID)
  do shell script "/Library/Application\\ Support/EyeTV/Scripts/TriggeredScripts/invoke_script.bash " & recordingID 
end RecordingDone

on logger(logThis)
  set dtg to do shell script "date \"+%I:%M:%S %p -- \""
  do shell script "echo \"" & dtg & logThis & "\"" & SHELL_SCRIPT_SUFFIX
end logger
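For reference, the logger handler is just a timestamped append to the log; the same thing as a plain shell function (log path shortened to /tmp for this sketch):

```shell
# shell equivalent of the AppleScript logger handler above
LOG_FILE="${LOG_FILE:-/tmp/eyetv_script.log}"

logger_line() {
  dtg=$(date "+%I:%M:%S %p -- ")
  echo "${dtg}$1" >> "$LOG_FILE"
}

logger_line "RecordingDone id: 42"
```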

In the second step, the bash script invoked by RecordingDone.scpt builds a Turbo.264 job queue out of small generated bash scripts. Here’s invoke_script.bash:


#!/bin/bash
# paths taken from elsewhere in this post; the jobFile name is illustrative
logFile=/Users/girls/Documents/TriggeredScripts/eyetv_script.log
lockFile=/Users/girls/Documents/TriggeredScripts/__turbo_lock
jobFile=/Users/girls/Documents/TriggeredScripts/job_$1.bash

touch $logFile
chmod 777 $logFile
dtg=`date "+%I:%M:%S %p -- "`
echo "${dtg}=================================" >> $logFile
echo "${dtg}recordingID = $1" >> $logFile

cat > ${jobFile} <<DELIM
# wait for Turbo.264 HD to finish
while pgrep Turbo.264 > /dev/null 2>&1 || [ -f ${lockFile} ]; do
  sleep \$(( 10 + \$RANDOM % 30 ))
done
touch ${lockFile}
# start new job
nohup osascript /Library/Application\ Support/EyeTV/Scripts/TriggeredScripts/RecordingDone-called.scpt $1 >> $logFile 2>&1 &
# remove thyself
rm -rf $jobFile
DELIM

chmod +x ${jobFile}
${jobFile} &
echo "${dtg}${jobFile} queued" >> $logFile

This script creates job scripts which are invoked in the background. Each job script waits until the previous invocation of Turbo.264 has exited, as indicated by the absence of an active Turbo.264 task and removal of the lock file created when transcoding starts. The RANDOM delay reduces the chance that jobs for two shows which ended at about the same time will wake simultaneously. The transcoding itself, as well as removal of the lock file, is done by another AppleScript, RecordingDone-called.scpt:

property TARGET_PATH : "/Users/girls/Documents/EyeTV Archive/Transcoded/"
property TARGET_TYPE : ".mp4"
property SOURCE_TYPE : ".mpg"
property SHELL_SCRIPT_SUFFIX : " >> /Users/girls/Documents/TriggeredScripts/eyetv_script.log 2>&1 "

property CLEAN_FILENAME_DISALLOWED_CHARS : ";|!@#$%^&*+()/"
-- the next two properties are referenced by clean_filename below but were
-- missing from the listing; these values are assumptions
property CLEAN_FILENAME_REPLACEMENT : "_"
property CLEAN_FILENAME_DISALLOWED_CHARS2 : ":"

on run argv
  set recordingID to item 1 of argv

  -- Obtain some show information from EyeTV
  -- Transcode recorded video to conform to desired format
  -- Delete original EyeTV recording

  with timeout of (480 * 60) seconds
    tell application "EyeTV"
      set myid to recordingID as integer
      set show_title to title of recording id myid as text
      set show_episode to episode of recording id myid as text
      set thisdate to start time of recording id myid
      set mm to (month of thisdate) as integer
      set dd to day of thisdate
      set timestamp to " (" & mm & "/" & dd & ")"
      if show_episode = "" then
        set show_episode to thisdate as text
        set suffix to timestamp
      else
        set suffix to " - " & show_episode & timestamp
      end if
      set show_description to description of recording id myid as text
      set recording_location to location of recording id myid as text
    end tell

    set AppleScript's text item delimiters to "."
    set recording_path to text items 1 through -2 of recording_location as string
    set AppleScript's text item delimiters to ""
    set recording_path to POSIX path of recording_path
    set input_file to (recording_path & SOURCE_TYPE) as string
    set show_filename to (my clean_filename(show_title & " - " & show_episode) & TARGET_TYPE)
    set transcoded_file to (TARGET_PATH & show_filename) as string

    my logger("Turbo.264 HD (" & recordingID & ") - " & input_file & " to " & transcoded_file)
    tell application "Turbo.264 HD"
    	add file input_file with destination transcoded_file exporting as HD720p
    	set busyEncoding to true
    end tell
    -- Loop until this export is finished
    repeat while busyEncoding
    	do shell script "sleep 60"
    	tell application "Turbo.264 HD"
    		set busyEncoding to isEncoding
    	end tell
    end repeat
    -- quit Turbo.264 HD
    tell application "Turbo.264 HD" to quit
    -- Remove lock file 
    do shell script "rm -f /Users/girls/Documents/TriggeredScripts/__turbo_lock"

    -- prep target for iTunes
    set cmd to "chmod 666 " & (quoted form of TARGET_PATH) & "*" & TARGET_TYPE
    my logger(cmd)
    do shell script cmd 

    -- delete recording from EyeTV
    my logger("Delete recording " & quoted form of input_file)
    tell application "EyeTV"
      delete recording id myid
    end tell
  end timeout

  -- Add the video file as it resides on the NAS server to the 
  -- iTunes library as a TV show.

  my logger("Add '" & show_title & suffix & "' to iTunes")
  tell application "iTunes"
    set transcoded_folder to ("Macintosh HD:Users:girls:Documents:EyeTV Archive:Transcoded:") as string
    set newShow to (add (transcoded_folder & show_filename))
    set genre of newShow to "EyeTV"
    my logger("  genre '" & (genre of newShow) & "'")
    set video kind of newShow to TV show
    my logger("  kind '" & (video kind of newShow) & "'")
    set name of newShow to (show_title & suffix)
    my logger("  name '" & (name of newShow) & "'")
    set show of newShow to show_title
    my logger("  show '" & (show of newShow) & "'")
    set episode ID of newShow to show_episode
    my logger("  episode ID '" & (episode ID of newShow) & "'")
    set description of newShow to show_description
    my logger("  description '" & (description of newShow) & "'")
  end tell
  my logger("Finished")
end run

on logger(logThis)
  set dtg to do shell script "date \"+%I:%M:%S %p -- \""
  do shell script "echo \"" & dtg & logThis & "\"" & SHELL_SCRIPT_SUFFIX
end logger

on clean_filename(theName)
  set newName to ""
  repeat with i from 1 to length of theName
    -- check if the character is in CLEAN_FILENAME_DISALLOWED_CHARS
    -- replace it with the CLEAN_FILENAME_REPLACEMENT if it is
    if ((character i of theName) is in CLEAN_FILENAME_DISALLOWED_CHARS) then
      set newName to newName & CLEAN_FILENAME_REPLACEMENT
    -- check if the character is in CLEAN_FILENAME_DISALLOWED_CHARS2
    -- remove it completely if it is
    else if ((character i of theName) is in CLEAN_FILENAME_DISALLOWED_CHARS2) then
      set newName to newName & ""
    -- if the character is not in either CLEAN_FILENAME_DISALLOWED_CHARS or
    -- CLEAN_FILENAME_DISALLOWED_CHARS2, keep it in the file name
    else
      set newName to newName & character i of theName
    end if
  end repeat
  return newName
end clean_filename

This AppleScript invokes the Turbo.264 HD application to transcode the recording indicated on the command line. It waits until transcoding completes, quits the Turbo.264 HD app, and removes the lock file created by the job queue script. When one of the waiting job queue scripts wakes from its sleep and sees that the Turbo.264 HD app is not running and the lock file from the previous show has been removed, it invokes RecordingDone-called.scpt for its own recording. The script also deletes the recording from EyeTV so the original isn’t left lying around once the transcoded version has been exported to iTunes.
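The lock-file handoff the job scripts rely on can be exercised in isolation (path shortened for the sketch; the real scripts also check pgrep for a live Turbo.264 process):

```shell
lock=/tmp/__turbo_lock_demo

# simulate a transcode in progress...
touch "$lock"
# ...which finishes a couple of seconds from now in the background
( sleep 2; rm -f "$lock" ) &

# the wait loop each queued job runs before starting its own transcode
while [ -f "$lock" ]; do
  sleep 1
done
echo "lock released -- safe to start the next transcode"
```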

Again, I use /Users/girls as the base account for my solution. If you use these scripts you will need to update the paths according to your own installation.

Good Luck!

Expanding SMXMLDocument

June 13th, 2012

Just a quick post here due to time constraints.

I don’t have time to submit this properly through GitHub because I haven’t cloned the repo, but I did want to share it since finding and using this class saved me a few hours of effort.

This is a small extension to SMXMLDocument (a very useful iOS XML parser, thanks, Nick) which will return all children for a given path (specified as an array of strings), not just the first match it finds.

- (NSArray *)descendantsWithPath:(NSArray *)path {
  NSMutableArray *lineage = [NSMutableArray arrayWithArray:path];
  NSMutableArray *array = [NSMutableArray array];

  NSArray *kids = [self childrenNamed:[lineage objectAtIndex:0]];
  [lineage removeObjectAtIndex:0];

  if ([kids count] > 0) {
    if (0 == [lineage count]) {
      // bottom of path
      [array addObjectsFromArray:kids];
    } else {
      // recurse into path
      for (SMXMLElement *el in kids) {
        NSArray *elements = [el descendantsWithPath:lineage];

        if ([elements count] > 0)
          [array addObjectsFromArray:elements];
      }
    }
  }
  return array;
}

This can be easily extended to:

- (NSArray *)descendantsWithPath:(NSArray *)path andAttribute:(NSString *)attribute

To find only leaf nodes on the given path with a specific attribute, but I haven’t gotten that far in my own project yet–possibly a future update to this post.

Note: This was based on the master-arc branch supporting ARC.


AirPort Express as Ethernet Bridge with Access Control

October 25th, 2011

The Problem

I have an older, out-of-warranty MacMini whose WiFi is acting flaky after the recent upgrade to OS X Lion. This wouldn’t be a problem in most places in the house since we’ve got wired gigabit connections in most rooms, but this Mac sits on my youngest child’s desk and, after a recent rearrangement of the kid’s office, this desk happens to be on the opposite side of the room from the wired Ethernet jack. After rearranging the room, but prior to the Lion upgrade, the WiFi was working just fine for getting that particular Mac connected to the Internet.

The immediate fix was to run a fifty foot patch cable from the Ethernet jack over the door, around the windows, and along the baseboard to the desk. Expedient, but not very decorative. I knew this had to be temporary and that’s what I told the darling wife. At the time I said it, I hadn’t formulated the eventual solution, but I did have vague recollections of reading about the various modes available with the Apple AirPort Express Base Station (AEBS).

As it turns out, the ProxySTA mode is exactly what I needed to solve this problem, the most succinct explanation of which I found in a TiredDonkey blog post.

Some Background

My WiFi setup consists of a Time Capsule (TC), an Airport Extreme in the north end of the house, and another in the south end. All three of these devices are configured to “Create a wireless network” with the same network name, enabling roaming on a single network throughout the house.

The TC is located in the basement next to the cable modem and is configured as the DHCP server for the house. The TC also serves up the guest network. The Extremes are both configured in bridge mode to pass all DHCP-related traffic to the TC. They are all connected via gigabit. One of the great features of the newer Time Capsules and Extreme base stations is that, when configured to serve up the same network, they also synchronize their Access Control lists–a configuration change in the access list on one device is shared with the others greatly simplifying maintenance.

A number of AEBSes scattered throughout the house complete the setup, providing whole-house audio via AirTunes. One of these provides the bridge to the MacMini.

Back to the Problem

As I was following the instructions in the blog post I was skeptical, because they explicitly state to turn off access control–I am security conscious, and not only do I have WPA2 password protection on my networks, I also use access control to deny access to unknown WiFi devices.

As I expected, this solution, as written, did not work for me.  Due to my security precautions the AEBS was not connecting to the WiFi network.

As it turns out it wasn’t just the access control, but my larger WiFi infrastructure (beyond the single Express, single Extreme setup in the blog referenced above) that caused the failure.

The first difference between my solution and the referenced blog post is setting MAC Address Access Control to Timed Access:

Adding the Airport ID of the AEBS to the access control list (ACL) to allow 24×7 access (add Airport ID in MAC Address field below) was necessary:

It was also necessary to add the MAC address of the Ethernet port of the device being connected to the AEBS to the access control list.  This would allow the access point to reply to the DHCP requests from the device connected to the AEBS via Cat-5.  If you have assigned a static IP to this device, then adding this MAC to the ACL is not necessary.

I added the access control entries to the Den Extreme base station, which I knew would be closest to the AEBS. I also ensured the Allow this network to be extended box was checked for the same Extreme on the Wireless tab:

I thought that this would be it and I would have my connection. As it turns out I was wrong.

Almost There

I checked the AEBS to make sure that it was in fact a client of the Den Extreme base station, and it was. I then rechecked the access control list in the Den Extreme and it still contained the AEBS AirPort ID–that configuration was saved properly. It then occurred to me that I should check the access control lists on the TC and the other Extreme base station. As expected, they both had inherited the changes to the access control list allowing the AEBS access. What was different is that the Allow this network to be extended box was not checked on these other devices. So I checked the box on the second Extreme base station:

and on the Time Capsule:

After saving these configurations and allowing all the base stations to restart I finally had my solution.

In Short

The solution consists of:

  • Starting with the blog post as referenced above
  • Adding the AEBS AirPort ID to the Access Control List (ACL) for the wireless network, as well as the MAC address of the connected device (if not assigned a static IP)
  • Ensuring Allow this network to be extended is checked on all wireless base stations serving the wireless network

Automating Shared iTunes Library Access

February 1st, 2011

I’ve been sharing my iTunes library in-house with multiple Macs reading from the same library on a shared drive for about three months now.

Most of the time iTunes is running on a MacMini server in the office, and access to the single library containing all our content is done through iTunes Home Sharing. This works great (most of the time) when all we are doing is playing music or videos. Things get more difficult when I want to add some content to the library from the MacBook.

The major issue I keep running into is iTunes’ lock of the library file and the need to shut down the server instance of iTunes. This requires a trip to the office or a remote login to the server. Not a huge bother, but more work than it ought to be, IMHO.

I figured there had to be a way to simplify things and, after several hours of research, help from TheMacTipper and Daring Fireball, and a lot of trial-and-error, I’ve crafted a solution which runs with only a click or two with the help of my Dropbox account.

Folder Actions Fail

Since I was using a shared folder for my iTunes library I started off thinking that the solution would entail the use of some AppleScript and Folder Actions. I was half right. The AppleScript is required, but the Folder Actions weren’t up to snuff.

I put together a Folder Actions script which would shut down iTunes when triggered to do so. I figured an easy trigger would be the existence of a file with a specific name, say “iTunesQuit”. Simple enough. And it worked. Sort of.

First there was the issue of folder actions not being reliable. So I decided to research using LaunchD instead.

Second, it turns out that the afp: protocol which defines the shared mount has an indeterminate lag when syncing writes to the disk. This lag was longer than I was willing to wait within a scripted action. When I need access to my iTunes library, I’d like it to happen quickly, not in twenty or thirty seconds.

Dropbox to the Rescue

Since writing a file to a shared drive was too slow, I started to look at the other ways I share data between computers and was immediately drawn to Dropbox. The key feature in this solution is the “Enable LAN Sync” option, which Dropbox uses to reduce network traffic to its servers.

It turns out that Dropbox is pretty quick on the draw with this LAN sync and I could script a wait of mere seconds–more than fast enough for what I wanted to accomplish.

Dropbox had the added benefit of making the solution presented below portable as well. The AppleScript to control things could be saved in a Dropbox folder and referenced from any machine configured to sync with Dropbox.

The Solution in Four Parts

Since I was automating the shutdown of the iTunes instance on the office server I thought I could do the reverse and automate the startup of iTunes on the server once I was done accessing the library from the MacBook. My need to get the server in the office running iTunes again is not as urgent so I use a little longer delay in coordinating this action.

So my ultimate solution is comprised of four parts:

  1. iTunesControl.scpt AppleScript to control things.
  2. com.wh1t3s.iTunesControl.plist Launch Agent to invoke the AppleScript above as needed
  3. Automator application to trigger remote shutdown
  4. Automator application to trigger remote iTunes restart


I want to preface my code here with the caveat that this is the first AppleScript I’ve ever written. There may be simpler, more elegant, or simply more correct ways to do the things I am doing, but I stopped my development at what worked for me. (Please kindly leave suggestions for improvement in the comments below, preferably sans judgement.)

This is the script which is tied to the LaunchD launch agent created to watch the Dropbox folder (/Users/myUserName/Dropbox/iTunesSync) I am using to trigger my iTunes actions: existence of “iTunesQuit” will shut down iTunes on any machine configured with the launch agent, existence of “iTunesRun” will activate iTunes on the named server, while also shutting down other iTunes instances by creating “iTunesQuit”.

Please remember to change myUserName and MacMiniServer items below with comparable items suitable to your implementation.

-- saved as /Users/myUserName/Dropbox/iTunesControl.scpt
property quitFile : POSIX file "/Users/myUserName/Dropbox/iTunesSync/iTunesQuit"
property runFile : POSIX file "/Users/myUserName/Dropbox/iTunesSync/iTunesRun"

on run
    set isRunning to appIsRunning("iTunes")

    tell application "Finder"
      if exists quitFile then
        if isRunning then
          tell application "iTunes" to quit
        end if

        -- delay to allow Dropbox to complete
        delay 5

        -- check existence again in case another Mac already deleted it
        if exists quitFile then
          move quitFile to trash
        end if

      else if exists runFile then
        if "MacMiniServer" is equal to computer name of (system info) then
          -- delete runFile
          move runFile to trash

          -- trigger remote iTunes shutdown
          do shell script "touch /Users/myUserName/Dropbox/iTunesSync/iTunesQuit"

          -- delay while any other instances of iTunes are shutdown
          delay 15

          -- start iTunes on server
          tell application "iTunes" to activate
        end if
      end if
    end tell
end run

on appIsRunning(app_name)
  tell application "System Events"
    set app_list to every application process whose name is equal to app_name
    if the (count of app_list) > 0 then
      return true
    end if
    return false
  end tell
end appIsRunning


I actually created this launch agent with Lingon, since this was my first attempt at launch agents. I will save you the new and improved Mac App Store price of $4.99 and post the resulting plist file in its entirety. This file was saved as /Users/myUserName/Library/LaunchAgents/com.wh1t3s.iTunesControl.plist.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
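As posted, the plist above is truncated. Here is a minimal sketch of what a working launch agent would contain, assuming the agent simply watches the iTunesSync folder and runs the AppleScript with osascript (the key choices below are my reconstruction, not the original file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.wh1t3s.iTunesControl</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>/Users/myUserName/Dropbox/iTunesControl.scpt</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/Users/myUserName/Dropbox/iTunesSync</string>
    </array>
</dict>
</plist>
```

launchd runs the program whenever anything under a WatchPaths entry changes, which is exactly the trigger behavior the script relies on.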

This is a very simple Automator application: a shell script to create the iTunes shutdown trigger file, wait a few seconds, then start iTunes.

An even simpler Automator application: a script to create the trigger file that runs iTunes on the server.
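In shell terms, the two Automator applications boil down to something like this (a sketch assuming the iTunesSync folder layout above; the five-second delay and the use of open are my assumptions, not the original scripts):

```shell
# Sketch of the two Automator "Run Shell Script" bodies (assumed, not the
# post's exact scripts).
SYNC_DIR="$HOME/Dropbox/iTunesSync"
mkdir -p "$SYNC_DIR"

# Body of the local "iTunes" app: ask all other machines to quit iTunes,
# give their launch agents a moment to react, then start iTunes here.
touch "$SYNC_DIR/iTunesQuit"
sleep 5
if command -v open >/dev/null 2>&1; then
    open -a iTunes        # macOS only
fi

# Body of the "iTunesOnServer" app: drop the run trigger; the server's
# launch agent sees it, quits the other instances, and starts iTunes there.
touch "$SYNC_DIR/iTunesRun"
```

Each body goes in its own Automator application; they are shown together here only for brevity.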

Making it All Work

To bring it all together:

  1. Install the Dropbox client on all machines.
  2. Save iTunesControl.scpt to ~/Dropbox/ (or wherever your Dropbox folder is located, I put mine in my home directory.)
  3. Save com.wh1t3s.iTunesControl.plist to ~/Library/LaunchAgents on all machines.
  4. Copy the iTunesOnServer Automator application to all machines.
  5. Copy the iTunes Automator application to all machines but the server.

To start iTunes on the server, start the app iTunesOnServer on any machine. To shut down the server instance and run iTunes on another machine, start the iTunes app on that machine. All instances point to the same iTunes library on the shared disk.

This solution has been working for me for a few days now; YMMV.

How to Setup a Dell Precision M6500 for GPGPU Development with FC13

October 29th, 2010


This post will hopefully save someone a few hours of trial-and-error. It is the result of three separate attempts in a 12-hour period to get things up and running myself. I believe that my notes are complete, but please remember, you are getting the information for free, so YMMV! If it doesn’t get you a 100% solution, hopefully it gets you 90%. Just so you are aware up front–this process as documented is for a 32-bit Fedora Core 13 installation with the latest NVIDIA drivers (v260.19.12 at the time of writing.) The graphics card in my M6500 is the NVIDIA Quadro FX 2800M; I expect the process would not be much different for any other CUDA-capable NVIDIA card. And one last point: if you are afraid of, or unable to use, the command line, then this is not the post for you.

Installing FC13

I started with the FC13 LiveCD since I needed to verify that certain peripheral drivers worked out-of-the-box. I am not a big fan of rebuilding a kernel unless it is absolutely necessary, and I like to start with as complete a solution as I can. After verifying the needed drivers were present, I did an install-to-disk from the LiveCD via the icon on the LiveCD user desktop.

Once the install is complete, eject the CD and reboot the system. When the system returns, complete the install by setting the root password, creating a user (for the purposes of this post the name of that user will be ‘me’,) etc. Once the setup tasks are complete I reboot again for good measure, then login as ‘me’.

I find it useful to add myself to the sudoers list to allow sudo access without requiring a password. Do this from a terminal (Applications | System Tools | Terminal):

[me@m6500 ~]$ su -
[root@m6500]$ cat >> /etc/sudoers
me ALL=(ALL) NOPASSWD: ALL
^D
[root@m6500]$ exit
[me@m6500 ~]$

This simplifies things moving forward since much of the following requires root privs and would require numerous password entries to complete. This configuration allows sudo usage without a password, moving things along a little quicker. It also means that I can stay logged in as ‘me’ to accomplish everything.

The next thing I like to do is disable the firewall, which is enabled in the default install. Using System | Administration | Firewall allows the firewall to be disabled (after entering the root password.)

One last step may be required before we get to updating the default installation with yum and it depends on your network configuration. If you have a proxy server in place you need to let yum know about it. I prefer to do this in the /etc/yum.conf file:

[me@m6500 ~]$ sudo sh -c 'cat >> /etc/yum.conf'
proxy=http://<path-to-proxy-server>:<proxy-port>
^D
[me@m6500 ~]$

Of course, set <path-to-proxy-server> and <proxy-port> to values consistent with your network configuration.

The system is now ready to invoke yum to update the default install. This is done with the following command:

[me@m6500 ~]$ sudo yum update
[me@m6500 ~]$ 

In my instance, there were over 450 updates to be applied, and this process takes quite a while. Be patient. In the meantime, we can do some parallel processing and download the driver, toolkit, and SDK sample code from NVIDIA. These are the links I used for the 3.2 RC version of things (full paths are included in case you want to work from a hardcopy of this post):

NVIDIA Downloads Page –

Driver –

Toolkit –

SDK Samples –

You should ensure you are getting the latest (unless you are trying to replicate my install) from NVIDIA here:


I saved all the downloads to my home folder (~me, or /home/me in my case.) Once the yum update is finished and all your files are downloaded, it is time to once again reboot. The installed updates include a kernel update, so a reboot is required prior to driver installation; that way we apply the driver not to the current kernel but to the freshly updated one.

Since some X drivers are going to be installed, it is now time to shut down the X server and login at the command line. You can do this either during the above reboot by editing the boot command line and appending a ‘3’ (indicating you want to boot to runlevel 3,) or, once the GUI boot is complete, by logging in, starting a terminal, and entering the command:

[me@m6500 ~]$ sudo init 3
m6500 login: me
Password: ********
[me@m6500 ~]$ 

Login to your user account (‘me’ in my case) once you see the command line login prompt. We can now move on to completing the install.

Installing the NVIDIA Pieces

FC13 ships with a default open-source driver for NVIDIA cards called nouveau. Unfortunately, installing over this driver is more complicated than simply following the default driver installation instructions from NVIDIA. Fortunately for you, others before me (see the reference links) have done the hard work, and I am passing on their knowledge in a more complete form (as it fit my purposes, at least.)

Based on the references above, I created a few scripts (also located in ~me). The first adds the RPMFusion repositories, installs a few NVIDIA packages from RPMFusion, rebuilds the initrd image, and reconfigures grub to override the default nouveau driver.


#!/bin/sh
# add RPMFusion repositories
rpm -Uvh
rpm -Uvh

# install nvidia from RPMFusion
yum install kmod-nvidia xorg-x11-drv-nvidia-libs.i686

# blacklist nouveau driver from initrd in grub.conf
sed -i '/root=/s|$| rdblacklist=nouveau vmalloc=256M|' /boot/grub/grub.conf

# regen initrd
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
dracut /boot/initramfs-$(uname -r).img $(uname -r)

This script must be invoked with sudo:

[me@m6500 ~]$ sudo ./
[me@m6500 ~]$

The second script installs the development packages required for the NVIDIA GPU Computing SDK (and a couple of GUI config utilities I find useful):


#!/bin/sh
# install development stuff
yum install kernel-source kernel-devel
yum install gcc gcc-c++
yum install mesa-libGLU-devel
yum install libXi-devel
yum install libXmu-devel
yum install freeglut
ln -s /usr/lib/ /usr/lib/

# install misc
yum install samba
yum install system-config-samba
yum install system-config-network
yum install system-config-services

This script is also invoked with sudo:

[me@m6500 ~]$ sudo ./
[me@m6500 ~]$

Next we move on to installing the CUDA Toolkit. I chose to use the default install paths for everything, and this and any future posts will reflect this. So the toolkit, by default, gets installed in /usr/local/cuda:

[me@m6500 ~]$ sudo ./
[me@m6500 ~]$

Once the toolkit is installed we can move on to the GPU Computing SDK. This can be a local install in a single user directory, so as invoked below using default paths, it installs to /home/me/NVIDIA_GPU_Computing_SDK:

[me@m6500 ~]$ cd && pwd
[me@m6500 ~]$ ./
[me@m6500 ~]$

In the reference material I found, there were indications that one did not have to run the devdriver install from NVIDIA. My experience was that I was missing libGL, and I found references saying it is installed by the driver install. I ran the install; YMMV:

[me@m6500 ~]$ sudo ./
[me@m6500 ~]$

At this point the install should be complete, and invoking the SDK build should now succeed:

[me@m6500 ~]$ cd ~/NVIDIA_GPU_Computing_SDK/C
[me@m6500 ~]$ make && bin/linux/release/deviceQuery
Finished building all
bin/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

There is 1 device supporting CUDA

Device 0: "Quadro FX 2800M"
  CUDA Driver Version:                           3.20
  CUDA Runtime Version:                          3.20
  CUDA Capability Major/Minor version number:    1.1
  Total amount of global memory:                 1073020928 bytes
  Multiprocessors x Cores/MP = Cores:            12 (MP) x 8 (Cores/MP) = 96 (Cores)
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.50 GHz
  Concurrent copy and execution:                 Yes
  Run time limit on kernels:                     Yes
  Integrated:                                    No
  Support host page-locked memory mapping:       Yes
  Compute mode:                                  Default (multiple host threads can use this device simultaneously)
  Concurrent kernel execution:                   No
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 3.20, CUDA Runtime Version = 3.20, NumDevs = 1, Device = Quadro FX 2800M


Press <Enter> to Quit...

[me@m6500 ~]$

Success!!! (Hopefully your installation is successful as well.)

This post is long enough. I will follow it shortly with a post describing how to setup your own Makefile-based CUDA projects based on the NVIDIA GPU Computing SDK.

So long for now.

A Quick Follow-up

After looking closer at the boot logs I noticed I had an error:

Checking for module nvidia.ko:                              [FAILED]
nvidia.ko for kernel was not found.                         [WARNING]
The nvidia driver will not be enabled until one is found.   [WARNING]
*** glibc detected *** /usr/bin/python: free(): invalid pointer: 0x00902822 ***
======= Backtrace: =========

I did two things to correct these errors. First, I fixed the path in /etc/init.d/nvidia, adding:

elif test -e "${modpath}/kernel/drivers/video/${modname}"; then

at line 28. This is where my nvidia.ko module landed–once this was added the “module not found” message disappeared, but I still had the invalid pointer error.

The second thing I did was the result of reading a forum thread. Following the steps in post #4 of that thread:

[me@m6500 ~]$ sudo rm -f /etc/X11/xorg.conf
[me@m6500 ~]$ sudo nvidia-config-display disable
[me@m6500 ~]$ sudo nvidia-config-display enable

cleared up the remaining errors. The system now boots with no errors.

A New App — Four³

August 15th, 2010

After investing in some tools, a few months of working evenings, and a ten day wait in the App Store approval queue my second iOS app is now available.

Initial Thoughts

The Easter Bunny was kind enough to wait in line on release day at my local Apple store and deliver to my wife a shiny new iPad. When I saw the resolution and clarity of the screen I knew I wanted to do a game for it. After looking for an hour at the simple games that already existed in the app store, it occurred to me that–since some of the apps had set the bar pretty low–it shouldn’t be too hard to improve on a graphical 3D Tic Tac Toe. And since 3x3x3 Tic Tac Toe is no challenge whatsoever, I decided to go to 4x4x4–hence the name of the game, Four³.

Four³ is a true 3D, four-in-a-row implementation of Tic Tac Toe.


My initial thinking was to do the game in OpenGL (which led to my previous post) but as I researched what it would take to implement my game it became apparent that using OpenGL would require me to do more than just the graphics and I was really hoping to get something into the App Store sooner rather than later. Since I didn’t want to give up all my family time, and my App Store exposure is not (yet) great enough for me to quit my day job, I decided to invest in some game development tools to simplify the process.

I googled 3D game engines and wound up selecting Unity3D. After downloading the demo (and a gracious extension of said demo after a two-week halt in development due to the uncertainty of the new iOS 4 TOS) I was able to almost fully prototype my application with only the demo license. At that point I was convinced the $300 cost of the Unity iPhone basic license was warranted. Although games based on the Unity engine are still being approved in the App Store, the folks at Unity3D are working hard to reduce the ambiguity about the new terms of service and the use of a tool like Unity.

The Game Board

My 4x4x4 tic tac toe game board is simply defined by 9 intersecting planes which delineate 64 game spaces. These planes are easily fabricated.

The game tokens were also easy to generate–the “O” token is simply a sphere and the “X” token is six carefully arranged smaller cubes combined into a single “prefab”.

So the game board and the player tokens were the easy part; more complicated was determining how to allow players to select a game space. In my prototyping phase I realized it would be a simple matter to allow the player to choose a smaller sphere–a “move dot”–which is pre-populated in a game space. The Unity engine allows the kinematics to be defined on an object-by-object basis, so it was a simple matter to configure the grid planes not to respond to touches and configure the “move dots” to do so. Essentially the translucent planes defining the board are invisible to touches. (This is the type of thing that would have been much more time consuming had I gone straight to OpenGL.)

Suspending the game board in space was a simple matter of surrounding it with a skybox.

Since the game tokens can quickly fill any single plane of the game board, it was imperative that the user be able to rotate/spin the board to view the unused game spaces.

The Game Play

It was easy to determine the set of winning vectors. It was also a simple matter to track when either player had control of a winning vector–control being defined as one player, but not both, having a token in the vector. The two-player game was easy to implement, as no AI was needed other than determining when a draw had occurred. Implementing the AI for the more advanced device game play slowed me down a bit; I actually set things aside for about ten days to fiddle with some other development.
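For the curious, the winning-vector bookkeeping is not Unity-specific. Here is a small Python sketch, my own illustration rather than the game's UnityScript, that enumerates the winning vectors of a 4x4x4 board:

```python
from itertools import product

def win_vectors(n=4):
    """Enumerate every straight line of n cells in an n x n x n board."""
    # 13 canonical directions: half of the 26 nonzero (dx, dy, dz) steps,
    # so each geometric line is generated exactly once.
    dirs = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]
    vectors = []
    for start in product(range(n), repeat=3):
        for dx, dy, dz in dirs:
            line = [(start[0] + i * dx, start[1] + i * dy, start[2] + i * dz)
                    for i in range(n)]
            # keep the line only if all four cells are on the board
            if all(0 <= c < n for cell in line for c in cell):
                vectors.append(line)
    return vectors

print(len(win_vectors()))  # 76 winning lines on a 4x4x4 board
```

A player “controls” one of these vectors when they, and only they, have a token on its four cells, which is the quantity the device's decision making weighs.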

I decided to make the easy level really easy–on this level the device simply picks a random unused game space. This makes it very easy to beat the device as there is really no offensive or defensive strategy involved when the device chooses its next move.

The medium and hard levels present different combinations of offensive and defensive strategy. I will not be revealing the full details, but I will say that much of the decision making is based on how much control a given player has of a given win vector and the hard level presents a greater defensive strategy than the medium level. I was quite pleasantly surprised when, even as the developer, the hard level beat me the first two out of three games I played against it (I let the device go first.)

iPhone Input Sample Script

The primary purpose of this post is to give something back to the Unity community. I learned an awful lot from the forums and other resources I discovered.

Below is a portion of my game script which deals with iPhone touch input. There was no single example available when I started my research that showed quite this much interaction so I am publishing this to help others with their Unity development.

The highlights in this script include:

  1. Detecting taps to select a move dot
  2. Detecting a swipe to rotate the game board
  3. Detecting pinches to zoom in and out

I hope someone finds it useful.


// Control all user interactions here

#pragma strict
private var touchBegan: boolean;
private var previous: Vector2;
private var swipe: int;
private var dx: float;
private var dy: float;
private var dVec: Vector3;
private var minDist: float; 
private var maxDist: float;
private var moveFactor: float;
private var minMajorDist: float;
private var curDist: Vector2;
private var prevDist: Vector2;
private var	touch2: iPhoneTouch;	
private var nTouch;
private var dpos: Vector2;
private var pinch: boolean;
private	var slide: float;
private var nextDeviceMove: GameObject;
private var go: GameObject;
private var pos: Vector3;
private var hit: RaycastHit;
private var currentPlayer: int;
private var undoLimit: float;
private var undoAllowed: boolean;
private var undo: boolean;
private var devicePlays: int;
private var timeSinceLastMove: float;
private var iPhoneInUse: boolean;
private var myPosition: Vector2;
private var rotationRate: float;
static var touch: iPhoneTouch;
static var popup : boolean;
static var tap: boolean;

function Start() {
	// currentPlayer and devicePlays set by settings screen
	Debug.Log("PlayGame() - currentPlayer ("+currentPlayer+")");
 	if (devicePlays && 2 == currentPlayer) {

function resetGamePlay() {
	touchBegan = false;
	swipe = 0;
	pinch = false;
	dVec =;
	minDist = 16; 
	maxDist = 30;
	moveFactor = 0.05;
	minMajorDist = 15;
	maxMinorDist = 7; 
	popup = false;
	tap = false;
	defRotationRate = 45.0;
	rotationRate = defRotationRate;
	undoLimit = 2.0;
 	nextDeviceMove = null;
	orientationReset = iPhoneSettings.screenOrientation;
	undoAllowed = false;
	undo = false;
	iPhoneInUse = (iPhoneSettings.model.Substring(0,1) == "i");

// Check for iPhone Touches here
function FixedUpdate () {
 	// if the popup menu is visible don't do normal processing
	if (popup)
		return;

	timeSinceLastMove += Time.deltaTime;
	if (undoAllowed && timeSinceLastMove > undoLimit) {
  		undoAllowed = false;

		Debug.Log("Time for device move "+timeSinceLastMove);
		if (devicePlays && 2 == currentPlayer) {
	if (!iPhoneInUse) {
		// get position from mouse (for development only)
		tap = Input.GetMouseButtonUp(0); 
		myPosition = Input.mousePosition;
	// Decode touches here
	nTouch = iPhoneInput.touchCount;
	if (nTouch == 1) {
		pinch = false;
		touch = iPhoneInput.GetTouch(0); 
		if (touch.phase == iPhoneTouchPhase.Began) {
			dVec =;
			previous = touch.position;
			touchBegan = true;
			swipe = 0;
			tap = false;
		} else if (touchBegan && touch.phase == iPhoneTouchPhase.Moved) {
			dpos = touch.position - previous;
			dx = Mathf.Abs(dpos.x);
			dy = Mathf.Abs(dpos.y);
			if (dx >= minMajorDist && dy <= dx) {
				// swipe in x-axis
				swipe = (dpos.x<0) ? -1 : 1;
				previous = touch.position;
				dVec = Vector3.up;
			} else if (dy >= minMajorDist && dx <= dy) {
				// swipe in y-axis
				swipe = (dpos.y<0) ? -1 : 1;
				previous = touch.position;
				dVec = -transform.right;
		} else if (touch.phase == iPhoneTouchPhase.Ended) {
			touchBegan = false;
			tap = (0 == swipe);
			swipe = 0;
	} else if (nTouch == 2) {
		pinch = false;
		touch = iPhoneInput.GetTouch(0); 
 		touch2 = iPhoneInput.GetTouch(1); 
		dVec =;
		if (touch.phase == iPhoneTouchPhase.Moved &&
		    touch2.phase == iPhoneTouchPhase.Moved) {
			curDist = touch.position - touch2.position; 

			prevDist = (touch.position - touch.deltaPosition) - 
			                  (touch2.position - touch2.deltaPosition); 
			slide = moveFactor * (prevDist.magnitude - curDist.magnitude);
			mag = transform.position.magnitude;
			slide = Mathf.Clamp(mag + slide, minDist, maxDist);
			dVec = Vector3.forward * (mag - slide); 		
			pinch = true;

function Update () {
	// process all frame updates here
	// show next device move
	if (null != nextDeviceMove) {
		nextDeviceMove = null;
	if (undo) {
		undo = false;

	if (popup)
		return;

	// Do 3D dtuff here
	if (swipe != 0) {
		// rotate around the origin along the selected major axis (dVec)
		transform.RotateAround (, dVec, swipe * rotationRate * Time.deltaTime);
		//swipe = swipe - k*i;
		// As coded we get a continuous rotation if the swipe has not ended,
		// even when the touch is held stationary.
		// Uncomment the line below to stop rotation when touch is 
		// stationary but not ended

		// swipe = 0;
	} else if (pinch) {
		// move the camera in and out based on how far we pinched
		// make sure we're still looking at the origin
		// don't pinch on next update
		pinch = false;
	// Check to see if the user selected a MoveDot
	// don't process taps while we're in the undo time interval
	if (!tap || undoAllowed)
		return;
	tap = false;

	if (iPhoneInUse) {
		myPosition = touch.position;
	// We need to actually tap on an object
	if (!Physics.Raycast(Camera.main.ScreenPointToRay(myPosition),  hit, 100))
		return;
	// And we need to hit a rigidbody that is not kinematic
	if (!hit.rigidbody || hit.rigidbody.isKinematic)
		return;

	go = hit.rigidbody.gameObject;
	// get position of move dot that was tapped
	pos = go.transform.position;

	// destroy move dot that was tapped
	undoAllowed = PlayerMove(pos);  

function PlayerMove(pos: Vector3) {
	// place player token in gameboard
	return true;

function DeviceMove() {
	// logic for next device move
	// nextDeviceMove = Game Object of selected move dot	

function LastMoveUndo() {
	// remove player token
	// restore move dot
	undo = false;

iPhone Utility App with EAGLView on Flipside

May 27th, 2010

I am just getting started on OpenGL ES development for the iPhone. There’s a lot of sample code out there, but it’s mostly basic stuff. This post presents (hopefully) a slightly more useful example.

I started with the existing instructions found here. I will not walk you through creating the basic template applications (OpenGL ES Application and Utility Application) in XCode–if you can’t do at least that much on your own, then it’s probably best you learn how to do that much and come back later! The following was done in XCode 3.2.2.

I will repeat the basic steps from the link above and highlight my changes, as such.

  1. Use the utility app as a base
  2. Add QuartzCore and OpenGLES frameworks
  3. Copy EAGLView files (*Render*, EAGLView*) across from your OpenGL template app (these last two steps are easily accomplished by having both template application projects open in XCode at the same time and dragging from one project to the other.)
  4. In the FlipsideView.xib file change View to be type EAGLView
  5. In FlipsideViewController add “@class EAGLView” and an EAGLView ivar called glView and make it an IBOutlet property, so it looks like this:
  6. //
    //  FlipsideViewController.h
    #import <UIKit/UIKit.h>
    @class EAGLView;
    @protocol FlipsideViewControllerDelegate;
    @interface FlipsideViewController : UIViewController {
        id <FlipsideViewControllerDelegate> delegate;
        EAGLView *glView;
    }
    @property (nonatomic, assign)
            id <FlipsideViewControllerDelegate> delegate;
    @property (nonatomic, retain)
            IBOutlet EAGLView *glView;
    - (IBAction)done;
    @end

    @protocol FlipsideViewControllerDelegate
    - (void)flipsideViewControllerDidFinish:
            (FlipsideViewController *)controller;
    @end
  7. In IB FlipsideView.xib connect from File’s Owner to the new glView.  At this point if you save all files in IB and invoke build and run (ignoring the @synthesize warning,) you have the basic functionality.  Running in the simulator you should see this:

    When you click the info button the flipside will appear and you should see this:

    Note that we have a static image here. The code to animate the colored box is shown in the next step.
  8. Make changes to FlipsideViewController.m methods so it looks like this:
  9. //
    //  FlipsideViewController.m
    //  util
    #import "FlipsideViewController.h"
    #import "EAGLView.h"
    @implementation FlipsideViewController
    @synthesize delegate;
    @synthesize glView;
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.view.backgroundColor =
             [UIColor viewFlipsideBackgroundColor];
        self.glView.animationFrameInterval = 1.0 / 60.0;
        [self.glView startAnimation];
    }
    - (IBAction)done {
        self.glView.animationFrameInterval = 1.0 / 5.0;
        [self.glView stopAnimation];
        [self.delegate flipsideViewControllerDidFinish:self];
    }
    - (void)didReceiveMemoryWarning {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Release any cached data, images, etc that aren't in use.
    }
    - (void)viewDidUnload {
        // Release any retained subviews of the main view.
        // e.g. self.myOutlet = nil;
    }
    - (void)dealloc {
        [super dealloc];
    }
    @end

At this point, build and debug, then hit the info button–you should have a bouncing box in the flip side! (Note that I’ve added the “@synthesize glView;” as I should have earlier.)

Most of the games I’ve seen present some GUI elements first to select number of players, level, etc., prior to the actual game play. I think this example presents a more realistic template for implementing that use case; selecting number of players and such can be done on the main view then a button push invokes the flip side view for game play. Good luck with your development!

Trials and Tribulations of using Embedded TCP

May 18th, 2010

I was recently called in for a consult on a program that was having trouble with their 802.11 link.

The team working on this program had created a system using a number of embedded micros which were to communicate via Ethernet on an embedded LAN. In my experience, network communications on an embedded LAN normally run fairly smoothly because you are in total control of the environment. You can design the system based on the bandwidth required and put in Ethernet controllers which support those bandwidth requirements.  You can control who talks when and totally avoid the possibility of collisions occurring.

As it turns out, the system created was so complex that the team was unable to get all these micros communicating effectively in a timely fashion while at the same time doing all the number crunching that needed to be done.  The decision was made to backtrack a bit and prototype some of the systems on PCs instead of micros.

This course of action led to the use of the 802.11 link; what was to be an embedded LAN now became partially embedded and partially a wireless LAN connecting the PCs.  Wireless LANs have their own issues–link saturation, SNR, etc.–some of which I’d had to deal with in the past on prior projects. This is what prompted the request for my help; the team was getting very little data across their wireless link and couldn’t understand why.

After asking a few questions I discovered a couple things:

  1. they were using TCP/IP for their network connections, and
  2. the software engineers had never done network programming

These two factors, combined with the wireless LAN, made for the perfect storm.

The low bandwidth that the team was seeing was due to the fact that TCP uses an exponential backoff mechanism when attempting to guarantee packet delivery. What caused the backoff to occur in the first place were some easily fixed wireless hardware issues.

What compounded the issue was the fact that the socket code on the micros was sending data without regard for the health and status of the socket.  In essence, they were also overflowing their transmit buffers.  This was because the engineers writing the code didn’t know any better.

After shaking my head and rolling my eyes at the state of affairs, the issues were fixed by resolving the wireless hardware issues and instructing the engineers in the use of the select() function to control the flow of data on the socket and monitor its health.
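The same pattern in miniature: a Python sketch of a select()-based health check (names and the UDP demo are mine; the project code itself was C), gating each send on the socket being writable:

```python
import select
import socket

def send_when_ready(sock, payload, timeout=1.0):
    """Send payload only if select() reports the socket writable in time."""
    _, writable, exceptional = select.select([], [sock], [sock], timeout)
    if exceptional:
        raise IOError("socket reported an exceptional condition")
    if not writable:
        return 0  # transmit path is backed up; caller can retry or drop
    return sock.send(payload)

# Demo over a local UDP socket pair.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.connect(rx.getsockname())

sent = send_when_ready(tx, b"telemetry")
data = rx.recv(64)
print(sent, data)
tx.close()
rx.close()
```

The C version is the same shape: FD_SET the socket in the write and error sets, call select() with a timeout, and only write() when the descriptor comes back writable.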

The system now works and the team recently executed a very successful demonstration, but I still have an issue with the fact they are using TCP in the system. Since you control the network and all the traffic on an embedded LAN, TCP is not required.  TCP is designed for traveling long distances through hardware of unknown origin and state; it is not required in a highly controlled embedded environment. In this environment, for this program, UDP is more than sufficient. Here’s why:

  1. The system is tolerant to a small percentage of data loss.
  2. UDP packets are checksummed at lower layers–the Ethernet CRC and the IP header checksum.  If you get a packet then you are pretty much guaranteed the data is correct.
  3. The 100Mbps links in the system above provide more than ten times the bandwidth required–it had 5 nodes, each transmitting less than 1 Mbps.  Staggering their communications to avoid collisions is a simple matter.
  4. Fragmentation can be eliminated by sending data in blocks no larger than a single MTU.
  5. UDP simplifies.  Creating and maintaining connections on a TCP socket can be time consuming and distracting, adding a lot of code with no added value.
  6. UDP datagram loss on a closed embedded LAN is negligible.
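On item 4, the arithmetic for standard Ethernet (my numbers, not figures from the original program) is simple:

```python
# Keeping each datagram within one Ethernet MTU avoids IP fragmentation.
MTU = 1500         # standard Ethernet payload size in bytes
IP_HEADER = 20     # IPv4 header with no options
UDP_HEADER = 8

max_payload = MTU - IP_HEADER - UDP_HEADER
print(max_payload)  # 1472 bytes of application data per unfragmented datagram
```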

Items 5 and 6 above were particularly costly in this instance; many hours were spent maintaining connection-oriented code when the occasional loss of data would not have had a negative impact on the system results.  In this case, even including the wireless LAN, iperf tests showed less than 0.02% datagram loss at the bandwidths this system was running.

Just as everything else posted here, this is one engineer’s opinion.  I hope by stating it, I can help you avoid some of the travails I’ve experienced.

The End of Endianness

May 13th, 2010

I very much dislike dealing with cross-platform endian issues.  When it comes to defining structures with bitfields, it can sometimes become a pain to order all the fields correctly depending on the platform one is using.

Another headache is dealing with host byte ordering and network traffic on Intel platforms–all that byte swapping!!!

Anyway, I’ve been using some simple functions that allow me to parse the message on the fly while it is still in network byte order with no need for byte swapping or structures with bitfields.

The great thing about this code is that it is cross-platform; absolutely no endian issues to deal with.  The price paid for this portability is execution speed–this code will likely be slower when parsing many fields out of a large message.  But if you only need one or two fields from a large message, then this code will actually be faster than byte swapping the entire message.

Below you will find the bitfield extract header file, then a small sample program which uses it, along with its corresponding output.

Here’s the bitfieldextract.h header file:


#if defined(WIN32)
#include <windows.h>
#else
typedef unsigned int   UINT32;
typedef unsigned short UINT16;
typedef unsigned char  UINT8;
typedef unsigned char  UCHAR;
typedef unsigned char  *PUCHAR;
typedef char           *PCHAR;
#endif

// bfx -- bit field extract
// extract up to a 32-bit value at any bit
// offset in a byte array
inline UINT32 bfx(
  const PUCHAR cptr,
  UINT32 bit_offset,
  UINT32 bit_len)
{
  // Portable bit field extract code
  UINT32 byte_off    = ( bit_offset >> 3 );
  UINT32 left_shift  = bit_offset - ( byte_off << 3 );
  UINT32 bytes       = ( left_shift + bit_len + 7 ) >> 3;
  UINT32 right_shift = ( bytes << 3 ) - ( bit_len + left_shift );
  UINT8  cval;
  UINT32 val, i;

  /* grab first byte and apply shift */
  cval = cptr[byte_off] << left_shift;
  val  = cval;
  bytes -= 1;

  if (bytes) {
    /* shift back high order byte */
    val >>= left_shift;

    /* reset left shift since we did it already */
    left_shift = 0;
  }

  for (i=1;i<bytes;i++) {
    /* shift then OR in only complete bytes */
    val = ( val << 8 ) | cptr[byte_off+i];
  }

  if (bytes) {
    /* OR in low order byte after correct shifts */
    val =  val << ( 8 - right_shift );
    val |= ( cptr[byte_off+i] >> right_shift );

    /* reset right shift since we did it already */
    right_shift = 0;
  }

  return val >> ( left_shift + right_shift );
}

// bfxi -- bit field extract and increment
// extract up to a 32-bit value at any bit
// offset in a byte array and auto-increment
// the bit offset by the number of bits read
inline UINT32 bfxi(
  const PUCHAR cptr,
  UINT32 &bit_offset,
  UINT32 bit_len)
{
  UINT32 val = bfx(cptr, bit_offset, bit_len);

  bit_offset += bit_len;

  return val;
}

Here’s a small program that uses it:

#include <iostream>
#include <cstdio>
#include "bitfieldextract.h"

using namespace std;

int main(int argc, char* argv[])
{
  UCHAR x[6] = { 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc };
  unsigned int ofs = 0;
  int i;

  cout << "Binary representation of buffer "
    << "[ 0x12 0x34 0x56 0x78 0x9a 0xbc ]:"
    << endl << endl << "  ";

  for (i=0;i<48;i++)
    cout << bfxi(x, ofs, 1);
  cout << endl << endl;

  cout << "Each 4-bit nibble:"
    << endl << endl << "  ";

  ofs = 0;
  for (i=0;i<12;i++)
    printf("0x%1x ", bfxi(x, ofs, 4));
  cout << endl << endl;

  ofs = 2;
  cout << " 2 bits at " << ofs << " = ";
  cout << bfxi( x, ofs, 2 ) << endl;
  cout << " 4 bits at " << ofs << " = ";
  cout << bfxi( x, ofs, 4 ) << endl;
  cout << " 6 bits at " << ofs << " = ";
  cout << bfxi( x, ofs, 6 ) << endl;
  cout << " 8 bits at " << ofs << " = ";
  cout << bfxi( x, ofs, 8 ) << endl;
  cout << "10 bits at " << ofs << " = ";
  cout << bfxi( x, ofs, 10) << endl;

  for (i=4;i<=8;i++) {
    ofs = i;
    cout << "32 bits at " << ofs << " = ";
    printf("0x%08x\n", bfxi( x, ofs, 32));
  }

  ofs = 16;
  cout << "32 bits at " << ofs << " = ";
  printf("0x%08x\n", bfxi( x, ofs, 32));

  return 0;
}


And the associated output:

Binary representation of buffer [ 0x12 0x34 0x56 0x78 0x9a 0xbc ]:

  000100100011010001010110011110001001101010111100

Each 4-bit nibble:

  0x1 0x2 0x3 0x4 0x5 0x6 0x7 0x8 0x9 0xa 0xb 0xc

 2 bits at 2 = 1
 4 bits at 4 = 2
 6 bits at 8 = 13
 8 bits at 14 = 21
10 bits at 22 = 632
32 bits at 4 = 0x23456789
32 bits at 5 = 0x468acf13
32 bits at 6 = 0x8d159e26
32 bits at 7 = 0x1a2b3c4d
32 bits at 8 = 0x3456789a
32 bits at 16 = 0x56789abc

Reading BeagleBoard User Button (or any GPIO)

May 14th, 2009

This one is short and sweet, based on the blinking LED example found here.

Here’s a shell script to read a GPIO and generate a square wave on the console:

#!/bin/sh
# Read a GPIO input

GPIO=$1 # GPIO number from the command line

cleanup() { # Release the GPIO port
  echo $GPIO > /sys/class/gpio/unexport
  echo ""
  echo ""
  exit
}

# Open the GPIO port
echo "$GPIO" > /sys/class/gpio/export
echo "in" > /sys/class/gpio/gpio${GPIO}/direction

trap cleanup SIGINT # call cleanup on Ctrl-C

THIS_VALUE=`cat /sys/class/gpio/gpio${GPIO}/value`
LAST_VALUE=$THIS_VALUE
NEWLINE=0

# Read forever
while [ "1" = "1" ]; do
  # next three lines detect state transition
  if [ "$THIS_VALUE" != "$LAST_VALUE" ]; then
    LAST_VALUE=$THIS_VALUE
  fi

  # "^" for high, '_' for low
  if [ "1" = "$THIS_VALUE" ]; then
    EV="^"
  else
    EV="_"
  fi
  echo -n $EV

  # sleep for a while
  sleep 0.05

  THIS_VALUE=`cat /sys/class/gpio/gpio${GPIO}/value`

  # wrap line every 72 samples
  NEWLINE=`expr $NEWLINE + 1`
  if [ "$NEWLINE" = "72" ]; then
    echo ""
    NEWLINE=0
  fi
done

cleanup # call the cleanup routine

I saved this as ~/read_gpio, did a ‘chmod 755 read_gpio’ and invoked it to read the user button, GPIO 7:

root@beagleboard:~# ./read_gpio 7


Sampling at a 50ms interval caught most of my button pushes, even when I pressed at an unreasonably high rate. A 100ms interval was too long, and some of the faster button pushes were missed.