Sunday, December 6, 2009

Fat iPhone Static Libraries: One File = Device + Simulator Code

Abstract
This post explains how to build a static iPhone library that contains ARM code (for iPhones and iPod Touches) and x86 code (for the simulator), in a single file.

This is useful for companies which do not wish to expose their source code when distributing iPhone code, e.g. AdMob. The currently accepted practice is to distribute separate static library files for devices and for the simulator. This means that library clients have to set up separate targets for the simulator and for devices. The separate targets duplicate most information, and only differ in the static libraries they include. This violates the DRY principle.

Solution

Open a Terminal, cd into your library's Xcode project directory, and issue the following commands.
xcodebuild -sdk iphoneos3.0 "ARCHS=armv6 armv7" clean build
xcodebuild -sdk iphonesimulator3.0 "ARCHS=i386 x86_64" "VALID_ARCHS=i386 x86_64" clean build
lipo -output build/libClosedLib.a -create build/Release-iphoneos/libClosedLib.a build/Release-iphonesimulator/libClosedLib.a

The commands above assume your library's name is ClosedLib and you're targeting iPhone OS 3.0. It should be straightforward to tweak the commands to accommodate different library names and SDK versions.


Assumptions
I assume that the library is not updated too often, so a couple of manual steps are acceptable. I would like to add an automatic build feature to my zerg-xcode tool, but there is no ETA for that. I also assume that your project's default target is the static library.

I do not assume that the device and simulator SDKs have similar headers. The beauty of my method is that the simulator code is built with the simulator SDK, and the device code is built with the device SDK.

Explanation
The format used in Mac OS X and iPhone OS libraries and executable files is Mach-O. The format supports "fat" binaries, which are pretty much concatenated Mach-O files for different architectures, with a thin header.

Xcode can build fat binaries. In fact, if you set Architectures to Optimized in your project's build options, you'll get a fat binary containing both ARMv6 (iPhone 2G + 3G, iPod Touch) and ARMv7 (iPhone 3GS, iPod Touch 2G + 3G) code.

Xcode's output is controlled by the build settings ARCHS (Architectures) and VALID_ARCHS (Valid Architectures). The architectures that actually get built are the intersection of these two settings. Due to the different ARM processors in iPhones, device builds have VALID_ARCHS set to include ARMv6 and ARMv7. However, simulator builds only target the i386 platform. I want a fat binary for the simulator as well, so I change the VALID_ARCHS option to include the AMD64 platform.

The last step in the process is the lipo tool that comes with the Xcode Tools, and manages fat binaries. I use it to create a fat binary containing the union of the architectures in the two static libraries. The device build contributes the ARM architectures, and the simulator build contributes the Intel architectures.
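
To verify the merge, you can ask lipo to list the architectures inside the combined library:
lipo -info build/libClosedLib.a
If everything worked, the output should list armv6 and armv7 alongside i386 and x86_64.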

The build process can be tweaked to throw out the AMD64 code, but I wanted to avoid hard-coding processor constants. Most importantly, using a fat library does not translate to larger iPhone applications, because the GNU linker strips out unused architectures when building binaries.

Testing
I tested this method by creating an iPhone static library and an iPhone window-based application. I built the library using the steps above, and I included the headers and the static library in the application. Then I used the debugger to confirm that the library code works both on the simulator and on an iPod Touch 2G.

My solution was only tested with the latest version of Xcode at the time of this writing (Xcode 3.1.2), but it should work on any version of Xcode 3, assuming no bugs come up. I tested on Snow Leopard, but the Leopard version of the iPhone SDK should work as well.

References
Mac Dev Center: Mac OS X ABI Reference
Amit Singh: Mac OS X Internals

Tuesday, November 17, 2009

JCOP Smartcard Performance

Abstract
I use NXP JCOP smart-cards for prototyping in my research. I have recently benchmarked the cards using my research code, which contains computational and cryptographic workloads.

I found a couple of surprising results that I want to share, so fellow developers can make informed decisions when choosing their prototyping platforms.

Findings
There is a 1.5-2x speed difference between the different revisions of the same high-end chip, the NXP JCOP41 with 72KB of EEPROM. The V2.2 revision for smart-cards (no longer available, replaced by V2.2.1) has the best performance, and the V2.2.1 revision for the SIM (ID-000) form factor has the worst performance.

There is a significant speed difference between the same revision (V2.2.1) of the same chip (NXP JCOP41, 72KB of EEPROM) in different form factors. The chip in the smart-card form factor is almost as fast as the older V2.2 revision, while the chip in the SIM (ID-000) form factor is significantly slower.

There is a 2-4x speed difference between the same revisions (V2.2, smart-card form factor) of the NXP JCOP31 and the NXP JCOP41 chips.

The 3DES encryption/decryption engine has non-linear performance. The time it takes to decrypt 128 bytes is not very different from the time it takes to decrypt 24 bytes, on the JCOP41 chips. It seems that there is a huge setup cost for the DES engine, which outweighs the actual encryption cost.

There is a 4-8x speed difference between RSA and 3DES encryption on the JCOP41 chips, and a 3x speed difference on the JCOP31 chip. This goes against the conventional wisdom that symmetric encryption is 2 orders of magnitude faster than asymmetric encryption. This is probably due to the time it takes to set up the 3DES engine.

Conclusion
Secure processors in smart-cards have non-obvious performance characteristics. I hope my work saves you from the unpleasant surprises that I had.

Motivation
To the best of my knowledge, there are no easy-to-find benchmarks of smart-card processors. At least, I couldn't find anything when I searched.

Smart-card retailers disclose vital specifications, like EEPROM size and the cryptographic primitives that are implemented on the chip, but tend to be quiet about speed. The sites I used don't mention anything about the type of processor used, or the frequency of the processor.

For some applications (e.g. prototyping, where I want my unit tests to run quickly), speed is just as critical as the other specifications, and it's more important than cost.

Data
The data that I used to reach my conclusions is available below. The benchmarks are described in section 5.1 (page 13) in my paper on a successor to the TPM.

decrypt_3des decrypts 24 bytes of data, while decrypt_3des_long and decrypt_rsa work on 128 bytes of data. 3DES is configured in EDE-CBC mode (112 bits of key material) and uses ISO 9797 method 2 padding. RSA decryption uses PKCS#1 padding.

The benchmarks can be reproduced by installing Rubygems, then installing the tem_ruby gem, and issuing the following commands
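gem install tem_ruby  # Installs the tem_ruby gem mentioned above.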
tem_upload_fw  # Uploads my JavaCard applet to the active smart-card.
tem_bench  # Runs the benchmarks.


NXP JCOP41 v2.2/72k (no longer available)
time_blank_bound_secpack_3des: 0.20757s
time_blank_bound_secpack_rsa: 0.86173s
time_blank_sec: 0.18017s
time_devchip_decrypt_3des: 0.05803s
time_devchip_decrypt_3des_long: 0.08042s
time_devchip_decrypt_rsa_long: 0.74047s
time_post_buffer: 0.08280s
time_simple_apdu: 0.00515s
time_vm_perf: 0.73887s
time_vm_perf_bound_3des: 0.78137s
time_vm_perf_bound_rsa: 1.43647s


NXP JCOP41 v2.2.1/72k (usasmartcard.com product link)
time_blank_bound_secpack_3des: 0.24155s
time_blank_bound_secpack_rsa: 0.89740s
time_blank_sec: 0.18937s
time_devchip_decrypt_3des: 0.08420s
time_devchip_decrypt_3des_long: 0.10800s
time_devchip_decrypt_rsa_long: 0.76480s
time_post_buffer: 0.08577s
time_simple_apdu: 0.00610s
time_vm_perf: 0.83257s
time_vm_perf_bound_3des: 0.90033s
time_vm_perf_bound_rsa: 1.55637s


NXP JCOP41 v2.2.1/72k USB token (usasmartcard.com product link, probably using this card)
time_blank_bound_secpack_3des: 0.41070s
time_blank_bound_secpack_rsa: 1.23089s
time_blank_sec: 0.34530s
time_devchip_decrypt_3des: 0.19010s
time_devchip_decrypt_3des_long: 0.21410s
time_devchip_decrypt_rsa_long: 1.05600s
time_post_buffer: 0.17213s
time_simple_apdu: 0.01000s
time_vm_perf: 1.11310s
time_vm_perf_bound_3des: 1.19703s
time_vm_perf_bound_rsa: 2.01420s


NXP JCOP31 v2.2 (usasmartcard.com product link)
time_blank_bound_secpack_3des: 0.84673s
time_blank_bound_secpack_rsa: 1.78957s
time_blank_sec: 0.78120s
time_devchip_decrypt_3des: 0.23553s
time_devchip_decrypt_3des_long: 0.50060s
time_devchip_decrypt_rsa_long: 1.54990s
time_post_buffer: 0.88864s
time_simple_apdu: 0.02813s
time_vm_perf: 1.84374s
time_vm_perf_bound_3des: 1.92594s
time_vm_perf_bound_rsa: 2.87900s

Saturday, November 14, 2009

Quick Way to See Your Gems' Documentation

Summary
The fastest way to see the RDoc for your installed gems is to type the following command into a terminal, and point your Web browser to the address it returns (usually http://localhost:8808)
gem server

Details
The Web server created by the gem server command contains the RDocs for all the gems you have installed, unless you disabled RDoc generation when you installed your gems.


Motivation
I was used to either searching via Google, or going to a gem's RubyForge page to see the RDoc for the gem.

Initially, it seemed that this bit of effort was worth not having to learn some way to generate and bring up the RDocs myself. To my surprise, the procedure is very simple, and it's much faster than browsing to some Web site that hosts the RDocs.

That aside, using a local server has the advantage that you'll see the RDocs corresponding to the exact versions of your gems. And, last but not least, RubyForge seems to be slowly falling into oblivion, and the authors of newer gems don't seem to bother publishing their RDocs.

I hope this post saves you some time.

Tuesday, July 28, 2009

Sandbox Push Notifications on Hacktivated iPhones

This post reports my finding that push notifications in the sandbox environment don't work on hacktivated iPhones, even with the Push Fix package.

I wasted a lot of time debugging my Push Notification code under the assumption that Push Notifications work on my iPhone. I hope this post saves some time for other iPhone SDK developers.

Test Environment
I used an iPhone 2G (hardware model iPhone1,1) which was never activated with AT&T. The phone was connected to the Internet via WiFi, and it had no SIM card in it. My control was a newly bought iPod Touch 2G (hardware model iPod2,1), which was activated with iTunes, upgraded to iPhone OS 3.0, and connected to the Internet via the same WiFi router.

The iPhone was jailbroken, hacktivated, and unlocked with Pwnage Tool 3.0, and it received Push Fix from the iPhoneil.net repository. I used AIM (the free edition) to confirm that Push Notifications work on the iPhone.

I used ZergSupport's test suite to collect the push tokens for the iPhone and iPod, and I used imobile's test suite to send the push notifications.

Test Results
The hacktivated iPhone never received notifications from the sandbox (a.k.a. development) servers. It did receive notifications from the production servers. The iTunes-activated iPod received both development and production notifications.

Conclusion
If you're considering developing for the iPhone, and you want to implement and test Push Notifications for your application, you'll need an iTunes-activated device. The cheapest option is probably an iPod Touch 2G.

Motivation
I can't afford an iPhone. I can afford the device, but I can't afford AT&T's plan. On the other hand, I want to address the iPhone's user base, because it consists of wealthy people who spend money easily.

I have an iPhone 2G, from the good days when you could buy one in an Apple store, and not have to deal with AT&T at all. I like testing my applications on its EDGE connection, to ensure they behave under the worst-case network connectivity scenario.

Saturday, July 25, 2009

Rebuild Your Ruby Gems If You Update To Snow Leopard

This post contains a command that you must absolutely issue if you are a Ruby developer upgrading to Snow Leopard.

The Commands
Update: Type the following command in a terminal:
sudo gem update --system; sudo gem pristine --all
RubyGems will produce some errors, which are safe to ignore.

For historical reasons, here's my initial solution to re-compiling all the native gems.
Fire up irb, and type the following command:
`gem list`.each_line {|line| system "sudo gem install #{line.split.first}"}

Motivation
Snow Leopard introduces a disruptive change: everything runs in 64-bit mode by default. Most importantly to me, ruby is now 64-bit. This is a problem when upgrading to Snow Leopard, as opposed to doing a fresh install, because the Ruby extensions in your old gems are probably 32-bit.
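
You can tell whether a particular extension is affected by asking the file command which architectures its binary contains (the path below is purely illustrative; your gem directory and versions will differ):
file /Library/Ruby/Gems/1.8/gems/json-1.1.9/ext/json/ext/parser.bundle
A 32-bit-only extension shows up as a Mach-O bundle for i386, with no x86_64 entry.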

A quick solution to this problem is getting all the extensions rebuilt, which is done by reinstalling all the gems.

In case you're wondering, gem update won't do the trick, because it will not rebuild all your gems.

Symptoms
If you're lucky, you'll get a library loading error when trying to use some gem with an extension (example: json), and you'll figure out rather quickly that you need to reinstall the gem.

A subtle symptom of the same bug is experiencing slowdowns running a Rails development server. In my case, webrick was really slow - start-up took about 30 seconds. For this reason, it's better to re-compile all the gems, as opposed to fire-fighting load error messages.

I hope this post saves you some time.

Saturday, July 11, 2009

Downloading YouTube videos with TubeTV again

One-line Summary
If the TubeTV download button is disabled in a YouTube video page, refresh the page (Apple+R) and hit the button while it's available.

Whole Story
I use TubeTV to download music videos and those funny user-generated videos from YouTube. I've set it to encode the videos for my iPhone, then drop them directly into my iTunes library, tagged as Music Videos. Amazingly, it was released in early 2008, and it's still avoiding all the crap that has come to YouTube since (annotations, ads). Thank you YouTube for not tampering with the H.264 stream in the Flash files!

I usually wait until the YouTube video loads completely before I hit TubeTV's download button, so I don't download the same bits twice. But recently, I've noticed that the download button becomes disabled at some point during the video's load. I was scared for a second, and thought the days of my easy downloading were over.

After my 2-second panic went away, I tried refreshing the page, and the download button was enabled again. The videos are still downloaded, encoded for iPhone, and deposited into iTunes' library just fine.

Closing Thoughts
Too bad TubeTV wasn't open sourced, even though it looks like its author abandoned it. One day, it will stop working. Hopefully, something better will be written by then. Or YouTube will start using <video> tags and serve us the H.264 data on a silver platter. And the RIAA / MPAA will let that slide. Right.

Monday, May 25, 2009

Snooping on iPhone Applications

Most iPhone applications communicate with a server to perform their functions. This post is a step-by-step guide for snooping on the communications between an iPhone application and its server. The instructions can come in handy for debugging your own application, or if you're curious how other applications communicate with their servers.

Overview
My method uses Wireshark's WiFi RadioTap promiscuous mode to capture all the radio traffic, and find the iPhone's traffic. This article starts by laying out the necessary "ingredients", and guides you through setting up your iPhone and your Mac. Snooping the traffic is demonstrated by snooping on Apple's Stocks application. The post wraps up by describing my motivation for snooping on iPhone apps.

Ingredients
This post is tailored to my home environment, which is described below. Most differences between that environment and yours can be compensated by a bit of creativity. Here's what you need:
  1. iPhone or iPod Touch. As long as it can run the application and connect to WiFi, it works.
  2. Open WiFi network. Most schools and work places have open guest networks, which work for this purpose. At home, I disable the security on my router/AP for the duration of my snooping, and re-enable it later on.
  3. Mac computer with OSX Leopard. It may work on Tiger, I haven't tried. It may work on hackintoshes, but I haven't tried that either. The software I'm using also has Windows/Linux ports, which I haven't tried.
iPhone / iPod Touch Setup
The iPhone can communicate using the cellular network, in addition to WiFi. We want to make sure that doesn't happen. The fastest way I know is to go to Settings, enable Airplane Mode, and then re-enable WiFi and re-connect to the access point. If you are using an iPod Touch, you don't have to worry about this: it can only communicate via WiFi.

Snooping on applications is a lot easier if you know your iPhone / iPod's IP address. To find the IP, launch Settings, select WiFi, click on the blue arrow next to your access point's name, and read the IP from under the DHCP tab. This blog post has a thorough guide for this step, with pictures.

Computer Setup
Go to Wireshark's download page and download the stable version .dmg for your computer. The stable version at the time of this writing has all the necessary features for snooping, so you don't need the development version unless you feel adventurous. Yes, I knew you'd ask!

The Wireshark installation is not straightforward yet (this writing uses version 1.12), so I will go through the steps. Start off with the easy part, and drag the Wireshark icon to the Applications folder. The following commands (which you can copy-paste in Terminal) implement the instructions in the Readme.rtf included in the .dmg download.

sudo cp /Volumes/Wireshark/Utilities/Command\ Line/* /usr/local/bin/
sudo cp -r /Volumes/Wireshark/Utilities/ChmodBPF /Library/StartupItems/
sudo /Library/StartupItems/ChmodBPF/ChmodBPF start
You can unmount and delete the .dmg now.

Application Traffic Snooping
Before starting Wireshark, make sure your Mac is using WiFi. I have both LAN and WiFi connections, and I pull out my LAN cable before starting up Wireshark.

Start Wireshark, ignore the dialog boxes (there should be one informing you about a potentially long startup time, and one about missing stuff while loading MIBs). Open the Capture menu, and select Interfaces. Identify your WiFi interface - it's usually en1 (that's always the case on a Macbook / Macbook Pro). Click the Options button, change the Link-layer header type to IEEE 802.11 plus radiotap WLAN header, and enable Promiscuous mode.

Your capture should be as short as possible, to make analysis easy. For this reason, get ready to launch your iPhone application, and launch it as soon as Wireshark starts capturing traffic. Click the Start button at the bottom-right of Wireshark's dialog to start playing.

To stop the capture, select Stop from the Capture menu. For a shortcut, you can use the 4th toolbar icon from the left. If everything went well, all the iPhone traffic is available for your analyzing pleasure.

The example below shows an easy method for looking at an application's traffic.

Apple Stocks Traffic
This section describes how you can observe the traffic of Apple's Stocks application that comes pre-installed on iPhone OS. You can safely skip it if you feel like exploring Wireshark on your own.

First, use the instructions above for capturing the iPhone's Internet traffic for a few seconds, right when Stocks is launched. In the packets table, click on the Source header to sort packets by source. Find the packets originating from your iPhone / iPod Touch. Go through until you find something interesting.

For the Stocks application, the first interesting packet is a DNS resolution request for iphone-wu.apple.com which is the server feeding Stocks its information. The packets right under that are TCP packets, and you can right-click on any of them and Follow TCP stream. You will see a HTTP request / response between the Stocks application and Apple's servers. The imei parameter there caused some uproar (and blog traffic) a couple of years ago, so traffic snooping can definitely pay off.

When you close the TCP stream window, your packets window will only show the packets related to the request / response pair that you just saw. If you look in the Filter field under the toolbar, you can get a glimpse of Wireshark's filter syntax. The filter can be edited. For example, if you remove the predicate consisting of tcp.port eq and some big number, you will have all the HTTP packets exchanged between the iPhone / iPod Touch and Apple's server.
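
For example, a display filter along the lines of the one below (the IP address is made up; substitute your device's address from the DHCP tab) keeps just the HTTP traffic exchanged with the device:
ip.addr == 192.168.1.104 && http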

By now, you should have a good glimpse into Stocks' communication protocol. Of course, the method described here applies to any other application, as long as it doesn't use encryption (e.g. SSL / TLS).

Motivation
I use this method to see where iPhone applications get their data from, and how they communicate with their servers. For example, the Stocks application claims it uses Yahoo data, and I wanted to see if it has a private XML feed, or if it implemented its own JSON parsing.

I also used this method to analyze the protocol of an online game that I like, so that I can write a script for automating the boring tasks.


Thank you for reading this post! I'm looking forward to your feedback. I would especially appreciate comments on simplifying the setup process. Happy snooping!

Wednesday, May 13, 2009

iPhone Web Service Toolkit Upgrade: JSON FTW

I have recently open-sourced the ZergSupport code updates used in StockPlay versions 0.3 and 0.4. The high-visibility high-impact change is support for JSON parsing. This post shows what you can do with JSON parsing.

Compact Collection Initialization
Unfortunately, Objective C does not have literals for collections (arrays, dictionaries, or sets). Setting up complex nested structures normally requires ugly Objective C code. Fortunately, the whole JSON specification is a compact literal notation. Compare and contrast the following.
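Here is a minimal sketch of the contrast. The Cocoa calls are standard; the parsing entry point is a hypothetical name, since ZergSupport's exact class and selector names may differ.

// Plain Cocoa: verbose nested-collection setup.
NSDictionary *stock = [NSDictionary dictionaryWithObjectsAndKeys:
    @"AAPL", @"ticker",
    [NSArray arrayWithObjects:[NSNumber numberWithDouble:125.4],
                              [NSNumber numberWithDouble:123.9], nil],
    @"prices", nil];

// The same structure, as a JSON literal in one readable string.
// ZNJsonParser is a hypothetical name, for illustration only.
NSDictionary *stock2 = [ZNJsonParser parseValue:
    @"{'ticker': 'AAPL', 'prices': [125.4, 123.9]}"];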



To sweeten the deal even more, ZergSupport's JSON parser was extended so you can conveniently embed literals in Objective C strings. First, strings can be delimited by ' (single quotes) aside from the standard delimiter " (double quotes). Second, the extended parser understands sets, which look like arrays, but are delimited by angled brackets ( < > ) instead of square brackets ( [ ] ). Without further ado, here's how to use JSON literal support.
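A minimal sketch follows; again, the parser class name is an assumption, not ZergSupport's confirmed API.

// Single-quoted strings avoid escaping double quotes inside Objective C
// string literals.
NSDictionary *config = [ZNJsonParser parseValue:
    @"{'retries': 3, 'hosts': ['a.example.com', 'b.example.com']}"];

// Angled brackets are an extension that produces an NSSet, not an NSArray.
NSSet *tickers = [ZNJsonParser parseValue:@"<'AAPL', 'GOOG', 'MSFT'>"];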



A small wrinkle to be aware of is that JSON parsing is slower than building the collections directly in Objective C, so JSON literals should be used in features that are not performance-sensitive, like tests and configuration files.

Web Services
The API for working with JSON Web services is very similar to the API for XML services, which is showcased in my first post on ZergSupport. When used from the Web service API, the JSON parser ignores everything up to the first { character, so it is able to parse JSONP output.

The code below is a complete implementation for stock ticker symbol search, using Yahoo Finance. The code uses Object Query (covered below) to indicate which parts of the JSON response should be transformed into ModelSupport models.
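The sketch below conveys the shape of that implementation; the service URL, model class, selector, and constant names are all assumptions for illustration, not ZergSupport's confirmed API.

// Hypothetical sketch: query a JSONP symbol-search service, and map the
// interesting part of the response to SymbolSearchResult models.
[ZNJsonHttpRequest callService:@"http://finance.example.com/autocomplete?query=AAPL"
                        method:kZNHttpMethodGet
                          data:nil
               responseQueries:[NSArray arrayWithObject:@"/?/Result"]
            responseModelClass:[SymbolSearchResult class]
                        target:self
                        action:@selector(processResponse:)];

// processResponse: receives an NSArray of SymbolSearchResult models.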



Object Query
Object Query implements a domain-specific language (DSL) for retrieving objects from a structure of deeply nested Cocoa collections. Queries are specified as strings, and are performed against the root object in a structure of nested collections. The result of a query is an array of zero or more objects matching the query.

The queries are property names, joined by a separator character (usually /). The first character in the query is the separator character. For example /results/1 matches the value that object['results'][1] would return in JavaScript. The special property names * and ? are inspired from glob expressions: * means the next property may be found in the current object or in some descendant object (some levels deeper in the object graph), whereas ? means the next property may be found in the current object or in the object's direct children (one level deeper in the object graph). For example /?/1 will return the same result as /results/1, if the initial object is {'results':['Jane','John']} (the result will be an NSArray containing the NSString @"John").

ObjectQuery can be used directly, outside of JsonHttpRequest, as shown in the following code sample.
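A sketch of direct usage follows; the selector names are assumptions, and the real ObjectQuery API may differ slightly.

// Build a nested structure, then pull values out of it by query.
NSDictionary *root = [ZNJsonParser parseValue:@"{'results': ['Jane', 'John']}"];
NSArray *matches = [ObjectQuery run:@"/results/1" on:root];
// matches now holds one element, the NSString @"John".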



The decision to introduce a DSL is motivated by the need to extract ModelSupport models from JSON Web service responses. XML objects have a tag/content separation, and tags are usually good indicators for model extraction purposes. For JSON objects, the closest equivalent to a tag name would be the property name whose value is the object hash describing the model. This is fragile, and does not work if the response contains an array of models. ObjectQuery is a bit more general than name tags, but does not degenerate into XPath's complexity. The implementation is 182 lines of Objective C, including comments and whitespace.

Conclusion
JSON support does not stop at the parser. The toolkit fully embraces JSON, which is now available for initializing ModelSupport models. The toolkit's new Object Query bridges the gap between XML-based and JSON-based Web Services, and preserves API consistency in WebSupport.

Thank you for reading this post! I'm looking forward to receiving your feedback or (even better) pull requests for ZergSupport.

Sunday, May 10, 2009

Community Effort for iPhone Application Security

This post is a short description of the community effort I'm trying to start around the iPhone application security model. It describes the effort, my motives for starting it, and the method I have chosen. The effort is hosted on George Hotz' theiphonewiki.com, with George's permission.

Effort

I have created an Application Copy Protection section on The iPhone Wiki. I hope that the wiki will become a place for developers to pool their knowledge on iPhone application security. In turn, this will make iPhone development less expensive and more enjoyable. Ideally, we would develop a code obfuscation method, as well as a server-side integrity check method, which are non-trivial to reverse. Once there is a barrier against automated programs and beginner crackers, piracy will hopefully go down to a more acceptable rate.

Motivation
I'm dissatisfied with the asymmetry in the iPhone security landscape. On one hand, application pirates have a good infrastructure, ranging from tutorials to the Crackulous application for automated piracy, and to the Appulous infrastructure for distributing pirated applications. On the other hand, developers have to fight many unknowns, like the unspecified signature system, because Apple designed the system on the assumption that developers will not have to worry about copy protection themselves. Application security information is spread across Apple's documentation and various blogs and forums, which makes it hard for developers to learn and implement application security.

I'm also unhappy with RIPdev's approach of charging setup fees and royalties, because the application developers are already paying Apple an up-front development fee, as well as distribution fees.

Last but not least, I'm obviously a bit upset that my application got pirated the day after it launched in the iTunes store :)

Method
I am documenting my thought process and method for establishing the effort for historical reasons. They will hopefully be useful to other people who want to start similar initiatives.

I wanted to do a wiki on iPhone application security, but I didn't take the time to think through the logistics until recently. I was initially thinking of opening a Google Site, and adding as a collaborator any person that would e-mail me a useful piece of information. Then I realized that Google Sites don't look as open to contributions as wikis do. At the same time, I also started thinking about getting visibility for the site. After a bit of thinking, I realized I'm better off hosting the effort on The iPhone Wiki, because it's already a well-known site, its topic is security on the iPhone, and it contains information that can be useful to developers researching application security.

After I decided on The iPhone Wiki, I did some googling to find out that it was started by George Hotz, and I read the wiki's Constitution to see if my effort belonged there. I was still unsure if my effort fits in, so I decided to ask for George's permission. After some more googling, I eventually tracked him down, and he gave his consent quickly.

Having gotten George's consent, I spent a bit of time thinking of the best way to blend the topics I wanted to add with the existing content on the Wiki. I chose to create a separate section named Application Copy Protection on the front page, and created a skeleton under it. This optimizes for visibility, and makes it easy for me to organize my thoughts, but may not be the best solution for the overall site. Fortunately, it's a Wiki, so I don't have to worry too much. If I made a mistake, someone else will jump in and fix it.

My next steps are:
  • contribute enough content to make the wiki worth reading for iPhone application developers
  • create a skeleton for what I think the rest of the content should be, so other people can easily jump in and contribute their knowledge
  • pitch the effort to high-traffic iPhone-related blogs, to make developers aware of the Wiki; the fact that the pages are hosted on The iPhone Wiki should help
Conclusion
This is my first grown-up attempt at starting a community effort. I would appreciate any suggestions or generic feedback. I hope you found the post at least amusing, if not useful.

Saturday, April 25, 2009

iPhone Piracy: Hard Numbers For A Soft Problem

Update: Apple has fixed the piracy problem by implementing In-App Purchases, which use signed receipts that can be validated by servers. In-App Purchases became available for free applications on or around October 14, 2009. Therefore, this post is only of historical interest.

This post will give hard numbers representing the current state of piracy on the iPhone platform. Its main purpose is to help independent developers that are considering working on the iPhone decide if they should invest their efforts into the platform.

Overview
This post analyzes the piracy rate of my iPhone application, StockPlay. The article begins by describing the application used for measurements, then argues that the real piracy rate for the application is over 90%, and explains why this state of affairs is unlikely to change. The post closes with advice for individual developers considering entering the iPhone market.

Background
StockPlay is a simulated stock trading game, where the virtual market is strongly correlated with the real market. The game is backed by a Ruby on Rails server that the iPhone client must connect to in order to play, which made it possible to get hard numbers on piracy.

The game retails for $9.99 (price tier 10) in the App Store, and is available world-wide since April 6, 2009 (19 days before this writing). This post on the game's official blog explains the motivation behind the pricing. The game does not contain any copy protection like Ripdev's Kali, and solely relies on Apple's DRM obfuscation.

StockPlay was cracked and became available on the most popular site for cracked applications 1 day after its launch.

StockPlay's Piracy Rate
To this date (April 25, 2009), we have 40 sales, and 2902 users. However, as most pirates would say to defend themselves, some of these people only tried StockPlay because it was available for free. To account for this, I will restrict my calculation to the 456 users that were still trading (and thus actively using the application) 24 hours after they registered with the server. This yields a piracy rate of (456 - 40) / 456, or roughly 91%.

Pirates also say that some people could not have afforded the application, but I claim that price is not an issue, given the cost of buying an iPhone and a data plan for it.

Apple Doesn't Care
After reading the above numbers, you're probably thinking that Apple will come in and fix the situation. This section argues that Apple has no financial incentive to eliminate piracy, and their behavior indicates that they're well aware of that.

First, the iTunes App Store is expected to break even. According to their statements, Apple doesn't expect to make a profit out of operating the store. This means that they don't care if an application is purchased through the store, or downloaded from elsewhere, not using their bandwidth. On the other hand, more free (from the consumers' points of view) applications translate into better demand for Apple's hardware.

Second, Apple already knows about the issue. I filed a bug in Radar, explaining how easy it is to crack applications with Crackulous (yes, it's really that easy), and providing a solution to prevent piracy for server-based apps. The bug received the ID 6755444, and was marked as a duplicate of 6707901, which was probably filed in mid-February. I'm making this claim based on the IDs of my other bugs, and on the assumption that Radar IDs are serial. Bottom line: Apple has other priorities.

Last, but not least, Apple makes it ridiculously difficult for developers to implement their own solution. The iPhone SDK developer agreement bans developers from getting involved with jailbreaking, which is a prerequisite to understanding how our applications are being cracked. To make matters worse, Apple does not make it easy for developers to obtain the final application binary, as it will be distributed on the iPhone. This means we cannot implement server-side binary checksums without having to jump through a lot of hoops. Furthermore, implementing a decent anti-cracking system requires messing with the binary bits and application loader at a low level. This runs the risk of getting your application rejected, which pushes your launch date back by a couple of weeks.

Conclusion
If you're hoping to make easy money on the iPhone, look elsewhere. Don't believe the hype about Apple users having better morals, and being much more likely to pay for software. iPhone users are educated enough to Google search for pirated applications, and dishonest enough to use them. Just like PC users.

The piracy rate of over 90% suggests that you're better off developing desktop applications. Sure, they'll be pirated as well, but at least you don't have to put up with Apple's approval process and you won't have to design and code around the excessive technical limitations of the iPhone SDK.

Want to avoid piracy and stay ahead of the pack? It's a great time to be a Web programmer.

Now What?
If you're determined on writing an iPhone application (we programmers like to play with cool toys, after all), and want to monetize your effort, you should stick to one of the following:
  • in-app advertising - AdMob seems to offer the best toolkit at the moment. Google has been experimenting with iPhone ads, but they don't offer an SDK to the public quite yet. Downsides: ads take up a sizable chunk of screen real-estate, so you'll have to work harder at designing your app. If the application isn't wildly popular, the ad revenue will not be worth the effort.
  • traditional payment methods - if you have a server in your application (like StockPlay does), you can distribute the application for free, then charge for accounts on the server. Disadvantages: your users will have to have PayPal or other payment methods, and will have to log in using their mobile phones (I hate typing on iPhone). People may get frustrated if they blindly download the app because it's free, then realize they have to pay. Frustrated people give bad reviews.
  • third-party copy protection - the best solution that I know of is Ripdev's Kali. Ripdev plays an active role in the jailbreak community, so they're likely to stay ahead of the crackers. Disadvantages: they charge a setup fee per application, and royalties. You'll have that nasty feeling of being ripped off, as you're already paying Apple 30% of your revenue for the same service.
  • develop your own copy protection - not worth it, unless you want the learning experience, or you're a big company. Copy protection is boring as hell, and it's unrewarding - no matter what you do, you eventually lose.
Motivation
I wrote this post to help my fellow developers decide if they should pursue the iPhone as a development platform. When my friends and I decided to write an iPhone application, the development blogs seemed to agree on a piracy rate of 60%, so I wanted to share my completely different findings with the developer community.

I believe the findings are novel and worth sharing, because they are based on hard numbers, as opposed to proxy measurements such as declines in sales, or in-app analytics. Most applications can function without a server, so the majority of developers cannot obtain 100%-accurate user statistics.

My friends and I particularly cared about piracy because our application uses a server, which means that pirates are not just lost business, but also unauthorized consumers of server resources such as bandwidth and CPU time.

Sunday, April 19, 2009

Toolkit for Web Service-Backed iPhone Apps

This post describes the chunk of iPhone code that I have recently open sourced (edit: I wrote outsourced before; Epic FAIL). I wrote the code while developing the StockPlay trading simulation game, because the currently released iPhone SDK does not ship with a good infrastructure for building applications that talk to Web services.

Overview
I named the toolkit ZergSupport, and you can get it from my GitHub repository. The README file contains a thorough description of the library so, instead of rehashing that, my post will highlight the main reasons why you should care about the toolkit, and discuss some of the thoughts that went into writing this code.

The code is organized as a toolkit, not a framework, which means that ZergSupport is a collection of supporting classes, and does not impose a rigid architecture on your application, like a framework would. As you read this post, please keep in mind that you can use the parts that you want, and ignore everything else. ZergSupport is liberally licensed under the MIT license, so feel free to go to GitHub and jump right into it, as soon as this post convinces you that it's useful.

Web Service Communication
Without further ado, this is how data exchange is done.
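A sketch of such a call follows; the selector, constants, and URL are assumptions for illustration, and may not match ZergSupport's exact API.

// Hypothetical sketch: send the user and device models to a Web service,
// and have the XML response parsed back into models.
[ZNXmlHttpRequest callService:@"http://stockplay.example.com/users.xml"
                       method:kZNHttpMethodPost
                         data:[NSDictionary dictionaryWithObjectsAndKeys:
                                  user, @"user", device, @"device", nil]
           responseModelClass:[User class]
                       target:self
                       action:@selector(processResponse:)];
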
The code above makes a Web service call, passing in the data in the user and device models, and parsing data that comes back into models. The data that gets passed to the Web server is formatted such that Rails and PHP would parse the models into hashes, which fits right into how Rails structures its forms. The code expects the Web service to respond in XML format, as implied by the ZNXmlHttpRequest class name. The models in the Web service response are sent to processResponse:, as an array of models.

You have to agree that the code above is much more convenient than having to deal with low-level HTTP yourself. That is, unless setting up the models is a real hassle. Read on, to see how easy (I hope) it is to declare models.

Models
On Mac OS X, you have Core Data to help you with your models. Sadly, this feature didn't make it into iPhone 2.x, so you have to write your own model code. Since StockPlay works with a lot of models, I couldn't write quick hacks and ignore this underlying problem. Actually, I could have, but I didn't want to.
The following listing shows an example ZergSupport model declaration.
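The sketch below captures the shape of such a declaration; the ZNModel base class name is an assumption.

// A model with two attributes, declared as Objective C 2.0 properties,
// with the backing fields the iPhone runtime currently requires.
@interface User : ZNModel {
  NSString *name;
  NSUInteger age;
}
@property (nonatomic, retain) NSString *name;
@property (nonatomic, assign) NSUInteger age;
@end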

The model's attributes are defined as Objective C 2.0 properties. I did this to keep the code as DRY as possible, thinking that models will need accessor methods anyway, and that explicit property declarations keep Xcode happy, so it doesn't clutter up the code window with compilation warnings. Right now, the model declaration is a big FAIL in terms of DRY, because the iPhone Objective C runtime requires declaring fields to back the properties. However, the 64-bit Objective C runtime supports omitting the field declarations, so I have reason to hope that the iPhone runtime will do this as well, eventually.

An advantage I liked about using properties to declare model attributes is that the model declaration is plain code, which is easy to work with using version control, and easy to code-review. I think this is as close as it gets to the convenience of Rails models.

Models And The Web
Models change, usually by gaining more attributes. If you're writing an iPhone application on top of an MVC (e.g. Rails) Web service, your iPhone models will probably mirror the Web models. I assert that this strategy can only work well if the iPhone code is capable of ignoring model attributes that it does not understand. The motivation is that models change over the lifetime of the application, and most of the time they change by gaining attributes. If your iPhone code cannot handle unknown attributes, you have to synchronize your Web server changes with your iPhone application release dates, which is a pain.

So, the ZergSupport models accept unknown attributes. In fact, they go one step further, and store unknown attributes as they are, so these attributes survive serialization / de-serialization. This is particularly handy for using iPhone-side models to cache server-side models. As soon as the server emits new attributes, these are stored on the iPhone cache, ready to be used by a future version of the application.

Just One More Thing (x5)
ZergSupport model serializers and deserializers can convert between iPhoneVariableCasing and script_variable_casing on the fly, so your iPhone models follow the iPhone's Objective C naming conventions, and your server-side models follow the naming conventions in your Web application language.

The toolkit includes reusable bits of logic that can come in handy for Web service-based iPhone applications, such as a communication controller that regularly synchronizes models between the iPhone and the Web server.

The ZergSupport code base packages a subset of the Google Toolbox for Mac that provides unit testing. You create unit tests simply by adding a new executable target to your project, and including the testing libraries into it. The testing code is wrapped in a separate target from the main code, so you don't ship unit testing code in your final application.

Speaking of testing, ZergSupport has automated test cases covering all its functionality. The Web service code is tested by an open-source mock Web service which is hosted for free, courtesy of Heroku Garden. I used a hosting service because I wanted to make sure that the OS integration works correctly, and to allow any user and contributor to run the unit tests without the hassle of server setup.

Last, but definitely not least, the ZergSupport code can be automatically imported into your project, using the zerg-xcode tool that I have open-sourced earlier this year.

Conclusion
The final conclusion is yours. I hope you will find the ZergSupport toolkit useful, and incorporate it in your code. I promise to share all future improvements that I make to ZergSupport, and I hope you will do the same, should you find it useful enough to look through the code and change it.

Wednesday, April 15, 2009

App Engine supports Ruby! Sort-of.

This post is a follow-up to my Great Time To Be a Web Programmer post, where I assert that HTML / CSS / JavaScript are the technology to learn in 2009, if you don't know them already. In that post, I said that Google's App Engine only supports Python, and that has changed. I am writing this quick update so my blog's readers are aware of the change in the Web application hosting landscape.

Java leads the way to Ruby
As of early April, Google's App Engine supports Java. The really good news here, if you don't care for low-productivity languages, is that the Java 6 VM provided by the App Engine has near-native performance, and most high-level languages have interpreters written in Java.

This opens the route to my favorite language, Ruby, being available on Google's App Engine. appengine-jruby is an experimental open-source project aimed at making JRuby available for the App Engine, and at implementing Ruby-esque abstractions over Google's APIs. At the same time, Ola Bini from ThoughtWorks took the time to get Rails to run on the App Engine, and wrote a blog post documenting his method.

There is still a devil in the details, however. According to Ola Bini, performance is nothing to write home about, and developers still have to zip up their source code to work around App Engine's 1,000 file limit.

Why this matters 
I think Google's App Engine is an important cloud-hosting platform because of its generous free tier. It is the best solution that I know of for hosting hobby projects, or for a project's incubating phase.

Conclusion
Sooner or later, Rails applications will run seamlessly on Google's App Engine. I believe it will happen sooner rather than later. Once Rails 3 shows up on the horizon and delivers on its promise of modularity, developers will be in a good position to rewrite the right parts for the App Engine.

In the bigger picture, the reality is shifting towards my guess that the cloud hosting platforms will soon support all the high-level programming languages. So the last programming language that you will have to learn for Web development is JavaScript, because the browser is still tied to it.

I hope that you have found this post useful, and look forward to your comments.

Saturday, April 4, 2009

Ubuntu 9.04 on Dell Mini 910

This post outlines a cheap and reasonably fast procedure for upgrading the stock Ubuntu installation on a Dell mini 910. The method does not require an external CD drive. Instead, it uses things that you're likely to have around, if you like messing with computers.

As always, I go through the actual procedure first, and leave the motivation for last. If you're here, it's likely that you have already decided you want to upgrade, and want to get the job done. If you're uncertain, skip to the Motivation section to find out why you might care.

Requirements
  • Dell mini 910 (the Ubuntu model); you can probably get away with the Windows XP model, but it is unclear whether all your hardware will be supported by drivers
  • 1GB+ USB stick (cheap, you're likely to have one around already)
  • another computer running Ubuntu 8.10 or newer


Method
First, we have to load Ubuntu on a USB stick, and make it bootable. We'll use the Ubuntu 8.10+ computer for that.
  1. Download the latest reasonably stable 9.04 CD image (I recommend avoiding daily CD images; you can use Update Manager to get the latest updates). This google query should point you in the right direction.
  2. While waiting for the download, backup anything you need off your USB drive. It will get erased in the next step.
  3. Go to System > Administration > USB Startup Disk Creator and go through the instructions to end up with a bootable USB stick.
  4. If the USB stick is automatically mounted, eject it and take it out.
Second, we need to get Ubuntu onto the Dell mini.
  1. Power off the computer, insert the USB stick.
  2. Power the computer back on, press and hold the 0 (zero) key until you see a menu. The mini will be annoying and beep at you, ignore that.
  3. Select your language (I recommend English even if it's not your primary language, especially for beta software) and choose to install Ubuntu (as opposed to running the live image).
  4. Breeze through the easy choices in setup. Stop at the disk partitioning phase, as you might want to give that a thought.
  5. For my configuration (1GB RAM, 8GB disk) I recommend choosing manual partitioning, and creating a 1GB swap partition. I did run out of RAM while running my development scripts on the machine, so I decided I need the swap. I also recommend ext3 over ext4, because you won't store too much data on your mini's disk, so ext4's benefits are not worth the risk in this case. For the default configuration (512MB RAM, 4GB disk), I'd spend 512MB, or at least 256MB, on swap.
  6. Defaults are fine for everything else until the installation reboots.
  7. Enjoy the improvements in 9.04, and the lack of Dell branding.
Motivation
I'm using the Dell mini as a demo machine that I can easily carry around. Its low cost also means that, if necessary, I can leave it with the people I'm demoing to, and I won't feel too bad about that. For that reason, I want the Dell branding removed, I want the latest and greatest from my Linux distribution, and I want the regular x86 architecture, not LPIA (low-power Intel architecture).
My wishes aside, I think that the UI improvements in 9.04 and getting rid of Dell's stuff are sufficient reason to upgrade.

Alternatives
If you want to use the Dell mini as your portable computer, you might prefer the LPIA architecture to plain vanilla x86. The 9.04 download pages offer both netbook-optimized builds and LPIA builds. Disclaimer: YMMV (your mileage may vary), I haven't tried this because I don't want extra hassles during my development cycle.

If you don't have an Ubuntu 8.10 computer and/or a USB drive, you can try Unetbootin. Google searches indicate that it gives mixed results; I haven't tried it because I had another Ubuntu machine. That procedure might work, and it only requires your mini and Internet access.

If you have a lot of time on your hands and want to play, you can explore setting up a PXE server. This requires lots of software, and access to the network hardware (easy if that's your home router, more difficult if you're in a school or company).

I hope you found this post useful. Please comment if you know a better method, or if you found some tweaks that everyone should know about.

Even monitors need power-cycling

This post publicizes my latest finding that LCD monitor firmware has reached the level of unreliability of consumer-grade computer software, and therefore we even have to reboot our screens every once in a while.

Background
Just to make things crystal clear, power-cycling is turning a computer off, and then back on. It's also known as cold booting, or hard reset. It is different from resetting (warm-booting) a computer, because the equipment has to lose power completely, and not just undergo a complete software reload.

Today's desktop computers have removed the Reset button, so we have to resort to power-cycling the computer (usually by holding the power button for 4 seconds) any time it freezes completely. Warm-booting (slightly more gentle) is usually associated with software updates, and it's become a common, but infrequent, occurrence. Poor Windows users are forced into it once a month, by Microsoft's mandatory "you can say later, but turn your back for 5 minutes and I'll reboot your system" security updates.

So, cold reboots are associated with software failures. I learned to accept that as an inevitable consequence of operating systems being complex software (hundreds of millions of lines of code) which is released based on time, not quality, to meet revenue goals.

Lesson Learned
Imagine my bedazzlement when I had to do the same thing to... my LCD monitor. I have a (reasonably old, granted) Dell E228WPFc (entry-level 22" widescreen, not HD). I tried to switch the DVI cable from my laptop to my Mac mini, and the monitor just wouldn't get out of sleep. After wasting 5 minutes wondering if any of the cables was broken, I yanked the power cable out of the monitor, waited for a second, then put it back in. And the screen lit up, and it worked.

Next time, I'll try power-cycling the screen earlier in the debugging process. And, as power-saving modes are implemented into more and more devices, I'll hope I don't step into an elevator which hangs getting out of sleep. Or in a car, for that matter.

Monday, March 30, 2009

Managing Software Dependencies

Some software development decisions are more important than others. This post argues that decisions involving dependencies are among the very important ones, and describes my approach to managing dependencies.

What Are Dependencies
For the purpose of this post, dependencies are pieces of software outside the project or component that you are considering. Software development does entail other dependencies, like the value of a local currency, but those are outside the scope of my write-up.

Why Worry About Dependencies
Decisions that involve taking dependencies are among the most important decisions in software development, because dependencies come with costs and constraints.

Maintenance costs are the ongoing cost associated with keeping the dependency. This cost does include traditional maintenance, such as staying informed about new versions, and applying security updates, but it can go much further. For example, taking a dependency on a Windows-only API in a Web server imposes the cost of a Windows license on every machine running the server. Furthermore, maintenance costs aren't always easy to estimate. For example, the biggest cost in using a library developed by a small group of people is not licensing or integration, but rather the potential cost of having to take on the development of that library, if the initial developers cease working on the library.

Replacement costs are more straightforward -- they are the price paid to completely remove the dependency on a piece of software. Their importance lies in the implication that replacement costs are the maximum "premium" that you will pay in maintenance cost for a dependency, over the optimum cost. The explanation for this is: if the maintenance cost for using Windows becomes so large that it's cheaper to pay the replacement cost for Linux, plus the maintenance cost for Linux, then you will switch to Linux. So the biggest premium that you will pay to stick with Windows is how much it would take to replace it.

Incompatibility constraints come with every dependency taken. Technical incompatibilities tend to be obvious: for example, DirectX requires Windows, and Cocoa requires MacOS, so there is no straightforward way to write a Cocoa application using DirectX. Other incompatibilities are more subtle, like licensing. The GPL license is the most well-known pain, because GPL code cannot be linked together with code released under some other free licenses. Last but not least, there are "versioning hell" incompatibilities, where library A requires library B, at most version 1.0, and library C requires library B, version 1.1 or above, and for this reason, A and C cannot be used together.

These costs and constraints are the first factors I consider when deciding whether to take a new dependency, as I describe below.

Managing Dependencies
In a nutshell, my strategy around dependencies is as follows. Avoid unnecessary dependencies, and take cheap dependencies. Failing that, make the expensive dependencies easy to replace.

Unnecessary Dependencies
To me, the most important aspect of managing dependencies is being aware when I'm taking them. For example, Linux or OSX developers can habitually use fork or POSIX filesystem permissions. This habit becomes a problem when developing multi-platform code, because these features are not present on Windows. Higher-level languages are not immune to platform dependencies either. In SQL, it's all too easy to use a database-specific extension, and popular scripting languages (ruby, python) have extensions which may not be available on Windows, or may crash on OSX. Versioning hell dependencies are also a pain, and keeping track of them requires a perspective that is more commonly possessed by accountants than by coders.

Fortunately, continuous builds can be used to delegate the tedious bookkeeping to computers. Continuous builds set up to run on Windows and Mac OSX protect from taking an unwanted dependency on Linux. A continuous build running tests against SQLite and PostgreSQL database backends protects from dependencies on MySQL. Continuous builds warn about troublesome code early on, when programmers will still be inclined to fix it. For example, it's easier to replace the fork / exec pair with a system call before it becomes a pattern sprinkled around the entire codebase.

    Awareness is only the first step. Most of the time, a dependency has to be taken in return for extra functionality, and I have to decide which dependency to take and write the integration code. When making that decision, I weigh the costs and constraints presented in the previous section.

    Cheap Dependencies
    If the maintenance cost will clearly be low, I don't worry too much about the dependency. For example, if I'm using Ruby, I assume the Rubygems library is installed or easily available, so I don't think twice before using its functionality. When estimating maintenance costs, I pay most attention to incompatibility constraints. The following findings ring alarm bells in my head:
    • platform dependencies. Example: if it doesn't work on Windows, I can't use it in a desktop application.
    • restrictive licenses. Examples: the GPL, and licenses that forbid use in a commercial setting.
    • patents. A subtle example: Adobe's supposedly open Flex platform uses the Flash file format, which is patented by Adobe. Though Adobe published a specification of the Flash format, it prohibits using the specification to build competing Flash players.
    • niche open-source projects. Ohloh tracks statistics that can indicate a potentially troublesome open-source project, like a short revision history, a single committer, and uncommented code.

    Expensive Dependencies
    When the maintenance cost of a dependency will be high, I take extra precautions to lower its replacement cost. I try to learn about at least one alternative, and I write the integration code in such a way that it would be easy to swap that alternative in. The goal is a good abstraction layer that insulates the rest of my application from the dependency, and keeps the replacement cost low. Two common examples of this practice are JavaScript frameworks, which insulate application code from browser quirks, and ORM layers such as ActiveRecord, which put a lot of work into database independence.
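    To make this concrete, here's a minimal sketch of such an insulation layer in Javascript; the vendor API below is entirely made up for illustration, and the point is the shape of the wrapper, not the specific calls.

    // Hypothetical third-party analytics library (a stand-in for the real vendor API).
    var vendorApi = {
      recordEvent: function(name) { /* vendor-specific work happens here */ }
    };

    // The wrapper is the only code that knows about the vendor.
    var Analytics = {
      track: function(eventName) {
        vendorApi.recordEvent(eventName);
      }
    };

    // Application code stays vendor-agnostic, so replacing the vendor
    // means rewriting only the wrapper above.
    Analytics.track('signup');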

    Having good automated tests provides many advantages that prolong the life of a codebase. One of them is reducing the replacement costs for all the dependencies. Uprooting a dependency is a nightmare when developers have to sift through piles of code by hand. The same task becomes routine when the computer can point at the code that needs to be changed. Without a good automated test suite, dependencies can become really rigid ("this application only works with Rails 2.2, it'd take forever to port to Rails 2.3" versus "we spend a few hours to update the application when a new version of Rails comes out").

    The effort that goes into keeping replacement costs low is typically repaid many times over by the benefits of being able to replace old or troublesome dependencies. Of course, this only holds for long-lived projects, and I wouldn't pay as much attention to how I integrate my dependencies when I'm exploring or building a throw-away prototype.


    Conclusion
    Many good software projects don't shine because of their dependencies (example: Cocoa, because it only runs on Mac OS X). The total cost of long-lived projects is largely influenced by the cost of living with their dependencies. Therefore, it makes sense to invest effort into steering away from dependencies that may bring trouble or even doom the project down the line. Hopefully, this post has presented a few considerations that will help you spot these troublesome dependencies, and either avoid them or at least insulate your codebase from them.

    One More Thing
    I promise I won't make this a habit, but I want to end this post with something for you to think about. As a programmer, choosing which skill to learn next is closely related to the dependencies problem explored above. We learn new technologies to use them in our projects, which means the projects will take dependencies on those technologies. So, we might not want to learn technologies which translate into troublesome dependencies.

    I will write more about looking at dependencies from this different angle, next week.

    Wednesday, March 25, 2009

    Removing Default Ruby Gems on OSX Leopard

    This post describes a quick way to remove the gems that come pre-installed on OSX Leopard.

    Method
    First, you should update your gems, so you have newer versions for all the gems you're about to remove. While you're at it, update rubygems as well.
    sudo gem update --system
    sudo gem update

    Now blast the directory containing the gems that came with OSX.
    sudo rm -r /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8

    If, for some reason, that directory does not exist on your system, you can see where rubygems stores its gems by running gem env paths. Most likely, the old gems have already been cleaned up.

    Enjoy being able to clean up all the old gems on your system.
    sudo gem clean

    Warning
    Removing the gems this way is permanent. If you don't like that thought, rename the 1.8 directory to 1.8.dead, and create an empty 1.8 in its place. This way, rubygems doesn't see the old gems, but they are still around if you need them for some reason. So, instead of rm-ing:
    sudo mv /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8.dead
    sudo mkdir /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8

    Motivation
    The pre-installed gems were released two years ago, so they're really old by now. They need to go away. Doing a gem clean will fail to remove them (tested with Rubygems 1.3.1 and below). What's worse, gem clean will also fail to remove other old gems that you have installed, so, after a while, you'll have a lot of cruft on your system.

    I wrote this post because, up until now, I've been too lazy to figure out the gem cleanup situation. Now that I finally did, I want to make it easy for others to get their systems clean.

    Conclusion
    I've described a quick way to remove the old ruby gems that come preinstalled with OSX Leopard. This is useful because gem clean is non-functional in the presence of those gems. I hope you have found the post useful. Please comment if you have better or quicker solutions to this problem.

    Sunday, March 22, 2009

    Your Web Server and Dynamic IPs

    This post describes the techniques I'm using to host my application from a server whose IP changes over time. The post assumes the server's IP only changes when the server is not in use, and therefore I do not address servicing requests during the IP change. Instead, I am concerned with restoring the mapping between the server's DNS entries and its IP in an automated and reasonably quick manner.

    Overview
    I signed up for a dynamic DNS service. This gives me a DNS name that I can point to any IP, plus some software that I install on my server to update the DNS entry automatically. Then I set the user-visible DNS hostname (www.something.com) as a CNAME pointing to the dynamic DNS hostname.

    The technique generalizes to serving multiple applications (with separate domains) from a single server. The DNS entries for all the applications are set up as CNAMEs pointing to the server's dynamic DNS entry. The HTTP port on the server is owned by a reverse proxy and load balancer that dispatches requests to each application's backends, based on the Host: header in the HTTP request.

    Dynamic DNS Service
    You can get dynamic DNS for free. I use dyndns.com's service, and it worked for me. If you want to shop around, here's a list of providers, courtesy of Google Search.

    Once you sign up for service, you should get a hostname (like victor.dyndns.com) that you can point to any IP. This host name will be transparent to your users, so you don't need to worry about branding when choosing it. Your only worry is having to remember it.

    The important decision you have to make here is the TTL (time-to-live) of your entry. This is the time it takes for an IP change to propagate. Shorter values have the advantage that your server can be reached again quickly after it is moved. Longer values mean the IP address stays longer in your users' DNS caches, so they have to do DNS queries less often. This matters because dynamic DNS adds an extra DNS query that users' browsers must perform before accessing your site, which adds to your site's perceived latency. Your TTL choice will be a compromise between availability after a move and the average latency increase caused by the extra DNS lookup.

    Dynamic DNS Updater
    To make the most out of your dynamic DNS service, you need software that updates the IP associated with the DNS hostname.

    My Rails deployment script automatically configures the updater for me (source code here). I use ddclient, because it's recommended by my dynamic DNS service provider.

    In order to use DynDNS on Ubuntu:
    1. sudo apt-get install ddclient
    2. Edit /etc/init.d/ddclient and replace run_daemon=false with run_daemon=true
    3. Use the following configuration in your /etc/ddclient.conf, replacing dyndns_username, dyndns_password, and dyndns_hostname with your own account details and dynamic DNS hostname
    pid=/var/run/ddclient.pid
    use=web, web=checkip.dyndns.com/, web-skip='IP Address'
    protocol=dyndns2
    server=members.dyndns.org
    login=dyndns_username
    password='dyndns_password'
    dyndns_hostname
    

    The updater will start on reboot. If you want to start it right away,
    sudo /etc/init.d/ddclient start


    Other Options
    If you use DynDNS, but don't run Linux, they have clients for Windows and OSX. If you don't use DynDNS, this Google search might be a good start.

    My home router (running dd-wrt) uses inadyn. I don't like inadyn on my server, because it takes my password on the command line, so anyone who can run ps can see my password.


    Application DNS Setup
    Having done all the hard work, you close the loop by setting up a CNAME mapping your application's pretty DNS name to the dynamic DNS hostname. If you don't want to pay for a domain, you can give out the dynamic DNS hostname to your users... but it's probably not as pretty.

    The process for setting up the CNAME mapping depends on your domain name provider (who sold you www.something.com). The best source of instructions I know is the Google Apps Help. If you use that, remember to replace ghs.google.com with your dynamic DNS hostname.

    Debugging
    Chances are, your setup will not come out right the first time. Even if it does, your setup might break at some point. Your best aid in debugging the DNS setup is dig, which comes pre-installed on Mac OSX and most Linux distributions.

    Run dig www.something.com, and you'll get an output that looks like this:
    moonstone:~ victor$ dig www.mymovienights.com
    (irrelevant header, removed)
    ;; QUESTION SECTION:
    ;www.mymovienights.com.        IN    A
    
    ;; ANSWER SECTION:
    www.mymovienights.com.    1742    IN    CNAME    chubby.kicks-ass.net.
    chubby.kicks-ass.net.    2    IN    A    18.242.5.133
    
    ;; Query time: 211 msec
    ;; SERVER: 192.168.1.1#53(192.168.1.1)
    
    (irrelevant footer, removed)
    I removed the parts that are completely uninteresting; the answer section is what you care about. It shows a DNS chain built following this post. If your chain doesn't look like this, you know where to fix the error. If everything looks good here, but you still can't reach your server, the problem is either at the networking layer (can you ping the server?) or at the application layer (your load balancer or application server is misconfigured).

    Another interesting result you get from dig is the query time, which shows the latency introduced by DNS to the users who visit your site for the first time. Unfortunately, this doesn't give accurate numbers if dig's answer is in some DNS cache, so be sure to account for that in some way when measuring latency.

    Monitoring
    I use Google's Webmaster Tools because they provide free monitoring. The overview is sufficient to see if the site is up or down. If you have a Gmail account and use it frequently, you can embed a gadget showing your site's status into your Gmail view.

    Multiple Applications
    I use the same server for multiple Web applications. I have a separate DNS hostname for each application, and they all point to the same dynamic DNS hostname via CNAMEs.

    On the server, I use nginx as my reverse proxy, because it is fast and it can be reconfigured with no downtime while it's serving user requests. You can use Apache instead if you prefer, using these instructions.

    My reverse proxy setup is done automatically by my Rails deployment script (source code here). Here's how you can get a similar configuration:
    1. sudo apt-get install nginx
    2. For each application, create a file in /etc/nginx/sites-enabled/ with the following configuration
    # the application server(s) behind this virtual host
    upstream application_name {
      server 127.0.0.1:8080;
    }

    server {
      listen 80;
      server_name www.something.com;
      root /path/to/your/application/html/files;
      client_max_body_size 48M;
      location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;  # lets the backend know which site was requested
        proxy_redirect off;
        proxy_connect_timeout 2;
        proxy_read_timeout 86400;

        # serve static files directly when they exist
        if (-f $request_filename) {
          break;
        }

        # serve directory indexes and page caches, if present
        if (-f $request_filename/index.html) {
          rewrite (.*) $1/index.html break;
        }
        if (-f $request_filename.html) {
          rewrite (.*) $1.html break;
        }

        # otherwise, hand the request off to the application server
        if (!-f $request_filename) {
          proxy_pass http://application_name;
          break;
        }
      }
    }
    This configuration handles requests for www.something.com by serving static files directly through nginx when they are available, and by forwarding the HTTP requests to your application server at port 8080 otherwise. If you do not want to serve static files from nginx, remove the root clause, and all the if clauses. Tweak any other numbers as needed.

    Of course, you cannot use port 80 for any of your application servers, since nginx is listening on it.

    nginx will start on reboot. If you want to start it right away,
    sudo /etc/init.d/nginx start

    DNS Prefetching
    If you're worried about the latency added by the extra layer of DNS, you can use prefetching to work around it. DNS prefetching is a fancy name for tricking the user's browser into doing a DNS lookup for your hostname before the user interacts with your application.
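    For example, here's a minimal sketch of the trick: have a page your users already visit load a tiny resource from your hostname, which forces the DNS lookup ahead of time. The file name below is made up; any small resource served from your hostname works.

    // Hypothetical prefetch snippet: fetching any small resource from the
    // application's hostname warms up the user's DNS cache before they click through.
    var probe = new Image();
    probe.src = 'http://www.something.com/favicon.ico';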


    If you're wondering whether this prefetching thing actually matters, know that Google uses DNS prefetching in Chrome. Sadly, most Web developers don't have enough leverage over their users to convince them to install custom software.

    Firefox supports link prefetching, which can be useful if your users install a widget / gadget that's served from a CDN (e.g. Google Gadgets).

    You can also be more creative by looking at the bigger picture. For instance, if your users install an application of yours on their mobile phones, those phones will likely do DNS queries using your users' home routers. So, if your mobile application synchronizes with the server using a sync interval that's smaller than the TTL on your DNS entries... you've killed most of the latency.
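    As a browser-flavored sketch of that last idea (the interval, TTL, and URL below are invented), polling on a timer shorter than the TTL means the hostname gets re-resolved shortly after each expiry, so users rarely pay the lookup cost themselves:

    // Hypothetical keep-warm loop: with a 10-minute TTL, syncing every
    // 5 minutes re-resolves the hostname soon after each TTL expiry.
    setInterval(function() {
      var probe = new Image();
      probe.src = 'http://www.something.com/ping?t=' + (new Date()).getTime();
    }, 5 * 60 * 1000);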

    Motivation
    My servers have been hosted in random places. I've had my application server in my dorm room, in my friends' dorm rooms, and in random labs around MIT.

    Given that my servers travel so much, I like to keep them light (a Mac Mini or a Dell Studio Hybrid), and I want to be able to move them without any manual configuration changes. This means the servers can be headless, and that my friends can move the servers for me, without needing any training.

    Conclusion
    Thanks for reading, and I hope you found this post useful. Please leave a comment if you have any suggestion for an easier or better setup.

    Wednesday, March 11, 2009

    Great Time To Be a Web Programmer

    If you don't know client-side Web programming (HTML, CSS, and Javascript) already, it should be the next technology you learn! I'm pretty sure that 2009 starts the golden era of these technologies, and this post explains why. Aside from making my point, I highlight some cool and very useful pieces of technology along the way.

    Overview
    My argument goes along the following lines: Javascript has evolved into a mature language, with good frameworks. Browsers got faster at Javascript, and better at standard compliance. Major Web sites offer easy access to their data through APIs. Applications and widgets based on Web technologies can be easily integrated into the desktop, or other Web applications. Last, but definitely not least, generous free hosting is available, and can be set up quickly.

    Read on, and find out what technologies I'm referring to.

    The Platform Is Ready
    Javascript got off to a really bad start. Between the language's name itself and the horribly slow and buggy early browser implementations, it earned a bad reputation.

    However, today's Javascript is a well-understood and pretty productive language. Libraries like Dojo, Prototype/scriptaculous, and jQuery abstract browser incompatibilities away, and insulate programmers from the less inspired DOM APIs. The HTML5 draft, which the leading quality browsers are adopting pretty quickly (compared to the time it took to get CSS2 in), specs out many goodies, such as offline Web applications, push notifications, and better graphics.

    Equally important, browsers are in a Javascript speed race, and the winners are us. Between Safari 4, Google Chrome, and Firefox 3.1, we should have fast Javascript execution on all major operating systems before the end of 2009.

    Integration Opportunities Abound
    Integration comes in two flavors. First, you might want to use data from other sources, like Google, Facebook, and Twitter. Second, your idea may not be suitable for an entire Web application, and might fare better on the desktop, or as a widget. There is great news coming from both fronts.

    JSONP is an easy way to get data despite the browser's cross-domain restrictions, and major companies have been taking it seriously. Google's search API and Twitter's API have JSONP support. Yahoo's Query Language goes one step further and lets you get other sites' content wrapped up in nice JSONP. Did I mention Dojo's seamless support for Google's search API?
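    In case you haven't seen it before, here's roughly what a hand-rolled JSONP request looks like; the API URL and callback name below are invented for illustration.

    // JSONP in a nutshell: inject a script tag whose response is JSON
    // wrapped in a call to a function you define.
    function handleResults(data) {
      alert(data.message);  // do something useful with the data here
    }
    var script = document.createElement('script');
    script.src = 'http://api.example.com/search?q=test&callback=handleResults';
    document.getElementsByTagName('head')[0].appendChild(script);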
    If you want to integrate your application with your user's desktop, you have Google Gears and Mozilla Prism today, and HTML5 in the future.

    Applications that don't need a lot of screen space can be packaged effectively as widgets. Widgets are supported natively by Dashboard in Mac OS, and by the Sidebar in Vista. For a more cross-platform solution, you should check out Google Gadgets, which work the same on the Web, in the Mac OS dashboard, in Linux, and in Windows.

    Oh, and one more thing. Google's gadgets also work in their productivity suite - in Gmail, in Spreadsheets, and in Google Sites. So you could impress your boss by building a dashboard with important numbers straight into their Gmail.

    REST Decouples Client From Server
    Remember ugly long URLs? REST (Representational State Transfer) is a collection of design principles which yields the opposite of those long URLs. It matters because, once your client-server API obeys REST, your client is not dependent on your server implementation.

    Using REST works out very well with the approach of pushing most of the application logic to the client-side Javascript code. An argument for why most of your code should be client-side follows.

    If you're looking for free hosting (covered in the next section), the server code will not be in Javascript, but rather a server-side language, like Ruby, Python, or Java. Choosing a server language narrows down your platform choice (for example, at the moment, Google's App Engine only works with Python). If you're looking for free hosting, you want to be able to port your code quickly to whichever platform offers better free quotas at the moment.

    Using REST designs with JSON input and output gives you "standardized" servers that are easy to interact with, and easy to code. On the client side, for example, Dojo automates the data exchange for you. On the server side, Rails scaffolds have built-in REST/JSON support, or you can pick up ready-made App Engine code.
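    For a taste of how little client code this takes even without a framework, here's a bare-bones sketch using a raw XMLHttpRequest. The resource URL is invented, and I'm assuming a browser with native JSON support (or json2.js).

    // Fetch one resource from a RESTful server and parse its JSON body.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/movies/1.json', true);
    xhr.onreadystatechange = function() {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var movie = JSON.parse(xhr.responseText);
        alert(movie.title);  // the server's representation, decoupled from its implementation
      }
    };
    xhr.send();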

    Hosting Can Be Free
    Web applications are very easy to access, but their servers are a pain to set up. Furthermore, hosting costs money -- even if you're on Amazon EC2, there's a non-zero amount of money that you have to pay. Most of us programmers don't like to pay for stuff.

    Fortunately, there's the Google App Engine, and it has a free tier which is pretty much equivalent to running your own server. "Pretty much" covers everything except storage, which is currently capped at 1 GB.

    If, like me, you prefer gems to snakes, check out Heroku for Rails hosting. Heroku's beta platform is free, and they have promised a free tier on their production version. Their free tier may not end up being as generous as Google's, but you can always downgrade to Python if your application becomes successful. Update: Google's App Engine can run Java now, which leads to support for Ruby and other languages. This post has more details.

    Conclusion
    I hope I have convinced you to make learning HTML, CSS, and Javascript a priority this year. If not, here's "one more thing" -- you can build hosted solutions for small companies (50 people or fewer) with zero infrastructure cost. Google Apps, together with the App Engine, gives you SSO (single sign-on), and Gadgets can be used to integrate into Gmail.

    Thanks for reading this far, and I hope you have found this post to be helpful!