Sunday, June 24, 2012

Build your CRM in an afternoon with Google Scripts

We're expanding our private beta at, and in my dual frontend dev/account manager role I need a time-saving, flexible CRM. I've tried Salesforce and Highrise in the past, but neither one was a pleasure to use - I burnt out on manual data entry pretty quickly. Our new solution was quick to build and solves all of my current problems.

As a startup that's still figuring things out, there are two main things we need from a CRM:
1. Flexibility - I need to choose (and constantly revise) which user info is stored and displayed. On a day-to-day basis we want to know when beta testers last logged in. But next week we might decide to send our beta testers fruit baskets, and suddenly physical mailing addresses will be a very important field.
2. Automation - Most of the data we need is gathered from various web endpoints:
 -  for an estimate of monthly active users
 - an internal API for most recent login date, current events/day, ...
 - ... etc ...

Another tool worth mentioning goes a step further than Salesforce and Highrise because it collects a bunch of data for you, but honestly I'm not interested in our signups' profiles on Vimeo, Google+ or Foursquare... it seems a bit creepy. If you sign up, I want to know (a) what your business is about and (b) what analytics solutions you're using right now, so that we can better address your analytics needs.

So what to do?

A few weeks ago my friends at IronSpread got me thinking about a new approach: keep a list of leads in a spreadsheet, and use scripts to automate data collection about those leads.

Flexibility? ✓ It's a spreadsheet.
Automation? ✓ You can do almost anything with Google Apps Script.

Google Apps Script for Google Docs is like Visual Basic macros for Excel. Except you get all kinds of magic. The most useful components for us are:

1. Cell manipulation:
   SpreadsheetApp.getActiveSheet().getRange('B1').setValue('Toasters');
  sets the B1 cell to have text 'Toasters'.

2. Fetching external data:
   var response = UrlFetchApp.fetch('');
    allows us to retrieve info from our internal APIs and our beta testers' websites.
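Putting those two pieces together, here's a minimal sketch of the kind of script we run. Note that it only executes inside the Apps Script editor (not in a normal JS runtime), and the sheet layout here (lead websites in column A, titles written to column B) is a hypothetical example, not our actual spreadsheet:

```javascript
// Sketch only: runs inside the Google Apps Script editor.
// Assumed layout: row 1 is a header, column A holds each lead's website URL.
function updateLeadInfo() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var numRows = sheet.getLastRow();
  for (var row = 2; row <= numRows; row++) {
    var site = sheet.getRange(row, 1).getValue();   // column A: lead's website
    var response = UrlFetchApp.fetch(site);          // fetch the lead's homepage
    var title = response.getContentText().match(/<title>(.*?)<\/title>/i);
    sheet.getRange(row, 2).setValue(title ? title[1] : 'unknown'); // column B
  }
}
```

From there you can add a column per data source, with one fetch-and-parse helper each.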

Now whenever someone signs up, the spreadsheet CRM goes ahead and reads:
 - the business's website title meta-tag (so we know what their business does)
 - estimated monthly active users
 - what analytics solutions they currently have in place
This helps us write well-suited introductions to each customer, and gives us a great feel for who's interested in beta testing.
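The title-scraping step is just string parsing once the page has been fetched. Here's a rough sketch of the extraction in plain JavaScript, using a naive regex rather than a real HTML parser (fragile in general, but fine for a quick CRM):

```javascript
// Naive <title> extraction from a fetched HTML string.
// Good enough for a quick look at what a lead's business does.
function extractTitle(html) {
  var match = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  return match ? match[1].trim() : null;
}
```

In Apps Script you'd feed this the string from response.getContentText().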

Google Apps Script also gives you access to Gmail, which seems promising for a CRM, though I haven't tried using it yet.

If you want to get started and build your own, create a new Google spreadsheet and click Tools : Script editor. Later you can run your script using Tools : Script manager. The documentation is also useful.

Friday, June 15, 2012

1 Year Since Leaving MIT

I left MIT one year ago - I had one year of coursework left as an undergrad in Aerospace Engineering. After [a long story] I co-founded a startup with @ivolo, @ianstormtaylor and @calvinfo. I'm incredibly happy with my decision, and I want to share some of the reasons why leaving was right for me.

First: my family, and especially my fiancee(!) have been incredibly supportive. It's WAY EASIER when they smile, listen and help you through the rough times of starting a company. I'd have given up long ago if not for them. Not to say it was a perfectly smooth ride: on first meeting Erika's parents, I told them "I'm gonna give this entrepreneurship thing a try." Her parents deserve some serious credit... I'm surprised the conversation didn't end in laughter right there.

My favorite professors have mostly gotten over their shock. I was back for MIT's graduation ceremony last week and got to say hi to a few of them. They were mildly surprised to see that I was not, in fact, broke, hungry, and homeless. Our progress delighted them, although they expect me to finish up at MIT eventually. No worries, these profs aren't going anywhere.

It's also fascinating to observe the difference between leaving school with purpose (let's start a company!!), and getting suddenly shoved out of school right after this odd thing called "graduation". Today I live with my co-founders on Russian Hill in San Francisco. It's a grand thing, living with your college roommates in an amazing apartment, managing your own time and working on something you chose out of your own interest.

At the same time, many of our friends from MIT/RISD and high school are now suddenly a part of the adult working world. And whoa is it a shock. A rare few of them are overjoyed in their realizations "Wow! I'm paid so much I can buy a house, like, right now!", but many of them are now soul-searching for a job that actually fits their credentials. Some are re-training in radically different fields (humanities --> computer science) and others are going back to school in... oh, computer science. This time around though, they're training with intent. Leaving school with a purpose feels a lot better than leaving without a purpose.

So, while I don't have any reason to specifically recommend leaving school early, I do recommend leaving school with a purpose. For me that happened a year early.

Wednesday, May 30, 2012

Building a Movie Showtimes API

I built TheReelBox to demonstrate an easier way to find nearby movie showtimes than Fandango or Google Movies. (More importantly, it generates demo data for showing off.)

It turns out that movie showtime data is difficult to get, and the APIs cost a lot of money to use: you have to call Fandango, West World Media, or Tribune Media Services just to discuss access to the API. Obviously I was not going to do that just to throw together a side project.

Enter Yahoo! Query Language (YQL), which allows you to convert any web content into an API by defining a parser. Your client-side JavaScript makes a JSONP request to YQL, which retrieves the web content, applies your parser, and returns formatted XML or JSON. It lets you create an API where there was none before.
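Concretely, a YQL call boils down to building a request URL against the public YQL endpoint and loading it as a JSONP script tag. A sketch of the URL construction (the query, xpath, and callback name here are illustrative, not TheReelBox's actual ones):

```javascript
// Build a YQL request URL; in the browser this URL would be loaded
// as a <script> tag so the JSON arrives via the JSONP callback.
function buildYqlUrl(query) {
  return 'https://query.yahooapis.com/v1/public/yql' +
         '?q=' + encodeURIComponent(query) +
         '&format=json&callback=handleShowtimes';
}

// Example: run a parser over an arbitrary page (url and xpath are made up)
var url = buildYqlUrl("select * from html where url='http://example.com/showtimes' and xpath='//div'");
```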

A Few Caveats about YQL

YQL has complicated request limits. Executing one showtimes query uses about 1 million execution units (out of a 50 million maximum per query). There's also a daily limit of 10k or 100k requests.

Also consider what happens every time you need some showtimes: TheReelBox fires a JSONP to YQL, which requests the appropriate Fandango page, which is then parsed in YQL, converted to JSON and returned to TheReelBox. It's crazy slow. To speed things up I needed a sort of "API Caching Layer". Firebase ended up fitting the bill perfectly. I'll write about that another time.

The online interface for YQL has lots of issues. It's freaking impossible to debug YQL parser problems (you kinda just stare at the code and hope for the best...), and their online console occasionally deleted all my files... but hey, it generally works.

Reverse Engineering the Ticket URL
Fandango provides URLs to affiliate advertisers for purchasing tickets, but you can't link to the exact movie+showtime+theater, so the user has to re-select which time and theater they want. I wanted links on TheReelBox to take users to the final purchase page, and this is where things got interesting.

First, I discovered that Google Movies and a few others have direct links to the Fandango ticket purchase page - this is special among affiliates.

Here's an example referral URL:

The base url is static and simple, but the GET parameters are interesting.
row_count: this changes for every movie, theater and showtime, so I assumed it's a unique reference to the ticket/showtime the user wants to buy.
tid: according to one source, this is a "Terminal ID" used for card transactions.
mid: for internal Fandango links this changes by movie, so I figured it's a "movie id". Confusingly, another source claims this is a "Merchant ID"... dunno.
wssaffid: this parameter is standard in affiliate links, and appears to be a unique ID for each affiliate.
wssac: this parameter is also standard in affiliate links, no clue what it represents.

Can we put together the necessary information to create our own direct checkout links?

It turns out that row_count, tid, and mid are all used by Fandango in their internal urls, so we can snatch these parameters in our YQL parser.

Through the affiliate program you can look at your generated affiliate links to find the wssaffid and wssac numbers.
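As an illustration of how the pieces combine, here's a sketch that assembles a checkout link from those five parameters. The base URL and all of the values below are placeholders; the real ones come from the scraped Fandango pages and your own affiliate account:

```javascript
// Assemble a direct checkout link from scraped + affiliate parameters.
// The base URL and every value here are placeholders.
function buildTicketUrl(base, params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(key + '=' + encodeURIComponent(params[key]));
  }
  return base + '?' + pairs.join('&');
}

var url = buildTicketUrl('https://example.com/transaction', {
  row_count: '123456',  // scraped: unique showtime reference
  tid: '789',           // scraped: terminal id
  mid: '654321',        // scraped: movie id (or merchant id?)
  wssaffid: 'myaffid',  // from your generated affiliate links
  wssac: '42'           // from your generated affiliate links
});
```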

So, boom! Yes we can link directly to the purchase page with a little work. Here's an example with TheReelBox affiliate info plugged in:

Now, I had no idea whether this was working or not, and then someone bought a ticket. The commission came through perfectly.

Finally, since you're probably curious: TheReelBox users have clicked through to Fandango and purchased $256.00 worth of tickets, generating $2.10 in commission - 10 cents a pop. Luckily, Fandango won't mail a check until commission hits $50.00. Currently I estimate that the check will arrive on February 9, 2018... I'm stoked :)

This is a follow-up to questions asked on HN when TheReelBox launched.

New discussion is at

Wednesday, March 7, 2012

Using the Heap Profiler in Chrome Dev Tools

At we have a one-page app used to explore analytics data. Good memory management is very important to us, and the Heap Profiler in Chrome developer tools is the perfect tool for the job... it's just a bit hard to use.

My main use case for the Heap Profiler goes like this:
(1) Find the object that I'm interested in... but how do you track down specific objects in this mess of memory references?
(2) Check the object's memory usage... this is where you have to go learn about shallow and retained size. Extremely helpful diagrams can be found here.
(3) Try deleting all references to the object.
(4) Make sure the object is really gone... but how do you know if something actually got deleted?!?! The total memory usage for the whole tab is an ok indicator for large objects... but memory usage jumps around and with smaller objects you're out of luck without the profiler.
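Step (3) is where things usually go wrong: some long-lived object (an event bus, a parent view, the window) still holds a reference, so the "deleted" object never becomes collectible. Here's a stripped-down sketch of the pattern, with a hypothetical event bus standing in for Backbone's event system:

```javascript
// A long-lived bus keeps a "deleted" view alive unless you unregister it.
var bus = { listeners: [] }; // hypothetical app-wide event bus

function makeView() {
  var view = { data: new Array(10000).join('x') };          // some bulky state
  view.onUpdate = function () { return view.data.length; }; // closure retains view
  bus.listeners.push(view.onUpdate); // the bus now indirectly retains the view
  return view;
}

function disposeView(view) {
  // Without this, the heap profiler will still show the view, retained by bus.
  bus.listeners = bus.listeners.filter(function (fn) {
    return fn !== view.onUpdate;
  });
}

var plot = makeView();
disposeView(plot);
plot = null; // now nothing references the view and GC can reclaim it
```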

This post explains how we use the profiler to (1) Find the object of interest and (4) make sure the object got deleted. Some of the details are specific to apps using Backbone.js.

Four main views
The Heap Profiler has four main views: Summary, Containment, Dominators and Comparison. The Summary, Containment and Dominators views are useful for tracking down specific objects, and the Dominators and Comparison views are useful for checking that things have actually been deleted. Here's how we use each one.

Summary view
This view is good for tracking down specific objects based on their "type" (constructor name), because it shows objects in memory grouped by their constructor name.

For example, Dates are grouped under "Date" and Backbone.js objects are grouped under "child" because of naming in the backbone view constructor. In the minified version of Backbone 1.7.1 the constructor names change, so Backbone objects appear under "d" and the constructors appear under "q".

Here's a screenshot of one of our distribution plot objects "d @94543" found in the summary view:

Containment view
This view is good for analyzing objects that are referenced in the global namespace... basically anything you put on the global window variable.

For example, if we have window.seg and we want to see what it's keeping around in memory, we open Containment -> the first DOMWindow -> seg. Here's a screenshot of finding window.plot1 "d @94543" in the Containment view.

Dominators Tree view
This view is a good way to verify that your references are all properly contained (no unexpected references hanging around), and that deleting things is really working.

The view shows a tree of nodes in memory that "dominate" other nodes. From Wikipedia: a node d dominates a node n if every path from the start node to n must go through d. This means that by deleting the dominator node d you remove all references to the dominated nodes n. So in effect, this tree shows you all the low-hanging fruit: if you delete d, you delete n. If you expected to see a particular object being dominated, but don't, then you probably have a hanging reference to it somewhere!

For our distribution charts I see a node "d @94543" that contains a bunch of other nodes. By clicking on each node I can figure out what they are. For example, the first node has a retaining path "(GC roots)@3[751]._->w.zoom->b[0].on{literals_or_bindings}{content}.d3" and that node's children are "", "...d3.val", "...d3.el" etc. So I can piece together that the first dominated node was the "this.d3" object inside a plot view, and the original dominator "d @94543" is a distribution plot object. Furthermore, if I delete "d @94543", all of the dominated nodes will be cleaned up as well, since being dominated means there are no references to them anywhere else. When I close and delete the reference to the distribution plot view, the node "d @94543" disappears from the dominators tree and memory usage drops by 3MB, so cleanup works as expected.

Comparison view
This view is the best way to verify that deletion is working properly.

The view allows you to see the diff of two memory snapshots, showing you the delta in reference counts, freed memory, etc. The tricky/weird thing here is that in order to see what changed from Snapshot 1 to Snapshot 2, you want to first click "Snapshot 2" in the left hand column and then compare to "Snapshot 1". When we delete objects in our code, I also see a whole bunch of deleted DOM elements, array objects, strings, closures, and numbers. Here's a screenshot of the comparison view after a deletion:

Here are some links to other useful resources about the heap profiler:

Monday, January 2, 2012

How fast could you cross the country (without killing yourself)?

Crossing the United States on a train is an incredible experience. North Dakota fields ooze an evil early-morning fog that will make your spine shiver. The Montana Rockies tower suddenly from the endless plains. When you slip by at 40,000 ft in an aluminum tube you are missing this magic.

So why not take the train? Because it takes 3 days @ $208 one-way to get from Boston's South Station to Seattle's King St. Station, a trip of about 3,000 miles. I would take the train every time if it took less than a day and still didn't cost a fortune.

So less than a day would be cool, but.... just how fast could you cross the country (without killing yourself)? The question is NOT about technological limitations, those change every day. The question is really about the "without killing yourself" part.

So for the technology, assume something like a vacuum tube with a maglev train, unrestricted by air resistance and without physical contact points to a physical track. The hypothetical train's deceleration could transfer energy back out, so the whole system would be extremely energy efficient.... low variable costs per trip but extremely high fixed costs to build the system.

What about physiological limitations? Martin Voshell has put together a fascinating history of US Colonel Stapp's experiments on acceleration and the human body (section 2.1). In 1954 Colonel Stapp survived 46.2 g's, decelerating "eyeballs out" from 632 mph to 0 mph in 1.4 seconds. This is a bit extreme for civilian transport, especially since it was a momentary peak and not sustained. A 1960 technical note from NASA explains that pilots in centrifuges tolerated accelerations of 17 g's "eyeballs in" and 12 g's "eyeballs out". According to NASA the Space Shuttle's maximum acceleration was 3 g's, which Wikipedia claims is "largely for astronaut comfort."

Given that "eyeballs in" (accelerating while facing forward) is more tolerable, let's work with those numbers. Assume that the seats on our futuristic maglev vacuum-tube train can rotate 180 degrees so that deceleration is also "eyeballs in." Reasonable accelerations seem to be 3 g's for comfort, or 17 g's for "tolerable"/"not dead, yet... just a fleshwound!"

Let's assume our maglev can sustain maximum acceleration halfway across the country, then decelerate the other half, which is as fast as you can go. Here's a plot of cross-country travel time vs. sustained acceleration.

Constantly accelerating/decelerating at 3 g's will get you from Boston to Seattle in just under 15 minutes. If instead you want to be crushed into your seat and blacked out at 17 g's, you can get to Seattle in about 7 minutes. North Dakota would blow by in about 16 seconds... probably defeating the whole eerie fog experience. You can find the original plot & equation in metric units here (good work Desmos, though an embeddable version and the ability to label the axes and annotate the plot would be awesome!)

Another interesting by-product of accelerating continuously halfway to your destination is that the time required to get to the destination scales with the square root of the distance.... so travel times don't vary as much as you would expect. This table shows one-way travel times for a 3 g acceleration:

Destination Time    Distance 
Beijing 20 min 11,000 km
Seattle 12 min 4,000 km
Chicago 7 min 1,400 km
New York 3 min 360 km

You can find the original plot and equation in metric units here (time in minutes vs. distance in kilometers).
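For reference, the equation behind the plot and table: accelerating at a constant rate a for the first half of the distance d and decelerating for the second half, each half takes sqrt(d/a), so the total one-way time is t = 2*sqrt(d/a). That square root is exactly why travel times vary less than distances do. A quick sketch to sanity-check the table (using g = 9.81 m/s^2):

```javascript
// One-way travel time: accelerate at a constant rate for the first half
// of the trip, decelerate for the second half. t = 2 * sqrt(d / a).
function travelTimeMinutes(distanceKm, accelG) {
  var d = distanceKm * 1000; // meters
  var a = accelG * 9.81;     // m/s^2
  return (2 * Math.sqrt(d / a)) / 60;
}

// At 3 g's: Seattle (4,000 km) comes out to ~12 min and
// Beijing (11,000 km) to ~20 min, matching the table above.
```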

As far as human limitations go, rail transit clearly has a lot of room for improvement. A ~350x reduction in travel time for cross-country rail trips is possible, purely from a physiological perspective :)

For a much more thorough and brutally honest analysis of urban transportation:

Monday, January 17, 2011

Incubomber - Where Bomb Ideas Come to Life

The Incubomber is a mini-incubator in my dorm room at MIT.

How does that work?
I lofted my bed, we shoved in two long tables that seat 5 people total, brought in 8 LCD screens for four people, and Colin works in the 6th space: the closet. We threw up some posters, got a ton of canned food (entrepreneurship requires sacrifices - cheap food!), and invited a bunch of awesome people to stop by and hang out.

What do you do?
With 5 or 6 motivated people in the same room, work gets done. Crazy fast. And amazing ideas flow like people in Grand Central. At the moment, various people are programming on three different projects:

  • Anonymerit: anonymous posting of pieces of writing forces users to judge by content not authority.
  • Bookxor: rethinking how textbooks are produced, not just how they are distributed.
  • TunesList: a collaborative playlist for the Incubomber, though yall will shit your pants for this later.
So you just do software?
Nah, we like software, but we do some hardware stuff too. Ilya built an Arduino Twitter display to show our latest @Incubomber tweets. Erika and I are working on a heart rate sensing bracelet that connects via Bluetooth to your smartphone. Tons of cool applications. What do you think? We'd love to hear new ideas.

Cool, can I check this out?
Hell yeah, shoot us an email at and we'll navigate you towards the Bat Cave. If you like what you see, come back again. And again. It is a tight space, so if you're super pumped about this, we encourage you to put together a mini-incubator of your own.

Saturday, December 18, 2010

UAV Tested in an Underground Parking Garage

At the end of the Spring 2010 semester, Mark and I built (and rebuilt many, many times) a small UAV. The UAV was controlled with two Arduino microcontrollers and a whole suite of sensors for measuring orientation, airspeed, and altitude. Eventually the craft was able to take off, maintain steady level flight, then descend and land gracefully! Pretty awesome project. Thanks to Prof. Dave Darmofal in the Aero-Astro department at MIT for approving funding for our little (but extremely time-consuming) project.

Also, we mostly tested the UAV in the parking garage underneath the Stata building at MIT. Flying inside has some benefits like no wind, but some major drawbacks... watch the video for some pretty spectacular crashes.

Here's the Project Proposal.
Here's the Final Report.