The HOW of Gnip: Keep it Simple, Stupid!

Keep it simple

Our designs are rooted in leveraging the most widely adopted standards on the web today. A big mess of proprietary goop wouldn’t help anyone. HTTP servers have evolved to scale like mad where static data is concerned, so we set out to host static flat files with commodity HTTP servers (nginx); that layer scales from here to the end of time. Scale trouble comes into the picture when you’re doing a lot of backflips around data at request time. To mitigate that kind of trouble, our API is static as well (to the chagrin of even some of us at Gnip). Gnip doesn’t do any interesting math when data is consumed from it (the read); we serve pre-constructed flat files for each request, which lets us leverage several standard caches. The Publish side (the write) is where we do some cartwheels to normalize and distill inbound data and get it into static structures/files for later consumption.
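
To make that read/write split concrete, here’s a minimal sketch of the write side under those assumptions: the app normalizes an inbound activity and appends it to a flat file in a directory that nginx serves as static content, so the read path never executes application code. The class, method, and file-layout names here are mine for illustration, not Gnip’s actual code.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal sketch of the write side: normalize inbound data elsewhere, then
// append it to a flat file under a directory that nginx serves verbatim.
public class ActivityWriter {

    private final Path docRoot;   // nginx document root; reads never touch app code

    public ActivityWriter(Path docRoot) {
        this.docRoot = docRoot;
    }

    // Append one already-normalized activity to the current per-minute bucket
    // for this publisher. The read API is then just an HTTP GET of this file.
    public synchronized void publish(String publisher, String normalizedActivity) throws IOException {
        long minuteBucket = System.currentTimeMillis() / 60_000L;
        Path bucket = docRoot.resolve(publisher).resolve(minuteBucket + ".xml");
        Files.createDirectories(bucket.getParent());
        Files.write(bucket,
                (normalizedActivity + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

On the read side, nginx (plus any standard HTTP cache sitting in front of it) just serves whatever is in that document root.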

Being concerned only with message flow and notification, we were able to avoid the nastiest aspect of system scaling: the database. Databases suck; all of them do. “Pick your poison” is your only option. While I’m basking in the glory of not having a database in our midst, I’m not naive enough to think we’ll never need one in the main processing flow, but just let me enjoy it for now. To be clear, we do use Amazon’s SimpleDB to store non-critical-path info such as user account information, but that info is rarely accessed and easily hangs in memory. The main reason we can avoid a database right now is that we only advertise availability of 60 minutes’ worth of transient events; we don’t offer programmatic access to big date ranges, for example. SQL queries are nowhere to be found; jealous?
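
Because only the most recent 60 minutes of transient events are ever advertised, the working set fits comfortably in memory: a rolling set of per-minute buckets is enough. The sketch below is illustrative only (the names and structure are my assumptions, not Gnip’s code), but it shows why nothing on the critical path needs a database.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: a rolling 60-minute window of transient events held
// entirely in memory, with old buckets evicted as new ones arrive.
public class RollingActivityWindow {

    private static final int WINDOW_MINUTES = 60;

    private static final class Bucket {
        final long minute;                        // epoch minute this bucket covers
        final List<String> activities = new ArrayList<>();
        Bucket(long minute) { this.minute = minute; }
    }

    private final Deque<Bucket> buckets = new ArrayDeque<>();   // oldest first

    public synchronized void add(String activity) {
        long nowMinute = System.currentTimeMillis() / 60_000L;
        if (buckets.isEmpty() || buckets.peekLast().minute != nowMinute) {
            buckets.addLast(new Bucket(nowMinute));
        }
        buckets.peekLast().activities.add(activity);
        // Evict anything older than the advertised 60-minute window.
        while (buckets.peekFirst().minute <= nowMinute - WINDOW_MINUTES) {
            buckets.removeFirst();
        }
    }

    public synchronized List<String> lastHour() {
        List<String> all = new ArrayList<>();
        for (Bucket b : buckets) {
            all.addAll(b.activities);
        }
        return all;
    }
}
```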

I want to take a moment and give props to the crew at Digg. Of all the folks we’ve interacted with over the past few months on the Publisher side, Digg has nailed the dynamic/flexible web service API; kudos for completeness, consistency, and scalability! Real pros.

From Q’s to NAM

We went through one large backend re-design along the way (so far). We started with a queuing model for data replication across nodes in the system, to ensure that if you put a gun to the head of nodeA, the system would march along its merry way. This gave us a queuing view of the world. That was all well and good, but we wound up with a complex topology of services listening to the bus for this or that. Furthermore, when we moved to EC2 (more on that below), the lack of multicast support meant we’d have to do some super dirty tricks to make listeners aware of event publishers going up and down; it was getting too kludgey.
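
For contrast, here’s a rough sketch (an assumed shape, not the original code) of what that bus-and-listeners view looked like: each node registers a queue, publishes fan out to every registered queue, and each node drains its own queue to stay in sync. The code itself is simple; the pain was operational, i.e. keeping track of which publishers and listeners were alive, which EC2’s lack of multicast made worse.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Rough illustration of the queue-per-node replication shape we moved away from.
public class ReplicationBus {

    // One queue per node; publishing a replication event fans out to all of them.
    private final List<BlockingQueue<String>> nodeQueues = new CopyOnWriteArrayList<>();

    // Called by a node when it comes up; it then drains this queue to apply events locally.
    public BlockingQueue<String> register() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        nodeQueues.add(queue);
        return queue;
    }

    public void publish(String replicationEvent) {
        for (BlockingQueue<String> queue : nodeQueues) {
            queue.offer(replicationEvent);   // lose a node, and its queue just goes stale
        }
    }
}
```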

After some investigation and prototyping, we settled on Terracotta (a network-attached memory, or NAM, solution) for node replication at the memory level. It kept the app layer simple and, at least on the surface, should be tunable when things hit the fan. We’ve done minimal lock tuning thus far, but are happy with what we see. The prospect of just writing our app and thinking of it as a single thing, rather than worrying about how all this state gets replicated across n nodes, was soooo appealing.
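
The appeal is easiest to see in code. With Terracotta, a plain object graph can be declared a shared root in tc-config.xml, and changes to it are replicated across the cluster at the memory level; the application code stays ordinary single-JVM Java. The class and field below are hypothetical, just to show the shape:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example: with Terracotta, this field would be declared a shared
// root in tc-config.xml, making the same logical map visible to every node.
// The application code itself contains no replication logic at all.
public class SharedActivityState {

    private static final Map<String, String> latestBucketByPublisher = new ConcurrentHashMap<>();

    public static void recordBucket(String publisher, String bucketPath) {
        latestBucketByPublisher.put(publisher, bucketPath);
    }

    public static String latestBucket(String publisher) {
        return latestBucketByPublisher.get(publisher);
    }
}
```

Root and lock declarations live in Terracotta’s XML config rather than in the code, which is a big part of why the app layer stays simple.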

Since going live last week, we’ve found some hot spots with Terracotta data replication when we bring up new nodes, so our first real tuning exercise has begun!

EC2

One day we needed seven machines to do load testing. I turned around and asked my traditional hosting provider for “seven machines” and was told “we’ll need 7-10 days to acquire the hardware.” The next day we started prototyping on EC2. Gnip’s traffic can increase/decrease by millions of requests in minutes (machine-to-machine interaction… no end-user click traffic patterns here), and our ability to manage those fluctuations in a cost-effective manner is crucial. With EC2, we can spin up/tear down a nearly arbitrary number of machines without anyone in the food chain having to run to the store to buy more hardware. I was sold!

Moving to EC2 wasn’t free. We had to overcome lame dynamic IP registration limitations, abandon a multicast component solution we’d been counting on (to be fair, EC2 wasn’t the exclusive impetus for that change; things were getting complex), and figure out how to run a system sans local persistent storage. We overcame all of these issues and are very happy on EC2. Rather than rewrite a post around our EC2 experience, however, I’ll just point you to a post on my personal blog if you want more detail.

Without going into painstaking app detail, that’s the gist of how we’ve built our current system.