Guide to the Twitter API – Part 1 of 3: An Introduction to Twitter’s APIs

You may find yourself wondering . . . “What’s the best way to access the Twitter data I need?” Well, the answer depends on the type and amount of data you are trying to access.  Given that there are multiple options, we have designed a three-part series of blog posts that explains the differences between the coverage the general public can access and the coverage available through Twitter’s resyndication agreement with Gnip. Let’s dive in . . .

Understanding Twitter’s Public APIs . . . You Mean There is More than One?

In fact, there are three Twitter APIs: the REST API, the Streaming API, and the Search API. Within the world of social media monitoring and social media analytics, we need to focus primarily on the latter two.

  1. Search API – The Twitter Search API is a dedicated API for running searches against the index of recent Tweets.
  2. Streaming API – The Twitter Streaming API allows high-throughput, near-real-time access to various subsets of Twitter data (e.g. a 1% random sampling of Tweets, filtering for up to 400 keywords, etc.).

Whether you get your Twitter data through the Search API, the Streaming API, or Gnip, only public statuses are available (and NOT protected Tweets). Additionally, before Tweets are made available to both of these APIs and Gnip, Twitter applies a quality filter to weed out spam.
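To make the distinction concrete, here is a minimal sketch in Python (using the requests library) of how a client might touch each API. The endpoint paths and parameters reflect Twitter’s public documentation at the time of writing; the keywords and credentials are placeholders.

import requests

# Search API: poll the index of recent Tweets for a keyword.
search = requests.get("http://search.twitter.com/search.json",
                      params={"q": "gnip", "rpp": 100})
for result in search.json().get("results", []):
    print(result["text"])

# Streaming API: hold a long-lived connection filtered to a set of keywords.
stream = requests.post("http://stream.twitter.com/1/statuses/filter.json",
                       data={"track": "gnip,twitter"},
                       auth=("USERNAME", "PASSWORD"),
                       stream=True)
for line in stream.iter_lines():
    if line:
        print(line)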

So now that you have a general understanding of Twitter’s APIs . . . stay tuned for Part 2, coming next week, where we will take a deeper dive into Twitter’s Search API.

 

Letter From The New Guy

Not too long ago, Gnip celebrated its third birthday.  I am celebrating my one-week anniversary with the company today.  To say a lot happened before my time at Gnip would be the ultimate understatement, and yet it is easy for me to see the results produced by those three years of effort.  Some of those results include:

The Product

Gnip’s social media API offering is the clear leader in the industry.  Gnip is delivering over half a billion social media activities daily from dozens of sources.  That certainly sounds impressive, but how can I be so confident Gnip is the leader?  Because the most important social media monitoring companies rely on our services to deliver results to their customers every single day.  For example, Gnip currently works with 8 of the top 9 enterprise social media monitoring companies, and the rate at which we are adding enterprise-focused companies is accelerating.

The Partners

Another obvious result is the strong partnerships that have been cultivated.  Some of our partnerships, such as those with Twitter and Klout, were well publicized when the agreements were put in place.  However, having strong strategic partners takes a lot more than a signed agreement.  It takes dedication, investment, and hard work by both parties to deliver on the full promise of the agreement.  It is obvious to me that Gnip has amazing partnerships that run deep and are built upon a foundation of mutual trust and respect.

The People

The talent level at Gnip is mind-blowing, but it isn’t the skills of the people that have stood out most to me so far.  It is the dedication of each individual to doing the right thing for our customers and our partners that has made the biggest impression.  When it comes to gathering and delivering social media data, there are a lot of shortcuts that can be taken to save time, money, and effort.  Unfortunately, those shortcuts often come at the expense of publishers, customers, or both.  The team at Gnip has no interest in shortcuts, and that comes across in every individual discussion and in every meeting.  If I were going to describe this value in one word, the word would be “integrity”.

In my new role as President & COO, I’m responsible for helping the company grow quickly and smoothly while maintaining the great values that have been established since the company’s inception.  The growth has already started, and I couldn’t be more pleased with the talent of the people who have recently joined the organization, including Bill Adkins, Seth McGuire, Charles Ince, and Brad Bokal, all of whom joined Gnip within the last week.  And we are hiring more! In fact, it is worth highlighting one particular open position, for a Customer Support Engineer.  I’m hard-pressed to think of a higher-impact role at our company, because we consider supporting our customers to be such an important priority.  If you have 2+ years of coding experience, including working with RESTful web APIs, and you love delivering over-the-top customer service, Gnip offers a rare opportunity to work in an environment where your skills will be truly appreciated.  Apply today!

I look forward to helping Gnip grow on top of a strong foundation of product, partners, and people.  If you have any questions, I can be reached at chris [at] gnip.com.

Swiss Army Knives: cURL & tidy

Iterating quickly is what makes modern software initiatives work, and the mantra applies to everything in the stack. From planning your work to builds, things have to move fast, and feedback loops need to be short and sweet. In the realm of REST[-like] API integration, writing an application just to visually validate the API you’re interacting with is overkill. At the end of the day, web services boil down to HTTP requests, which can be rapidly tested with a tight little application called cURL. You can test just about anything with cURL (yes, including HTTP streaming/Comet/long-poll interactions), and its configurability is endless. You’ll have to read the man page to get all the bells and whistles, but I’ll provide a few samples of common Gnip use cases here. At the end of this post I’ll clue you in to cURL’s indispensable cohort in web service slaying: ‘tidy.’

cURL power

cURL can generate custom HTTP client requests with any HTTP method you’d like. ProTip: the biggest gotcha I’ve seen trip people up is leaving the URL unquoted. Many URLs don’t need quotes when being fed to cURL, but many do, and you should just get in the habit of quoting every one; otherwise you’ll spend far too long debugging what turns out to be driver error. There are tons of great cURL tutorials out on the network; I won’t try to recreate those here.

POSTing

Some APIs want data POSTed to them. There are two forms of this.

Inline

curl -v -d "some=data" "http://blah.com/cool/api"

From File

curl -v -d @filename "http://blah.com/cool/api"

In either case, cURL defaults the Content-Type to the ubiquitous “application/x-www-form-urlencoded”. While this is often the correct default, there are a couple of things to keep in mind. One, this assumes that the data you’re inlining, or that is in your file, is indeed formatted as such (e.g. key=value pairs). Two, when the API you’re working with does NOT want data in this format, you need to explicitly override the Content-Type header like so.

curl -v -d "someotherkindofdata" "http://blah.com/cool/api" --header "Content-Type: foo"

Authentication

Passing HTTP-basic authentication credentials along is easy.

curl -v -uUSERNAME[:PASSWORD] "http://blah.com/cool/api"

You can inline the password, but keep in mind that your password will end up in your shell history.

Show Me Everything

You’ll notice I’m using the “-v” option on all of my requests. “-v” lets me see all of the HTTP-level interaction (method, headers, etc.), with the exception of the request POST body, and that visibility is crucial for debugging interaction issues. You’ll also need to use “-v” to watch streaming data fly by.

Crossing the Streams (cURL + tidy)

Most web services these days spew XML-formatted data, and it is often not whitespace-formatted such that a human can read it easily. Enter tidy. If you pipe your cURL output to tidy, all of life’s problems will melt away like a fallen ice cream scoop on a hot summer sidewalk.

cURL’d web service API without tidy

curl -v "http://rss.clipmarks.com/tags/flower/"
...
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/style/rss/rss_feed.xsl" type="text/xsl" media="screen"?><?xml-stylesheet href="/style/rss/rss_feed.css" type="text/css" media="screen" ?><rss versi
on="2.0"><channel><title>Clipmarks | Flower Clips</title><link>http://clipmarks.com/tags/flower/</link><feedUrl>http://rss.clipmarks.com/tags/flower/</feedUrl><ttl>15</ttl
><description>Clip, tag and save information that's important to you. Bookmarks save entire pages...Clipmarks save the specific content that matters to you!</description><
language>en-us</language><item><title>Flower Shop in Parsippany NJ</title><link>http://clipmarks.com/clipmark/CAD213A7-0392-4F1D-A7BB-19195D3467FD/</link><description>&lt;
b&gt;clipped by:&lt;/b&gt; &lt;a href="http://clipmarks.com/clipper/dunguschariang/"&gt;dunguschariang&lt;/a&gt;&lt;br&gt;&lt;b&gt;clipper's remarks:&lt;/b&gt;  Send Dishg
ardens in New Jersey, NJ with the top rated FTD florist in Parsippany Avas specializes in Fruit Baskets, Gourmet Baskets, Dishgardens and Floral Arrangments for every Holi
day. Family Owned and Opperated for over 30 years. &lt;br&gt;&lt;div border="2" style="margin-top: 10px; border:#000000 1px solid;" width="90%"&gt;&lt;div style="backgroun
d-color:"&gt;&lt;div align="center" width="100%" style="padding:4px;margin-bottom:4px;background-color:#666666;overflow:hidden;"&gt;&lt;span style="color:#FFFFFF;f
...

cURL’d web service API with tidy

curl -v "http://rss.clipmarks.com/tags/flower/" | tidy -xml -utf8 -i
...
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet href="/style/rss/rss_feed.xsl" type="text/xsl" media="screen"?>
<?xml-stylesheet href="/style/rss/rss_feed.css" type="text/css" media="screen" ?>
<rss version="2.0">
   <channel>
     <title>Clipmarks | Flower Clips</title>
     <link>http://clipmarks.com/tags/flower/</link>
     <feedUrl>http://rss.clipmarks.com/tags/flower/</feedUrl>
     <ttl>15</ttl>
     <description>Clip, tag and save information that's important to
       you. Bookmarks save entire pages...Clipmarks save the specific
       content that matters to you!</description>
     <language>en-us</language>
     <item>
       <title>Flower Shop in Parsippany NJ</title>
       <link>

http://clipmarks.com/clipmark/CAD213A7-0392-4F1D-A7BB-19195D3467FD/</link>

       <description>&lt;b&gt;clipped by:&lt;/b&gt; &lt;a
...

I know which one you’d prefer. So what’s going on? We’re piping the output to tidy and telling tidy to treat the document as XML (use XML structural parsing rules), treat encodings as UTF-8 (so it doesn’t barf on non-Latin character sets), and finally “-i” indicates that you want it indented (pretty-printed, essentially).

Right Tools for the Job

If you spend a lot of time whacking through the web service API forest, be sure you have a sharp machete. cURL and tidy make for a very sharp machete. Test-driving a web service API before you start laying down code is essential. These tools allow you to create tight feedback loops at the integration level, saving everyone time, energy, and money.

Migrating to the Twitter Streaming API: A Primer

Some context:

Long, long ago, in a galaxy far, far away, Twitter provided a firehose of data to a few partners and the world was happy.  These startups were awash in real-time data and they got spoiled, some might say, by the embarrassment of riches that came through the real-time feed.  Over time, numerous factors caused Twitter to cease offering the firehose.  There was much wailing and gnashing of teeth on that day, I can tell you!

At roughly the same time, Twitter bought real-time search company Summize and began offering everyone access to what is now known as the Search API.  Unlike Twitter’s existing REST API, which was based around usernames, the Search API enabled companies to query for recent data about a specific keyword.  Because of the nature of polling, companies had to contend with latency (the time between when someone performs an action and when an API consumer learns about it), and Twitter had to deal with a constantly growing number of developers connected to an inherently inefficient interface.

Last year, Twitter announced that they were developing the spiritual successor to the firehose — a real-time stream that could be filtered on a per-customer basis and provide the real-time, zero-latency results people wanted.  By August of last year, alpha customers had access to various components of the firehose (spritzer, gardenhose, track, birddog, etc.) and provided feedback that helped shape and solidify Twitter’s Streaming API.

A month ago, Twitter engineer John Kalucki (@jkalucki) posted on the Twitter API Announcements group that “High-Volume and Repeated Queries Should Migrate to Streaming API”.  In the post, he detailed several reasons why the move is beneficial to developers.  Two weeks later, another Twitter developer announced a new error code, 420, to let developers know when they are being rate limited by the Search API.  Thus, both the carrot and the stick have been laid out.

The streaming API is going to be a boon for companies who collect keyword-relevant content from the Twitter stream, but it does require some work on the part of developers.  In this post, we’ll help explain who will benefit from using Twitter’s new Streaming API and some ways to make the migration easier.

Question 1:  Do I need to make the switch?

Let me answer your question with another question — Do you have a predictable set of keywords that you habitually query?  If you don’t, keep using the Search API.  If you do, get thee to the Streaming API.

Examples:

  • Use the Streaming API any time you are tracking a keyword over time or sending notifications/summaries to a subscriber.
  • Use the Streaming API if you need to get *all* the tweets about a specific keyword.
  • Use the Search API for visualization and search tools where a user enters a non-predictable search query for a one-time view of results.
  • What if you offer a configurable blog-based search widget? You may have gotten away with beating up the Search API so far, but I’d suggest setting up a centralized data store and using it as your first look-up location when loading content — it’s bad karma to force a data provider to act as your edge cache.

Question 2: Why should I make the switch?

  • First and foremost, you’ll get relevant tweets significantly faster.  Polling an API or RSS feed for a given set of keywords creates latency that grows linearly with the number of keywords.  Assuming one query per second, the average latency for 1,000 keywords is a little over eight minutes; the average latency for 100,000 keywords is almost 14 hours!  With the Streaming API, you get near-real-time (usually within one second) results, regardless of the number of keywords you track.
  • With traditional API polling, each query returns N results regardless of whether any results are new since your last request.  This puts the onus of deduping squarely on your shoulders.  This sounds like it should be simple — cache the last N result IDs in memory and ignore anything that’s been seen before.  At scale, though, high-frequency keywords will consume the cache and low-frequency keywords will quickly age out.  This means you’ll invariably have to hit the disk and begin thrashing your database.  Thankfully, Twitter has already obviated much of this in the Search API with an optional “since_id” query parameter (see the polling sketch after this list), but plenty of folks either ignore the option or have never read the docs, and end up with serious deduplication work.  With Twitter’s Streaming API, you get a stream of tweets with very little duplication.
  • You will no longer be able to get full fidelity (aka all the tweets for a given keyword) from the Search API.  Twitter is placing increased weight on relevance, which means that, among other things, the Search API’s results will no longer be chronologically ordered.  This is great news from a user-facing functionality perspective, but it also means that if you query the Search API for a given keyword every N seconds, you’re no longer guaranteed to receive the new tweets each time.
  • We all complain about the limited backwards view of Twitter’s search corpus.  On any given day, you’ll have access to somewhere between seven and 14 days’ worth of historical data (somewhere between one quarter and one half billion tweets), which is of limited value when trying to discover historical trends.  Additionally, for high-volume keywords (think Obama or iPhone or Toyota), you may only have access to an hour of historical data, due to the limited number of results accessible through Twitter’s paging system.  While there is no direct correlation between the number of queries against a database and the amount of data that can be indexed, there IS a direct correlation between devoting resources to handle ever-growing query demands and not having resources to work on growing the index.  As persistent queries move to the Streaming API, Twitter will be able to devote more resources to growing the index of data available via the Search API (see Question 4, below).
  • Lastly, you don’t really have a choice.  While Twitter has not yet begun to heavily enforce rate limiting (Gnip’s customers currently see few errors at 3,600 queries per hour), you should expect the Search API’s performance profile to eventually align with the REST API (currently 150 queries per hour, reportedly moving to 1,500 in the near future).
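To illustrate the “since_id” point above, here is a minimal polling sketch in Python using the requests library. The endpoint and field names reflect the Search API of that era; the query, sleep interval, and print-based handler are just placeholders.

import time
import requests

SEARCH_URL = "http://search.twitter.com/search.json"
since_id = 0  # highest tweet id we have already processed

while True:
    params = {"q": "gnip", "rpp": 100}
    if since_id:
        params["since_id"] = since_id  # only return tweets newer than this id
    results = requests.get(SEARCH_URL, params=params).json().get("results", [])
    for tweet in results:
        print(tweet["from_user"], tweet["text"])   # stand-in for real processing
        since_id = max(since_id, tweet["id"])
    time.sleep(5)  # stay well inside the rate limit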

Question 3: Will I have to change my API integration?

Twitter’s Streaming API uses streaming HTTP

  • With traditional HTTP requests, you initiate a connection to a web server, the server sends results and the connection is closed.  With streaming HTTP, the connection is maintained and new data gets sent over a single long-held response.  It’s not unusual to see a Streaming API connection last for two or three days before it gets reset.
  • That said, you’ll need to reset the connection every time you change keywords.  With the Streaming API, you upload the entire set of keywords when establishing a connection.  If you have a large number of keywords, it can take several minutes to upload all of them, and during that time you won’t get any streaming results.  The way to work around this is to initiate a second Streaming API connection, then terminate the original connection once the new one starts receiving data.  In order to adhere to Twitter’s request that you not initiate a connection more than once every couple of minutes, highly volatile rule sets will need to batch changes into two-minute chunks.
  • You’ll need to decouple data collection from data processing.  If you fall behind in reading data from the stream, there is no way to go back and get it (short of making a request to the Search API).  The best way to ensure that you are always able to keep up with the flow of streaming data is to hand incoming data off to a separate process for transformation, indexing, and other work.  As a bonus, decoupling enables you to more accurately measure the size of your backlog.  A sketch of this pattern follows this list.
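Here is a rough sketch of both of those points in Python: the entire keyword set is uploaded when the connection is established, and a worker thread drains a queue so that slow processing never stalls the read loop. The stream.twitter.com endpoint and “track” parameter are as documented at the time; the keyword list, queue size, and index() handler are illustrative placeholders.

import json
import queue
import threading
import requests

KEYWORDS = ["gnip", "twitter", "api"]      # your full rule set
backlog = queue.Queue(maxsize=100000)      # decouples collection from processing

def index(tweet):
    # Stand-in for your real transformation/indexing step.
    print(tweet.get("text", ""))

def worker():
    # Transformation, indexing, and other heavy work happens here, off the read loop.
    while True:
        tweet = backlog.get()
        index(tweet)
        backlog.task_done()

threading.Thread(target=worker, daemon=True).start()

# The whole keyword set goes up with the connection; changing keywords means
# opening a second connection like this one and dropping the old one.
stream = requests.post("http://stream.twitter.com/1/statuses/filter.json",
                       data={"track": ",".join(KEYWORDS)},
                       auth=("USERNAME", "PASSWORD"),
                       stream=True)

for line in stream.iter_lines():
    if not line:
        continue                           # keep-alive newline
    backlog.put(json.loads(line))          # hand off immediately; backlog.qsize() measures how far behind you are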

Streaming API consumers need to perform more filtering on their end

  • Twitter’s Streaming API only accepts single-term rules; no more complex queries.  Say goodbye to ANDs, ORs, and NOTs.  This means that if you previously hit the Search API looking for “Avatar Movie -Game”, you’ve got some serious filtering to do on your end.  From now on, you’ll add one or more of the required keywords (Avatar and/or Movie) to the Streaming API and then filter the results yourself, dropping anything that doesn’t contain both keywords or that does contain the word “Game” (see the filtering sketch after this list).
  • You may have previously relied on the query terms you sent to Twitter’s Search API to help you route the results internally, but now the onus is 100% on you.  Think of it this way: Twitter is sending you a personalized firehose based upon your one-word rules.  Twitter’s schema doesn’t include a <keyword> element, so you don’t know which of your keywords are contained in a given Tweet.  You’ll have to inspect the content of the tweet in order to route appropriately.
  • And remember, duplicates are the exception, not the rule, with the Streaming API, so if a given tweet matches multiple keywords, you’ll still only receive it once.  It’s important that you don’t terminate your filtering algo on your first keyword or filter match; test against every keyword, every time.
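A sketch of what that client-side filtering and routing can look like for the “Avatar Movie -Game” example. The rule structure and route names below are purely illustrative; the only real requirement is that you inspect the tweet text yourself and test every rule, every time.

# Rules we can no longer express as a single Search API query.
RULES = {
    "avatar-movie": {"all": ["avatar", "movie"], "none": ["game"]},
}

def matching_routes(tweet_text):
    """Return every rule this tweet satisfies; never stop at the first match."""
    text = tweet_text.lower()
    routes = []
    for name, rule in RULES.items():
        if all(word in text for word in rule["all"]) and \
           not any(word in text for word in rule["none"]):
            routes.append(name)
    return routes

print(matching_routes("New Avatar movie trailer is out"))  # ['avatar-movie']
print(matching_routes("Avatar the game just shipped"))     # []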

Throttling is performed differently

  • Twitter throttles their Search API by IP address based upon the number of queries per second.  In a world of real-time streaming results, this whole concept is moot.  Instead, throttling is defined by the number of keywords a given account can track and the overall percentage of the firehose you can receive.
  • The default access to the Streaming API is 200 keywords; just plug in your username and password and off you go.  Currently, Twitter offers approved customers access to 10,000 keywords (restricted track) and 200,000 keywords (partner track).  If you need to track more than 200,000 keywords, Twitter may bind “partner track” access to multiple accounts, giving you access to 400,000 keywords or even more.
  • In addition to keyword-based streams, Twitter makes available several specific-use streams, including the link stream (All tweets with a URL) and the retweet stream (all retweets).  There are also various levels of userid-based streams (follow, shadow and birddog) and the overall firehose (spritzer, gardenhose and firehose), but they are outside the bounds of this post.
  • The best place to begin your quest for increased Streaming API access is an email to api@twitter.com — briefly describe your company and use case along with the requested access levels. (This process will likely change for the coming Commercial Accounts.)
  • Twitter’s Streaming API is throttled at the overall stream level. Imagine that you’ve decided to try to get as many tweets as you can using track.  I know, I know, who would do such a thing?  Not you, certainly.  But imagine that you did — you entered 200 stop words, like “and”, “or”, “the”, and “it”, in order to get a ton of tweets flowing to you.  You would be sorely disappointed, because Twitter enforces a secondary throttle: a percentage of the firehose available to each access level.  The higher the access level (partner track vs. restricted track vs. default track), the greater the percentage you can consume.  Once you reach that amount, you will be momentarily throttled and all matching tweets will be dropped on the floor.  No soup for you!  You should monitor for this by watching for “limit” notifications (a small monitoring sketch follows this list).  If you find yourself regularly receiving these, either tighten up your keywords or request greater access from Twitter.
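A minimal way to watch for those “limit” notifications as you read the stream; the payload handling is simplified here, so treat it as a sketch and check Twitter’s Streaming API documentation for the authoritative format.

import json

def handle_stream_line(line, on_tweet, on_limit=print):
    message = json.loads(line)
    if "limit" in message:
        # Twitter dropped matching tweets because we exceeded our slice of the firehose;
        # seeing these regularly means tighten keywords or request more access.
        on_limit(message["limit"])
        return
    on_tweet(message)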

Start tracking deletes

  • Twitter sends deletion notices down the pipe when a user deletes one of their own tweets.  While Twitter does not enforce adoption of this feature, please do the right thing and implement it.  When a user deletes a tweet, they want it stricken from the public record.  Remember, “it ain’t complete if you don’t delete.”  We just made that up.  Just now.  We’re pretty excited about it.
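And a small sketch of honoring those deletion notices as they arrive. The {"delete": {"status": ...}} shape shown here is the commonly documented form; the storage interface is a hypothetical stand-in for whatever you persist tweets into.

import json

def handle_message(line, store):
    message = json.loads(line)
    if "delete" in message:
        # The author deleted this tweet; strike it from our copy of the record.
        status = message["delete"]["status"]
        store.delete_tweet(status["id"])   # hypothetical storage interface
    else:
        store.save_tweet(message)          # normal tweet payload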

Question 4: What if I want historical data too?


Twitter’s Streaming API is forward-looking, so you’ll only get new tweets when you add a new keyword.  Depending on your use case, you may need some historical data to kick things off.  If so, you’ll want to make a simultaneous query to the Search API when you add each keyword.  This means that you’ll need to maintain two integrations with Twitter APIs (three, if you’re taking advantage of Twitter’s REST API for tracking specific users), but the benefit is historical data plus low-latency, high-reliability future data.

And as described before, the general migration to the Streaming API should result in deeper results from the Search API, but even now you can get around 1,500 results for a keyword if you get acquainted with the “page” query parameter.
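A sketch of that one-time backfill, walking the Search API’s “page” parameter until the history runs out. The endpoint, the rpp value, and the roughly 15-page ceiling reflect the Search API as described above; the keyword is a placeholder.

import requests

def backfill(keyword, max_pages=15, rpp=100):
    """Pull up to ~1,500 historical tweets for a keyword before streaming begins."""
    collected = []
    for page in range(1, max_pages + 1):
        resp = requests.get("http://search.twitter.com/search.json",
                            params={"q": keyword, "rpp": rpp, "page": page})
        results = resp.json().get("results", [])
        if not results:
            break          # no more history available for this keyword
        collected.extend(results)
    return collected

print(len(backfill("gnip")), "historical tweets collected")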

Question 5: What if I need more help?

Twitter resources:

Streaming HTTP resources:

Gnip help:

  • Ask questions in the comments below and we’ll respond inline
  • Send email to eric@gnip.com to ask the Gnip team direct questions

What's Up.

A few weeks have passed since we made some major product direction/staffing/technology-stack changes at Gnip. Most of the dust has settled, and here’s an update.

What Changed Externally

api.gnip.com is alive, well, and fully supported. From a product standpoint, we’re now also pursuing a decentralized data access model to broaden our offering. The original centralized product continues to serve its customers well, but it doesn’t fit all the use cases we want to nail. It turns out that while many folks want to be completely hands-off with respect to how their data is collected (“just get me the data”), they still want full transparency into, and control of, the process. That transparency and control are on their way via Gnip Data Collectors that customers configure through an easy-to-use GUI.

To summarize, externally, you will soon see additional product offerings/approaches around data movement.

What Changed Internally

A lot. api.gnip.com is a phenomenal message bus that can reliably filter & move data from A to B at insane volumes. In order to achieve this, however, we left a few things by the wayside that we realized we couldn’t leave there any longer. Customer demand and internal product direction needs (obviously coupled with customer needs) were such that we needed to approach the product offering from a different technical angle.

GUI & Data

We neglected a non-trivial tier of our customer base by focusing almost exclusively on the REST API to the system. Without the constraint of a GUI, the technical/architectural/implementation decisions that come with building software were driven entirely by “the backend.” As a result, we effectively cut our data off from the GUI tier. Getting data into the GUI was like raising the Titanic: doable, but hard and time-consuming. Too hard for what we needed to do as a business. We’d bolted the UI framework onto the side and customized how everything moved in/out of the core platform to the GUI layer. We weren’t able to keep up with product needs on the GUI side.

Statistics

Similar to the GUI, getting statistics out of the system in a consumer-friendly manner was too hard. Businesses have become accustomed to running SQL queries to collect information/statistics. While one can bolt SQL interfaces onto customized systems, you have to ask yourself whether you really want to. What if you started with something that natively spoke SQL?

So…

We introduced a stack that supports a decentralized data collection approach, as well as an off-the-shelf GUI, statistics collection/display, and a SQL interface: “Cloud” instances running Linux (obviously), MySQL, and Rails. We have prototypes up and running internally, and things are going great.

Product Details

I’ve been vague here on purpose. We’re still honing all the features, capabilities, and market opportunities in front of us, and I don’t want to commit to them right now.

The People

I want to end on a personal note. My mind was blown by the people we decided to “let go” in this process; all of them incredibly high quality.

All I can say here is that it’s all in the people. You build teams that meet the needs of the business. For the sand that shifted, Eric and I are to blame. We undoubtedly burned bridges with amazing people during this process, and that is excruciating. Those no longer with us are great, and all of them have either already jumped into new projects/companies, or are weighing their options. The best of luck to you, and I hope to work with you again someday.

HOW-TO: Twitter Search Publisher

There has been some confusion around how to leverage Gnip’s Twitter Search (“twitter-search”) Publisher. We have work to do to clarify this use case from a usability/documentation standpoint, but in the meantime hopefully the following clears things up a bit.

First off, “twitter-search” is a Polled Publisher, which means it is subject to high latencies as well as gaps in coverage. Second, we overload the “keyword” rule type in Filters in order to provide a mechanism for you to enter queries compatible with http://search.twitter.com (see http://search.twitter.com/operators for more information). Any query you can run on http://search.twitter.com can be added to your Gnip filter as a “keyword” rule.

For example, if you search Twitter for “Boulder, CO” (including the quotes), Twitter considers that a literal, case-insensitive phrase search, and so will Gnip. “Boulder, CO” (excluding the quotes) yields an OR search on Twitter, and hence the same in Gnip. If you search for “cars AND trucks” you get Boolean search operator behavior on Twitter, and subsequently in Gnip as well.

In short, we pass through the literal queries/strings that you hand Gnip, straight on through to Twitter. The “keywords” are opaque to Gnip. The only trick is in ensuring your “keywords” are entered into Gnip appropriately.

Through Gnip’s web interface, you can add comma-separated keywords to a Filter. This is usually straightforward; however, in the twitter-search Publisher case it takes extra care to get the results you want, especially when your queries include commas or quotes. As a result, the keywords entered in a twitter-search Publisher Filter must conform to CSV quoting rules to ensure your queries are executed properly (see the sketch below).
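If you are building that keyword list programmatically, letting a CSV library do the quoting is the safest route. A small sketch in Python; the queries are just examples, and the output is what you would paste into the Filter’s keyword field.

import csv
import io

# Three twitter-search queries; the first contains both quotes and a comma.
queries = ['"Boulder, CO"', 'cars AND trucks', 'gnip']

buffer = io.StringIO()
csv.writer(buffer, quoting=csv.QUOTE_ALL).writerow(queries)
keyword_field = buffer.getvalue().strip()

print(keyword_field)   # """Boulder, CO""","cars AND trucks","gnip"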

Through Gnip’s REST interface, you encapsulate the keywords within XML <rule> elements, so the CSV quoting rules can be ignored.

For some further examples of how to add twitter-search keywords, see the Gnip API documentation.

As a final note, the overload of “keyword” rule types in Filters is something we’re experimenting with and is subject to change.

Numbers + Architecture

We’ve been busy over the past several months working hard on what we consider a fundamental piece of infrastructure that the network has been lacking for quite some time. From “ping server for APIs” to “message bus,” we’ve been called a lot of things, and we are actually all of them rolled into one. I want to provide some insight into what our backend architecture looks like, as systems like this generally don’t get a lot of fanfare; they just have to “work.” Another title for this blog post could have been “The Glamorous Life of a Plumbing Company.”

First, some production numbers.

  • 99.9%: the Gnip service has 99.9% uptime.
  • 0: we have had zero Amazon EC2 instances fail.
  • 10: ten EC2 instances, of various sizes, run the core, redundant, message bus infrastructure.
  • 2.5m: 2.5 million unique activities are HTTP POSTed (pushed) into Gnip’s Publisher front door each day.
  • 2.8m: 2.8 million activities are HTTP POSTed (pushed) out Gnip’s Consumer back door each day.
  • 2.4m: 2.4 million activities are HTTP GETed (polled) from Gnip’s Consumer back door each day.
  • $0: no money has been spent on framework licenses (unless you count AWS).

Second, our approach.

Simplicity wins. These production transaction rate numbers, while solid, are not earth-shattering. We have, however, achieved much higher rates in load tests. We optimized for activity retrieval (outbound) as opposed to delivery into Gnip (inbound). That means every outbound POST/GET is moving static data off of disk; no math gets done. Every inbound activity, on the other hand, results in processing to ensure proper filtration and distribution; we do the “hard” work on delivery.

We view our core system as handling ephemeral data. This has allowed us, thus far, to avoid having a database in the environment, which means we don’t have to deal with traditional database bottlenecks. To be sure, we have other challenges as a result, but we decided to take those on rather than have the “database maintenance and administration” ball and chain perpetually attached. So, in order to share contended state across multiple VMs, across multiple machine instances, we use shared memory in the form of TerraCotta. I’d say TerraCotta is “easy” for “simple” apps, but challenges emerge when you start dealing with very large data sets in memory (multiple gigabytes). We’re investing real energy in tuning our object graph, access patterns, and object types to keep things working as Gnip usage increases. For example, we’re in the midst of experimenting with pageable TerraCotta structures that ensure smaller chunks of memory can be paged into “cold” nodes.

When I look at the architecture we started with, compared to where we are now, there are no radical changes. We chose to start clustered, so we could easily add capacity later, and that has worked really well. We’ve had to tune things along the way (split various processes to their own nodes when CPU contention got too high, adjust object graphs to optimize for shared memory models, adjust HTTP timeout settings, and the like), but our core has held strong.

Our Stack

  • nginx – HTTP server, load balancing
  • JRE 1.6 – Core logic, REST Interface
  • TerraCotta – shared memory for clustering/redundancy
  • ejabberd – inbound XMPP server
  • Ruby – data importing, cluster management
  • Python – data importing

High-Level Core Diagram

Gnip Core Architecture Diagram

Gnip owes all of this to our team & our customers; thanks!

Software Evolution

Those of us who have been around for a while constantly joke about how “I remember building that 10 years ago” every time some big “new” trend emerges. It’s always a lesson in market readiness and timing for a given idea. The flurry around Google Chrome has rekindled the conversation around distributed apps. Most folks are tied up in the concept of a “new browser,” but Chrome is actually another crack at the age-old “distributed/server-side application” problem; albeit an apparently good one. The real news in Chrome (I’ll avoid the V8 vs. TraceMonkey conversation for now) is native Google Gears support.

My favorite kind of technology is the kind that quietly gets built, and then one day you wake up and it’s changed everything. Google Gears has that potential, and if Chrome winds up with meaningful distribution (or Firefox adopts Gears), web apps as we know them will finally have markup-level access to local resources (read: “offline functionality”). This kind of evolution is long overdue.

Another lacking component on the network is the age-old, CS101 notion of event-driven architectures. HTTP GET dominates web traffic, and poor ol’ HTTP POST is rarely used. Publish/subscribe models are all but unused on the network today, and Gnip aims to change that. We see a world that is PUSH-driven rather than PULL-driven. The web has come a looooong way on GET, but apps are desperate for traditional flow paradigms such as local processor event loops. Our goal is to do this in a protocol-agnostic manner (e.g. REST/HTTP POST, XMPP, perhaps some distributed queuing model).

Watching today’s web apps poll each other to death is hard. With each new product that integrates serviceX, the latency of serviceX’s events propagating through the ecosystem gets worse, and everyone loses. This is a broken model that, if left unresolved, will drive our web apps back into the dark ages once all the web service endpoints are overburdened to the point of being uninteresting.

We’ve seen fabulous adoption of our API since launching a couple of months ago. We hope that more Data Producers and Data Consumers leverage it going forward.

Garbage In, Garbage Out

Gnip is an intermediary service for message flow across disparate network endpoints. Standing in the middle allows for a variety of value adds (Data Producers can “publish once, distribute to many,” Data Consumers can enjoy single service interaction rather than one-off’ing over and over again), but the quality of data that Data Producers push into the system is fundamental.

Only As Good As The Sum Of Our Parts

Gnip doesn’t control the quality of the data being published to it. Whether it comes in the form of XMPP messages, RSS, or ATOM, many issues can come into play that affect the data a Data Consumer receives.

  • Bad transport/delivery – The source XMPP, RSS, ATOM, or REST feed can go down. When this happens for a given Publisher, that source has vanished and Gnip doesn’t receive messages for that Publisher. We’re only as good as the data coming in. While Gnip can consume data from XMPP, RSS, ATOM, and other sources, our preferred inbound message delivery method is via our REST API. Firing off messages to Gnip directly, and not through yet another layer, minimizes delivery issues.
  • Bad data – As any aggregator (Friend Feed, Social Thing, MoveableType Activity Streams…) can attest, the data coming across XMPP, RSS, and ATOM feeds today is a mess. From bad/illegal formatting, to bad/illegal data escaping, nearly every activity feed has unique issues that have to be handled on a case by case basis. There will be bugs. We will fix them as they arise. Once again, these issues can be minimized if Data Producers deliver messages directly to Gnip via our REST API.
  • Bad policy – This one’s interesting. Gnip makes certain assumptions about the kind of data it receives. In our current implementation, we advertise to Data Consumers that Data Producers push all public, per-user change notifications generated within their systems to Gnip. This usually corresponds to the existing public API policies for said Data Producers. We will eventually offer finely tuned, Data Producer controlled data policies, but for today’s public-facing Gnip service, we do not want to see Data Producers creating publishing policies specific to Gnip. Doing so confuses the middleware dynamic we’re trying to create with our current product, and subsequently muddies the water for everyone. Imagine a Data Consumer interacting with a Data Producer directly under one policy, then interacting with Gnip under another policy; confusing. Again, we will, perhaps earlier than we think, cater to unique data policies on a per-Data-Producer basis, but we’re not there yet.

While addressing all of these issues is part of our vision, they’re not all resolved out of the gate.