Official Google Buzz Firehose Added to Gnip’s Social Media API

Today we’re excited to announce the integration of the Google Buzz firehose into Gnip’s social media data offering. Google Buzz data has been available via Gnip for some time, but today Gnip became one of the first official providers of the Google Buzz firehose.

The Google Buzz firehose is a stream of all public Buzz posts (excluding Twitter tweets) from all Google Buzz users. If you’re interested in the Google Buzz firehose, here are some things to know:

  • Google delivers it via PubSubHubbub. If you don't want to consume it via PubSubHubbub, Gnip makes it available in any of our supported delivery methods: Polling (HTTP GET), Streaming HTTP (Comet), or Outbound HTTP POST (Webhooks).
  • The format of the firehose is XML Activity Streams. Gnip loves Activity Streams and we're excited to see Google continue to push this standard forward.
  • Google Buzz activities are geo-enabled. If the end user attaches a geolocation to a Buzz post (either from a mobile Google Buzz client or through an import from another geo-enabled service), that location will be included in the Buzz activity (see the sketch after this list).
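For consumers who choose webhook (outbound HTTP POST) delivery, here is a minimal sketch of receiving the firehose and pulling out the geo data described above. The namespaces, element names, and port are assumptions based on Atom/Activity Streams conventions, not a description of Gnip's exact payloads; check the activities you actually receive.

```python
# Minimal webhook receiver sketch for an XML Activity Streams (Atom) payload.
# Namespaces, element names, and the port are assumptions; adjust to the
# payloads your Gnip account actually delivers.
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "georss": "http://www.georss.org/georss",
}

class BuzzWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        feed = ET.fromstring(body)
        for entry in feed.findall("atom:entry", NS):
            title = entry.findtext("atom:title", default="", namespaces=NS)
            point = entry.findtext("georss:point", default=None, namespaces=NS)
            print(title, point)  # hand the activity off to your own pipeline here
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), BuzzWebhookHandler).serve_forever()
```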

We’re excited to bring the Google Buzz firehose to the Social Media Monitoring and Business Intelligence community through the power of the Gnip platform.

Here's how to access the Google Buzz firehose. If you're already a Gnip customer, just log in to your Gnip account and with 3 clicks you can have the Buzz firehose flowing into your system. If you're not yet using Gnip and you'd like to try out the Buzz firehose to get a sense of volume, latency, and other key metrics, grab a free 3-day trial at http://try.gnip.com and check it out along with the 100 or so other feeds available through Gnip's social media API.

Expanding Gnip's Facebook Graph API Support

One of our most requested features has long been Facebook support. While customers have had beta access for a while now, today we're officially announcing support for several new Facebook Graph API feeds. As with the other feeds available through Gnip, Facebook data is available in Activity Streams format (as well as the original format, if you so desire), and you can choose your own delivery method (polling, webhook POSTing, or streaming). Gnip integrates with Facebook on your behalf, in a fully transparent manner, in order to feed you the Facebook data you've been longing for.

As with most services, Facebook's APIs are in constant flux. Integrating with Gnip shields you from the ever-shifting sands of service integration. You don't have to worry about authentication implementation changes or delivery method shifts.

Use-case Highlight

Discovery is hard. If you’re monitoring a brand or keyword for popularity (positive or negative sentiment), it’s challenging to keep track of fan pages that crop up without notice. With Gnip, you can receive real-time notification when one of your search terms is found within a fan page. Discover when a community is forming around a given topic, product, or brand before others do.

We currently support the following endpoints, and will be adding more based on customer demand. A hypothetical polling sketch follows the list.

  • Keyword Search – Search over all public objects in the Facebook social graph.
  • Lookup Fan Pages by Keyword – Look up IDs for Fan Pages with titles containing your search terms.
  • Fan Page Feed (with or without comments) – Receive wall posts from a list of Facebook Fan Pages you define.
  • Fan Page Posts (by page owner, without comments) – Receive wall posts from a list of Facebook Fan Pages you define. Only shows wall posts made by the page owner.
  • Fan Page Photos (without comments) – Get photos for a list of Facebook Fan Pages.
  • Fan Page Info – Get information including fan count, mission, and products for a list of Fan Pages.
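To make the delivery options above concrete, here is a hypothetical polling sketch. The feed URL and response structure are placeholders rather than Gnip's actual endpoints; it only illustrates a GET poll-and-parse loop over an Activity Streams (Atom) feed.

```python
# Hypothetical polling loop; FEED_URL and the feed structure are placeholders.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://api.example.com/your-account/facebook-keyword-search.xml"  # placeholder
ATOM = {"atom": "http://www.w3.org/2005/Atom"}

def poll_once():
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    return [entry.findtext("atom:title", default="", namespaces=ATOM)
            for entry in feed.findall("atom:entry", ATOM)]

while True:
    for activity in poll_once():
        print(activity)   # hand off to your own processing here
    time.sleep(60)        # poll at whatever interval your plan allows
```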

Give Facebook via Gnip a try (http://try.gnip.com), and let us know what you think at info@gnip.com.

Of Client-Server Communication

We’ve recently been having some interesting conversations, both internally and with customers, about the challenges inherent in client-server software interaction, aka Web Services or Web APIs. The relatively baked state of web browsers and servers has shielded us from most of the issues that come with getting computers to talk to other computers.

It didn't happen overnight, but today's web browsing world rides on top of a well-vetted pipeline of technology to give us good browsing (client-side) experiences. However, there are a lot of assumptions and moving parts behind our browser windows that get uncovered when working with web services (servers). There are skeletons in the closet, unfortunately.

End-users' web browsing demands eventually forced ports 80 and 443 (SSL) open across all firewalls and ISPs, and we now take their availability for granted. When was the last time you heard someone ask "is port 80 open?" It's probably been a while. By 2000, server-side HTTP implementations (web servers) had started solidifying, and at the HTTP level there was relatively little incompatibility between the client and server tiers. Expectations around socket timeouts and HTTP protocol exchanges were clear, and both sides of the connection adhered to those expectations.

Enter the world of web-services/APIs.

We’ve been enjoying the stable client-server interaction that web browsing has provided over the past 15 years, but web services/APIs thrust the ugly realities that lurk beneath into view. When we access modern web services through lower-level software (e.g. something other than the browser), we have to make assumptions and implementation/configuration choices that the browser otherwise makes for us. Among them…

  • port to use for the socket connection
    • the browser assumes you always want ’80’ (HTTP) or ‘443’ (HTTPS)
    • the browser provides built-in encryption handling of HTTPS/SSL
  • URL parsing
    • the browser uses static rules for interpreting and parsing URLs
  • HTTP request methods
    • browsers inherently know when to use GET vs. POST
  • HTTP POST bodies
    • browsers pre-define how POST bodies are structured, and never deviate from that structure
  • HTTP header negotiation (this is the big one)
    • browsers handle all of the following scenarios out-of-the-box
    • Request
      • compression support (e.g. gzip)
      • connection duration types (e.g. keep-alive)
      • authentication (basic/oauth/other)
      • user-agent specification
    • Response
      • chunked responses
      • content-types: the browser has a pre-defined set of content types that it knows how to handle internally
      • content-encoding: the browser knows how to handle various encoding types (e.g. gzip compression), and does so by default
      • authentication (basic/oauth/other)
  • HTTP Response body formats/character sets/encodings
    • browsers juggle the combination of content-encoding, content-type, and charset handling to ensure their international audience can see the information as its author intended

Web browsers have the luxury of being able to lock down all of the above variables and not worry about changes in these assumptions. Having built browsers (Netscape/Firefox) for a living in a past life, I can say it's still a very difficult task, but at least the problem is constrained (e.g. ensure the end user can view the content within the browser). Web service consumers have to understand, and make decisions around, each of those points. Getting just one of them wrong can lead to issues in your application. These issues can range from connectivity or content handling problems to service authentication failures, and can lead to long guessing games of "what went wrong?"
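As a rough illustration of the points above, here is a sketch of making those choices explicit with a low-level HTTP client. The host, path, and credentials are placeholders; the point is that every line is a decision the browser would otherwise make for you silently.

```python
# The decisions a browser makes for you, spelled out. Host, path, and
# credentials are placeholders.
import base64
import gzip
import http.client

conn = http.client.HTTPSConnection("api.example.com", 443, timeout=30)  # port + TLS: your call now
headers = {
    "User-Agent": "my-consumer/1.0",                      # user-agent specification
    "Accept": "application/atom+xml",                     # content-type negotiation
    "Accept-Encoding": "gzip",                            # opt in to compression
    "Connection": "keep-alive",                           # connection duration type
    "Authorization": "Basic " + base64.b64encode(b"user:pass").decode("ascii"),  # authentication
}
conn.request("GET", "/activities", headers=headers)       # GET vs. POST: also your call
resp = conn.getresponse()
body = resp.read()
if resp.getheader("Content-Encoding") == "gzip":          # browsers decompress silently
    body = gzip.decompress(body)
text = body.decode("utf-8")                               # ideally parse the charset from Content-Type
```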

To further complicate the API interaction pipeline, many IT departments prevent abnormal connection activity from occurring. This means that while your application may be “doing the right thing” (TM), a system that sits between your application and the API with which it is trying to interact may prevent the exchange from occurring as you intended.

What To Do?

First off, you need to be versed not only in the documentation of the API you're trying to use, but also in its developer community/forums; documentation is often outdated, doesn't reflect actual implementations, and doesn't account for the bugs and behavioral nuances inherent in any API. From there, you need to ensure your HTTP client accounts for the assumptions I outline above and adheres to the API you're interacting with. If you're experiencing issues, you'll need to ensure your code is establishing the connection successfully, receiving the data it's expecting, and parsing the data correctly. Never underestimate the value of a packet sniffer for viewing the raw HTTP exchange between your client and the server; debugging HTTP libraries at the code level (even with logging) often doesn't yield the truth about what's being sent to the server and received.

The Power of cURL

This is an entire blog post in and of itself, but the Swiss Army knife of any web service developer is cURL. In the right hands, cURL allows you to easily construct HTTP requests to test interaction with a web service. Don't underestimate the work of translating your cURL test into your software, however.
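To make that translation concrete, here is a hedged sketch of one hypothetical cURL test and its equivalent in code; the endpoint and credentials are placeholders, and the point is that each flag (-u, -H, --compressed) becomes an explicit decision in your client.

```python
# Rough code equivalent of:
#   curl --compressed -u user:pass -H "Accept: application/atom+xml" \
#        https://api.example.com/activities
# Endpoint and credentials are placeholders.
import base64
import gzip
import urllib.request

req = urllib.request.Request("https://api.example.com/activities", headers={
    "Accept": "application/atom+xml",                                     # -H
    "Accept-Encoding": "gzip",                                            # --compressed (request half)
    "Authorization": "Basic " + base64.b64encode(b"user:pass").decode(),  # -u
})
with urllib.request.urlopen(req, timeout=30) as resp:
    body = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":                    # --compressed (response half)
        body = gzip.decompress(body)
print(body[:200])
```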

So You Want Some Social Data

If your product or service needs social data in today’s API marketplace, there are a few things you need to consider in order to most effectively consume said data.


I need all the data

First, you should double-check your needs. Data consumers often think they need "all the data," when in fact they don't. You may need "all the data" for a given set of entities (e.g. keywords, or users) on a particular service, but don't confuse that with needing "all the data" a service generates. When it comes to high-volume services (such as Twitter), consuming "all of the data" actually amounts to a resource-intensive engineering exercise on your end. There are often non-trivial scaling challenges involved when handling large data-sets. Do some math and determine whether or not statistical sampling will give you all you need; the answer is usually "yes." If the answer is "no," be ready for an uphill (technical, financial, or business model) battle with service providers; they don't necessarily want all of their data floating around out there.
Social data APIs are generally designed to prohibit access to "all of the data," either technically or through terms-of-service agreements. However, they usually provide great access to narrow sets of data. Consider whether you need "100% of the data" for a relatively narrow slice of information; most social data APIs support this use case quite well.
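As a back-of-the-envelope example of the "do some math" step, here is a quick sample-size calculation for estimating a proportion (say, the share of posts mentioning your brand) within a given margin of error; the numbers are purely illustrative.

```python
# Worst-case sample size for estimating a proportion within a margin of error.
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Sample size for a proportion estimate; p=0.5 is the worst case."""
    return math.ceil((confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.01))   # ~9,604 activities for +/-1% at 95% confidence
print(sample_size(0.001))  # ~960,400 activities for +/-0.1%
```

Even the tighter estimate needs well under a million activities, which is a far cry from "all the data" on a high-volume service.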


Ingestion


Connectivity

There are three general styles you'll wind up using to access an API, all of them HTTP-based: event-driven inbound POST (e.g. PubSubHubbub/Webhooks), GET-based polling, or GET/POST-based streaming. Each of these has its pros and cons. I'm avoiding XMPP in this post only because it hasn't seen widespread adoption (yet). Each style requires a different level of operational and programmatic understanding.
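Sketches of the webhook and polling styles appear earlier in this post; for completeness, here is a minimal sketch of the third style, a long-lived streaming connection. The URL is a placeholder, and the line-delimited framing is an assumption (providers vary; some use length-prefixed chunks instead).

```python
# Minimal streaming consumer: hold one HTTP connection open and read activities
# as the server writes them. URL and line-based framing are assumptions.
import urllib.request

STREAM_URL = "https://stream.example.com/activities"  # placeholder

def handle(activity):
    print(activity)                      # stand-in for your real processing

with urllib.request.urlopen(STREAM_URL, timeout=90) as stream:
    for raw_line in stream:              # blocks until the server sends more data
        line = raw_line.decode("utf-8").strip()
        if line:                         # skip blank keep-alive lines
            handle(line)
```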


Authentication/Authorization

APIs usually have publicly available versions (usually limited in their capabilities), as well as versions that require registration for subsequent authenticated connections. The authC and authZ semantics around APIs range from simple to complex. You'll need to understand the access characteristics of the specific services you want to access. Some require a hands-on, human justification process before the "right level of access" is granted to you and your product. Others are simple automated online registration forms that directly yield the account credentials necessary for API access.
HTTP Basic authentication, not surprisingly, is the predominant authentication scheme used, and authorization levels are conveniently tied to the account by the service provider. OAuth (proper and 2-legged) is gaining steam, however. You'll also find that API keys (as URL params or HTTP headers) are still widely used.
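Here is a minimal sketch of what those credential styles look like from the client side; the endpoint, header names, and tokens are placeholders, and OAuth request signing is omitted because the exact flow varies by provider.

```python
# The same GET wearing the common credential styles mentioned above.
# Endpoint, key names, and tokens are placeholders.
import base64
import urllib.request

URL = "https://api.example.com/activities"

# 1. HTTP Basic: credentials in an Authorization header
basic = urllib.request.Request(URL, headers={
    "Authorization": "Basic " + base64.b64encode(b"account:secret").decode("ascii"),
})

# 2. API key as a URL parameter
keyed = urllib.request.Request(URL + "?api_key=YOUR_KEY_HERE")

# 3. API key as a custom header (the header name differs per service)
header_keyed = urllib.request.Request(URL, headers={"X-Api-Key": "YOUR_KEY_HERE"})

with urllib.request.urlopen(basic, timeout=30) as resp:
    print(resp.status, len(resp.read()))
```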


Processing

How you process data once you receive it is certainly affected by which connection style you use. Note that most APIs don't give you an option in how you connect to them; the provider decides for you. Processing data in the same step as receiving it can cause bottlenecks in your system, and ultimately put you on bad terms with the API provider you're connecting to. An analogy would be drinking from the proverbial firehose. If you connect the firehose to your mouth, you might get a gulp or two down before you're overwhelmed by the amount of water actually coming at you. You'll either cause the firehose to back up on you, or you'll start leaking water all over the place. Either way, you won't be able to keep up with the amount of water coming at you. If your average ability to process data is slower than the rate at which it arrives, you'll have a queueing challenge to contend with. Consider offline, or out-of-band, processing of data as it becomes available. For example, write it to disk or a database and have parallelized worker threads/processes parse and handle it from there. The point is, don't process it in the moment in this case.
Many APIs don’t produce enough data to warrant out-of-band processing, so often inline processing is just fine. It all depends on what operations you’re trying to perform, the speed at which your technology stack can accomplish those operations, and the rate at which data arrives.
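Here is a minimal out-of-band processing sketch along those lines, using an in-memory queue and worker threads; the queue depth and worker count are illustrative, and a real deployment might buffer to disk or a database instead.

```python
# Receive fast, process later: the ingestion path only enqueues raw activities,
# and worker threads do the slow parsing/handling. Sizing is illustrative.
import queue
import threading

inbox = queue.Queue(maxsize=10000)       # buffer between ingestion and processing

def process(activity):
    pass                                 # stand-in for your real work

def worker():
    while True:
        activity = inbox.get()
        try:
            process(activity)
        finally:
            inbox.task_done()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

def on_activity(raw):
    """Call this from the ingestion path; it returns immediately."""
    inbox.put(raw)
```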


Reporting

If you don’t care about reporting initially, you will in short order. How much data are you receiving? What are peak volume periods? Which of the things you’re looking for are generating the most results?
API integrations inherently bind your software to someone else's. Understanding how that relationship is functioning at any given moment is crucial to your day-to-day operations.
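A tiny starting point for answering those questions is to count activities per hour and per matched term, so volume, peaks, and productive terms are visible; the field name below is an assumption about your own activity records.

```python
# Simple volume and term counters; "matched_terms" is an assumed field on your
# own activity records.
import collections
import datetime

per_hour = collections.Counter()
per_term = collections.Counter()

def record(activity):
    hour = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:00")
    per_hour[hour] += 1
    for term in activity.get("matched_terms", []):
        per_term[term] += 1

def report():
    print("peak hours:", per_hour.most_common(5))
    print("top terms:", per_term.most_common(10))
```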


Monitoring

Reporting's close sibling is monitoring. Understanding when an integration has gone south is just as important as knowing when your product is having issues; they're one and the same. Integrating with an API means you're dependent on someone else's software, and that software can have any number of issues. From bugs to planned upgrades or API changes, you'll need to know when things change and take appropriate action.
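One simple, hedged sketch of that idea: alert when a normally chatty integration goes quiet for too long. The threshold and the alerting hook are assumptions you would tune for your own volumes.

```python
# Silence detector: if no activities arrive for too long, raise an alert.
# Threshold and alert mechanism are placeholders.
import time

QUIET_THRESHOLD_SECONDS = 600
last_activity_at = time.time()

def on_activity(_activity):
    global last_activity_at
    last_activity_at = time.time()

def alert(message):
    print("ALERT:", message)             # swap in email, pager, dashboard, etc.

def check_health():
    silence = time.time() - last_activity_at
    if silence > QUIET_THRESHOLD_SECONDS:
        alert(f"no activities for {int(silence)}s; the integration may be down or changed")
```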


Web services/APIs are usually incredibly easy to “sample,” but truly integrating and operationalizing them is another, more challenging, process.

Pushing and Polling Data: Differences in Approach on the Gnip Platform

Obviously we have some understanding of the concepts of pushing and polling data from service endpoints, since we basically founded a company on the premise that the world needed a middleware push data service. Over the last year we have had a lot of success with the push model, but we have also learned that, for many reasons, we need to work with some services via a polling approach. For this reason, our latest v2.1 release includes the Gnip Service Polling feature, so that we can work with any service using a push, poll, or mixed approach.

Now, the really great thing for users of the Gnip platform is that how Gnip collects data is mostly abstracted away. Every end-user developer or company has the option to tell Gnip where to push the data for the filters or subscriptions they have set up. We also realize not everyone has an IT setup that can handle push, so we have always provided HTTP GET support that lets people grab data for their filters from a Gnip-generated URL.

One place where the way Gnip collects data can make a difference for our users, at this time, is the expected latency of data. Latency here refers to the time between the activity happening (i.e. Bob posted a photo, Susie made a comment, etc.) and the time it hits the Gnip platform to be delivered to our awaiting users. Here are some basic expectation-setting thoughts.

PUSH services: When we have push services, the latency experienced is usually under 60 seconds, but we know that this is not always the case, since the services can sometimes back up during heavy usage and latency can spike to minutes or even hours. Still, when the services that push to us are running normally, it is reasonable to expect 60-second latency or better, and this is consistent for both the Community and Standard Editions of the Gnip platform.

POLLED services: When Gnip is using our polling service to collect data, the latency can vary from service to service based on a few factors:

a) How often we hit an endpoint (say 5 times per second)

b) How many rules we have to schedule for execution against the endpoint (say over 70 million on YouTube)

c) How often we execute a specific rule (i.e. every 10 minutes). Right now, with the Community Edition of the Gnip platform, we set rule execution at 10-minute intervals by default, and people need to keep this in mind when setting their expectations for data flow from any given publisher.

Expectations for POLLING in the Community Edition: So I am sure some people who just read the above stopped and said, "Why 10 minutes?" Well, we chose to focus on "breadth of data" as the initial use case for polling. Also, the 10-minute interval is for the Community Edition (aka the free version). We have the ability to turn the dial and, using the smarts built into the polling service feature, execute the right rules faster (i.e. every 60 seconds or faster for popular terms, and every 10 or 20 minutes or more for less popular ones). The key issue here is that for very prolific posters or very common keyword rules (i.e. "obama", "http", "google"), more posts can exist in the default 10-minute time-frame than we can collect in a single poll of the service endpoint.
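To illustrate the dial-turning idea, here is a hedged scheduling sketch that polls busy rules more often than quiet ones; the intervals, the popularity heuristic, and the poll call are placeholders, not Gnip's actual scheduler.

```python
# Illustrative adaptive polling schedule: busy rules get polled more often.
# Intervals, the popularity heuristic, and poll() are placeholders.
import heapq
import time

def poll(rule):
    return []                                    # stand-in for a real call to the publisher

def interval_for(result_count):
    return 60 if result_count > 100 else 600     # busy rules every minute, quiet ones every 10

schedule = [(time.time(), rule) for rule in ["obama", "google", "my-niche-brand"]]
heapq.heapify(schedule)

while True:
    next_run, rule = heapq.heappop(schedule)
    time.sleep(max(0.0, next_run - time.time()))
    results = poll(rule)
    heapq.heappush(schedule, (time.time() + interval_for(len(results)), rule))
```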

For now, the default expectation for our Community Edition platform users should be a 10-minute execution interval for all rules when using any data publisher that is polled, which is consistent with the experience during our v2.1 beta. If your project or company needs something a bit snappier from the data publishers that are polled, then contact us at info@gnip.com or contact me directly at shane@gnip.com, as these use cases require the Standard Edition of the Gnip platform.

Current pushed services on the platform include: WordPress, Identi.ca, Intense Debate, Twitter, Seesmic, Digg, and Delicious

Current polled services on the platform include: Clipmarks, Dailymotion, deviantART, diigo, Flickr, Flixster, Fotolog, Friendfeed, Gamespot, Hulu, iLike, Multiply, Photobucket, Plurk, reddit, SlideShare, Smugmug, StumbleUpon, Tumblr, Vimeo, Webshots, Xanga, and YouTube

Numbers + Architecture

We've been busy over the past several months working hard on what we consider a fundamental piece of infrastructure that the network has been lacking for quite some time. From "ping server for APIs" to "message bus," we've been called a lot of things, and we are actually all of them rolled into one. I want to provide some insight into what our backend architecture looks like, as systems like this generally don't get a lot of fanfare; they just have to "work." Another title for this blog post could have been "The Glamorous Life of a Plumbing Company."

First, some production numbers.

  • 99.9%: the Gnip service has 99.9% uptime.
  • 0: we have had zero Amazon EC2 instances fail.
  • 10: ten EC2 instances, of various sizes, run the core, redundant message bus infrastructure.
  • 2.5m: 2.5 million unique activities are HTTP POSTed (pushed) into Gnip’s Publisher front door each day.
  • 2.8m: 2.8 million activities are HTTP POSTed (pushed) out Gnip’s Consumer back door each day.
  • 2.4m: 2.4 million activities are HTTP GETed (polled) from Gnip’s Consumer back door each day.
  • $0: no money has been spent on framework licenses (unless you include “AWS”).

Second, our approach.

Simplicity wins. These production transaction rate numbers, while solid, are not earth-shattering. We have, however, achieved much higher rates in load tests. We optimized for activity retrieval (outbound) as opposed to delivery into Gnip (inbound). That means every outbound POST/GET is moving static data off of disk; no math gets done. Every inbound activity results in processing to ensure proper Filtration and distribution; we do the "hard" work on delivery.

We view our core system as handling ephemeral data. This has allowed us, thus far, to avoid having a database in the environment. That means we don't have to deal with traditional database bottlenecks. To be sure, we have other challenges as a result, but we decided to take those on rather than have the "database maintenance and administration" ball and chain perpetually attached. So, in order to share contentious state across multiple VMs, across multiple machine instances, we use shared memory in the form of TerraCotta. I'd say TerraCotta is "easy" for "simple" apps, but challenges emerge when you start dealing with very large data sets in memory (multiple gigabytes). We're investing real energy in tuning our object graph, access patterns, and object types to keep things working as Gnip usage increases. For example, we're in the midst of experimenting with pageable TerraCotta structures that ensure smaller chunks of memory can be paged into "cold" nodes.

When I look at the architecture we started with, compared to where we are now, there are no radical changes. We chose to start clustered, so we could easily add capacity later, and that has worked really well. We’ve had to tune things along the way (split various processes to their own nodes when CPU contention got too high, adjust object graphs to optimize for shared memory models, adjust HTTP timeout settings, and the like), but our core has held strong.

Our Stack

  • nginx – HTTP server, load balancing
  • JRE 1.6 – Core logic, REST Interface
  • TerraCotta – shared memory for clustering/redundancy
  • ejabberd – inbound XMPP server
  • Ruby – data importing, cluster management
  • Python – data importing

High-Level Core Diagram

[Figure: Gnip Core Architecture Diagram]

Gnip owes all of this to our team & our customers; thanks!

Software Evolution

Those of us who have been around for a while constantly joke about how "I remember building that 10 years ago" every time some big "new" trend emerges. It's always a lesson in market readiness and timing for a given idea. The flurry around Google Chrome has rekindled the conversation around distributed apps. Most folks are tied up in the concept of a "new browser," but Chrome is actually another crack at the age-old "distributed/server-side application" problem, albeit an apparently good one. The real news in Chrome (I'll avoid the V8 vs. TraceMonkey conversation for now) is native Google Gears support.

My favorite kind of technology is the kind that quietly gets built, then one day you wake up and it's changed everything. Google Gears has that potential, and if Chrome winds up with meaningful distribution (or Firefox adopts Gears), web apps as we know them will finally have markup-level access to local resources (read "offline functionality"). This kind of evolution is long overdue.

Another lacking component on the network is the age-old, CS101 notion of event-driven architectures. HTTP GET dominates web traffic, and poor ol' HTTP POST is rarely used. Publish and subscribe models are all but unused on the network today, and Gnip aims to change that. We see a world that is PUSH driven rather than PULL. The web has come a looooong way on GET, but apps are desperate for traditional flow paradigms such as local processor event loops. Our goal is to do this in a protocol-agnostic manner (e.g. REST/HTTP POST, XMPP, perhaps some distributed queuing model).

Watching today's web apps poll each other to death is hard. With each new product that integrates serviceX, the latency of serviceX's events propagating through the ecosystem degrades, and everyone loses. This is a broken model that, if left unresolved, will drive our web apps back into the dark ages once all the web service endpoints are overburdened to the point of being uninteresting.

We’ve seen fabulous adoption of our API since launching a couple of months ago. We hope that more Data Producers and Data Consumers leverage it going forward.