Activity Streams

Gnip pledges allegiance to Activity Streams.

Consuming data from APIs with heterogeneous response formats is a pain. From basic format differences (XML vs JSON) to the semantics around structure and element meaning (custom XML structure, Atom, RSS), if you’re consuming data from multiple APIs, you have to handle each API’s responses differently. Gnip minimizes this pain by normalizing data from across services into Activity Streams. Activity Streams allows you to consistently digest responses from many services, using a single parsing routine in your code; no more special casing.
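
To make "single parsing routine" concrete, here's a minimal Python sketch that digests an Atom feed carrying Activity Streams extensions. The namespaces follow the public Activity Streams 1.0 XML draft; the element layout is a simplified illustration, not a spec of Gnip's exact output.

    # Minimal sketch: one routine that digests normalized activities, no matter
    # which service originally produced them. Namespaces follow the Activity
    # Streams 1.0 (Atom) draft; element paths are simplified for illustration.
    import xml.etree.ElementTree as ET

    NS = {
        "atom": "http://www.w3.org/2005/Atom",
        "activity": "http://activitystrea.ms/spec/1.0/",
    }

    def parse_activities(xml_text):
        """Yield (actor, verb, object title, published) tuples from one feed."""
        feed = ET.fromstring(xml_text)
        for entry in feed.findall("atom:entry", NS):
            actor = entry.findtext("atom:author/atom:name", default="", namespaces=NS)
            verb = entry.findtext("activity:verb", default="", namespaces=NS)
            obj = entry.find("activity:object", NS)
            title = "" if obj is None else obj.findtext("atom:title", default="", namespaces=NS)
            published = entry.findtext("atom:published", default="", namespaces=NS)
            yield actor, verb, title, published

The same routine works whether the upstream service was a microblog, a photo site, or a bookmarking service, because the normalization already happened before the feed reached you.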

Gnip’s history with Activity Streams runs long and deep. We contributed to one of the first service/activity/verb mapping proposals, and we have been implementing aspects of Activity Streams over the past couple of years. Over the past several months Activity Streams has gained enough traction that making it Gnip’s canonical normalization format was only natural. We’ve flipped the switch and are proud to be part of such a useful standard.

The Activity Streams initiative is in the process of getting its JSON version together, so for now, we offer the XML version. As JSON crystallizes, we’ll offer that as well.

Data Standards?

Today’s general data standards are akin to yesterday’s HTML/CSS browser support standards. The first rev of Gecko at Netscape (not to be confused with the original Mosaic/Navigator rendering engine) was truly standards-compliant in that it did not provide backwards compatibility for the years of web content that had been built up; that idea survived an Alpha or two into the release cycle before “quirks mode” became the status quo. The abyss of broken data that machines and humans generate eclipsed web pages back then, and it’s an ever-present issue in the ATOM/RSS/XML available today.

Gnip, along with social data aggregators like Plaxo and FriendFeed, has a unique view of the data world. The data is often ugly to us, but we normalize it to make our Customers’ lives better. Consumer-facing aggregators (Plaxo/FF) beautify the picture for their display layers. Gnip beautifies the picture for its data consumption API. Cleaning up the mess that exists on the network today has been an eye-opening process. When our data producers (publishers) PUSH data in Gnip XML, life is great. We’re able to work closely with said producers to ensure properly structured, formatted, encoded, and escaped data comes into the system. When data comes into the system through any other means (e.g. XMPP feeds, RSS/ATOM polling), it’s a rat’s nest of unstructured, cobbled-together, ill-formatted, and poorly-encoded/escaped data.

XML has provided self-describing formats and structure, but it ends there. Thousands of pounds of wounded data show up on Gnip’s doorstep each day, and that’s where Gnip’s normalization heavy lifting comes into play. I thought I’d share some of the more common bustage we see, along with a little commentary around each category of problem.

  • <![CDATA[]]> is akin to void* and is way overused. The result is magical custom parsing of something that someone couldn’t fit into some higher-level structure.

    • If you’re back-dooring data/functions into an otherwise “content” payload, you should revisit your overall model. Just like void*, CDATA usually suggests an opaque box you’re trying to jam through the system.
  • Character-limited message bodies (e.g. from microblogging services) wind up providing data to Gnip that has escaped HTML sequences chopped in half, leaving the data consumer (Gnip in this case) guessing at what to do with a broken encoding. If I give you “&a”, you have to decide whether to consider it literally, expand it to “&”, or drop it. None of that guessing was intended by the user who generated the original content; they just typed ‘&’ into a text field somewhere.

    • Facebook has taken a swing at how to categorize “body”/”message” sizes, which is nice, but clients need to do a better job truncating by taking downstream encoding/decoding/expansion realities into consideration.
  • Document bodies that have been escaped/encoded multiple times leave us deciphering how many times to run them through the un-escape/decode channel (a rough handling sketch for this and the truncated-entity case follows this list).

    • _Lazy_. Pay attention to how you’re treating data, and be consistent.
  • Illegal characters in XML attribute/element values.

    • _LAZY_. Pay attention.
  • Custom extensions to “standard” formats (XMPP, RSS, ATOM). You think you’re doing the right thing by “extending” the format to do what you want, but you often wind up throwing a wrench in downstream processing. Widely used libs don’t understand your extensions, and much of the time, the extension wasn’t well constructed to begin with.

    • Sort of akin to CDATA; however, legitimate use cases exist for this. Keep in mind that by doing this, there are many libraries in the ecosystem that will not understand what you’ve done. You have to be confident that you can control your data consumers and ensure they’re using a lib/extension that can handle your stuff. Avoid extensions, or if you have to use them, get them right.
  • Namespace case-sensitivity/insensitivity assumptions differ from service to service.

    • Case-sensitivity rules were polluted with the advent of MS-DOS, and have been propagated over the years by end-user expectations. Inconsistency stinks, but this one’s with us forever.
  • UTF-8, ASCII encoding bugs/misuse/misunderstanding. Often data claims to be encoded one way, when in fact it was encoded differently.

    • Understand your tool chain, and who’s modifying what, and when. Ensure consistency from top to bottom. Take the time to get it right.
  • UTF-16… don’t go there.

    • uh huh.
  • Libraries in the field to handle all of the above each make their own inconsistent assumptions.

    • It’s conceivable to me that Gnip winds up advancing the state of the art in XML processing libs, whether by doing it ourselves or by contributing to existing code trees. Lots of good work out there, none of it great.
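
To make a couple of the items above concrete, here is a small, defensive Python sketch covering the chopped-entity and multiple-escaping cases. The heuristics are illustrative only; they are not Gnip's actual normalization code.

    # Illustrative handling of two of the problems listed above. These
    # heuristics are examples, not Gnip's real normalization pipeline.
    import html
    import re

    _TRUNCATED_ENTITY = re.compile(r"&[A-Za-z0-9#]{0,6}$")  # e.g. a trailing "&a"

    def strip_truncated_entity(text):
        """Drop an entity reference that got chopped in half by a length limit."""
        return _TRUNCATED_ENTITY.sub("", text)

    def conservative_unescape(text, max_rounds=3):
        """Unescape repeatedly-escaped content, stopping once a pass changes nothing."""
        for _ in range(max_rounds):
            unescaped = html.unescape(text)
            if unescaped == text:
                break
            text = unescaped
        return text

    print(conservative_unescape("Tom &amp;amp; Jerry"))   # -> "Tom & Jerry"
    print(strip_truncated_entity("fish &amp; chips &a"))  # -> "fish &amp; chips "

Whether to drop or expand a chopped entity is a judgment call; this sketch drops it, which at least keeps downstream parsers from choking.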

You’re probably wondering about the quality of the XML structure itself. By volume, the bulk of data that comes into Gnip validates out of the box. Shocking, but true. As you could probably guess, most of our energy is spent resolving the above data quality issues. The unfortunate reality for Gnip is that the “edge” cases consume lots of cycles. As a Gnip consumer, you get to draft off of our efforts, and we’re happy to do it in order to make your lives better.

If everyone would clean up their data by the end of the day, that’d be great. Thanks.

G[oogle]Data API & Standards

We’re in the throes of revising gnip.xsd, and that’s led to pondering Google’s Data API. If you haven’t noticed, at the interface level (not at a service level), there is a high degree of overlap between Gnip’s API and Google’s Data API. We both chose REST as the primary interface, and data moves through as XML. Google decided to support both RSS and ATOM, while Gnip has constructed its own XML. From a system efficiency standpoint, our own boiled-down schema makes sense. We’re a message aggregator, and message transmission and processing have to be done at scale (RSS & ATOM are heavy). That said, we’ll be offering ATOM- and RSS-based formats in the future, as our internal view of data doesn’t always match how folks want to consume it.

As for adopting the Google Data API, we have other priorities at the moment. A GData interface to Gnip as a service definitely has its appeal. I could see Gnip using it as the stepping stone to accessing Gnip activities as RSS/ATOM. Selfishly, Gnip could leverage GData’s convenience libs, and any time you can aggregate use of convenience libraries, everyone wins.

Garbage In, Garbage Out

Gnip is an intermediary service for message flow across disparate network endpoints. Standing in the middle allows for a variety of value adds (Data Producers can “publish once, distribute to many,” Data Consumers can enjoy single service interaction rather than one-off’ing over and over again), but the quality of data that Data Producers push into the system is fundamental.

Only As Good As The Sum Of Our Parts

Gnip doesn’t control the quality of the data being published to it. Whether it comes in the form of XMPP messages, RSS, or ATOM, there are many issues that can come into play that can affect the data a Data Consumer receives.

  • Bad transport/delivery – The source XMPP, RSS, ATOM, or REST feed can go down. When this happens for a given Publisher, that source has vanished and Gnip doesn’t receive messages for that Publisher. We’re only as good as the data coming in. While Gnip can consume data from XMPP, RSS, ATOM, and other sources, our preferred inbound message delivery method is our REST API. Firing off messages to Gnip directly, rather than through yet another layer, minimizes delivery issues.
  • Bad data – As any aggregator (FriendFeed, SocialThing, Movable Type Activity Streams…) can attest, the data coming across XMPP, RSS, and ATOM feeds today is a mess. From bad/illegal formatting to bad/illegal data escaping, nearly every activity feed has unique issues that have to be handled on a case-by-case basis. There will be bugs. We will fix them as they arise. Once again, these issues can be minimized if Data Producers deliver messages directly to Gnip via our REST API.
  • Bad policy – This one’s interesting. Gnip makes certain assumptions about the kind of data it receives. In our current implementation we advertise to Data Consumers that Data Producers push all public, per-user change notifications generated within their systems to Gnip. This usually corresponds to the existing public API policies for said Data Producers. We will eventually offer finely tuned, Data Producer-controlled data policies, but for today’s public-facing Gnip service, we do not want to see Data Producers creating publishing policies specific to Gnip. Doing so confuses the middleware dynamic we’re trying to create with our current product, and subsequently muddies the water for everyone. Imagine a Data Consumer interacting with a Data Producer directly under one policy, then interacting with Gnip under another policy; confusing. Again, we will, perhaps earlier than we think, cater to unique data policies on a per-Data-Producer basis, but we’re not there yet.

While addressing all of these issues is part of our vision, they’re not all resolved out of the gate.

That Twitter Thing

Oh, crap, Eric’s gone and written another long post…

Since we publicly launched Gnip last week, we’ve been asked numerous times if we can integrate with Twitter or somehow help Twitter with the scaling issues they are facing.  We can, but we depend on Twitter giving us access to their XMPP feed.

We are huge fans of Twitter so we’re patiently waiting for that access.  In the meantime, the questions we’ve received have prompted us to explain two things: (1) How we would benefit Twitter and anyone who wants access to Twitter data and (2) Why – if you are a web service – it’s worth integrating now with Gnip rather than waiting either for (a) Gnip to integrate with Twitter or (b) you to get as popular as Twitter and have scale issues.

Let’s address the first issue: How we would benefit Twitter and anyone that wants to integrate with Twitter data.

Twitter has found that XMPP doesn’t scale for them and as a result, people are forced to poll their API *a lot* to get updates for their users.  MyBlogLog has over 25,000 Twitter users that they throw against the Twitter API every 15 minutes.  This results in nearly 2.5 million queries against the API every day, for maybe 250K updates.  Now add millions of pings from Plaxo and SocialThing and Lijit and heaven forbid Yahoo starts beating up their API…

If Twitter starts pushing updates to us, via our dead simple API or Atom or their XMPP server, we can immediately reduce by an order of magnitude the number of requests that some very large sites are making against their API.  At the same time, we reduce the latency between when someone Tweets and when it shows up on consuming sites like Plaxo.  From 15 minutes or more to 60 seconds or less.
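
For a rough feel for those numbers, here is the back-of-the-envelope arithmetic, using the figures cited above rather than fresh measurements:

    # Back-of-the-envelope math using the figures mentioned above; rough
    # illustration only, not new measurements.
    users = 25000                              # MyBlogLog users polled against Twitter
    polls_per_day = (24 * 60) // 15            # one poll per user every 15 minutes = 96
    polling_queries = users * polls_per_day    # 2,400,000 API hits per day
    updates = 250000                           # rough number of actual updates per day

    print(polling_queries)           # ~2.4 million requests, most returning nothing new
    print(polling_queries / updates) # roughly 10 requests for every update actually fetched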

We expect that Twitter has their collective heads down and is working around the clock to buttress their infrastructure, and it’s unlikely that they’re going to do anything optional until that’s sorted out.  Unfortunately, “integrate with Gnip” probably falls into the optional category. We expect, however, that at some point Twitter will start opening up their data to more partners once they feel like they have their arms around their infrastructure.

If you run a web service and integrate with Gnip today, you’ll automatically be able to integrate with Twitter data once they give us access.  Presumably you won’t have to wait in line to get direct Twitter integration.  In addition, you’ll have immediate access to all of the other data providers that we integrate with, such as Delicious, Flickr, Magnolia, Get Satisfaction, Intense Debate and Six Apart.  For example, it only took Brightkite 15 minutes to integrate with our API and start pushing data to our partners via us.

Now for the second topic.  Why – if you are a web service – it’s worth integrating with Gnip now rather than waiting either for (a) Gnip to integrate with Twitter or (b) you to get as popular as Twitter and have scale issues.

All things considered, it’s best not to end up in Twitter’s position.  They have a ton of passionate users (I’m one of them) who want reliable service and don’t have infinite patience.  The old startup cliche of “these are problems we’d like to have” is crap.

You don’t want to be in the position where your business suddenly takes off and your infrastructure falls over because people are banging your APIs to death.  You don’t want your most passionate users calling for mass exodus.  It’s better to take a few minutes to start pushing notifications to Gnip now than when you’re doing 20-hour days rebooting servers.

You also don’t want to be in the position where your company takes off and you suddenly get throttled by an API provider.  Nothing is worse than having to pull data sources because you’ve over-polled and the host decides to turn off the spigot.  Start pulling notifications from Gnip and feel secure that you’re only asking for data when there’s something new.

I still use Twitter every day.  Don’t try to kid me; I know you still do too.  Let them get on with their work and rest assured that we’ll integrate with them the instant we get the okay from them.

The WHAT of Gnip: Changing APIs from Pull to Push

A few months ago a handful of folks came together and took a practical look at the state of “web services” on the network today. As an industry we’ve enjoyed the explosion of web APIs over the past several years, but it’s been “every man for himself,” and we’ve been left with hundreds of web APIs being consumed in random ways (random protocols and formats). There have been a few cracks at standardizing some of this, but most have been left in spec form with, at best, fragmented implementations, and most have been too high level to provide anything more than good bedtime reading. We set out to build something; not write a story.

For a great overview of the situation Gnip is plunging into, check out Nik Cubrilovic’s post on TechCrunchIT, “The New Datastream Aggregators, FriendFeed and Standards.”

Our first service is the culmination of lots of work by smart, pragmatic people. From day one we’ve had excellent partners helping us along the way; from early integrations with our API, to discussing specifications and standards to follow (or not to follow; what you choose not to do is often more important than what you choose to do). While we aspire to solve all of the challenges in the data portability space, we’re a small team biting off small chunks along a path. We are going to need the support, feedback, and assistance of the broader data portability (formal & informal) community in order to succeed. Now that we’ve finally launched, we’ll be in “release early, release often” mode to ensure tight feedback loops around our products.

Enough; what did we build!?!

For those who want to cut to the chase, here’s our API doc.

We built a system that connects Data Consumers to Data Publishers in a low-latency, highly-scalable, standards-based way. Data can be pushed or pulled into Gnip (via XMPP, Atom, RSS, REST) and it can be pushed or pulled out of Gnip (currently only via REST, but the rest will follow). This release of Gnip is focused on propagating user-generated activity events from point A to point B. Activity XML provides a terse format for Data Publishers to distribute their users’ activities. Collections XML provides a simple way for Data Consumers to only receive information about the users they care about. This release is about “change notification,” and a subsequent release will include the actual data along with the event.
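
To give a flavor of the publish side, here is a hedged Python sketch of a Data Publisher pushing an activity event over REST. The endpoint path, publisher name, payload shape, and auth header are placeholders for illustration; the API doc linked above is the authoritative contract.

    # Sketch of a Data Publisher pushing an activity event to Gnip over REST.
    # The endpoint path, publisher name, XML shape, and auth header below are
    # placeholders; consult the real API doc for the actual contract.
    import urllib.request

    ACTIVITY_XML = """<activities>
      <activity at="2008-07-02T10:00:00Z" action="dugg" actor="joe"
                url="http://example.com/news/some-story"/>
    </activities>"""

    def publish_activity(base_url, publisher, xml_body, api_key):
        req = urllib.request.Request(
            url=base_url + "/publishers/" + publisher + "/activity.xml",  # hypothetical path
            data=xml_body.encode("utf-8"),
            headers={"Content-Type": "application/xml", "Authorization": api_key},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # publish_activity("https://api.example-gnip.test", "myservice", ACTIVITY_XML, "secret")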

 

As a Consumer, whether your application model is event- or polling-based, Gnip can get you near-realtime activity information about the users you care about. Our goal is a maximum 60-second latency for any activity that occurs on the network. While the time our service implementation takes to drive activities from end to end is measured in milliseconds, we need some room to breathe.

Data can come in to Gnip via many formats, but it is XSLT’d into a normalized Activity XML format, which makes consuming activity events (e.g. “Joe dugg a news story at 10am”) from a wide array of Publishers a breeze. Along the way we started cringing at the verb/activity overlap between various Publishers; did Jane “tweet” or “post”? They’re kinda the same thing. After sitting down with Chris Messina, it became clear that everyone else was cringing too. A verb/activity normalization table has been started, and Gnip is going to distill the cornucopia of activities into a common, community-derived format in order to make consumption even easier.
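
As a toy illustration of what that normalization table might look like in code, here's a tiny Python sketch; the mappings below are made-up stand-ins, not the community-derived table itself.

    # Illustrative verb normalization: service-specific verbs collapse onto a
    # small shared vocabulary. These mappings are made-up stand-ins.
    VERB_MAP = {
        ("twitter", "tweet"): "post",
        ("identica", "notice"): "post",
        ("digg", "dugg"): "favorite",
        ("delicious", "bookmarked"): "save",
    }

    def normalize_verb(service, raw_verb):
        """Fall back to the raw verb when no mapping exists yet."""
        return VERB_MAP.get((service.lower(), raw_verb.lower()), raw_verb.lower())

    print(normalize_verb("Twitter", "tweet"))    # -> "post"
    print(normalize_verb("Flickr", "uploaded"))  # -> "uploaded" (unmapped, passed through)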

Data Publishers now have a central clearinghouse to push data to when events on their services occur. Gnip manages the relationship with Data Consumers, and figures out which protocols and formats they want to play with. It will take a while for the system to reach equilibrium with Gnip, but once it does, API balance will be reached; Publishers will notify Gnip when things happen, and Gnip will fan out those events to an arbitrary number of Consumers in real-time (no throttling, no rate limiting).
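
Conceptually, the fan-out works like this; the toy in-memory sketch below only shows the shape of the idea, not our actual delivery pipeline.

    # Toy, in-memory illustration of publish-once / fan-out-to-many. The real
    # pipeline is distributed and protocol-aware; this only shows the shape.
    from collections import defaultdict

    class FanOut:
        def __init__(self):
            self._subscribers = defaultdict(list)   # publisher -> consumer callbacks

        def subscribe(self, publisher, callback):
            self._subscribers[publisher].append(callback)

        def notify(self, publisher, activity):
            # One inbound notification, N outbound deliveries; no polling involved.
            for deliver in self._subscribers[publisher]:
                deliver(activity)

    hub = FanOut()
    hub.subscribe("digg", lambda a: print("consumer A got:", a))
    hub.subscribe("digg", lambda a: print("consumer B got:", a))
    hub.notify("digg", "joe dugg a news story at 10am")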

Gnip is centralized. After much consternation, we resolved to start out with a centralized model. Not necessarily because we think it’s the best path, but because it is the best path to get something started. Imagine the internet as a clustered application; decentralization is fundamental (DNS comes to mind). That said, we needed a starting point and now we have one. A conversation with Chris Saad highlighted some work Paul Jones (among others) had done around a standard mechanism for change notification discovery and subscription; getpingd. Getpingd describes a mechanism for distributed change notification. The Subscription side of getpingd feels like a no-brainer for Gnip to support, but I’m not sure how to consider the Discovery end of it. In some sense, I see Gnip (assuming getpingd’s discovery model is implemented) as a getpingd node in the graph. We have lots to consider in the federated/distributed model.

Gnip is a classic chicken-and-egg scenario: we need Publishers & Consumers to be interesting. If your service produces events that you want others on the network to consume, we’d love to see you as a Publisher in Gnip, pushing events into the system for wide consumption. If your service relies on events created by users on other applications, we’d love to see you as a Consumer in Gnip.

We’ve started out with convenience libraries for Perl, PHP, Java, Python, and Ruby. Rather than maintain these ourselves, we plan on publishing them to the respective language communities’ code sites/repositories.

That’s what we’ve built in a nutshell. I’ll soon blog about exactly how we’ve built it.

The WHY of Gnip: Stop Building What Everyone Else is Building

Let me say this up front:

I have a tendency to ramble. Why use a sentence when a paragraph will suffice, right? As a result, I limit myself to 100 word posts on my sporadically updated personal blog. I’ll follow suit here, with only occasional excursions into longer territory. This is one such post.

I’ll try not to ramble too much…

Data portability, the ability to create content on one web site and derive value from it on other sites and applications, has become one of the defining characteristics of what is commonly referred to as “Web 2.0”. An emerging class of services is taking advantage of this data to create entirely new products, including social aggregators (Plaxo Pulse, MyBlogLog, FriendFeed), social search (Lijit, Delver) and communications dashboards (Fuser, Orgoo, Digsby). Each of these services is predicated on the belief that user-generated content is the raw material upon which great companies can be built.

Data portability, via RSS or ATOM or XMPP or open APIs, is neither difficult nor complex. These are known problems with straightforward solutions and open standards. But each connection between two services (e.g. MyBlogLog and Flickr, or Plaxo and Digg) is a custom integration, requiring at least one of the parties to set up a custom channel to access, process and ultimately make use of the transferred data. As companies seek to create robust solutions built upon dozens or even hundreds of data feeds, engineers face a rapidly multiplying burden of building and maintaining these custom communication channels. Simply put, data portability is a big hassle.

Crucially, data portability has become the cost of entry for these services. It is not enough for a social aggregator to claim the most sources or a social search company the biggest pool of data. The leaders in this space are focused on filtering and presenting data in useful ways; out of a billion pieces of data, they seek to connect you with the appropriate information at the appropriate time. All of the work building and maintaining back-end data portability services comes at the cost of building better front-end features that draw and satisfy users.

That’s where Gnip comes in. We’re dedicated to making data portability suck less by reducing the effort required to collect and manage the data upon which these awesome new services are being created. Gnip aims to simplify the process of aggregating, standardizing and maintaining large pools of data, ultimately making the process as simple as uploading a list of your users.

Our first service is a solution to a key problem facing data portability implementations (Jud will give you the details in just a moment). We at Gnip believe in direct solutions to painful problems, and as a result, our first service isn’t fancy. But it’s quick to integrate, it scales like a monster and it uses a variety of web standards; we believe we’ve solved this particular problem pretty well. Over the coming months we’ll roll out additional direct solutions to painful problems, and before long we’ll have a bona fide platform for pushing data around the web.

We’re incredibly excited by the bounty that Web 2.0 has created. We are living with an embarrassment of riches in terms of shared information and experiences. But it’s overwhelming. I personally believe that Web 3.0 will herald a return to the individual — story, picture, friend, experience — because in aggregate, that which has great meaning often becomes meaningless. So it’s up to these awesome new services to take the Web 2.0 bounty and find for each of us those few things that will fundamentally enhance our lives. To give us something meaningful.

I hope that we at Gnip can build a foundation that enables these awesome new services to focus all of their attention on making great things. We’ll happily lay plumbing, mix concrete and smelt tin to see that happen.