Google+ Now Available from Gnip

Gnip is excited to announce the addition of Google+ to its repertoire of social data sources. Built on top of the Google+ Search API, Gnip’s stream allows its customers to consume realtime social media data from Google’s fast-growing social networking service. Using Gnip’s stream, customers can poll Google+ for public posts and comments matching the terms and phrases relevant to their business and client needs.

Google+ is an emerging player in the social networking space and pairs well with the Twitter, Facebook, and other microblog content currently offered by Gnip. If you are looking for volume, Google+ became the third-largest social networking platform within a week of its public launch, and some are projecting it to emerge as the world’s second-largest social network within the next twelve months. Looking to consume content from social network influencers? Google+ is where they are (even former Facebook President Sean Parker says so).

By consuming your Google+ stream through Gnip (alongside the abundance of other social data sources we offer), you’ll get a normalized data format, unwound URLs, and data deduplication. Existing Gnip customers can seamlessly add Google+ to their Gnip Data Collectors; all you need is a Google API key. New to Gnip? Let us help you design the right solution for your social data needs: contact sales@gnip.com.
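
For the curious, here is a rough idea of the kind of source call a Google+ Data Collector makes on your behalf. The sketch below hits the public Google+ activities search endpoint directly with a Google API key; the endpoint and field names follow the public Google+ API documentation at the time of writing, and the query term is just an example. In practice your Gnip Data Collector handles this polling, deduplication, and normalization for you.

```python
import requests  # third-party HTTP client

# Illustrative only: poll the public Google+ activities search endpoint that
# Gnip's Google+ stream is built on top of. A Gnip Data Collector does this
# for you once it has your Google API key.
GOOGLE_API_KEY = "YOUR_GOOGLE_API_KEY"   # obtained from the Google APIs console
SEARCH_URL = "https://www.googleapis.com/plus/v1/activities"

def search_public_posts(query, max_results=20):
    """Return public Google+ activities matching `query`."""
    params = {"query": query, "maxResults": max_results, "key": GOOGLE_API_KEY}
    resp = requests.get(SEARCH_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    for activity in search_public_posts("social media monitoring"):
        print(activity["published"], activity["actor"]["displayName"], activity["title"])
```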

Customer Spotlight – MutualMind

 
For startups seeking to enter and capitalize on the rising social media marketplace, timing is everything, and MutualMind was no exception: getting their enterprise social media management product to market quickly was crucial to the success of their business. MutualMind provides an enterprise social media intelligence and management system that monitors, analyzes, and promotes brands on social networks and helps increase social media ROI. The platform enables customers to listen to discussion on the social web, gauge sentiment, track competitors, identify and engage with influencers, and use the resulting insights to improve their overall brand strategy.

“Through their social media API, Gnip helped us push our product to market six months ahead of schedule, enabling us to capitalize on the social media intelligence space. This allowed MutualMind to focus on the core value it adds by providing advanced analytics, seamless engagement, and enterprise-grade social management capabilities.”

- Babar Bhatti
CEO, MutualMind

By selecting Gnip as their data delivery partner, MutualMind was able to get their product to market six months ahead of schedule. Today, MutualMind processes tens of millions of data activities per month using multiple sources from Gnip including premium Twitter data, YouTube, Flickr, and more.
 
Get the full details: read the success story here.

Letter From The New Guy

Not too long ago Gnip celebrated its third birthday. I am celebrating my one-week anniversary with the company today. To say a lot happened before my time at Gnip would be the ultimate understatement, and yet it is easy for me to see the results produced by those three years of effort. Some of those results include:

The Product

Gnip’s social media API offering is the clear leader in the industry. Gnip is delivering over half a billion social media activities daily from dozens of sources. That certainly sounds impressive, but how can I be so confident Gnip is the leader? Because the most important social media monitoring companies rely on our services to deliver results to their customers every single day. For example, Gnip currently works with 8 of the top 9 enterprise social media monitoring companies, and the rate at which we are adding enterprise-focused companies is accelerating.

The Partners

Another obvious result is the strong partnerships that have been cultivated. Some of our partnerships, such as those with Twitter and Klout, were well publicized when the agreements were put in place. However, having strong strategic partners takes a lot more than just a signed agreement. It takes a lot of dedication, investment, and hard work by both parties in order to deliver on the full promise of the agreement. It is obvious to me that Gnip has amazing partnerships that run deep and are built upon a foundation of mutual trust and respect.

The People

The talent level at Gnip is mind-blowing, but it isn’t the skills of the people that have stood out the most for me so far. It is the dedication of each individual to doing the right thing for our customers and our partners that has made the biggest impression. When it comes to gathering and delivering social media data, there are a lot of shortcuts that can be taken in order to save time, money, and effort. Unfortunately, these shortcuts often come at the expense of publishers, customers, or both. The team at Gnip has no interest in shortcuts, and that comes across in every individual discussion and in every meeting. If I were going to describe this value in one word, the word would be “integrity”.

In my new role as President & COO, I’m responsible for helping the company grow quickly and smoothly while maintaining the great values that have been established since the company’s inception. The growth has already started, and I couldn’t be more pleased with the talent of the people who have recently joined the organization, including Bill Adkins, Seth McGuire, Charles Ince, and Brad Bokal, all of whom joined Gnip within the last week. And we are hiring more! In fact, it is worth highlighting one particular open position for a Customer Support Engineer. I’m hard-pressed to think of a higher-impact role at our company because we consider supporting our customers to be such an important priority. If you have 2+ years of coding experience, including working with RESTful Web APIs, and you love delivering over-the-top customer service, Gnip offers a rare opportunity to work in an environment where your skills will be truly appreciated. Apply today!

I look forward to helping Gnip grow on top of a strong foundation of product, partners, and people.  If you have any questions, I can be reached at chris [at] gnip.com.

What Facebook Data is Available from Gnip's Social Media API?

Facebook is among the most in-demand (but also among the most challenging) social media sources to access. Most Facebook conversation data is private and so it’s not accessible via Facebook’s API or any of Gnip’s feeds. Facebook data availability is also pretty confusing to understand and the rules keep changing. So, let’s clarify what kinds of Facebook information we can offer through our social media API today. 

What Facebook Data is Available from Gnip?
Within the realm of publicly accessible data only, we provide:

  • User page content: status updates, wall posts, comments
  • Fan page content: wall posts, comments (probably more than you’ll find from any other service), “Like” counts, and up to 90 days of historical data


How Can You Get the Data?

Instead of a firehose of Facebook data, you enter parameters indicating what you want to find (a minimal polling sketch follows the list):

  • Keyword search
    You provide a list of keywords. We’ll return public mentions of those keywords.
  • Username search
    You provide a list of usernames. We’ll return publicly accessible posts generated by those users.
  • Fan page search
    You provide a list of fan pages. We’ll return publicly available posts and comments on those fan pages.

While these lists are vastly simplified, we hope they’ll clarify what kinds of Facebook data most businesses can access legally, and exactly what Facebook data Gnip provides.

Oh, and one last thing. We’re sometimes asked how we feel about Facebook’s privacy policies. At Gnip, we don’t make the rules — we just play by them. Our job is to facilitate access to the social data that publishers (like Facebook) officially make accessible to our customers.

Best wishes to you with your Facebook data collection! If you think we might be able to help, please drop us a note.

Social Media in Natural Disasters

Gnip is located in Boulder, CO, and we’re unfortunately experiencing a spate of serious wildfires as we wind Summer down. Social media has been a crucial source of information for the community here over the past week as we have collectively Tweeted, Flickred, YouTubed and Facebooked our experiences. Mashups depicting the fires and associated social media quickly started emerging after the fires started. VisionLink (a Gnip customer) produced the most useful aggregated map of official boundary & placemark data, coupled with social media delivered by Gnip (click the “Feeds” section along the left side to toggle social media); screenshot below.

VisionLink Gnip Social Media Map

With Gnip, they started displaying geo-located Tweets, then added Flickr photos with the flip of a switch. No new messy integrations that required learning a new API with all of its rate limiting, formatting, and delivery protocol nuances. Simple selection of the data sources they deemed relevant to informing a community reacting, in real time, to a disaster.

It was great to see them keep a firm focus on their core value proposition (official disaster relief data) while quickly integrating relevant social media without all the fuss.

Our thoughts are with everyone who was impacted by the fires.

Clusters & Silos

Gnip is nearing the one-year anniversary of our 2.0 product. We reset our direction several months ago, and as part of that shift we completely changed our architecture. I thought I’d write about that experience a bit.

Gnip 1.0

Our initial implementation is best described as a clustered, non-relational DB (aka NoSQL) data aggregation service. We built and ran this product for about a year and a half. The system was comprised of a centralized cluster of machines that divvied up load, centralized streams of publisher data, and then fanned that data out to many customers. Publishers did not like this approach because it obfuscated the ultimate consumer of their data; they wanted transparency. Our initial motivation for this architecture was alleviating load pain on the Publishers. “Real-time” APIs were all the rage, and degraded real-time delivery was in part due to load on the Publisher’s API. A single stream of data to Gnip, with Gnip handling the fan-out via a system built for such demand, was part of the solution we sold. We thought we could charge Publishers for alleviating their load pain. Boy, were we wrong on that count. While Publishers love to complain about the load on their APIs, effectively none of them wanted to do anything about it. Some smartly built caching proxies, and others built homegrown notification/PubSub solutions (SIP, SUP, PubSubHubbub). However, most simply scaled horizontally and threw money at the problem. Twitter has shone a light on streaming HTTP (or whatever you want to call it; there are so many monikers), which is about as good as it gets (leaving protocol buffers and compressed HTTP streams as simply optimizations to the model). I digress. The 1.0 platform was a fantastic engineering feat, ahead of its time, and unfortunately a thorn in Publishers’ sides. As a data integration middleman, Gnip couldn’t afford to have antagonistic relations with data sources.

Gnip 2.0

Literally overnight, we walked away from further construction on our 1.0 platform. We had paying customers on it, however, so we operated it for several months, after migrating everyone we could to 2.0, before ultimately shutting it down. Gnip 2.0 unintuitively departed from a clustered environment and instead gives each consuming customer explicit, transparent integrations with Publishers, all via standalone instances of the software running on standalone virtualized hardware instances (EC2). Whereas 1.0 would sometimes leverage Gnip-owned authentication/app credentials to the benefit of many consuming customers, 2.0 was architected explicitly not to support this. For each 2.0 instance a customer runs, they use credentials they obtain themselves, from the Publisher, to configure the instance. Publishers have full transparency into, and control of, who’s using their data.
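
As a purely hypothetical illustration of that last point, a per-customer 2.0 instance configuration might look something like the sketch below. The structure and field names are invented for this post, but the key idea is real: the publisher credentials belong to the customer, not to Gnip.

```python
# Purely hypothetical illustration (this is not Gnip's actual configuration
# format). Each 2.0 instance runs on its own EC2 virtual machine and is
# configured with credentials the customer obtained directly from each
# Publisher, so the Publisher always knows exactly who is consuming its data.
INSTANCE_CONFIG = {
    "customer": "acme-analytics",             # one instance per consuming customer
    "deployment": "standalone EC2 instance",
    "publishers": {
        "twitter": {"credentials": "customer-supplied OAuth keys"},  # not Gnip-owned
        "youtube": {"credentials": "customer-supplied API key"},
    },
}
```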

The result is an architecture that doesn’t leverage certain data structures an engineer would naturally wish to use. That said, an unexpected operational benefit has fallen out of the 2.0 system. Self-healing, zero-SPOF (single point of failure) clusters aside (I’d argue there are actually relatively few of them out there), the reality is that clusters are hard to build in a fault-tolerant manner, and SPOFs find their way in. From there, you have all of your customers leveraged against a big SPOF. If something cracks in the system, all of your customers feel that pain. On the flip side, siloed instances rarely suffer from systemic failure. Sure, operational issues arise, but you can treat each case uniquely and react accordingly. The circumstances in which all of your customers feel pain simultaneously are few and far between. So the cost of not leveraging the hardware and software we’re generally inclined to architect for is indeed higher, but a simplified system has its benefits to be sure.

We now find ourselves promoting Publisher integration best practices, and Publishers advocate our usage. Building two such significant architectures under the same roof has been a fascinating thing to experience. The pros and cons of each are many. Where you wind up with your system is an interesting function of your technical propensities as well as your business constraints. One size never fits all.

Hidden Engineering Gotchas Behind Polling

I just spent a couple of days optimizing a customer’s data collection on a Gnip instance for a specific social media data source API. It had been a while since I’d done this level of tuning, and it reminded me of just how many variables must be considered when optimally polling a source API for data.

Requests Per Second (RPS) Limits

Most services have a rate limit that a given IP address (or API key/login) cannot break. If you hit an endpoint too hard, the API backs you off and/or blocks you. Don’t confuse RPS with concurrent connections, however; they’re measured differently and each has its own limitations for a given API. In this particular case I was able to parallelize three requests because the total response time per request was ~3 seconds. The result was that a given IP address was not violating the API’s RPS limitations. Had the API been measuring concurrent connections, that would have been a different story.
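
Here is a minimal sketch of that kind of parallelization, assuming a 1 RPS limit and ~3-second responses. The endpoint is a placeholder and the limiter is deliberately simple; it spaces out request starts so a single IP never exceeds the assumed RPS limit while keeping up to three requests in flight.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

import requests

RPS_LIMIT = 1          # assumed per-IP requests-per-second limit
MAX_IN_FLIGHT = 3      # with ~3 s responses, 3 parallel requests fit under 1 RPS
API_URL = "https://api.example.com/search"  # placeholder endpoint

_gate = threading.Lock()
_last_start = [0.0]

def _wait_for_slot():
    """Block until starting another request would not exceed RPS_LIMIT."""
    with _gate:
        now = time.monotonic()
        earliest = _last_start[0] + 1.0 / RPS_LIMIT
        if now < earliest:
            time.sleep(earliest - now)
        _last_start[0] = time.monotonic()

def fetch(term):
    """Fetch one page of results for a single rule/term."""
    _wait_for_slot()
    resp = requests.get(API_URL, params={"q": term}, timeout=15)
    resp.raise_for_status()
    return resp.json()

def poll(terms):
    """Poll each term, with at most MAX_IN_FLIGHT requests in flight."""
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        return list(pool.map(fetch, terms))
```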

Document/Page/Result-set Size

Impacting my ability to parallelize my requests was the document size I was requesting of the API. Smaller document sizes (e.g. 10 activities instead of 1,000) mean faster response times, which, when parallelized, run the risk of violating the RPS limits. On the other hand, larger document sizes take more time to fetch, whether because they’re simply bigger and take longer to transfer over the wire, or because the API you’re accessing takes a long time to assemble the document on the backend.

Cycle Time

The particular API I was working with was a “keyword” based API, meaning that I was polling for search terms/keywords. In Gnip parlance we call these “terms” or “keywords” “rules,” in order to generalize the terminology. A rule-set’s “cycle time” is how long it takes a Gnip Data Collector to poll a given rule-set once. For example, if the rule-set size is 1,000 and the API’s RPS limit is 1, that rule-set’s cycle time would be 1,000 seconds; every 1,000 seconds, each rule in the set has been polled. Obviously, the cycle time increases if the server takes longer than a second to respond to each request.
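
The arithmetic is simple enough to sketch. In the rough model below (assuming requests are issued one at a time), cycle time is the rule-set size multiplied by the larger of the per-request interval (1/RPS) and the server’s response time:

```python
def cycle_time_seconds(rule_set_size, rps_limit, response_time=0.0):
    """Rough cycle time for polling every rule in a rule-set once.

    With an RPS limit of `rps_limit`, request starts are at least 1/rps_limit
    seconds apart; if the server is slower than that and requests are not
    parallelized, the response time dominates instead.
    """
    per_request = max(1.0 / rps_limit, response_time)
    return rule_set_size * per_request

# The example from the text: 1,000 rules at 1 RPS gives a 1,000-second cycle time.
print(cycle_time_seconds(rule_set_size=1000, rps_limit=1))                      # 1000.0
# A 3-second server response pushes the same rule-set to 3,000 seconds.
print(cycle_time_seconds(rule_set_size=1000, rps_limit=1, response_time=3.0))   # 3000.0
```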

Skipping (missing data)

A given rule “skips” data during polling (meaning you will miss data because you’re not covering enough ground) when one of the following conditions is true, where ARU (activity update rate) is the rate at which activities/events occur for the given rule (e.g. the number of times per second someone uploads a picture tagged “foo”); a small sketch of the check follows the list:

  • ARU is greater than the RPS limit multiplied by the document size (activities occur faster than back-to-back requests can return them).
  • ARU multiplied by the rule-set’s cycle time is greater than the document size (more activities occur during one full cycle than a single poll can return).
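
Here is the promised sketch. The helper below is hypothetical (it is not part of any Gnip API); it simply restates the two conditions as a check you could run against your own rule-set numbers:

```python
def will_skip(aru, rps_limit, document_size, cycle_time):
    """Rough check for whether a rule will skip (miss) data while polling.

    aru:           activities per second occurring for the rule
    rps_limit:     max requests per second the API allows
    document_size: max activities returned per request
    cycle_time:    seconds between successive polls of this rule
    """
    faster_than_back_to_back_polls = aru > rps_limit * document_size
    more_per_cycle_than_one_page = aru * cycle_time > document_size
    return faster_than_back_to_back_polls or more_per_cycle_than_one_page

# 0.5 activities/sec on a rule revisited every 1,000 seconds with 100-activity
# pages: 500 activities accumulate per cycle, so data is skipped.
print(will_skip(aru=0.5, rps_limit=1, document_size=100, cycle_time=1000))  # True
```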

In order to optimally collect the data you need, in a timely manner, you have to balance all of these variables, and adjust them based on the activity update rate for the rule-set you’re interested in. While the variables make for engaging engineering exercises, do you want to spend time sorting these out, or spend time working on the core business issues you’re trying to solve? Gnip provides visibility into these variables to ensure data is most effectively collected.

How to Select a Social Media Data Provider

If you’re looking for social media data, you’ve got a lot of options: social media monitoring companies provide end-user brand tracking tools, some businesses provide deep-dive analyses of social data, other companies provide reputation scores for individual users, and still other services specialize in geographic social media display, to name just a few.

Some organizations ultimately decide to build internal tools for social media data analysis. Then they must decide between outsourcing the social data collection bit so they can focus their efforts on analyzing and visualizing the data, or building everything, including API connections to each individual publisher, internally. Establishing and maintaining those API connections over time can be costly. If your team has the money and resources to build your own social media integrations, then go for it!

But if you’re shopping for raw social media data, you should consider a social media API – that is, a single API that aggregates raw data from dozens of different social media publishers – instead of making connections to each of those dozens of social media APIs individually. And in the social media API market, there is only a small handful of companies for you to choose from. We are one of them and we would love to work with you. But we know that you’ll probably want to shop your options before making a decision, so we’d like to offer our advice to help you understand some of the most important factors in selecting a social media API provider.

Here are some good questions to ask every social media API provider you consider (including your own internal engineers, if you’re considering hiring them for the job):

Are your data collection methods in compliance with all social media publishers’ terms of use?

–> Here’s why it matters: by working with a company that violates any publisher’s terms of use, you risk unstable (or sudden loss of) access to the violated publisher’s data, not to mention the potential legal consequences of using black-market data in your product. Conversely, if you work with a company that has a strong relationship with the social media publishers, our experience shows that you not only get stable, reliable data access, but you just might get rewarded with *extra* data access every now and then. (In case you’re wondering, Gnip’s methods are in compliance with each of our social media publishers’ terms of use.)

Do you provide results and allow parameter modifications via API, and do you maintain those API connections over time?

–> In our experience, establishing a single API connection to collect data from a single publisher isn’t hard. But establishing many API connections to various social media publishers and – this is key – maintaining those connections over time is really quite a chore. So much so that we made a whole list of API-related difficulties associated with that integration work, based on our own experiences. Make sure that whoever you work with understands the ongoing work involved and is prepared to maintain your access to all of the social media APIs you care about over time.

How many data sources do you provide access to?

–> Even if you only want access to Twitter and Facebook today, it’s a good idea to think ahead. How much incremental work will be involved for you to integrate additional sources a few months down the line? Our own answer to this question is this: using Gnip’s social media API, once you’re set up to receive your first feed from Gnip via API, it takes about 1 minute for you to configure Gnip to send you data from a 2nd feed. Ten minutes later, you’re collecting data from 10 different feeds, all at no extra charge. Since you can configure Gnip to send all of your data in one format, you only need to create one parser and all the data you want gets streamed into your product. You can even start getting data from a new social media source, decide it’s not useful for your product, and replace it with a different feed from a different source, all in a matter of seconds. We’re pretty proud that we’ve made it so fast and simple for you to receive data from new sources… (blush)… and we hope you’ll find it to be useful, too.

What format is your data delivered in?

–> Ten different social media sources might provide data in 10 different formats. And that means you have to write 10 different parsers to get all the data into your product. Gnip allows you to normalize all the social media data you want into one single format — Activity Streams — so you can collect all your results via one API and feed them into your product with just one parser.
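
To show why one format means one parser, here is a minimal sketch of pulling a few common fields out of an Activity Streams (JSON) activity. The field names follow the Activity Streams conventions (actor, verb, object, published); exact payloads vary by publisher, so treat this as an illustration rather than a complete parser.

```python
import json

def parse_activity(raw):
    """Extract a few common fields from an Activity Streams (JSON) activity."""
    activity = json.loads(raw) if isinstance(raw, str) else raw
    return {
        "published": activity.get("published"),
        "verb": activity.get("verb", "post"),
        "actor": activity.get("actor", {}).get("displayName"),
        "body": activity.get("object", {}).get("content"),
        "url": activity.get("object", {}).get("url") or activity.get("url"),
    }

# The same function handles a tweet, a Flickr upload, or a Facebook post once
# they have been normalized into Activity Streams.
example = {
    "published": "2011-09-08T16:02:00Z",
    "verb": "post",
    "actor": {"displayName": "example_user"},
    "object": {"content": "Checking out the new Gnip feeds"},
}
print(parse_activity(example))
```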

Hope this helps! If you’ve got additional questions to suggest for our list, don’t hesitate to drop us a note. We’d love to hear from you.

I want '*'

My favorite requirement from customers is the “I want all the data, from all the sources, for all of history, and for all of future” one. You’re never going to get it, from anyone, so reset your expectations. A few constructs fall out of this request.

Two Types of ‘feeds’

Firehoses

These are aggregate sources of data for a given publisher. They may, or may not, be a complete representation of that publisher’s data set. Everyone wants firehoses, but truth be told, there are very few of them in the wild, and those that do exist carry the less “valuable” data. Consider firehose access to provide a statistically relevant sample rather than a truly “complete” set of data.

Seeded Feeds

These encompass the majority of data sources, and they require that you know what you’re looking for, be it a keyword, a tag, a user name, a user ID, or a geo-location.

In either case you need to know what it is you’re after. Blind, unfettered access to a given publisher’s feed is a rarity, and it actually isn’t all that interesting in the end; you just think it is because someone else had the product idea first (e.g. the publisher you want all the data from, say, Twitter).

Historical Access

Storing and indexing lots of data is conceptually simple, yet hard to implement at scale; just ask any of the big-three search engines. You can stuff as much data as possible into a database, and “search” it offline, in order to meet most historical data access requirements, but weaving that into a variably accessed consumer application isn’t always easy. While storage costs are generally nil for today’s highly compressible data, the operational management costs of your locally stored data aren’t.

“Real-time” Access

Processing data in a manner other than that in which it originated causes an impedance mismatch. Stream-to-offline processing implies that you’ll have gaps in data due to queuing problems. Offline-to-stream suggests the same. Offline-to-offline and stream-to-stream are generally easy to get your head and code around, but be wary of overloading stream processing with too much work, as it then starts to feel like stream-to-offline. Once you enter that world, you need to solve parallel processing problems, in real time.

Regardless of access pattern, you can only introspect and access the data you initially seeded your sources with. If your seed was wrong, for example if you used the wrong set of users or keywords, processing the data doesn’t matter. Full circle to garbage in, garbage out.

If you find yourself making the opening request of this post to your team and/or a vendor, I suggest you don’t yet have the focus on your product or idea that you’ll ultimately need in order to be successful. Batten down the hatches, and get crisp about precisely what it is you want to build and precisely what data you need to do so. If you can do that, you will have a shot at success.