Guide to the Twitter API – Part 3 of 3: An Overview of Twitter’s Streaming API

The Twitter Streaming API is designed to deliver limited volumes of data via two main types of realtime data streams: sampled streams and filtered streams. Many users like the Streaming API because its streaming delivery gets data to you closer to realtime than the Search API does (which I wrote about last week). But the Streaming API wasn’t designed to deliver full-coverage results, so it has some key limitations for enterprise customers. Let’s review the two types of data streams accessible from the Streaming API.

The first type of stream is “sampled streams.” Sampled streams deliver a random sampling of Tweets at a statistically valid percentage of the full 100% Firehose. The free access level to the sampled stream is called the “Spritzer,” and Twitter currently has it set to approximately 1% of the full 100% Firehose. (You may have also heard of the “Gardenhose,” a randomly sampled 10% stream. Twitter used to provide increased access levels to businesses, but announced last November that it is not granting increased access to any new companies and is gradually transitioning its current Gardenhose-level customers to Spritzer or to commercial agreements with resyndication partners like Gnip.)

The second type of data stream is “filtered streams.” Filtered streams deliver all the Tweets that match a filter you select (e.g., keywords, usernames, or geographic boundaries). This can be very useful for developers or businesses that need limited access to specific Tweets.
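As a concrete illustration, here’s a minimal Python sketch (our own, not official Twitter client code) that builds the form body for a filtered-stream request. The parameter names `track`, `follow`, and `locations` are the ones the Streaming API’s filter endpoint accepts; the helper name and structure are ours:

```python
from urllib.parse import urlencode

def build_filter_params(keywords=None, user_ids=None, boxes=None):
    """Build the form body for a filtered-stream request.

    The filter endpoint accepts comma-separated values: 'track' for
    keyword phrases, 'follow' for user IDs, and 'locations' for
    longitude/latitude bounding boxes.
    """
    params = {}
    if keywords:
        params["track"] = ",".join(keywords)
    if user_ids:
        params["follow"] = ",".join(str(u) for u in user_ids)
    if boxes:
        # each box is (west, south, east, north) in lon/lat order
        params["locations"] = ",".join(
            ",".join(str(c) for c in box) for box in boxes
        )
    return urlencode(params)

body = build_filter_params(keywords=["coca cola", "pepsi"], user_ids=[12345])
# POST this body to the filter endpoint with your credentials.
```

From there, you hold the HTTP connection open and Twitter pushes matching Tweets down it as they happen, rather than you polling for them.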

Because the Streaming API is not designed for enterprise access, however, Twitter imposes some restrictions on its filtered streams that are important to understand. First, the volume of Tweets accessible through these streams is limited so that it will never exceed a certain percentage of the full Firehose. (This percentage is not publicly shared by Twitter.) As a result, only low-volume queries can reliably be accommodated. Second, Twitter imposes a query limit: currently, users can query for a maximum of 400 keywords and only a limited number of usernames. This is a significant challenge for many businesses. Third, Boolean operators are not supported by the Streaming API like they are by the Search API (and by Gnip’s API). And finally, there is no guarantee that Twitter’s access levels will remain unchanged in the future. Enterprises that need guaranteed access to data over time should understand that building a business on any free, public APIs can be risky.
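That 400-keyword cap means many enterprise watchlists simply don’t fit in a single stream. A tiny, purely illustrative sketch (ours, not Twitter’s) of checking a term list against the cap:

```python
STREAMING_KEYWORD_CAP = 400  # Twitter's current filtered-stream keyword limit

def split_at_cap(keywords, cap=STREAMING_KEYWORD_CAP):
    """Return (tracked, overflow): the keywords that fit in one
    filtered stream, and the ones a single stream cannot track."""
    return keywords[:cap], keywords[cap:]

brands = [f"brand-{i}" for i in range(1000)]
tracked, overflow = split_at_cap(brands)
# A 1,000-term watchlist leaves 600 terms untracked by one stream.
```

For a business monitoring thousands of brands, everything in `overflow` is data you never see.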

The Search API and Streaming API are great ways to gather a sampling of social media data from Twitter. We’re clearly fans over here at Gnip; we actually offer Search API access through our Enterprise Data Collector. And here’s one more cool benefit of using Twitter’s free public APIs: unlike premium Twitter feeds from Gnip and other resyndication partners, those APIs don’t prohibit you from displaying the Tweets you receive to the general public.

But whether you’re using the Search API or the Streaming API, keep in mind that those feeds simply aren’t designed for enterprise access. As a result, you’re using the same data sets available to anyone with a computer, your coverage is unlikely to be complete, and Twitter reserves the right to change the data accessibility or Terms of Use for those APIs at any time.

If your business dictates a need for full coverage data, more complex queries, an agreement that ensures continued access to data over time, or enterprise-level customer support, then we recommend getting in touch with a premium social media data provider like Gnip. Our complementary premium Twitter products include Power Track for data filtered by keyword or other parameters, and Decahose and Halfhose for randomly sampled data streams (10% and 50%, respectively). If you’d like to learn more, we’d love to hear from you at sales@gnip.com or 888.777.7405.

Guide to the Twitter API – Part 2 of 3: An Overview of Twitter’s Search API

The Twitter Search API can theoretically provide full coverage of ongoing streams of Tweets. That means it can, in theory, deliver 100% of Tweets that match the search terms you specify almost in realtime. But in reality, the Search API is not intended to support, and does not fully support, the repeated constant searches that would be required to deliver 100% coverage. Twitter has indicated that the Search API is primarily intended to help end users surface interesting and relevant Tweets that are happening now.

Since the Search API is a polling-based API, the rate limits that Twitter has in place impact the ability to get full-coverage streams for monitoring and analytics use cases. To get data from the Search API, your system may repeatedly ask Twitter’s servers for the most recent results that match one of your search queries. On each request, Twitter returns a limited number of results (for example, the “latest 100 Tweets”). If more than 100 Tweets matching a query have been created since your last request, some of the matching Tweets will be lost.

So . . . can you just make requests for results more frequently? Well, yes, you can, but the total number of requests you’re allowed to make per unit time is constrained by Twitter’s rate limits. Some queries are so popular (hello, “Justin Bieber”) that it can be impossible to make enough requests for that query alone to keep up with the stream. And that’s only the beginning of the problem: no monitoring or analytics vendor is interested in just one term; many have hundreds to thousands of brands or products to monitor.
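To make the coverage gap concrete, here’s a back-of-the-envelope model (our own illustration, not Twitter’s published math): if each response is capped at, say, 100 results, then any polling interval in which more than 100 matching Tweets are created silently loses the overflow.

```python
def missed_tweets(tweets_per_interval, max_results=100):
    """Given the number of matching Tweets created in each polling
    interval, return how many a capped response silently drops."""
    return sum(max(0, n - max_results) for n in tweets_per_interval)

# A quiet stream loses nothing, even polled at the same rate.
quiet = missed_tweets([40, 80, 100])
# A spike (say, a trending topic) blows past the per-response cap.
spiky = missed_tweets([90, 350, 120])
```

The lost Tweets are unrecoverable: by the time you poll again, the response window has already moved past them.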

Let’s consider a couple of examples to clarify. First, say you want all Tweets mentioning “Coca Cola” and only that one term. On a typical day there might be fewer than 100 matching Tweets per second, but if there’s a spike (say the term becomes a trending topic after a Super Bowl commercial), there will likely be more than 100 per second. If, because of Twitter’s rate limits, you’re only allowed to send one request per second, you will have missed some of the Tweets generated at the most critical moment of all.

Now, let’s be realistic: you’re probably not tracking just one term. Most of our customers are interested in tracking somewhere between dozens and hundreds of thousands of terms. If you add 999 more terms to your list, then you’ll only be checking for Tweets matching “Coca Cola” once every 1,000 seconds. And in 1,000 seconds, there could easily be more than 100 Tweets mentioning your keyword, even on an average day. (Keep in mind that there are over a billion Tweets per week nowadays.) So, in this scenario, you could easily miss Tweets if you’re using the Twitter Search API. It’s also worth bearing in mind that the Tweets you do receive won’t arrive in realtime because you’re only querying for the Tweets every 1,000 seconds.
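The arithmetic above can be captured in a couple of lines. This assumes the illustrative one-request-per-second budget from the example; real rate limits vary:

```python
def poll_interval_seconds(num_terms, requests_per_second=1.0):
    """With a fixed request budget shared across all tracked terms,
    each individual term gets polled only once every
    num_terms / requests_per_second seconds."""
    return num_terms / requests_per_second

# One term: a fresh request every second.
single = poll_interval_seconds(1)
# A 1,000-term watchlist: each term is checked every 1,000 seconds,
# during which far more than 100 matching Tweets can accumulate.
watchlist = poll_interval_seconds(1000)
```

Every term you add stretches the gap between checks on every other term, so coverage and freshness degrade together as the watchlist grows.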

Because of these issues related to the monitoring use cases, data collection strategies relying exclusively on the Search API will frequently deliver poor coverage of Twitter data. Also, be forewarned, if you are working with a monitoring or analytics vendor who claims full Twitter coverage but is using the Search API exclusively, you’re being misled.

Although coverage is not complete, one great thing about the Twitter Search API is the complex operator capabilities it supports, such as Boolean queries and geo filtering. Because of those operators, some people opt to use the Search API to collect a sampling of Tweets that match their search terms despite the limited coverage. And because these filtering features have been so well liked, Gnip has replicated many of them in our own premium Twitter API (made even more powerful by the full coverage and unique data enrichments we offer).

So, to recap, the Twitter Search API offers great operator support but you should know that you’ll generally only see a portion of the total Tweets that match your keywords and your data might arrive with some delay. To simplify access to the Twitter Search API, consider trying out Gnip’s Enterprise Data Collector; our “Keyword Notices” feed retrieves, normalizes, and deduplicates data delivered through the Search API. We can also stream it to you so you don’t have to poll for your results. (“Gnip” reverses the “ping,” get it?)
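For the curious, the deduplication step can be sketched like this. This is our own toy version, not the actual Enterprise Data Collector code: repeated polls of the same query return overlapping result pages, so you keep each Tweet ID the first time you see it.

```python
def dedupe(pages):
    """Merge overlapping Search API result pages, keeping each Tweet
    exactly once. Tweets are dicts with a unique 'id' field."""
    seen, merged = set(), []
    for page in pages:
        for tweet in page:
            if tweet["id"] not in seen:
                seen.add(tweet["id"])
                merged.append(tweet)
    return merged

# Two consecutive polls whose result windows overlap on Tweet 2:
pages = [[{"id": 1}, {"id": 2}], [{"id": 2}, {"id": 3}]]
unique = dedupe(pages)
```

Without this step, downstream analytics would double-count every Tweet that appears in more than one polling window.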

But the only way to ensure you receive full coverage of Tweets that match your filtering criteria is to work with a premium data provider (like us! blush…) for full-coverage Twitter Firehose filtering. (See our Power Track feed if you’d like more info on that.)

Stay tuned for Part 3, our overview of Twitter’s Streaming API coming next week…

Guide to the Twitter API – Part 1 of 3: An Introduction to Twitter’s APIs

You may find yourself wondering . . . “What’s the best way to access the Twitter data I need?” Well, the answer depends on the type and amount of data you are trying to access. Given that there are multiple options, we have designed a three-part series of blog posts that explains the differences between the coverage the general public can access and the coverage available through Twitter’s resyndication agreement with Gnip. Let’s dive in . . .

Understanding Twitter’s Public APIs . . . You Mean There is More than One?

In fact, there are three Twitter APIs: the REST API, the Streaming API, and the Search API. Within the world of social media monitoring and social media analytics, we need to focus primarily on the latter two.

  1. Search API – The Twitter Search API is a dedicated API for running searches against the index of recent Tweets.
  2. Streaming API – The Twitter Streaming API allows high-throughput, near-realtime access to various subsets of Twitter data (e.g., a 1% random sampling of Tweets, filtering for up to 400 keywords, etc.).

Whether you get your Twitter data from the Search API, the Streaming API, or through Gnip, only public statuses are available (and NOT protected Tweets). Additionally, before Tweets are made available to these APIs and to Gnip, Twitter applies a quality filter to weed out spam.

So now that you have a general understanding of Twitter’s APIs . . . stay tuned for Part 2, where we will take a deeper dive into understanding Twitter’s Search API, coming next week…


Gnip Client Libraries from Our Customers

Our customers rock. When they develop code to start using Gnip, they often share their libraries with us so that those libraries might be useful to future Gnip customers as well. Although Gnip doesn’t currently officially support any client libraries for access to our social media API, we do like to highlight some of our customers who choose to share their work.

In particular, here are a few Gnip client libraries that happy customers have developed and shared with us. We’ll be posting them in our Power Track documentation and you can also find them linked here:

Java
by Zauber
https://github.com/zaubersoftware/gnip4j

Python
by General Sentiment
https://github.com/vkris/gnip-python/blob/master/streamingClient.py

If you’ve developed a library for access to Gnip data and you’d like to share it with us at Gnip and other Gnip customers, then drop us a note at info@gnip.com. We’d love to hear from you.

30 Social Data Applications that Will Change Our World

Social media is popular — no surprise there. And as a result, there’s a huge amount of social media data in the world and every day the pool of data grows… not just a little bit, but enormously. For instance, just recently our partner Twitter blogged about their business growth and the numbers are staggering.

This social conversation data is valuable. Someday it will yield insights worth many millions, perhaps billions, of dollars for businesses. But the analyses and insights are only barely beginning to take shape. We hear from social media analytics companies every day and we see lots of interesting applications of this data. So… how can social media data be used? Here’s a partial list of social data applications that I believe will begin to take shape over the next decade or so:

  1. Product development direction
  2. Product feedback
  3. Customer service performance feedback
  4. Customer communications
  5. Stock market prediction
  6. Domestic/political mood analysis
  7. Societal trend analysis
  8. Offline marketing campaign impact measurement
  9. Word-of-mouth marketing campaign analysis
  10. URL virality analysis
  11. News virality analysis
  12. Domestic economic health indicator
  13. Linguistic analysis
  14. Educational achievement metric by time and locale
  15. Personal scheduling: see when your friends are busy
  16. Event planning: see when big events will happen in your community
  17. Online marketing
  18. Sales mapping & identification
  19. Consumer behavior analysis
  20. Internet safety implementation
  21. Counter-terrorism probabilistic analysis
  22. Disaster relief communication, mapping, and analysis
  23. Product development opportunity identification
  24. Competitive analysis
  25. Recruiting tools
  26. Connector, Maven, and Salesperson identification (to borrow Malcolm Gladwell’s terms)
  27. Cross-platform consumer alerting services
  28. Brand monitoring
  29. Business accountability ratings
  30. Product and service reviews

All of these projects can be built on public social media conversation data that’s legally and practically accessible. All of the necessary data is (or is on the roadmap to be) accessible via Gnip. But access to the data is only step one — the next step is building great algorithms and applications to draw insights from that data. We leave that part to our customers.

So, here’s to the analysts who are working with huge social data sets to bring social data analyses and insights to fruition and ultimately make the barrage of public data that surrounds us increasingly useful. Here at Gnip we’re grateful for your efforts and eager to find out what you learn.

What It's Like to Work at Gnip

Gnip is hiring. So you might wonder… what’s it really like to work at Gnip?

For me, working at Gnip means working with people I trust and like who bring good ideas, logical thought, hard work, and diverse experiences to the conversation. It means working to grow a product I believe is good for the world; a product that facilitates access to valuable information.

It’s the feeling of energy and responsibility that comes with working at a company that’s seminal in our industry and growing quickly. It’s the spirit of hard work and encouragement, but also fun, that our team collectively strives for.

It’s the startup factor that allows for automatic approval for any tool that might make it easier for me to get my job done, whether that’s purchasing software or midday coffee at The Cup. It’s being encouraged to innovate and having the latitude to solve problems in new ways.

So come join us. Our door is open and we’d love to hear from you.

25 Million Free Tweets on Power Track

Last week we announced Twitter firehose filtering. This week we’re celebrating the news with Free Tweets for all. Sign up by February 28th to enjoy no licensing fees on your first 25 million Tweets in your first 60 days using Power Track. 

Power Track offers powerful filtering of the Twitter firehose, guaranteeing 100% Tweet delivery. For instance, filter by keyword or username to access all Tweets that match the criteria you care about and have all of the matching results delivered to you in realtime via API. Power Track supports Boolean operators, can match your filtering criteria even within expanded URLs, and has no query volume or traffic limitations, helping you access all of the data you want. And it’s only available from Gnip, currently the only authorized distributor of Twitter data via API.

The licensing fee for Power Track is $.10 per 1,000 Tweets, but we’re waiving that fee for the first 25 million Tweets in 60 days for Power Track customers who sign up by February 28th. 1-year agreement and Gnip data collector fee still required.
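If you want to estimate your bill under the promotion, the arithmetic works out like this. This is our own sketch from the figures above; actual charges are governed by your agreement (and the data collector fee is separate):

```python
def power_track_fee(tweets, free_tweets=25_000_000, rate_per_1000=0.10):
    """Licensing fee in dollars under the promotion: the first
    25 million Tweets are waived, and the remainder bills at
    $.10 per 1,000 Tweets."""
    billable = max(0, tweets - free_tweets)
    return billable / 1000 * rate_per_1000

# Stay inside the promotional allotment and the licensing fee is zero;
# each additional million Tweets after that runs $100 in licensing.
promo_only = power_track_fee(25_000_000)
one_million_over = power_track_fee(26_000_000)
```

In other words, the waived 25 million Tweets are worth $2,500 in licensing fees at the standard rate.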

Learn More or Contact Us to start testing Power Track for firehose filtering. Cheers!