The Due Diligence of Sharing Social Media Facts

Jul 30, 2010

I’m going to make a bet: at some point today, you ran across some new study, survey or research report about social media. If you spent any time browsing your feeds, grazing tweets or even just reading Mashable, you probably couldn’t avoid it. To say we are bombarded with research about social media is an understatement – most weeks, we’re pummeled by it.

The thing is, not all of it is good. In fact, some of it is downright cringeworthy. At the very least, even properly executed research likely has caveats, modifiers or other design details that don’t fit in 140 characters. So what is a responsible Twitter user to do when confronted with a new, potentially interesting piece of data? What is a responsible business to do when considering these stats and how they affect its marketing?

I see a lot of data in my day job with Edison Research. When Jason Keath suggested I tackle this subject, I gotta admit – I struggled. The last thing I want to do is turn this space into a statistics lecture, which would probably be as fatally tedious for me as it would be for you. So, instead, I’m going to walk you through a little of the basic thought process I use whenever I see a new study tweeted or blogged about. Here are the five things I look for before I’ll even retweet or blog about data, let alone use it for decision support:

1. Where’s the source?

I almost never retweet someone else’s coverage of a new study or survey unless they a) add something new in terms of analysis, and b) link to the original data. For instance, this week a gajillion people retweeted coverage of the Forrester location-based services (LBS) report that revealed only 1% of Americans regularly use such services. I wanted to – I really did – but I didn’t, because I couldn’t provide a link to the original report.

True, you can go to Forrester’s website to read a summary of the topic, but all of the details about this report lie behind Forrester’s paywall, so I can’t apply this little process. It might be a very credible study (in fact, if it is based on a significant national sample, it probably is), but because I can’t verify the four things below, I’ll set it aside until I can. In this specific case, I want to see the actual question (see number four) before I make up my mind about what, exactly, they are referring to as a “location-based service.” After all, our definitions may differ, right?

Again, I’m not disparaging this particular report, but when I retweet or pass along data to you, I do so because I find the stats believable – which means going to the source, 100% of the time.

2. Where is the methodology statement?

Any credible piece of social media data should have a methodology statement prominently displayed that indicates exactly who was interviewed, how the data was collected, and when the survey was conducted. If this cannot be found (or is somehow indecipherable), then ask for it. If the response is anything but clear and direct, take a pass. Anybody can put up a free web survey, get 100 early adopters/happy database denizens to take it, and call it a “survey” – but you don’t have to read it.

ALL research has limitations, caveats and imperfections. Knowing exactly who was asked, how they were asked and when they were asked gives you the ability to process the information, consider those limitations, and fit the findings in with other data, observations and experiences. Credible research should make both the strengths AND the flaws of the study crystal clear.

By the way, finding out when a study was conducted isn’t only about the age of the data. If the field period of the survey is too long, it introduces a longitudinal bias into the data – after all, things can change rapidly in the world of social media. So this survey, which indicated that “only” 12% of Twitter updates are about brands, may have been based on a sample of 1,800 tweets, but those tweets were pulled over a six-month period. At some point, that stops being a single 1,800-tweet sample and starts being six monthly samples of roughly 300 tweets each, mushed together.
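To make that concrete, here’s a toy sketch in Python. The monthly rates below are invented for illustration (they are not from the actual study); the point is simply that one pooled, six-month number can look perfectly stable even while the underlying monthly figures are moving.

```python
# Toy illustration: a pooled six-month figure can hide month-to-month drift.
import random

random.seed(42)

# Hypothetical monthly "true" rates of brand-related tweets, drifting upward.
monthly_rates = [0.08, 0.09, 0.11, 0.13, 0.15, 0.16]
tweets_per_month = 300  # six months x 300 tweets = one "1,800-tweet sample"

pooled_hits = 0
for month, rate in enumerate(monthly_rates, start=1):
    hits = sum(1 for _ in range(tweets_per_month) if random.random() < rate)
    pooled_hits += hits
    print(f"Month {month}: {hits / tweets_per_month:.1%} of sampled tweets mention a brand")

pooled_share = pooled_hits / (len(monthly_rates) * tweets_per_month)
print(f"Pooled six-month figure: {pooled_share:.1%}")
# The single pooled number lands somewhere in the middle, even though the
# underlying monthly rate doubled over the field period.
```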

3. Who was surveyed?

Once you have the methodology statement, your next step is to figure out exactly who or what was sampled to generate the data. Sample size is important, yes, but even more important is who was surveyed and, more crucially, who wasn’t. There are basically two kinds of samples: representative, and the other kind. The strict definition of a random, representative sample is essentially that any member of a given population (whether that’s the US population, or Twitter users, or Ford owners) has an equal, non-zero chance of being picked. Now, that doesn’t mean that everyone selected actually takes the survey, but the big point is that they had the same chance to take it as anyone else. This sort of survey costs a bit more money than the typical bootstrapped social media startup has to spend on such things, so you’ll rarely see this beast in the social media wild.

Non-representative surveys aren’t bad by definition – they just aren’t something you can project from. Often, the problem with these surveys is not in the data itself, but in how the data is reported. For example, lots of folks tweeted about this story that ran in the New York Times about tracking America’s mood through Twitter. In this instance, the “sample” consisted not of people, but of tweets: researchers coded 300 million status updates by mood and time of day to project when Americans are happy or cranky. Now, 300 million is a gigantic sample. No issues there. Had the study – and the New York Times – referred to the results as “Tracking The Mood Of Twitter Users,” we’d have no problem. However, over 90% of Americans do not use Twitter – and we even have data suggesting that Twitter users are more optimistic about some things (specifically the economy) than the general population, which hints at some real differences between the Twitterati and…most of the country. If you are one of the vast majority of Americans not posting updates to Twitter, you had a zero percent chance of having input into this study of America’s mood – and I don’t know about you, but that would put me in a bad mood.

A non-representative sample (which describes the vast majority of the data you’ll encounter) is always on stable footing when it refers to its sample as “respondents” or “people surveyed,” not “people” or “Americans.” Well-executed studies of this sort may not allow you to characterize “everybody,” but they may have something valuable to say about the people who took the survey, at least.
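If you like seeing the arithmetic, here’s a back-of-the-envelope sketch in Python. Every number in it is made up (the mood figures especially, along with the assumption that roughly one in ten Americans tweets); it exists only to show why a group with a zero percent chance of selection can’t be rescued by a gigantic sample.

```python
# Every figure here is invented purely for illustration.
share_on_twitter = 0.10      # assumed: roughly 1 in 10 Americans tweeting
mood_twitter_users = 0.62    # hypothetical share of Twitter users feeling upbeat
mood_everyone_else = 0.48    # hypothetical share among everyone else

# What a Twitter-only "sample" reports, no matter how many tweets you code:
twitter_only_estimate = mood_twitter_users

# What the full population would look like under these assumptions:
population_mood = (share_on_twitter * mood_twitter_users
                   + (1 - share_on_twitter) * mood_everyone_else)

print(f"Twitter-only estimate of 'America's mood': {twitter_only_estimate:.0%}")
print(f"Whole-population figure (toy numbers):     {population_mood:.0%}")
# Non-users had a zero chance of selection, so no sample size -- not even
# 300 million tweets -- closes that gap. Bigger is not the same as representative.
```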

4. What was the exact question?

Here’s another one of my pet peeves: data reporting that takes shortcuts with the actual question wording. To rip another example from the headlines, a few days ago there was an article about a survey purporting to show that no one would pay for Twitter if the service charged its users. So, to use my little process here, I first consulted the source, which turned out to be the highly credible Annenberg Center’s Digital Future study for 2010. However, the actual question referred to people who had used a free service “like Twitter” (which they report as 49% of online Americans), and it found that zero percent of those users would pay to use a similar service.

Now, it should be pretty obvious from Twitter’s own self-reported user numbers that 49% of online Americans are not on Twitter specifically, so the presence of “similar services” weighs heavily here as a potentially confounding variable. To be clear, there’s nothing wrong with the survey – it’s a great one. The issue is in how it’s tweeted: “No One Will Pay For Twitter!” If you had a sizable, representative sample of Twitter users specifically, you might see something different. Otherwise, it’s easy to see how the Twitter users who would pay for the service could get lost as a rounding error. Without knowing how many actual Twitter users are in that dataset (and thus being able to figure the margin of error on that particular subset), I’d be reluctant to characterize this finding as specific to Twitter in 140 characters. That doesn’t mean it isn’t valid – it just means you need a little more information before you can be sure. Anyway, nobody ever wants to pay for anything they currently get for free, so this is a pretty tough survey question to bank on.
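I can’t see the subgroup counts from here, but this is the kind of quick check I’d want to run before tweeting the headline: the standard margin-of-error formula for a proportion, applied to some invented sample sizes (they are not from the Annenberg study).

```python
# Rough 95% margin of error for a sample proportion: 1.96 * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Both counts below are invented; they are not from the Annenberg study.
full_sample = 1800        # hypothetical total respondents
twitter_subgroup = 150    # hypothetical number of actual Twitter users among them

assumed_share_who_would_pay = 0.05  # assume ~5% would pay, for illustration

for label, n in [("full sample", full_sample), ("Twitter-user subgroup", twitter_subgroup)]:
    moe = margin_of_error(assumed_share_who_would_pay, n)
    print(f"{label} (n={n}): {assumed_share_who_would_pay:.0%} +/- {moe:.1%}")
# With a small subgroup, a true "would pay" figure of a few percent can easily
# be indistinguishable from the headline's "zero percent."
```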

5. How does it compare to data you trust?

Usually, when you see data that seems to run counter to other data you trust, it’s because the two studies are apples and oranges. Either the sample is different or the question is different, and that’s usually enough to explain the discrepancy.

Occasionally, you’ll get something that’s counter-intuitive because it’s just flat-out wrong. For instance, years ago I saw the results of a self-selected online survey in which a certain style of rock music appeared to be extremely popular with women – in fact, the ratio of women to men in the sample was 4:1. Sadly, the company that did the study saw nothing wrong with reporting this, but what early online researchers knew at the time was that women were about four times as likely as men to complete self-selected online surveys, and the authors of this study didn’t weight the data accordingly. The statistical term for that is #fail.

In any case, if you get a research result that doesn’t seem to square with other data you’ve previously accepted, look very carefully for differences in sample design or the exact question. They’re probably there. When you know that, you can file the information away and consider it accordingly. If you can’t find a substantive difference in method but the data still doesn’t square, do what I do – ignore the data. There’s no shortage of data.

Finally, I’ll leave you with a bonus thought. I see a lot of data tweeted (like the aforementioned Forrester LBS study) that disparages sites, services or trends because the numbers are small. Snapshots are just that – static descriptors of a moving target. If every one of those 1% sent me a dollar, this would likely be my last blog post. Seek out tracking/trend data where you can, so you can see not only where a number is, but where it’s been, before you hazard a guess as to where it’s going. One percent of America may sound like a small number, but you wouldn’t want them over for dinner.

Useful? As I said, I didn’t want this to be a scholarly dissertation on probabilistic sampling, but rather a practical thought process to use when you encounter new data. You might have other criteria (who paid for the study, for instance) that are useful to you, so this isn’t meant to be an exhaustive list. That said, I’ll answer anything I can here in the comments, so fire away!
