6 Steps To A Successful Social Media Survey

Aug 23, 2010

A couple of weeks ago I wrote about how to vet the deluge of social media surveys and studies you encounter every day for accuracy, representativeness and credibility. You now know the difference between saying “35% of Americans” and “35% of respondents to this survey.” You also know the latter is almost always what you are getting from these reports. Congratulations – you’re now ready to learn how to generate your own social media survey data.

There are a lot of self-service survey tools out there that you could use. If you’ve looked into this sort of thing, you’ve no doubt encountered “The Monkey.” The thing is, writing, fielding and analyzing market research is a professional skill, just like practicing law or medicine. Your degree of need should determine your willingness to DIY. If you needed surgery, or were facing murder charges, I doubt very much you’d use “Surgery Monkey” or “Death Row Monkey.” That’s pretty much all I have to say about that. If your needs are less dire, however, and you’ve ever wanted to field your own DIY web poll, customer survey or other opinion research, here is your six-step guide to “defeating the monkey” and making your survey the best it can be.

Step Zero: The Bad News

Before we start with the nuts and bolts, you have to come to terms with this: the self-selected, Twitter-promoted, free-for-all survey you typically create with DIY web tools is not going to be very good on a lot of levels. Sorry, but that’s just the way it is – despite the plethora of free or nearly-free tools out there for creating surveys, quality data is worth what you pay for it. What you are going to get from this exercise is not going to be projectable to the general population – it’s probably not even going to be projectable to your friends. Unless you can model the responses of people who didn’t take your survey, or come at least reasonably close to proper representative sampling of your target population, you can’t draw conclusions about anyone who didn’t respond. That means your 500-person, non-representative sample of Twitter users will tell you literally nothing about the 99,999,500 Twitter users you didn’t talk to. Zip. That isn’t a function of sample size – it’s a function of sample quality.

You can, however, use it to draw conclusions about the people you did talk to – in other words, the people who took your survey. So your ultimate goals are to sharply define the population you are going to study, and to get as much of that population as humanly possible to take your survey. OK – that was the hardest step: changing your mindset. The rest is pretty straightforward 🙂

Step One: You need boundaries

Your best chance at usable data is to have a pretty good handle on the population you’d like to survey. Your self-selected web poll will do a pretty lousy job of representing the general population, or even the people who use a given service or site, unless you have a way to contact or at least sample every member of that population. If you can’t do that (and, unless you are ponying up some cash here, you probably can’t), then your best bet is to try to contact or sample a discretely bounded population – like your listener/user/customer database. If you survey your email database, you at least know exactly how large your population is, whether it’s 1,000 people or 100,000, and can thus accurately gauge the success of your survey. If you get 500 responses to a random survey you post on Twitter, I submit you know nothing. If, however, you get 500 responses to a survey promoted to your 20,000-person email database, then we can at least treat those responses as reflective of something of value – the people in your database who will respond to your solicitations. Get a decent response from those people, and you’ve got something.
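To make that concrete, here is a minimal Python sketch using the hypothetical numbers above (a 20,000-person email database that yields 500 completes). The point is simply that a bounded population lets you state a real response rate, which a random Twitter blast never can:

```python
def response_rate(completes, population_size):
    """Share of a known, bounded population that completed the survey."""
    return completes / population_size

# Hypothetical figures from the example above.
print(f"{response_rate(500, 20_000):.1%} of the email database responded")  # 2.5%
```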

Step Two: Define your goal

Really, since you are going to be prevailing on the good graces of the Interwebs to take this survey, your prime directive is to keep the survey as brutally short as possible. That’s why the “goal” in the title of this step is singular, not plural. It’s better to fire off multiple, mercifully short surveys in the service of differing goals than it is to field one honkin’ omnibus survey that only 50 people complete. So, if you aren’t really going to do anything with questions like “company size” or “how many hours per day do you use the Internet,” leave ’em out. Your survey should focus only on the actionable information you absolutely need, and the minimum classification data (demographics/firmographics/psychographics) you need to make sense of it. No more.

Step Three: Work from general to specific

OK – this is a big topic, but it’s super important. Writing a good questionnaire is an art as much as it is a science. One basic tenet is that your sequence of questioning, like a good trial lawyer’s, shouldn’t lead the witness by introducing facts not in evidence. This means that if you ask a specific question early on about Foursquare, don’t be surprised if a later question asking respondents to name the web services they use reports a high percentage of Foursquare users – you’ve planted the suggestion.

Start with the most general, basic questions first, and leave questions about specifics until the very end. If you are fielding a survey about social media usage, you might think about it like this: start by asking respondents open-ended questions (fill-in-the-blank, not multiple choice) about which social media services they use, then ask consistent questions about the services the respondents named, and finally, close by asking specific questions about specific services, named or not.

Of course, using open-ended questions (in which respondents write in their answers instead of selecting from a list of options) creates extra work for you. For each such question you ask, you are going to have to go into each completed survey and code every open-ended response, normalizing “FB” into “Facebook” and “Nano” into “Apple iPod Nano,” so that you can then perform a quantitative analysis on the results. Yes, it’s work. That’s what makes it worth doing.

The alternative is to ask multiple-choice questions, but know that the danger with these is failing to provide enough options. A question about which MP3 player a respondent uses might get away with a handful of choices: A) An Apple iPod, B) A Microsoft Zune, C) A Sansa-branded player, etc. A question with a greater potential range of answers, however, will be problematic – if your choices aren’t extensive enough to cover the vast majority of possible answers, you’re likely to either get a lot of unusable “Other” responses or, worse, trigger the Stockholm Syndrome-like condition in which the beleaguered respondent tires of hunting for the “best” answer and just selects the first choice on the page. That, my friends, is useless. So give your respondents lots of choices on multiple-choice questions, or use open-ended (fill-in-the-blank) questions where you have doubts.

End your survey with demographic/classification questions. These questions (age/gender/income/etc.) work best at the end because they are simple and quick – and, as researchers have determined over the years, placing them at the end is a genuine best practice for getting people to finish your survey, which is pretty important. Keep your classification scheme consistent with the same gold standard we pros use: the U.S. Census (or your own country’s equivalent). That means mirroring those demographic and firmographic classification schemes exactly.

Final best practice: don’t ask people their age, or even to select which “age category” they fall into. Ask for their birthday instead. Ranges are useless to you, really (you can’t calculate means or medians from ranges), and simply asking for age produces more goofy responses than asking for a birthday. Take my word for it.
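To illustrate the coding step, here is a minimal Python sketch. The raw answers and the normalization table are hypothetical, and a real project would need a much larger table, but the idea is the same: map every open-ended response to a canonical label before you count anything.

```python
from collections import Counter

# Hypothetical normalization table: raw open-ended answers -> canonical labels.
NORMALIZE = {
    "fb": "Facebook",
    "facebook": "Facebook",
    "the facebook": "Facebook",
    "nano": "Apple iPod Nano",
    "ipod nano": "Apple iPod Nano",
    "4sq": "Foursquare",
    "foursquare": "Foursquare",
}

def code_response(raw):
    """Map one open-ended answer to a canonical label, or flag it for review."""
    key = raw.strip().lower()
    return NORMALIZE.get(key, f"UNCODED: {raw.strip()}")

# Hypothetical raw answers pulled from completed surveys.
raw_answers = ["FB", "Foursquare", "the facebook", "Nano", "MySpace"]
counts = Counter(code_response(a) for a in raw_answers)
for label, n in counts.most_common():
    print(label, n)
```

Anything that comes back flagged as uncoded still has to be handled by hand, which is exactly the work described above.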

Step Four: Pay the people

No matter how short your survey is, provide an incentive for respondents to complete it. Your goal is to increase your “hit rate” (we call it “incidence”) by having the highest possible percentage of your target population respond to the survey. Remember – the quality of your data is worth what you pay for it. When we do online surveys of any length, we often provide a cash incentive to every respondent. You may not have the budget for that, but you should carve out something – even a chance for one or a few respondents to win a prize is better than nothing. In this sense, you are a direct marketer. Your goal is to get the highest response rate possible to your “offer.” You ARE selling something here – you’re selling the benefits to your respondents of taking the survey 🙂 So make it as attractive an offer as you can.

Step Five: Pilot and test

If you are surveying an email database, you are in great shape for this step. This gives you an opportunity to test a few different questions and/or incentives with 100 people at a time to see if there are any real winners or losers in your approach. If you are struggling with question wordings, for example, just send 2-3 versions to different batches of people and see if there are any significant differentials. You can also offer one batch an Amazon gift certificate, and another batch a chance for some other prize to see what your differential response is. If you have a good list, this step is kinda mandatory.
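If you want to check whether the differences between batches are more than noise, a standard two-proportion z-test is one reasonable way to do it. This is my sketch, not a prescription from the steps above, and the batch sizes and completion counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(completes_a, sent_a, completes_b, sent_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = completes_a / sent_a, completes_b / sent_b
    pooled = (completes_a + completes_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical pilot: batch A got the gift-certificate offer, batch B the prize draw.
p_a, p_b, z, p_value = two_proportion_z_test(22, 100, 8, 100)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z={z:.2f}  p={p_value:.3f}")
```

With batches of 100, only fairly large differences will register as statistically significant, so don’t over-interpret small gaps.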

Step Six: Report your data – warts and all

This means being clear that the results pertain only to those who responded to the survey. When we do work at my day job that requires representative sampling, we make sure the data is weighted to some accepted benchmark (Census data, credible third-party information, etc.), but you probably don’t have anything credible to weight your data to in the first place, so just report it as is. In other words, don’t say “56% of Plancast users are male.” Say “60% of the respondents to this survey were female, but 56% of Plancast users in this survey were male.” It isn’t kosher to project outwards from your data, but you can point out interesting contrasts and disparities within your own dataset.

Finally, a pet peeve. A while back I saw the results of a survey published by a pretty well-known social media “strategy guru” that summarized a web survey of his subscriber database. The sample was under 100 people; yet the results were reported to two decimal places (e.g., “45.72% of respondents were male”). Unless your sample is well into six figures, round your data off to the nearest whole number. Publishing results to two decimal places on a survey with a 10% margin of error is a rookie move.

I hope this has been helpful! I won’t claim that this is the final word on the subject, but as part of the team behind one of America’s biggest survey projects – the National Election Exit Polls – I can at least steer you away from the potholes. Questions? Comments? Other great tips? Fire away below – I’m happy to help.
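P.S. If you’re wondering where a figure like that “10% margin of error” comes from, here is a minimal sketch of the textbook calculation for a simple random sample (a self-selected web poll doesn’t really satisfy that assumption, but it’s a useful sanity check on how many digits your data can support):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n,
    at observed proportion p (p = 0.5 is the worst case)."""
    return z * sqrt(p * (1 - p) / n)

print(f"n = 100: +/- {margin_of_error(100):.1%}")  # roughly +/- 9.8%
print(f"n = 500: +/- {margin_of_error(500):.1%}")  # roughly +/- 4.4%
```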
