Best Practices: Improving Data Quality in Online Quant Surveys

When it comes to online surveys, data quality is king. With the influx of new ways to source respondents for research projects, we must constantly improve the ways we safeguard data quality. While nothing replaces an experienced, knowledgeable Research Analyst manually combing through the data, there are a number of additional tactics we incorporate here at Lab42 that provide us and our clients with peace of mind that the data we collect is both valid and insightful.

Use Open-Ended Questions

  • Using open-ended questions is one of the best ways to catch poor-quality respondents and responses. While it takes more time on the back end to manually review all of these answers, it also allows the researcher to objectively flag and remove any respondents who answer the question with gibberish or in a way that does not make sense (a simple automated pre-screen is sketched after this list).
    • We tend to incorporate two types of verbatim questions into our surveys:
      • Unstructured open-ended questions: These are more subjective to the individual respondent, and we typically set a minimum word count to ensure that the responses are thought out (e.g., "What is the best part about shopping online?").
      • Structured open-ended questions: These are more basic, objective questions with a single allowable answer (e.g., "What is the name of our current President?"), but the respondent must actually type the answer into a text box rather than choosing from a predetermined list.
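
For illustration, here is a minimal sketch of how such a pre-screen might be automated before the manual review, assuming responses arrive as plain Python strings. The minimum word count, the gibberish heuristic, and the accepted-answer list are all illustrative assumptions, not our production rules:

```python
import re

MIN_WORDS = 5  # assumed minimum word count for unstructured open ends


def flag_unstructured(response: str) -> bool:
    """Flag an unstructured open end that is too short or looks like gibberish."""
    words = response.strip().split()
    if len(words) < MIN_WORDS:
        return True
    # Crude gibberish heuristic: a long "word" with no vowels, or a word that is
    # one character repeated (e.g. "qwrtpsdfgh", "aaaaaa"). Flagged responses
    # still go to a human reviewer rather than being removed automatically.
    for w in words:
        lowered = w.lower()
        if len(lowered) > 6 and not re.search(r"[aeiou]", lowered):
            return True
        if len(lowered) > 3 and len(set(lowered)) == 1:
            return True
    return False


def flag_structured(response: str, accepted_answers: set) -> bool:
    """Flag a structured open end that does not match any accepted answer."""
    return response.strip().lower() not in accepted_answers


# Example usage; the accepted answers are whatever the researcher keys in.
print(flag_unstructured("qwrtpsdfgh zxcvb"))                              # True
print(flag_unstructured("I like being able to compare prices at home"))   # False
print(flag_structured("washington", {"george washington", "washington"})) # False
```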

Attention Filters

  • Using attention filters is a fairly basic approach: you clearly instruct respondents to select a specific option, for example with a question that reads "Select Option A for this question." You can use single-select (radio button, dropdown), multi-select (checkbox), and even grid questions to accomplish this (a simple check is sketched below).
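
As a rough illustration, checking an attention filter can be as simple as comparing each respondent's answer to the instructed option. The question name "q12" and the stored response format below are assumptions made for the sketch:

```python
# Hypothetical responses keyed by respondent ID; "q12" is an assumed name
# for the attention-filter question whose instructed answer is "Option A".
responses = [
    {"id": 101, "q12": "Option A"},
    {"id": 102, "q12": "Option C"},
    {"id": 103, "q12": "Option A"},
]

INSTRUCTED_ANSWER = "Option A"

failed_attention = [r["id"] for r in responses if r["q12"] != INSTRUCTED_ANSWER]
print(failed_attention)  # [102] -- candidates for removal or closer review
```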

Red Herring Questions

  • Using red herring questions is also a fairly common approach to weeding out poor-quality respondents. These are questions that include answer options that either do not exist or do not make sense for the question.
    • For example, you can set up a checkbox question that reads "Which of the following diaper brands have you used in the past 6 months?" and scatter a few fake options among the real ones. Respondents who select the fake options can then be terminated from the survey (a simple screen is sketched below).
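
Here is a minimal sketch of how those selections might be screened, assuming each respondent's multi-select answer is stored as a set of brand names; "NappyNest" and "SoftCloud Plus" are invented stand-ins for the fake options:

```python
# Fake brands that were planted in the answer list (invented names).
FAKE_BRANDS = {"NappyNest", "SoftCloud Plus"}

# Hypothetical multi-select answers keyed by respondent ID.
answers = {
    201: {"Pampers", "Huggies"},
    202: {"Pampers", "NappyNest"},
    203: {"Luvs"},
}

# Anyone whose selections overlap with the planted options gets flagged.
flagged = [rid for rid, brands in answers.items() if brands & FAKE_BRANDS]
print(flagged)  # [202]
```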

Speed Traps

  • Some surveys are longer than others, and some respondents are simply faster survey takers. However, setting up a speed trap for respondents who finish too quickly is a good way to eliminate poor-quality data from your survey. If your survey is expected to take 15-20 minutes to complete and someone finishes it within 4 minutes, you can be fairly confident that the respondent did not answer the questions thoughtfully (a simple cutoff is sketched below).
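
One common way to implement a speed trap, sketched below under the assumption that completion times are recorded in minutes, is to flag anyone who finishes in less than some fraction of the median completion time. The one-third cutoff here is illustrative, not a universal rule:

```python
from statistics import median

# Hypothetical completion times in minutes, keyed by respondent ID.
completion_times = {301: 18.5, 302: 4.0, 303: 16.0, 304: 21.0}

MEDIAN_FRACTION = 1 / 3  # assumed cutoff: under a third of the median is "too fast"

cutoff = median(completion_times.values()) * MEDIAN_FRACTION
speeders = [rid for rid, minutes in completion_times.items() if minutes < cutoff]

print(round(cutoff, 2))  # 5.75 for this sample
print(speeders)          # [302]
```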

Inconsistent Logic Checks

  • Using inconsistent logic checks within a survey helps to make sure that respondents' answers are actually internally consistent. For example, if a respondent says that they are 18-24 years old and hold a PhD, we can be fairly confident that they are not telling the truth. Similarly, if a respondent answers in one question that they purchased a mobile phone in the past 3 months and in a subsequent question indicates a phone model that was not even available during those 3 months, we know to flag that respondent as having inconsistent data (both checks are sketched below).
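
A minimal sketch of those two checks follows, assuming the age bracket, education level, phone model, and release-date lookup are coded as shown; the field names, model names, and release dates are hypothetical:

```python
from datetime import date

# Assumed lookup of release dates for the phone models offered in the survey.
RELEASE_DATES = {"PhoneX 12": date(2024, 11, 1), "PhoneX 11": date(2023, 9, 1)}


def inconsistent_age_education(age_bracket: str, education: str) -> bool:
    """The 18-24 / PhD combination from the example above."""
    return age_bracket == "18-24" and education == "PhD"


def impossible_purchase(model: str, survey_date: date) -> bool:
    """Model reported as purchased recently but not yet released at survey time."""
    released = RELEASE_DATES.get(model)
    return released is not None and released > survey_date


respondent = {"id": 401, "age": "18-24", "education": "PhD", "phone": "PhoneX 12"}
survey_date = date(2024, 6, 15)

if inconsistent_age_education(respondent["age"], respondent["education"]):
    print(f"Flag {respondent['id']}: age and education do not line up")
if impossible_purchase(respondent["phone"], survey_date):
    print(f"Flag {respondent['id']}: reported model was not yet available")
```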

There is no one-size-fits-all approach for weeding out poor-quality respondents, and as researchers, we should give respondents the benefit of the doubt that they are actually trying to answer the questions truthfully. Using a combination of these tactics within your online surveys will help to ensure that your data is of the highest quality, and it will also save you money on incentives that you do not have to pay out to poor-quality respondents.

Jon Pirc

Jon has spent his professional career as an entrepreneur and is constantly looking to disrupt traditional industries by using new technologies. After working at Sandbox Industries as a 'Founder in Residence', Jon founded Lab42 in 2010 as a way to make research more accessible to smaller companies. Jon has a Bachelor of Science in Psychology from Northern Illinois University.
