Methods to Identify Inattentive Respondents

Summary by Adam J. Berinsky, Michele F. Margolis and Michael W. Sances

The Internet has dramatically expanded the ability of researchers to survey ordinary citizens about politics. Not only are respondents easier to reach online, but Internet surveys can be “self-administered” by respondents, obviating the need for a human interviewer to read questions and record answers. While exciting, the increased use of online surveys also brings a new threat to data quality: shirker respondents.

Anyone who has randomly clicked through an Internet survey without carefully reading the questions has been a shirker. Most of us have probably done this at some point, perhaps in the course of getting a free pizza or access to a news article. Yet this random answering can be devastating to a research project.

While inattentive respondents have long been a problem, systematic methods for detecting shirking on surveys are relatively new. In our paper, “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys”, we discuss Screener questions as a method for unmasking shirker respondents. These questions – which are drawn from research in psychology on measuring attention – work by instructing respondents to choose particular answers on key questions. For instance, one of our questions reads as follows:

We would like to get a sense of your general preferences.

Most modern theories of decision making recognize that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables can greatly impact the decision process. To demonstrate that you’ve read this much, just go ahead and select both red and green among the alternatives below, no matter what your favorite color is. Yes, ignore the question below and select both of those options.

What is your favorite color?

Someone speeding through the survey might skip ahead to the final question and simply click their favorite color, or perhaps a random one. Only those who read the full question closely will know to pick both “red” and “green” as their choices. Researchers can then see who did not answer the question as instructed and flag respondents whose other survey responses may be suspect.
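
As a rough illustration of this filtering step (not the code used in the paper), a researcher could flag Screener failures with a few lines of Python and pandas. The data frame, the favorite_color column, and its coding below are all hypothetical.

```python
import pandas as pd

# Hypothetical survey export: each respondent's selections on the
# "favorite color" Screener, stored as a list of chosen options.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "favorite_color": [["red", "green"], ["blue"], ["red"], ["green", "red"]],
})

# A respondent "passes" only by selecting exactly red and green,
# as the Screener instructions demanded.
df["passed_screener"] = df["favorite_color"].apply(
    lambda choices: set(choices) == {"red", "green"}
)

# Share of respondents who read closely enough to follow the instructions.
pass_rate = df["passed_screener"].mean()
print(f"Screener pass rate: {pass_rate:.0%}")
```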

By including multiple Screener questions on several online surveys, we were able to answer some key questions about inattention and its consequences. First, how large is the problem of inattentive respondents? Unfortunately, we find it to be quite substantial: the proportion of respondents passing the Screener ranges from about 60 to 70% depending on the question. That means that up to 40% of respondents fail to read survey questions carefully.

Second, are shirker respondents harmful to data quality? We find that they are. We asked respondents a battery of questions about their policy preferences. On two of the questions, higher responses corresponded to the more liberal position, whereas a third question was coded so higher responses reflected more conservative positions. Respondents who failed the Screener were significantly less likely to notice that the response scale flipped, and as a result appeared much less ideologically coherent. We similarly find stronger experimental effects among the subsample of individuals who passed the Screener.
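
As a rough illustration of this kind of check, and not the exact analysis in the paper, one could reverse-code the flipped item and compare how strongly the policy items correlate within each attention group. The toy data and column names below are hypothetical.

```python
import pandas as pd

# Toy data for illustration only: two liberal-coded policy items on 1-5 scales,
# one conservative-coded item, and the Screener flag from the previous step.
df = pd.DataFrame({
    "passed_screener": [True, True, True, False, False, False],
    "policy_1": [1, 5, 3, 2, 4, 1],   # higher = more liberal
    "policy_2": [2, 5, 3, 5, 1, 3],   # higher = more liberal
    "policy_3": [5, 1, 3, 5, 2, 4],   # higher = more conservative
})

# Reverse-code the conservative item so all three items point the same direction.
df["policy_3_recoded"] = 6 - df["policy_3"]

# Attentive respondents should show more coherent (more strongly correlated) answers.
for passed, group in df.groupby("passed_screener"):
    corr = group[["policy_1", "policy_2", "policy_3_recoded"]].corr()
    print(f"Passed Screener = {passed}")
    print(corr.round(2), "\n")
```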

Third, are certain groups more or less likely to pay attention? This last question is especially important because it has implications for what researchers should do with inattentive respondents. Throwing these respondents out of the sample might improve data quality; yet if attention is related to other characteristics, the resulting sample will include a disproportionate number of respondents with demographic traits associated with higher levels of attention. We found that women and older respondents were more likely to pay attention, whereas black respondents were less likely. We therefore encourage researchers to be transparent by presenting empirical results for both attentive and inattentive respondents and considering how differences between the subsamples may affect the results.
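
One simple way to follow that advice is to report the same quantities side by side for both groups. The sketch below does this for sample composition using made-up demographic variables; the column names are assumptions, not those in the replication data.

```python
import pandas as pd

# Made-up example data: basic demographics plus the Screener flag.
df = pd.DataFrame({
    "passed_screener": [True, True, False, True, False, True],
    "age":             [62, 45, 23, 58, 31, 40],
    "female":          [1, 0, 0, 1, 0, 1],
    "black":           [0, 0, 1, 0, 1, 0],
})

# Report sample composition separately for attentive and inattentive respondents,
# so readers can see who would be dropped by filtering on the Screener.
composition = df.groupby("passed_screener")[["age", "female", "black"]].mean()
print(composition.round(2))
```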

New technologies have enabled researchers to collect opinion data easily and cheaply, yet they also raise the risk of shirker respondents. Luckily, Screeners present a useful tool for identifying inattentive respondents.

 

About the Authors:  Adam J. Berinsky is a professor in the Department of Political Science at the Massachusetts Institute of Technology; Michele F. Margolis is an Assistant Professor of American Politics at the University of Pennsylvania; and Michael W. Sances is a Postdoctoral Fellow at the Center for the Study of Democratic Institutions at Vanderbilt University. Their research, “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys,” appeared in the July 2014 issue of the American Journal of Political Science. Replication data for this study are available at: http://thedata.harvard.edu/dvn/dv/ajps.


The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.