An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions

The forthcoming article “An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions” by Bryn Rosenfeld, Kosuke Imai, and Jacob N. Shapiro is summarized by the authors here:

Posing a direct question is not always the best way to get the real answer on public opinion surveys. When respondents are asked about their racial biases or sexual behaviors, for example, they may well answer in ways they imagine will be more acceptable to the interviewer. And since surveys are generally voluntary, respondents might skip over uncomfortable questions altogether. Policies to address issues as diverse as corruption, public health, and support for radical groups require detailed knowledge of private behaviors and attitudes. How can we get reliable estimates?

Several approaches to eliciting truthful responses have shown promise. Our forthcoming paper in the AJPS is the first direct validation and comparison of their effectiveness. An anti-abortion referendum held during Mississippi’s 2011 General Election provided a unique setting to investigate the merits of these methods. Because Mississippi makes its voter rolls public, we know who voted and, at the county level, how they voted. We thus know what the estimates from each method should be. Moreover, since we know views on abortion are sensitive and pre-election polls showed signs of social desirability bias, we expected direct questions to perform poorly.

There are three main methods used to elicit information about sensitive attitudes or behaviors while maintaining confidentiality. In list experiments, respondents receive a list of items and answer only with the total number that apply to them; researchers then compare responses from lists containing the sensitive item against responses from control lists without it. Endorsement experiments couch a sensitive behavior or attitude in an innocuous question. An uncontroversial item (in our study, whether the respondent voted for a certain politician) is paired with the sensitive item (that the politician supported the anti-abortion referendum) for a subset of respondents. Researchers statistically compare responses to the paired and unpaired questions to infer respondents’ attitudes.
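To make the list-experiment comparison concrete, here is a minimal sketch in Python. The simulated counts, variable names, and the simple difference-in-means estimator are our illustrative assumptions, not the paper’s actual data or estimators.

```python
import numpy as np

# Illustrative data only: 500 control respondents saw four innocuous
# items; 500 treatment respondents saw the same four plus the
# sensitive item. Counts here are simulated, not the study's data.
rng = np.random.default_rng(0)
control_counts = rng.integers(0, 5, size=500)    # 0-4 items apply
treatment_counts = rng.integers(0, 6, size=500)  # 0-5 items apply

# The difference in mean item counts estimates the proportion of
# respondents to whom the sensitive item applies, even though no
# individual ever reveals an answer to that item.
est = treatment_counts.mean() - control_counts.mean()

# Standard error of a difference between two independent means.
se = np.sqrt(treatment_counts.var(ddof=1) / len(treatment_counts)
             + control_counts.var(ddof=1) / len(control_counts))

print(f"estimated prevalence of sensitive item: {est:.3f} (SE {se:.3f})")
```

The endorsement experiment works in the same treatment-versus-control spirit, comparing evaluations of the politician with and without the attached referendum position, though in practice this comparison relies on statistical measurement models rather than a raw difference of means.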

Finally, in the randomized response technique, the surveyor asks respondents to flip a coin (or roll a die) to influence their response. In our case, respondents answered yes if the coin came up heads or if they voted for the anti-abortion referendum. Since the coin forces a yes answer half the time regardless of the truth, researchers can work backwards from the observed yes-rate to estimate the true proportion: subtract the 50% the coin contributes, then double what remains.
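That back-calculation can be sketched in a few lines; again, the data are simulated and the simple moment estimator stands in for the paper’s model-based analysis. Under this forced-“yes” design, the probability of a yes answer is 1/2 + 1/2·p, where p is the true proportion, so p is estimated by 2·ȳ − 1:

```python
import numpy as np

# Hypothetical 0/1 answers under the forced-"yes" coin-flip design:
# heads (prob 1/2) -> answer "yes"; tails -> answer truthfully.
rng = np.random.default_rng(1)
true_p = 0.40                               # assumed true prevalence
n = 1000
truthful_yes = rng.random(n) < true_p
heads = rng.random(n) < 0.5
answers = (heads | truthful_yes).astype(float)

# Pr(yes) = 0.5 + 0.5 * p  =>  p = 2 * ybar - 1
ybar = answers.mean()
p_hat = 2 * ybar - 1

# Doubling ybar also doubles its sampling noise (Var(p_hat) is
# 4 * Var(ybar)); that is the precision cost of the privacy coin.
se = 2 * np.sqrt(ybar * (1 - ybar) / n)

print(f"p_hat = {p_hat:.3f} (SE {se:.3f})")
```

The variance comment makes the paper’s bias-precision trade-off visible: the noise that protects respondents’ privacy is exactly what inflates the estimator’s standard error relative to a direct question.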

Put differently, the list experiment masks individual responses through aggregation, the endorsement experiment by exploiting respondents’ evaluation bias, and the randomized response method by adding noise.

We find that all three outperform direct questions in terms of bias, although all sacrifice precision. When we ask the direct question, we overestimate voting for the referendum by about 25 percentage points (a bias of 0.249). The bias drops to 0.180 in the list experiment, 0.028 in the endorsement experiment, and 0.015 in randomized response.

Our results confirm that indirect methods can dramatically reduce non-response and social desirability biases relative to asking directly about sensitive topics. Our findings also suggest researchers should reconsider the randomized response method: a simple version plus a practice question can provide far more accurate information than direct questions, with little loss of precision.


