No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments

AJPS Author Summary of “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments” by John V. Kane and Jason Barabas

There have been two notable trends within the social sciences in recent decades: (1) the use of experiments to test ideas, and (2) the use of samples collected online (rather than in person or via telephone). These concurrent trends have greatly expanded opportunities to conduct high-quality research, but they raise an important concern: are the people taking these online experiments actually paying attention?

This question is vitally important. Fielding experimental studies is costly and requires substantial preparation, and the validity of an experiment’s results hinges on respondents’ willingness to attend to the information they are given. Specifically, if subjects are not paying attention to the study, this will likely bias experimental effects toward zero, potentially leading the researcher to conclude, incorrectly, that the underlying theory is wrong and/or that the design of the experiment was defective.
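To see the intuition behind that attenuation, consider a minimal simulation (ours, not from the article): if inattentive respondents simply ignore the treatment, the estimated difference in means shrinks roughly in proportion to the share of the sample that is inattentive. The sample sizes, effect size, and attentiveness shares below are purely illustrative assumptions.

# Illustrative only: how inattention attenuates an estimated treatment effect toward zero.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000                # respondents per arm (hypothetical)
true_effect = 0.50       # assumed true effect of the treatment on the outcome

for attentive_share in (1.0, 0.7, 0.4):
    attentive = rng.random(n) < attentive_share
    control = rng.normal(0.0, 1.0, n)
    # Inattentive respondents never process the treatment, so it moves them by zero.
    treated = rng.normal(0.0, 1.0, n) + true_effect * attentive
    estimate = treated.mean() - control.mean()
    print(f"attentive share {attentive_share:.0%}: estimated effect = {estimate:.2f}")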

Researchers can gain leverage on this problem by including a so-called “manipulation check” (MC). MCs can be used to confirm whether an experimental treatment succeeded in affecting the key causal variable of interest or, more generally, whether respondents were attentive to information featured in a survey. However, in practice, researchers rarely report having implemented an MC in their experiments. Moreover, even when MCs are used, they differ markedly in terms of form, function, and placement within the study.

Our article attempts to clarify how MCs can be used in experimental research. Based upon content analyses of published experiments, we identify three main categories of MCs. We then highlight the merits of one such category: factual manipulation checks (FMCs). FMCs ask respondents factual questions about content featured in an experiment. Unlike Instructional Manipulation Checks (IMCs) and what we refer to as Subjective Manipulation Checks (SMCs), FMCs enable researchers to identify individuals who were (in)attentive to content in the experimental portion of a study. Such information can help researchers understand the reasons underlying their experimental findings. For example, if a researcher found no significant effects in the experiment, but also found that only a small share of the sample correctly answered the FMC, this would suggest that the result has less to do with the underlying theory and more to do with respondents’ attentiveness to the key information in the study (or lack thereof).
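As a purely hypothetical sketch of this kind of diagnostic (the column names treated, outcome, and fmc_correct are ours, not the article's), one could report the FMC pass rate alongside the difference in means. Splitting results by FMC correctness is descriptive only, since attentiveness is measured after treatment assignment.

# Hypothetical diagnostic sketch; assumes a pandas DataFrame with 0/1 columns
# "treated" and "fmc_correct" and a numeric "outcome". Names are illustrative.
import pandas as pd

def diagnose(df: pd.DataFrame) -> None:
    def diff_in_means(d: pd.DataFrame) -> float:
        return d.loc[d["treated"] == 1, "outcome"].mean() - d.loc[d["treated"] == 0, "outcome"].mean()

    print(f"FMC pass rate:               {df['fmc_correct'].mean():.1%}")
    print(f"Overall difference in means: {diff_in_means(df):.3f}")
    # Descriptive split only: FMC correctness is a post-treatment quantity.
    print(f"Among FMC-passers:           {diff_in_means(df[df['fmc_correct'] == 1]):.3f}")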

Replicating a series of published experiments, we then demonstrate how FMCs can be constructed and empirically investigate whether the placement of an FMC (i.e., immediately before versus after the outcome measure) is consequential for (1) treatment effects and (2) answering the FMC correctly. We find little evidence that placing an FMC before an outcome measure significantly distorts treatment effects. Likewise, we find no evidence that placing an FMC immediately after an outcome significantly reduces respondents’ ability to answer the FMC correctly. We therefore conclude that researchers stand to benefit from employing FMCs in their studies, and that placing the FMC immediately after the outcome measure appears to be optimal. Such practices will equip researchers with a greater ability to diagnose their experimental findings, accurately assess respondents’ attentiveness to the experiment, and avoid any possibility of biasing treatment effects.

About the Authors: John V. Kane is an Assistant Professor at the Center for Global Affairs at New York University, and Jason Barabas is a Professor in the Department of Political Science at Stony Brook University. Their research, “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments” (https://doi.org/10.1111/ajps.12396), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.


The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.