How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It

The AJPS Workshop article “How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It” (https://doi.org/10.1111/ajps.12357) by Jacob Montgomery, Brendan Nyhan, and Michelle Torres is summarized by the authors below. 


 

Identifying the causal effect of a treatment on an outcome is not a trivial task. It requires knowledge and measurement of all the factors that cause both the treatment and the outcome, which are known as confounders. Political scientists increasingly rely on experimental studies because they allow researchers to obtain unbiased estimates of causal effects without having to identify all such confounders or engage in complex statistical modeling.

The key characteristic of experiments is that treatment status is randomly assigned. When this is true, the difference between the average outcomes of observations that received the treatment and those that did not is an unbiased estimate of the treatment's causal effect.
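This logic can be sketched in a small simulation (a hypothetical illustration, not from the article): when treatment is assigned by coin flip, the simple difference in mean outcomes recovers the true effect.

```python
import random

random.seed(0)

# Hypothetical setup: a true treatment effect of 2.0 on a noisy outcome.
n = 100_000
true_effect = 2.0

treated, control = [], []
for _ in range(n):
    baseline = random.gauss(0, 1)            # untreated potential outcome
    assigned = random.random() < 0.5          # random assignment (coin flip)
    outcome = baseline + (true_effect if assigned else 0.0)
    (treated if assigned else control).append(outcome)

# Difference in means between treated and control observations
diff_in_means = sum(treated) / len(treated) - sum(control) / len(control)
print(round(diff_in_means, 2))  # close to the true effect of 2.0
```

Because assignment is independent of the baseline outcome, the two groups are comparable in expectation and no confounders need to be measured or modeled.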

Of course, this description of experiments is idealized. In the real world, things get messy. Some participants ignore stimuli or fail to receive their assigned treatment. Researchers may also wish to understand the mechanism that produced an experimental effect or to rule out alternative explanations.

Unfortunately, researchers seeking to address these types of concerns often resort to common but problematic practices including dropping participants who fail manipulation checks; controlling for variables measured after the treatment such as potential mediators; or subsetting samples based on variables measured after the treatment is applied, which are known as post-treatment variables. Many applied scholars seem unaware that these post-treatment conditioning practices can ruin experiments and that we should not engage in them.
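The danger of these practices can be illustrated with a hypothetical sketch (my own example, not from the article): suppose the treatment has no effect on the outcome, but passing a manipulation check depends on both the treatment and an unobserved trait that also drives the outcome. Dropping participants who fail the check then manufactures a spurious "effect."

```python
import random

random.seed(1)

# Hypothetical simulation: the TRUE treatment effect on the outcome is zero.
n = 200_000
rows = []
for _ in range(n):
    attentive = random.gauss(0, 1)            # unobserved trait
    assigned = random.random() < 0.5          # random assignment
    # Passing the check depends on BOTH treatment and attentiveness,
    # so "passed" is a post-treatment variable.
    passed = attentive + (1.0 if assigned else 0.0) > 0.5
    outcome = attentive                       # driven by the trait, not treatment
    rows.append((assigned, passed, outcome))

def diff_in_means(data):
    t1 = [y for t, _, y in data if t]
    t0 = [y for t, _, y in data if not t]
    return sum(t1) / len(t1) - sum(t0) / len(t0)

full_sample = diff_in_means(rows)                       # unbiased: near 0
check_passers = diff_in_means([r for r in rows if r[1]])  # biased subset
print(round(full_sample, 2), round(check_passers, 2))
```

In the full sample the estimate is near zero, as it should be; among check passers, the control group is selected to be unusually attentive while the treated group is not, so conditioning on the post-treatment variable produces a sizable negative estimate from a null effect.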

Though the dangers of post-treatment bias have long been recognized in the fields of statistics, econometrics, and political methodology, there is still significant confusion in the wider discipline about its sources and consequences. In fact, we find that 46.7% of the experimental articles published between 2012 and 2014 in the American Journal of Political Science, American Political Science Review, or Journal of Politics engage in post-treatment conditioning.

As we show in our article, these practices contaminate experimental analyses and distort treatment effect estimates. Post-treatment bias can affect our estimates in any direction and can be of any size. Moreover, there is often no way to provide finite bounds or eliminate it absent strong assumptions that are unlikely to hold in real-world settings. We therefore provide guidance on how to address practical challenges in experimental research without inducing post-treatment bias. Changing our research practices to avoid conditioning on post-treatment variables is one of the most important ways we can improve experimental practice in political science.


About the Authors: Jacob M. Montgomery is an Associate Professor in the Department of Political Science at Washington University in St. Louis, Brendan Nyhan is a Professor in the Ford School of Public Policy at the University of Michigan, and Michelle Torres is an incoming Assistant Professor in the Department of Political Science at Rice University. Their research “How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It” (https://doi.org/10.1111/ajps.12357) appeared in the July 2018 issue of the American Journal of Political Science and is currently available with Free Access.

 


The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.
