AJPS Author Summary: Front-Door Difference-in-Differences Estimators

The forthcoming AJPS article, “Front-Door Difference-in-Differences Estimators,” by Adam N. Glynn and Konstantin Kashin, is now available for Early View and is summarized here by its authors:

How can we assess the effects of a treatment or program if we have no suitable control units? An absence of suitable controls can occur when (a) treatment cannot be withheld for ethical, political, or business reasons; (b) treatment is administered at the population level; (c) the outcome variable can only be measured for the treated units; or (d) the available controls are clearly not comparable to the treated units (non-comparability might mean a lack of overlap in some cases, or a clear violation of the parallel trends assumption in others).

In this paper, we develop front-door difference-in-differences estimators, a method for estimating (or bounding) treatment effects when comparable control units are not available. The basic idea is that when some treated individuals do not comply with their treatment, we can use these “noncompliers” as proxies for control units. Although such an approach will often lead to biased estimates, we demonstrate that by applying the approach twice, we can sometimes correct this bias. In other cases, we demonstrate that we can put bounds on the effect.
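The logic of comparing compliers to noncompliers, and then differencing out their baseline gap using a pre-treatment period, can be illustrated with simulated data. This is only a stylized sketch under assumed data-generating values (a 70% compliance rate, a fixed baseline gap, and a constant effect of 2.0); it is not the authors’ actual estimator, identification conditions, or data.

```python
# Stylized sketch of the front-door difference-in-differences idea on
# simulated data. All numbers and variable names here are illustrative
# assumptions, not the estimator or data from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Everyone is assigned treatment; some units do not take it up.
complier = rng.random(n) < 0.7
true_effect = 2.0
baseline_gap = 1.0  # time-invariant selection: compliers differ at baseline

# Pre-period outcome: only the baseline gap separates the two groups.
y_pre = baseline_gap * complier + rng.normal(0.0, 1.0, n)
# Post-period outcome: same baseline gap, plus the effect for compliers.
y_post = baseline_gap * complier + true_effect * complier + rng.normal(0.0, 1.0, n)

# Naive front-door comparison (post-period only): contaminated by selection.
naive = y_post[complier].mean() - y_post[~complier].mean()

# Applying the same comparison in the pre-period, where no effect exists,
# isolates the selection gap; differencing removes it.
placebo = y_pre[complier].mean() - y_pre[~complier].mean()
front_door_did = naive - placebo

print(f"naive estimate:          {naive:.2f}")   # close to 3.0 (effect + gap)
print(f"front-door DiD estimate: {front_door_did:.2f}")  # close to 2.0
```

In this simulation the bias correction works because the complier/noncomplier gap is the same in both periods; the paper spells out the actual assumptions required and the cases where only bounds are available.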

As proof of concept, we first demonstrate that we can recover the experimental benchmark from a randomized evaluation of a job-training program. Specifically, we show that using only the treated units from the experiment, we can tightly bracket the experimental estimate using our technique. Note that because the experimental treated units are exchangeable with the experimental control units, this exercise demonstrates that if the treatment had been given to all individuals (instead of randomly assigned), we would have been able to provide tight bounds on the effect.

In a second application, we use Florida voter history files to estimate the effects of a statewide early voting program. Because the estimate does not rely on data from other states (which did not have an early voting program), we don’t have to assume comparability across states (although we do have to make different assumptions, which are detailed in the paper). Our results suggest that the program had small positive effects on turnout for at least part of the population. This provides some counterevidence to a recent AJPS article that found early voting programs to have some negative effects on turnout.

More broadly, the technique developed in this paper should provide a means of evaluating treatments and programs that previously could not be evaluated. It should also provide a robustness check when control units may not be comparable.

“Front-Door Difference-in-Differences Estimators” is now available for Early View and will appear in a forthcoming issue of the American Journal of Political Science.

 

 


The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.
