The forthcoming article “Adaptive Experimental Design: Prospects and Applications in Political Science” by Molly Offer-Westort, Alexander Coppock and Donald P. Green is summarized by the author(s) below.
In his 1980 essay for The American Statistician, “We Need Both Exploratory and Confirmatory,” John Tukey asserted that the advancement of science requires both exploratory and confirmatory data analysis to develop a research program from an idea, to a question, to a design, to (hopefully) an answer. Tukey framed these approaches as complementary; we propose that by combining modern machine learning tools with trusted methods for experimental design and analysis, we can integrate confirmatory and exploratory analysis in a coherent, principled way.
Indeed, we already combine exploratory and confirmatory analysis systematically in the social sciences, albeit often informally. When a researcher runs a pilot that tests multiple versions of a treatment intervention and then implements only the most effective version in the main trial, this is integrating exploratory and confirmatory analysis.
In this paper, we provide an introduction for political scientists to adaptive experimental designs, widely used in industry settings, which facilitate a principled approach to selecting among alternative interventions. These designs use a class of algorithms known as multi-armed bandits to dynamically allocate greater assignment probabilities to the best-performing interventions. (We note that “best” is determined by the researcher’s objectives and may be formalized in different ways.)
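To give a flavor of how a multi-armed bandit reallocates assignment probabilities, here is a minimal sketch of Thompson sampling, one widely used bandit algorithm, assuming binary outcomes and a uniform Beta prior. The arm count, true success rates, and batch sizes below are hypothetical, chosen only for illustration:

```python
import numpy as np

def thompson_sample(successes, failures, batch_size, rng):
    """Assign a batch of subjects to arms via Thompson sampling.

    Each arm's success probability gets a Beta posterior under a
    uniform prior; each subject is assigned to the arm with the
    highest sampled posterior draw.
    """
    n_arms = len(successes)
    draws = rng.beta(successes + 1, failures + 1, size=(batch_size, n_arms))
    return draws.argmax(axis=1)

# Toy simulation with hypothetical true success rates for four arms.
rng = np.random.default_rng(0)
true_rates = np.array([0.10, 0.12, 0.18, 0.11])
succ = np.zeros(4)
fail = np.zeros(4)
for _ in range(20):                      # 20 batches of 50 subjects each
    arms = thompson_sample(succ, fail, batch_size=50, rng=rng)
    outcomes = rng.random(arms.size) < true_rates[arms]
    np.add.at(succ, arms, outcomes)      # tally successes per arm
    np.add.at(fail, arms, ~outcomes)     # tally failures per arm
print("assignments per arm:", succ + fail)
```

Running this simulation, allocation drifts toward the arm with the highest underlying success rate (here, arm 2) as evidence accumulates, which is exactly the adaptive behavior described above.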
We highlight the conditions under which such designs outperform conventional static experimental designs. In general, when one version of treatment is clearly the most effective, the algorithm will quickly increase allocation to that arm, facilitating more precise estimation of outcomes under that intervention. This precision, however, comes at the cost of decreased allocation to suboptimal treatments, resulting in less precise estimation for those arms. On the other hand, if no treatment clearly outperforms the others within the duration of the experiment, we may lose efficiency as the algorithm equivocates across multiple candidates for the “best” treatment.
We demonstrate the design and analysis of an adaptive experiment in a practical application: determining and evaluating the most effective ballot measures for two policies, an increase to the minimum wage and a right-to-work law.
While social scientists may care about learning and evaluating the best version of treatment, they often care as much or more about comparing that treatment with a control condition. We propose a novel adaptive algorithm tailored to this goal, which allocates an increasing proportion of subjects to both the best arm and the control condition, improving the precision with which we estimate the average treatment effect of the best-performing arm.
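As a hedged sketch of one way such a control augmentation could work (not necessarily the exact rule in the paper): run Thompson sampling over the treatment arms only, then give the control arm the same assignment share as the leading treatment arm and renormalize. The function name, renormalization rule, and running tallies below are illustrative assumptions:

```python
import numpy as np

def control_augmented_probs(successes, failures, rng, n_draws=5000):
    """Sketch of a control-augmented allocation rule (assumed form).

    Treatment arms (indices 1..k) receive probability equal to their
    posterior probability of being the best treatment; the control arm
    (index 0) is then given the same share as the leading treatment
    arm, and all shares are renormalized to sum to one.
    """
    n_arms = len(successes)
    draws = rng.beta(successes + 1, failures + 1, size=(n_draws, n_arms))
    best = draws[:, 1:].argmax(axis=1) + 1        # best treatment per draw
    p = np.bincount(best, minlength=n_arms) / n_draws
    p[0] = p[1:].max()                            # control matches the leader
    return p / p.sum()

# Hypothetical running tallies: arm 0 is control, arms 1-4 are treatments.
rng = np.random.default_rng(1)
succ = np.array([10., 9., 12., 25., 8.])
fail = np.array([90., 91., 88., 75., 92.])
probs = control_augmented_probs(succ, fail, rng)
arms = rng.choice(len(probs), size=50, p=probs)   # assign the next batch
print(np.round(probs, 3))
```

Under this rule, as the bandit concentrates on the best treatment arm, the control arm’s allocation grows in lockstep, which is what drives the precision gain for the best arm’s average treatment effect.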
We apply this control-augmented algorithm to a study of factual misperceptions, learning which interventions are most effective at inducing survey respondents to provide correct answers to factual questions about economic conditions.
Adaptive designs impose greater complexity on study implementation and analysis than their conventional static counterparts. However, we demonstrate that they may also reward researchers with considerable payoffs. They are also highly relevant in public policy and health research, where researchers may have ethical obligations to minimize subjects’ exposure to ineffective, or even harmful, interventions.
About the Author(s): Molly Offer-Westort is a post-doctoral fellow at the Stanford Graduate School of Business, Alexander Coppock is Assistant Professor of Political Science at Yale University, and Donald P. Green is the J. W. Burgess Professor of Political Science at Columbia University. Their research “Adaptive Experimental Design: Prospects and Applications in Political Science” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.