Paths of Recruitment: Rational Social Prospecting in Petition Canvassing

In the following blog post, the authors summarize their American Journal of Political Science article, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing”:

 When U.S. Representative Gwen Moore (D-WI) prepared her 2008 reelection bid as Wisconsin’s first black member of Congress (representing Milwaukee’s 4th Congressional District), her campaign faced the task of gathering nominating paper signatures for submission to Wisconsin’s Government Accountability Board.  While this might have been an opportunity to travel throughout the largely Democratic district performing campaign outreach and mobilization, the canvassers working on Moore’s behalf took a different approach: they went primarily to her most supportive neighborhoods, which also happened to be the part of the congressional district that Moore had represented in the State Senate until 2004.  Unsurprisingly, canvassers focused their attention on majority-black neighborhoods throughout Northwest Milwaukee.  As time passed, the canvassers relied increasingly on signatures gathered from Moore’s core constituency.

The geographically and socially bounded canvassing carried out by Moore’s campaign is suggestive of a broader trend in how political recruiters search for support, and it holds lessons that expand upon prevailing models of political recruitment.  Political recruiters do not only seek out supporters who share common attributes and socioeconomic backgrounds. They also act in response to their geographic milieu, and they update their choices in light of experience.

In our paper “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing,” we develop these insights while elaborating a new model of political recruitment that draws lessons from the experiences of petition canvassers in multiple geographic and historical contexts. We test our model using original data we gathered in the form of geocoded signatory lists from a 2005-2006 anti-Iraq War initiative in Wisconsin and an 1839 antislavery campaign in New York City.  Examining the sequence of signatures recorded in these petitions, we have been able to reconstruct canvassers’ recruitment methods – whether they walked the petition door-to-door or went to a central location or meeting place to gather signatures – as well as the path they travelled when they did go door-to-door.  We find that canvassers were substantially more likely to go walking in search of signatures in neighborhoods where residents’ demographic characteristics were similar to their own. In the case of the middle-class, predominantly white Wisconsin anti-war canvassers, this meant staying in predominantly white and middle-class neighborhoods when going door-to-door. Furthermore, canvassing appeared to follow a rational process in which canvassers displayed sensitivity to their costs: in areas where they struggled to find signatures, for example, they were more likely to quit searching.
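The reconstruction step can be illustrated with a small sketch. This is purely hypothetical (the function names, distance thresholds, and coordinates are illustrative assumptions, not the procedure used in the paper): given geocoded signatures in signing order, the distance between consecutive signatures separates sheets gathered at a fixed location from door-to-door walks.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_canvass(signatures, walk_step_m=200.0, stationary_m=10.0):
    """Classify a petition sheet from the steps between consecutive signatures.

    signatures: list of (lat, lon) pairs in signing order.
    Returns 'stationary' (central signing location), 'door-to-door', or 'mixed'.
    """
    steps = [haversine_m(*a, *b) for a, b in zip(signatures, signatures[1:])]
    if not steps or all(s <= stationary_m for s in steps):
        return "stationary"       # signatures barely move: a meeting place
    if all(s <= walk_step_m for s in steps):
        return "door-to-door"     # short, steady steps: a walking route
    return "mixed"                # long jumps mixed in: multiple modes
```

On real petition data the full step-length distribution, not two fixed thresholds, would drive the classification, but the logic is the same: near-zero movement between signatures suggests a central location, while short, steady steps suggest a walked path.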

Understanding how political recruiters find supporters for a political candidate or cause is crucial because recruitment determines who participates in politics. If canvasser strategies reach only a limited set of recruits, then swathes of Americans may be less likely to participate.  Our paper sheds new light on the campaign dynamics that feed this inequality.

About the Authors: Clayton Nall is an Assistant Professor of Political Science at Stanford University. Benjamin Schneer is an Assistant Professor in the Department of Political Science at Florida State University. Daniel Carpenter is Allie S. Freed Professor of Government in the Faculty of Arts and Sciences, and Director of Social Sciences at the Radcliffe Institute for Advanced Study at Harvard University. Their paper “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (/doi/10.1111/ajps.12305) appears in the January 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (/doi/10.1111/ajps.12366) are currently free to access through April 2019.

When Common Identities Decrease Trust: An Experimental Study of Partisan Women


AJPS Author Summary by Samara Klar of the University of Arizona

With a record number of women running for the 2020 Democratic nomination, questions will no doubt arise as to the likelihood that a Democratic woman might entice female Republican voters to support a woman from the opposing party. Each time that a woman has run for national office (for example, Sarah Palin as a vice presidential candidate in 2008 or Hillary Clinton as a presidential candidate in 2008 and 2016), political spectators asked: Will women voters “cross the aisle” to vote for a woman?

Yet, each time, we have seen no evidence that women from either party are willing to do so. Indeed, there is very little evidence at all that women from the mass public form inter-party alliances based on their shared gender identity. This might seem surprising – particularly to those familiar with the Common In-Group Identity Model.

The Common In-Group Identity Model argues that an overarching identity (in this case, being a woman) can unite two competing groups (in this case, Democrats and Republicans). Social psychologists demonstrated these effects in an array of “minimal group settings” and others have found that it appears to hold true in “real-world” settings as well. Why, then, are Democratic women and Republican women reluctant to support one another based on their shared gender identity? I set out to investigate the conditions under which the Common In-Group Identity Model holds and whether it might (or might not) apply to American women who identify as Democrats and Republicans.

A key condition of the Common In-Group Identity Model is that the members of both rival groups must hold a common understanding of what it means to identify with their overarching shared identity. Without this, it simply is not a shared identity at all.

Based on existing work, I expected that Democratic women and Republican women, in fact, hold very different views of what it means to be a woman. To test this, I asked a bipartisan sample of 3000 American women how well the word feminist describes them on a scale ranging from 1 (Extremely well) to 5 (Not at all). Democratic women overwhelmingly identify themselves as feminists: their mean response was 2.47 (somewhere in between Very Well [2] and Somewhat Well [3]). Republican women, on the other hand, do not view themselves as feminists: their mean response was a 3.8 (closer to Not Very Well [4]).

I also asked these women to describe how a typical Republican woman and a typical Democratic woman might view feminism. Women from both sides of the aisle are astonishingly accurate in their estimates of how co-partisan and opposing partisan women feel about this issue. There is a clear and accurate perception that Democratic women think of themselves as feminist and that Republican women do not. In sum, being “a woman” is not an identity group that Democratic and Republican women can agree on – and they are well aware of this divide.

If Democratic women and Republican women do not share a common understanding of what it means to be a woman, then their gender should not unite them. In fact, as scholars have shown in other settings, they should actually be driven further apart when their gender becomes salient. This is what I set out to test.

With a survey experiment, I randomly assigned a large sample of women to read a vignette about either a woman or a man, who identifies with either their own party or the other party, and who supports either an issue that makes gender salient or one that does not. I then asked respondents to evaluate this fictitious character.

My results show that gender does not unite women from opposing parties but, in fact, increases their mutual distrust when gender is salient. To be more specific, I find that – when gender is salient – women hold more negative views of women from the opposing party than they do of men from the opposing party. When gender was not salient, however, women no longer penalized women more than they penalized men for identifying with the opposing party.

My work helps us to understand why we do not tend to find political solidarity among women who identify with opposing parties: not only do they disagree about politics but they also tend to disagree about their gender identity. Making gender salient thus exacerbates the divide.

I hope this study also helps to add nuance to our collective understanding of identity politics. Demographic identity groups are not homogeneous voting blocs. This lesson is not exclusive to women but should be taken into account when we think through the political behavior of any subset of the American public. To an outsider, it might appear that a group of individuals objectively shares a common identity, but if they do not hold a common understanding of what that identity means to them, then they do not share an identity at all. If we wish to understand how identities influence political attitudes and behaviors, we cannot neglect the nuances that exist within identity groups.

About the Author: Samara Klar of the University of Arizona has authored the article “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (/doi/10.1111/ajps.12366), which was published in the July 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (/doi/10.1111/ajps.12305) are currently free to access through April 2019.

Celebrating Verification, Replication, and Qualitative Research Methods at the AJPS

By Jan Leighley, AJPS Interim Lead Editor

I’ve always recommended to junior faculty that they celebrate each step along the way toward publication: Data collection and analysis—done! Rough draft—done! Final draft—done! Paper submitted for review—done! Revisions in response to first rejection—done! Paper submitted for review a second time—done! In that spirit, I’d like to celebrate one of AJPS’s “firsts” today: the first verification, replication, and publication of a paper using qualitative research methods, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations,” by Allison Carnegie and Austin Carson.


As with many academic accomplishments, it takes a village—or at least a notable gaggle—to make good things happen. The distant origins of the AJPS replication/verification policy lie in Gary King’s 1995 “Replication, Replication” essay, as well as in the vigorous efforts of Colin Elman, Diana Kapiszewski, and Skip Lupia as part of the DA-RT initiative that began around 2010 (for more details, including others who were involved in these discussions, see Elman, Kapiszewski, and Lupia 2018), and of many others in between, especially the editors of the Quarterly Journal of Political Science and Political Analysis. At some point, these journals (and perhaps others) began expecting authors to post replication files, though where the files were posted, and whether publication was contingent on posting them, varied. They also continued the replication discussion that King’s (1995) essay began, as a broader group of political scientists (and editors) started to take notice (Elman, Kapiszewski, and Lupia 2018).

In 2012, AJPS editor Rick Wilson required that replication files for all accepted papers be posted to the AJPS Dataverse. Then, in 2015, AJPS editor Bill Jacoby announced the new policy that all papers published in AJPS must first be verified prior to publication. He initially worked most closely with the late Tom Carsey (University of North Carolina; Odum Institute) to develop procedures for external replication of quantitative data analyses. Upon satisfaction of the replication requirement, the published article and associated AJPS Dataverse files are awarded “Open Practices” badges as established by the Center for Open Science. Since then, the staff of the Odum Institute and our authors have worked diligently to assure that each paper meets the highest of research standards; as of last week, we had awarded replication badges to 185 AJPS publications.

In 2016, Jacoby worked with Colin Elman (Syracuse University) and Diana Kapiszewski (Georgetown University), co-directors of the Qualitative Data Repository at Syracuse University, to develop more detailed verification guidelines appropriate for qualitative and multi-method research.  This revision of the original verification guidelines acknowledges the diversity of qualitative research traditions, clarifies how the verification process differs given the distinct features of quantitative and qualitative analyses, and accommodates different types of qualitative work. The policy also discusses confidentiality and human subjects protection in greater detail for both types of analysis.

But it is only in our next issue that we will publish our first paper (available online today in Early View with free access) that required verification for qualitative data analysis, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations,” by Allison Carnegie and Austin Carson.  I’m excited to see the AJPS move the discipline along in this important way! To celebrate our first verification of qualitative work, I’ve asked Allison and Austin to share a summary of their experience, which will be posted here in the next few weeks.

Thanks to the efforts of those named here (and those I’ve missed, with apologies), today the AJPS is well known in academic publishing circles as taking the lead on replication/verification policies—so much so that in May, Sarah Brooks and I will be representing the AJPS at a roundtable on verification/replication policies at the annual meeting of the Council of Science Editors (CSE), an association of journal editors from the natural and medical sciences. AJPS will be the one and only social science journal represented at the meeting, where we will discuss what we have learned and how better to support authors in this process.

If you have experiences you wish to share about the establishment of the replication/verification policy, or questions you wish to raise, feel free to send them to us. And be sure to celebrate another first!

Cited in post:

King, Gary. 1995. “Replication, Replication.” PS: Political Science and Politics 28(3): 444-452.

Elman, Colin, Diana Kapiszewski, and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29-47.

AJPS Author Summary: Are Biased Media Bad for Democracy?

AJPS Author Summary of “Are Biased Media Bad for Democracy?” by Stephane Wolton

“[N]ews media bias is real. It reduces the quality of journalism, and it fosters distrust among readers and viewers. This is bad for democracy.” (Timothy Carney, New York Times, 2015). It is indeed commonly accepted that media outlets (e.g. newspapers, radio stations, television channels) are ideologically oriented and attempt to manipulate their audience to improve the reputation or electoral chances of their preferred politicians. But if this holds true, aside from the likely detrimental effects of media bias on the quality of journalism, is this bias inevitably bad for democracy?

In my paper, I study a game-theoretical framework to provide one answer to this question. I use a political agency model in which the electorate faces the problem of both selecting and controlling polarized politicians. I focus on the actions of office-holders, the information available to voters, and the resulting welfare under four different media environments. In the first, a representative voter obtains information from a media outlet that exactly matches her policy preference. I use the term “unbiased” to describe this environment. In the second, the voter receives news reports from two biased media outlets, one on the right and one on the left of the policy spectrum. I define this environment as “balanced” (as in most states in the United States). In the last two cases, the voter’s information comes either from a single right-wing outlet (a “right-wing biased environment,” as in Italy after Berlusconi’s 1994 electoral victory) or from a single left-wing outlet (a “left-wing biased environment,” as in Venezuela after the closing down of RCTV in May 2007, in the early years of the Chavez regime).

Two important findings emerge from comparing equilibrium behaviors across these media environments. Not surprisingly, and in line with a large literature, the voter is always less informed with biased news providers (whether the environment is balanced or not) than with an unbiased media outlet. If officeholders’ behavior were to be kept constant, the electorate would necessarily be hurt by biased media. However, my analysis highlights that everything else is not constant across media environments. In many circumstances, politicians behave differently with biased rather than unbiased news providers. Taking into account these equilibrium effects, my paper uncovers conditions under which voters are better off with biased rather than unbiased media. Therefore, the often advanced claim that media bias is bad for democracy needs to be qualified.

My work also holds some implications for empirical analyses of biased media. To measure the impact of media bias, one needs to compare an outcome of interest (say the re-election rate of incumbent politicians) under an unbiased and under a biased media environment. However, the problem researchers face is that they rarely observe a situation with unbiased outlets and they end up using changes in the media environment from balanced to right- or left-wing biased to evaluate the consequences of media bias. My paper shows that (i) unbiased and biased news providers do not provide the same information to voters and (ii) office-holders can behave differently under biased and unbiased news outlets. As a result, estimates obtained using a balanced environment as reference point can over- or under-estimate the impact of biased media.

Returning to the quote used at the beginning of this post, my paper shows that Carney is only partially correct. Media bias does reduce the quality of journalism and foster distrust. However, it is not necessarily bad for democracy. Further, my work suggests that while existing empirical studies of the media measure important quantities, they may not tell us much about the impact of biased news providers vis-a-vis unbiased outlets.

About the Author: Stephane Wolton is an Associate Professor in Political Science in the Department of Government at the London School of Economics. Wolton’s research “Are Biased Media Bad for Democracy?” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?

AJPS Author Summary of “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” by Kosuke Imai and In Song Kim

This paper investigates the causal assumptions of unit fixed effects regression models, which many researchers use as their default methods for the analysis of longitudinal data.  Most importantly, we show that the ability of these models to adjust for unobserved time-invariant confounders comes at the expense of dynamic causal relationships, which are permitted under an alternative selection-on-observables approach. Using the nonparametric directed acyclic graph, we highlight two key causal identification assumptions of unit fixed effects models: past treatments do not directly influence a current outcome, and past outcomes do not affect current treatment. Furthermore, we introduce a new nonparametric matching framework that elucidates how various unit fixed effects models implicitly compare treated and control observations to draw causal inference. By establishing the equivalence between matching and weighted unit fixed effects estimators, this framework enables a diverse set of identification strategies to adjust for unobservables in the absence of dynamic causal relationships between treatment and outcome variables. We illustrate the proposed methodology through its application to the estimation of GATT membership effects on dyadic trade volume.  The open-source software package, wfe: Weighted Linear Fixed Effects Regression Models for Causal Inference, is available at the Comprehensive R Archive Network (CRAN) for implementing the proposed methodology.
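The “within” transformation at the heart of unit fixed effects models can be illustrated with simulated data. The sketch below is a generic Python illustration under assumed parameter values, not the authors’ wfe package: demeaning each unit’s treatment and outcome removes the time-invariant confounder, while pooled OLS that ignores it is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 50, 6

# Simulated panel: unit effects (unobserved confounders) correlated with X.
alpha = rng.normal(0, 2, n_units)                       # time-invariant unit effects
x = alpha[:, None] + rng.normal(0, 1, (n_units, n_periods))
y = 1.5 * x + alpha[:, None] + rng.normal(0, 1, (n_units, n_periods))

# Within transformation: demean X and Y by unit, then run OLS on the residuals.
x_w = x - x.mean(axis=1, keepdims=True)
y_w = y - y.mean(axis=1, keepdims=True)
beta_within = (x_w * y_w).sum() / (x_w ** 2).sum()      # recovers ~1.5

# Pooled OLS that ignores the unit effects is biased by the confounding.
xf, yf = x.ravel(), y.ravel()
beta_pooled = np.cov(xf, yf)[0, 1] / np.var(xf, ddof=1)
```

Note that this recovery of the true coefficient relies on exactly the assumptions the article scrutinizes: here, by construction, past treatments do not affect current outcomes and past outcomes do not affect current treatment.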

While this article examines regression models with unit fixed effects, our related paper proposes a new matching method for causal inference with time-series cross-sectional data and shows how this method relates to regression models with both unit and time fixed effects.  The proposed matching method can be implemented through an open-source R package, PanelMatch: Matching Methods for Causal Inference with Time-Series Cross-Sectional Data.

About the Authors: Kosuke Imai is Professor of Government and of Statistics at Harvard University and also an affiliate of the Institute for Quantitative Social Science. In Song Kim is Associate Professor of Political Science and a Faculty Affiliate of the Institute for Data, Systems, and Society (IDSS) at Massachusetts Institute of Technology. Their research “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks

AJPS Author Summary of “Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks” by Jeffrey A. Friedman

The U.S. government spends over $100 billion per year fighting terrorism, a risk that kills about as many Americans as lightning strikes and accidents involving home appliances. President Trump has said that one of his primary objectives is reducing violent crime, even though this problem is at historic lows nationwide. Meanwhile, the looming threat of climate change could cause vast global harm. Extreme weather induced by global warming may already kill more Americans than terrorists do, yet preventing climate change consistently ranks near the bottom of voters’ policy priorities.

What explains Americans’ divergent reactions to risk? In particular, why do Americans’ priorities for reducing risk often seem so uncorrelated with the danger that those risks objectively present? Many scholars believe the answer to this question is that heuristics, biases, and ignorance cause voters to misperceive risk magnitudes. By contrast, I argue in a forthcoming AJPS article that Americans’ risk priorities reflect value judgments regarding the extent to which some victims deserve more protection than others and the degree to which it is appropriate for government to intervene in different areas of social life.

The paper backs this argument with evidence drawn from a survey of 3,000 Americans, using pairwise comparisons to elicit novel measures of how respondents perceive nine dimensions of 100 life-threatening risks. Unlike many studies which focus on understanding which risks “worry” or “concern” respondents to greater degrees, this survey explicitly distinguished between respondents’ perceptions of how much harm risks caused and respondents’ preferences for how much money the government should spend to mitigate these dangers. This survey produced two main findings.
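To give a flavor of how pairwise comparisons can be aggregated into risk measures (an illustrative sketch only; the article’s actual scaling of the nine dimensions may differ), each risk can be scored by the share of comparisons it “wins”:

```python
from collections import defaultdict

def win_share_scores(comparisons):
    """Score items from pairwise comparisons.

    comparisons: iterable of (winner, loser) pairs, e.g. one pair per
    respondent judgment such as "terrorism harms more than tornadoes".
    Returns {item: share of its comparisons won}.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {item: wins[item] / appearances[item] for item in appearances}

# Hypothetical respondent judgments on one dimension (e.g. harm to victims).
judgments = [
    ("child abuse", "terrorism"),
    ("terrorism", "tornadoes"),
    ("terrorism", "bicycle accidents"),
    ("tornadoes", "bicycle accidents"),
]
scores = win_share_scores(judgments)
ranked = sorted(scores, key=scores.get, reverse=True)
```

Repeating this separately for each dimension (perceived mortality, unfairness to victims, government obligation, and so on) is what lets the analysis distinguish how harmful respondents think a risk is from how much they want government to spend on it.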

First, the data show that respondents were well-informed about which risks cause more harm than others. The correlation between perceived and actual mortality across the 100 risks in the study was 0.82 – not perfect, but a far cry from voters’ limited grasp on other kinds of politically-relevant information. The data also show that respondents’ perceptions of how much harm risks cause explained little variation in their policy preferences relative to value judgments about the status of victims and the appropriate role of government. Both of these findings hold regardless of political party, education, and other demographics.

For example, even though respondents assigned terrorism the third-highest priority among risks covered by the survey, they did not see this problem as being particularly deadly. On this measure, terrorism ranked 51st out of 100 risks, around the same level as bicycle accidents and tornadoes. Instead, respondents said that terrorism was exceptionally unfair to its victims (ranked #2, behind only child abuse) and that governments have special obligations to protect citizens from this danger (again #2, behind only nuclear war). This reflects a broader pattern seen throughout the survey data: the main reason that voters support spending government funds to reduce risks is not because they think these problems are especially common, but because they say these problems are especially objectionable.

It is important to take these subjective beliefs seriously both in scholarly analyses and in policy debates. When people disagree in setting policy priorities, they often attribute their opponents’ positions to ignorance or misinformation. Thus many Democrats accuse Republicans of exaggerating the risk of terrorism while downplaying the threat of climate change. Republicans, for their part, often accuse Democrats of inflating the risk of gun violence while ignoring threats to national security. But my study indicates that Republicans and Democrats both hold relatively accurate perceptions of which risks cause more harm than others, and that neither party affords those judgments much weight when considering how to allocate public resources. The key to productive discourse on these issues thus likely lies with understanding voters’ values rather than contesting their factual beliefs.

The article also provides foundations for exploring how public opinion shapes government spending. In some cases – as with terrorism – federal expenditures appear to reflect voters’ demands. But that correlation is imperfect. Cancer and heart disease were the top two policy priorities for this survey’s respondents. Air pollution placed sixth. Warfare ranked 24th on respondents’ risk-reduction priorities, beneath prescription drug abuse, diabetes, and HIV/AIDS. Thus to the extent that the U.S. defense budget crowds out government spending on health care, that does not appear to be a straightforward function of voters’ policy preferences. The article is therefore relevant not just to understanding the public’s risk priorities in their own right, but also for analyzing how and why the federal budget reflects some of these priorities more than others.

About the Author: Jeffrey A. Friedman is an Assistant Professor of Government at Dartmouth College and a Visiting Fellow at the Institute for Advanced Study in Toulouse. Friedman’s research “Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments

AJPS Author Summary of “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments” by John V. Kane and Jason Barabas

There have been two notable trends within the social sciences in recent decades: (1) the use of experiments to test ideas, and (2) the use of samples collected online (rather than in person or via telephone). These concurrent trends have greatly expanded opportunities to conduct high-quality research, but they raise an important concern: are the people taking these online experiments actually paying attention?

This question is vitally important. Fielding experimental studies is costly and requires substantial preparation, and the validity of an experiment’s outcome hinges upon respondents’ willingness to attend to the information they are given. Specifically, if subjects are not paying attention to the study, this will likely bias experimental effects toward zero, potentially leaving the researcher to conclude that the underlying theory is wrong and/or that the design of the experiment was defective.

Researchers can gain leverage on this problem by including a so-called “manipulation check” (MC). MCs can be used to confirm whether an experimental treatment succeeded in affecting the key causal variable of interest or, more generally, whether respondents were attentive to information featured in a survey. However, in practice, researchers rarely report having implemented an MC in their experiments. Moreover, even when MCs are used, they differ markedly in terms of form, function, and placement within the study.

Our article attempts to clarify how MCs can be used in experimental research. Based upon content analyses of published experiments, we identify three main categories of MCs. We then highlight the merits of one such category—factual manipulation checks (FMCs). FMCs ask respondents factual questions about content featured in an experiment and, unlike Instructional Manipulation Checks (IMCs) and what we refer to as Subjective Manipulation Checks (SMCs), enable researchers to identify individuals who were (in)attentive to content in the experimental portion of a study. Such information can help researchers understand the reasons underlying their experimental findings. For example, if a researcher found no significant effects for the experiment, but also found that only a small share of the sample correctly answered the FMC, this would suggest that the result has less to do with the underlying theory and more to do with respondents’ attentiveness to the key information in the study (or lack thereof).

Replicating a series of published experiments, we then demonstrate how FMCs can be constructed, and we investigate empirically whether the placement of an FMC (i.e., immediately before versus after the outcome measure) is consequential for (1) treatment effects and (2) answering the FMC correctly. We find little evidence that placing an FMC before an outcome measure significantly distorts treatment effects, and no evidence that placing an FMC immediately after an outcome significantly reduces respondents’ ability to answer the FMC correctly. We therefore conclude that researchers stand to benefit from employing FMCs in their studies, and that placing the FMC immediately after the outcome measure appears to be optimal. Such practices will equip researchers with a greater ability to diagnose their experimental findings, accurately assess respondents’ attentiveness to the experiment, and avoid any possibility of biasing treatment effects.
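The diagnostic role of an FMC described above can be sketched as follows (hypothetical variable names and data, not the authors’ replication code): reporting the estimated treatment effect alongside the FMC pass rate lets a null result be traced to inattentiveness rather than to the underlying theory.

```python
import statistics

def fmc_diagnostics(rows):
    """Summarize an experiment: estimated treatment effect (difference in
    mean outcomes) plus the share of respondents who answered the factual
    manipulation check (FMC) correctly.

    rows: list of dicts with keys 'treated' (bool), 'outcome' (float),
    and 'fmc_correct' (bool).
    """
    treated = [r["outcome"] for r in rows if r["treated"]]
    control = [r["outcome"] for r in rows if not r["treated"]]
    ate = statistics.fmean(treated) - statistics.fmean(control)
    pass_rate = sum(r["fmc_correct"] for r in rows) / len(rows)
    return {"ate": ate, "fmc_pass_rate": pass_rate}

# Hypothetical respondents.
rows = [
    {"treated": True,  "outcome": 1.0, "fmc_correct": True},
    {"treated": True,  "outcome": 2.0, "fmc_correct": False},
    {"treated": False, "outcome": 0.0, "fmc_correct": True},
    {"treated": False, "outcome": 1.0, "fmc_correct": True},
]
summary = fmc_diagnostics(rows)
```

A small estimated effect combined with a low FMC pass rate suggests inattentive respondents; a small effect with a high pass rate points back to the theory or the design.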

About the Authors: John V. Kane is an Assistant Professor at the Center for Global Affairs at New York University and Jason Barabas is a Professor in the Department of Political Science at Stony Brook University. Their research, “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments,” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Territorial Representation and the Opinion-Policy Linkage

AJPS Author Summary of “Territorial Representation and the Opinion-Policy Linkage” by Christopher Wratil 

A central promise of democracy is that government will follow the wishes and opinions of the people. A large body of literature in American and comparative politics has demonstrated that in many situations governments react to shifts in mean public opinion and enact policies that are supported by the majority of citizens across the country. However, the idea that policy-makers follow country-wide mean opinion to get re-elected is most straightforward in political systems where policy-makers are elected by ‘the people’ as a whole. Many political systems instead elect policy-makers in sub-national constituencies: from U.S. Senators, each elected only by the citizens of one of the 50 constituent states, to national governments in the European Union, each elected only by the citizens of one of the 28 member states. How do these arrangements of ‘territorial representation’ influence whose preferences will be reflected in policy output when citizens in different states or territories disagree over policy change?

To answer this question, my research uses the case of the EU, where national governments are major policy-makers accountable only to their national publics, which hold varying opinions on EU policies. I argue that governments will focus on achieving policy change on those issues their citizens at home care intensely about and hold a uniform view on, and will potentially make concessions to other governments on issues on which their citizens’ opinion is ambivalent and less salient. The analyses show that measures weighting opinion across member states by how much national citizens care about an issue, rather than by population size, better explain EU-level policy change than mean opinion across the EU. Moreover, when a national public views an issue as particularly salient, the probability that EU policy on this issue will be in line with majority opinion in that member state increases the more clear-cut public opinion on the matter is.

The results not only highlight that political systems that elect key policy-makers territorially, such as the EU or federal systems, may reallocate influence to citizens in certain parts of the political system, depending on how much they care about an issue and how malapportioned the legislative power of policy-makers is relative to voter populations. They also provide the first quantitative assessment of the responsiveness and congruence of EU-level policy outputs with public opinion on specific issues. The findings challenge the widely held belief that the EU system is largely insulated from public opinion. Instead, they raise the question of how exactly we should normatively assess the quality of democracy in systems that may react most strongly not to mean opinion but to opinion in different territories, depending on the distributions of salience, opinion, and power.

About the author: Christopher Wratil is a John F. Kennedy Memorial Fellow at the Minda de Gunzburg Center for European Studies at Harvard University. His research, “Territorial Representation and the Opinion-Policy Linkage: Evidence from the European Union,” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Urbanization Patterns, Information Diffusion and Female Voting in Rural Paraguay

AJPS Author Summary of “Urbanization Patterns, Information Diffusion and Female Voting in Rural Paraguay” by Alberto Chong, Gianmarco León-Ciliotta, Vivian Roza, Martín Valdivia, and Gabriela Vega


While the role of social interactions as a vehicle to boost the impact of information campaigns is not a new idea, evidence on whether information spreads through social networks and generates behavioral change is mixed, particularly for interventions seeking to boost electoral participation (Sinclair et al. 2012; Fafchamps et al. 2018; Gine and Mansuri 2017). Understanding how social interactions help spread information and generate behavioral change provides insights into the relevance of social networks for the design of public policies. Information exchanges among friends and neighbors, or through role models, may help spread information within a locality, and the quality of such interactions is in turn affected by cultural or ethnic similarities and spatial proximity.

In Chong et al. (2018), we present evidence of the relevance of urbanization patterns in mediating the effects of two distinct get-out-the-vote (GOTV) campaigns designed to boost registration and turnout among women in rural Paraguay. We use individual-level administrative registration and voting data, survey information, and satellite images, and we exploit a particularity of urbanization patterns in rural Paraguay: some localities have a clear center surrounded by houses, with agricultural land on the outskirts (non-linear localities), while others are long lanes along which houses are sparsely distributed, with agricultural land as backyards (linear localities).

Prior to the 2013 presidential elections, we randomly assigned rural localities to one of two commonly used methods for running information and political campaigns: non-partisan public rallies (PR) and door-to-door (D2D) canvassing. PRs are a relatively inexpensive way to reach large audiences and, while somewhat impersonal, are an appealing option widely used in political and information campaigns. Despite their popularity, very few studies have assessed their impact. D2D campaigns, on the other hand, while more capital- and labor-intensive, may be more effective due to the closer human contact and the possibility that they generate information spillovers. The trade-off between a more impersonal mobilization campaign, which allows greater reach at a relatively lower cost, and a more personal and interactive one, which has less coverage and is more expensive, is at the core of our research and sheds light on the conditions affecting the effectiveness of mobilization efforts.

In both treatments, we provided information related to registration and the importance of voting. The experiment was designed to estimate spillover effects by randomly varying the intensity of the D2D treatment. We find that neither intervention led to increases in voter registration, but while PRs show small and insignificant effects on voting, face-to-face interactions significantly increased turnout among treated women. Interestingly, we find evidence of spillover effects that lead to higher turnout only in localities with urbanization patterns that appear to favor social interactions and information diffusion (linear localities). These spillover effects are more important for treated women (reinforcement effects) than for untreated women (diffusion effects). Overall, our results suggest that the design of GOTV campaigns should consider the spatial constraints that affect the frequency and quality of social interactions within a locality and can therefore limit the extent of spillover effects.

About the Authors: Alberto Chong is a Professor in the Department of Economics at the Andrew Young School of Policy Studies and holds a joint appointment with the College of Education and Human Development; Gianmarco León-Ciliotta is an Associate Professor in the Department of Economics and Business at the Universitat Pompeu Fabra, an Affiliated Professor at the Barcelona Graduate School of Economics and at IPEG-Barcelona, and a Research Affiliate at CEPR; Vivian Roza is a Program Coordinator at the Inter-American Development Bank; Martín Valdivia is a Senior Researcher at the Group for the Analysis of Development; Gabriela Vega is the Social Development Principal Specialist at the Gender and Diversity Division of the Inter-American Development Bank. Their research, “Urbanization Patterns, Information Diffusion and Female Voting in Rural Paraguay,” is now available online in Early View and will be published in a forthcoming issue of the American Journal of Political Science.


Chong, Alberto, Gianmarco León, Vivian Roza, Martín Valdivia, and Gabriela Vega (2018). “Urbanization Patterns, Information Diffusion and Female Voting in Rural Paraguay.” American Journal of Political Science, forthcoming.

Fafchamps, Marcel, Pedro Vicente, and Ana Vaz (2018). “Voting and Peer Effects: Experimental Evidence from Mozambique.” Economic Development and Cultural Change, forthcoming.

Gine, Xavier, and Ghazala Mansuri (2017). “Together We Will: Experimental Evidence on Female Voting Behavior in Pakistan.” American Economic Journal: Applied Economics.

Sinclair, Betsy, Donald Green, and Margaret McConnell (2012). “Detecting Spillover Effects: Design and Analysis of Multilevel Experiments.” American Journal of Political Science 56(4): 1055–1069.

Building Cooperation among Groups in Conflict: An Experiment on Intersectarian Cooperation in Lebanon

AJPS Author Summary of “Building Cooperation among Groups in Conflict: An Experiment on Intersectarian Cooperation in Lebanon” by Han Il Chang and Leonid Peisakhin

Across much of the Middle East, relations between Shiites and Sunnis are strained.  In some cases, the two Muslim sects have a long history of grievances, and they cohabit in the region’s hotspots (e.g., Iraq, Syria, Lebanon, Bahrain).  In this paper, we test several interventions designed to improve cooperation across sectarian lines.

The study – a lab-in-the-field experiment – took place in Beirut, the capital of Lebanon, where we asked a representative sample of Beirut’s residents to engage in a series of tasks designed to measure conditional and unconditional cooperation.  Conditional cooperation – a type of cooperation that entails strategic considerations about reciprocity – was measured by observing contributions in a public goods game.  Unconditional cooperation – a type that implies selfless other-regarding behavior – was observed in a standard other-other allocation game and also in a series of simulated elections.

The aim of the study was to test the effectiveness of a pro-cooperation appeal by experts and, separately, of a cross-sectarian group discussion on improving cooperation levels by comparison to a baseline.  The expert appeal followed the format of a short televised debate where prominent Shia and Sunni journalists discussed the problems associated with sectarianism and encouraged the shedding of sectarian identities in favor of a national Lebanese identity.  Group discussions centered around participants’ experiences with sectarianism and possible remedies to the problem.  To the best of our knowledge, ours is the first study to test the effectiveness of expert appeals on cooperation in a conflict setting.

We found that the expert appeal increased unconditional cooperation across sectarian lines.  Levels of conditional cooperation remained unchanged, and observational evidence suggests that the lack of an effect was due to the fact that the expert appeal intervention failed to increase cross-sectarian trust.  Contrary to expectations, we found that group discussions had no sizeable effect on cooperation levels, although there was suggestive evidence that highly substantive discussions might, in fact, lead to greater cooperation.  We also established that when participants were offered money to support a member of their own sect – a proxy for clientelism in our study – the positive effects of the expert appeal intervention on unconditional cooperation were canceled out.

All in all, this study suggests that certain types of cross-group cooperation can be improved even in settings as divided as contemporary Lebanon.  Surprisingly, it is the top-down intervention (expert appeal) that, on average, appears to be more effective than the bottom-up one (group discussion).  What is unfortunate is that clientelism seems to negate even the effects of top-down appeals by experts.

About the Authors: Han Il Chang is a Research Associate at New York University–Abu Dhabi and Leonid Peisakhin is an Assistant Professor of Political Science at New York University–Abu Dhabi. Their research, “Building Cooperation among Groups in Conflict: An Experiment on Intersectarian Cooperation in Lebanon,” is now available online in Early View and will be published in a forthcoming issue of the American Journal of Political Science.

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.