AJPS Author Summary: “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China”

The following AJPS Author Summary of “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” has been provided by Mark Buntaine:

One of the most challenging aspects of governance in China is that policy is made centrally, but implementation is the responsibility of local governments. For the management of pollution — a national priority in recent years — the “implementation gap” that arises when local governments fail to oversee industry and other high-polluting activities has caused a public health crisis of global proportions.

Non-governmental organizations might usefully monitor and reveal the performance of local governments, thereby extending the center's ability to oversee them. Although NGOs face many restrictions on the activities they can pursue, particularly activities critical of the state, the central government has recently encouraged NGOs to monitor local governments as a way to improve environmental performance.

In a national-scale field experiment that monitored fifty municipal governments for their compliance with rules requiring public disclosure of information about the management of pollution, we show that NGOs can play an important role in increasing local governments' compliance with national mandates. When the Institute of Public and Environmental Affairs publicly disclosed ratings of 25 treated municipalities' compliance with transparency rules, these local governments increased mandated disclosures over two years, compared with a control group of 25 municipalities whose ratings were not published. However, the same rating did not increase public discussion of pollution in treated municipalities relative to control municipalities.

This result highlights that NGOs can play an important role in improving authoritarian governance by disclosing the non-compliance of local governments in ways that help the center with oversight. They can play this role as long as they do not increase public discontent. We explain how this is an emerging mode of governance in several authoritarian political systems, where NGOs help improve governance by addressing the central state's information needs for overseeing local governments.

About the Authors of the Research: Sarah E. Anderson is Associate Professor of Environmental Politics at the University of California, Santa Barbara; Mark T. Buntaine is Assistant Professor of Environmental Institutions and Governance at the University of California, Santa Barbara; Mengdi Liu is a PhD Candidate at Nanjing University; Bing Zhang is Associate Professor at Nanjing University. Their research, “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” (https://doi.org/10.1111/ajps.12428), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

“As the editorial term ends, I’m both looking back and looking forward . . . so, as promised, here’s a post by Allison Carnegie and Austin Carson describing their recent experience with qualitative verification at AJPS . . . and within the next week I’ll be posting an important update to the AJPS “Replication/Verification Policy,” one that will endure past the end of the term on June 1.”
– Jan Leighley, AJPS Interim Editor


By Allison Carnegie of Columbia University and Austin Carson of the University of Chicago

The need for increased transparency for qualitative data has been recognized by political scientists for some time, sparking a lively debate about how best to accomplish this goal (e.g., Elman, Kapiszewski, and Lupia 2018; Moravcsik 2014). As a result of the Data Access and Research Transparency (DA-RT) initiative and the final report of the Qualitative Transparency Deliberations, many leading journals, including the AJPS, adopted such policies. While the AJPS has had such a policy in place since 2016, ours was the first article to undergo the formal qualitative verification process. We had a very positive experience with this procedure and want to share how it worked with other scholars who may be considering using qualitative methods as well.

In our paper, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), we argue that states often wish to disclose intelligence about other states’ violations of international rules and laws but are deterred by concerns about revealing the sources and methods used to collect it. However, we theorize that properly equipped international organizations can mitigate this dilemma by analyzing and acting on sensitive information while protecting it from wide dissemination. We focus on the case of nuclear proliferation and the IAEA in particular. To evaluate our claims, we couple a formal model with a qualitative analysis of each case of nuclear proliferation, finding that strengthening the IAEA’s intelligence protection capabilities led to greater intelligence sharing and fewer suspected nuclear facilities. This analysis required a variety of qualitative materials, including archival documents, expert interviews, and other primary and secondary sources.

To facilitate the verification of the claims we made using these qualitative methods, we first gathered the raw archival material we used, along with the relevant excerpts from our interviews, and posted them to a dataverse location. The AJPS next sent our materials to the Qualitative Data Repository (QDR) at Syracuse University, which reviewed our Readme file, verified the frequency counts in our tables, and reviewed each of our evidence-based arguments related to our theory’s mechanisms (though it did not review the cases in our Supplemental Appendix). (More details on this process can be found in the AJPS Verification and Replication policy, along with its Qualitative Checklist.) QDR then generated a report that identified statements it deemed “supported,” “partially supported,” or “not documented/referenced.” For the third type of statement, we were asked to do one of the following: provide a different source, revise the statement, or clarify whether we felt that QDR had misunderstood our claim. We were free to address the other two types of statements as we saw fit. While some have questioned the feasibility of this process, in our case it took roughly the same amount of time that verification of quantitative data typically does, so it did not delay the publication of our article.

We found the report to be thorough, accurate, and helpful. While we had endeavored to support our claims fully in the original manuscript, we fell short of this goal on several counts and followed each of QDR’s excellent recommendations. Occasionally this involved a bit more research, but typically it resulted in our clarifying statements, adding details, or otherwise improving our descriptions of, say, our coding decisions. For example, QDR noted instances in which we made a compound claim but the referenced source supported only one of its parts. In such cases, we added a citation for the other claim as well. We then drafted a memo detailing each change we made, which QDR reviewed and responded to within a few days.

Overall, we were very pleased with this process. This was in no small part due to the AJPS editorial team, whose patience and guidance in shepherding us through this procedure were greatly appreciated. As a result, we believe that the verification both improved the quality of evidence and better aligned our claims with our evidence. Moreover, it increased our confidence that we had clearly and accurately communicated with readers. Finally, archiving our data will allow other scholars to access our sources and evaluate our claims for themselves, as well as potentially use these materials for future research. We thus came away with the view that qualitative transparency is achievable in a way that is friendly to researchers and can improve the quality of the work.

About the Authors: Allison Carnegie is Assistant Professor at Columbia University and Austin Carson is Assistant Professor at the University of Chicago. Their research, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science. Carnegie can be found on Twitter at @alliecarnegie and Carson at @carsonaust.

References

Elman, Colin, Diana Kapiszewski, and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29–47.

Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47(1):48–53.

AJPS Author Summary: How Getting the Facts Right Can Fuel Partisan Motivated Reasoning

AJPS Author Summary of “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” by Martin Bisgaard

Are citizens able to get the facts right? Ideally, we want them to. If citizens are to punish or reward incumbent politicians for how real-world conditions have changed, citizens need to know whether these conditions have changed for better or for worse. If economic growth stalls, crime rates plummet or unemployment soars, citizens should take due notice and change their perceptions of reality accordingly. But are citizens able—or willing—to do so?

Considerable scholarly discussion revolves around this question. Decades of research suggest that citizens often bend the same facts in ways that are favorable to their own party. In one of the most discussed examples, citizens identifying with the incumbent party tend to view economic conditions much more favorably than citizens identifying with the opposition do. However, more recent work suggests that citizens are not always oblivious to a changing reality. Across both experimental and observational work, researchers have found that partisans sometimes react “in a similar way to changes in the real economy” (De Vries, Hobolt and Tilley 2017, 115); that they “learn slowly toward common truth” (Hill 2017, 1404); and that they “heed the facts, even when doing so forces them to separate from their ideological attachments” (Wood and Porter 2016, 3). Sometimes, even committed partisans can get the facts right.

In my article, however, I develop and test an argument that is overlooked in current discussion. Although citizens of different partisan groups may sometimes accept the same facts, they may just find other ways of making reality fit with what they want to believe. One such way, I demonstrate, is through the selective allocation of credit and blame. 

I conducted four randomized experiments in the United States and Denmark, exposing participants to either negative or positive news about economic growth. Across these experiments, I found that while partisans updated their perceptions of the national economy in the same way, they attributed responsibility in a highly selective fashion, crediting their own party for success and blaming other actors for failure. Furthermore, I exposed citizens to credible arguments about why the incumbent was (or was not) responsible, yet this did little to temper partisan motivated reasoning. Rather, respondents dramatically shifted how they viewed the persuasiveness of the same arguments depending on whether macroeconomic circumstances were portrayed as good or bad. Lastly, using open-ended questions in which respondents were not explicitly prompted to consider the responsibility of the President or government, I found that citizens spontaneously mustered attributional arguments that fit their preferred conclusion. These findings have important implications for the current discussion of fake news and misinformation: correcting people’s factual beliefs may just lead them to find other ways of rationalizing reality.

About the Author: Martin Bisgaard is Assistant Professor in the Department of Political Science at Aarhus University. Bisgaard’s research “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” (https://doi.org/10.1111/ajps.12432) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Works cited:

De Vries, Catherine E., Sara B. Hobolt, and James Tilley. 2017. “Facing up to the facts.” Electoral Studies 51: 115–22.

Hill, Seth J. 2017. “Learning Together Slowly.” Journal of Politics 79(4): 1403–18.

Wood, Thomas, and Ethan Porter. Forthcoming. “The Elusive Backfire Effect.” Political Behavior.

Paths of Recruitment: Rational Social Prospecting in Petition Canvassing

In the following blog post, the authors summarize their American Journal of Political Science article, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing”:

 When U.S. Representative Gwen Moore (D-WI) prepared her 2008 reelection bid as Wisconsin’s first black member of Congress (representing Milwaukee’s 4th Congressional District), her campaign faced the task of gathering nominating paper signatures for submission to Wisconsin’s Government Accountability Board.  While this might have been an opportunity to travel throughout the largely Democratic district performing campaign outreach and mobilization, the canvassers working on Moore’s behalf took a different approach: they went primarily to her most supportive neighborhoods, which also happened to be the part of the congressional district that Moore had represented in the State Senate until 2004.  Unsurprisingly, canvassers focused their attention on majority-black neighborhoods throughout Northwest Milwaukee.  As time passed, the canvassers relied increasingly on signatures gathered from Moore’s core constituency.

The geographically and socially bounded canvassing carried out by Moore’s campaign is suggestive of a broader trend in how political recruiters search for support, and it holds lessons that expand upon prevailing models of political recruitment.  Political recruiters do not only seek out supporters who share common attributes and socioeconomic backgrounds. They also act in response to their geographic milieu, and they update their choices in light of experience.

In our paper, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing,” we develop these insights while elaborating a new model of political recruitment that draws lessons from the experiences of petition canvassers in multiple geographic and historical contexts. We test our model using original data we gathered in the form of geocoded signatory lists from a 2005-2006 anti-Iraq War initiative in Wisconsin and an 1839 antislavery campaign in New York City.  Examining the sequence of signatures recorded in these petitions, we have been able to reconstruct canvassers’ recruitment methods – whether they walked the petition door-to-door or went to a central location or meeting place to gather signatures – as well as the path they travelled when they did go door-to-door.  We find that canvassers were substantially more likely to go walking in search of signatures in neighborhoods where residents’ demographic characteristics were similar to their own. In the case of the middle-class, predominantly white Wisconsin anti-war canvassers, this meant staying in predominantly white and middle-class neighborhoods when going door-to-door. Furthermore, the act of canvassing appeared to follow a rational process in which canvassers were sensitive to their costs: for example, in areas where canvassers struggled to find signatures, they were more likely to quit searching.

Understanding how political recruiters find supporters for a political candidate or cause is crucial because recruitment determines who participates in politics. If canvasser strategies reach only a limited set of recruits, then swathes of Americans may be less likely to participate.  Our paper sheds new light on the campaign dynamics that feed this inequality.

About the Authors: Clayton Nall is an Assistant Professor of Political Science at Stanford University. Benjamin Schneer is an Assistant Professor in the Department of Political Science at Florida State University. Daniel Carpenter is Allie S. Freed Professor of Government in the Faculty of Arts and Sciences and Director of Social Sciences at the Radcliffe Institute for Advanced Study at Harvard University. Their paper “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (/doi/10.1111/ajps.12305) appears in the January 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (/doi/10.1111/ajps.12366) are currently free to access through April 2019.

When Common Identities Decrease Trust: An Experimental Study of Partisan Women


AJPS Author Summary by Samara Klar of the University of Arizona

With a record number of women running for the 2020 Democratic nomination, questions will no doubt arise as to the likelihood that a Democratic woman might entice female Republican voters to support a woman from the opposing party. Each time that a woman has run for national office (for example, Sarah Palin as a vice presidential candidate in 2008 or Hillary Clinton as a presidential candidate in 2008 and 2016), political spectators asked: Will women voters “cross the aisle” to vote for a woman?

Yet, each time, we have seen no evidence that women from either party are willing to do so. Indeed, there is very little evidence at all that women from the mass public form inter-party alliances based on their shared gender identity. This might seem surprising – particularly to those familiar with the Common In-Group Identity Model.

The Common In-Group Identity Model argues that an overarching identity (in this case, being a woman) can unite two competing groups (in this case, Democrats and Republicans). Social psychologists demonstrated these effects in an array of “minimal group settings” and others have found that it appears to hold true in “real-world” settings as well. Why, then, are Democratic women and Republican women reluctant to support one another based on their shared gender identity? I set out to investigate the conditions under which the Common In-Group Identity Model holds and whether it might (or might not) apply to American women who identify as Democrats and Republicans.

A key condition of the Common In-Group Identity Model is that the members of both rival groups must hold a common understanding of what it means to identify with their overarching shared identity. Without this, it simply is not a shared identity at all.

Based on existing work, I expected that Democratic women and Republican women, in fact, hold very different views of what it means to be a woman. To test this, I asked a bipartisan sample of 3000 American women how well the word feminist describes them on a scale ranging from 1 (Extremely well) to 5 (Not at all). Democratic women overwhelmingly identify themselves as feminists: their mean response was 2.47 (somewhere in between Very Well [2] and Somewhat Well [3]). Republican women, on the other hand, do not view themselves as feminists: their mean response was a 3.8 (closer to Not Very Well [4]).

I also asked these women to describe how a typical Republican woman and a typical Democratic woman might view feminism. Women from both sides of the aisle are astonishingly accurate in their estimates of how co-partisan and opposing partisan women feel about this issue. There is a clear and accurate perception that Democratic women think of themselves as feminist and that Republican women do not. In sum, being “a woman” is not an identity group that Democratic and Republican women can agree on – and they are well aware of this divide.

If Democratic women and Republican women do not share a common understanding of what it means to be a woman, then their gender should not unite them. In fact, as scholars have shown in other settings, they should actually be driven further apart when their gender becomes salient. This is what I set out to test.

With a survey experiment, I randomly assigned a large sample of women to read a vignette about either a woman or a man, who identifies with either their own party or the other party, and who supports either an issue that makes gender salient or one that does not. I then asked respondents to evaluate this fictitious character.

My results show that gender does not unite women from opposing parties but, in fact, increases their mutual distrust when gender is salient. To be more specific, I find that – when gender is salient – women hold more negative views of women from the opposing party than they do of men from the opposing party. When gender was not salient, however, women no longer penalized women more than they penalized men for identifying with the opposing party.

My work helps us to understand why we do not tend to find political solidarity among women who identify with opposing parties: not only do they disagree about politics but they also tend to disagree about their gender identity. Making gender salient thus exacerbates the divide.

I hope this study also helps to add nuance to our collective understanding of identity politics. Demographic identity groups are not homogeneous voting blocs. This lesson is not exclusive to women but should be taken into account when we think through the political behavior of any subset of the American public. To an outsider, it might appear that a group of individuals objectively shares a common identity, but if they do not hold a common understanding of what that identity means to them, then they do not share an identity at all. If we wish to understand how identities influence political attitudes and behaviors, we cannot neglect the nuances that exist within identity groups.

About the Author: Samara Klar of the University of Arizona has authored the article “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (doi/10.1111/ajps.12366), which was published in the July 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (doi/10.1111/ajps.12305) are currently free to access through April 2019.

AJPS Author Summary: Are Biased Media Bad for Democracy?

AJPS Author Summary of “Are Biased Media Bad for Democracy?” by Stephane Wolton

“[N]ews media bias is real. It reduces the quality of journalism, and it fosters distrust among readers and viewers. This is bad for democracy.” (Timothy Carney, New York Times, 2015). It is indeed commonly accepted that media outlets (e.g. newspapers, radio stations, television channels) are ideologically oriented and attempt to manipulate their audience to improve the reputation or electoral chances of their preferred politicians. But if this holds true, aside from the likely detrimental effects of media bias on the quality of journalism, is this bias inevitably bad for democracy?

In my paper, I study a game-theoretic framework to provide one answer to this question. I use a political agency model in which the electorate faces the problem of both selecting and controlling polarized politicians. I focus on the actions of office-holders, the information available to voters, and the resulting welfare under four different media environments. In the first, a representative voter obtains information from a media outlet that exactly matches her policy preference. I use the term “unbiased” to describe this environment. In the second, the voter receives news reports from two biased media outlets, one on the right and one on the left of the policy spectrum. I define this environment as “balanced” (as in most states in the United States). In the last two cases, the voter’s information comes either from a single right-wing outlet (a “right-wing biased environment,” as in Italy after Berlusconi’s 1994 electoral victory) or from a single left-wing outlet (a “left-wing biased environment,” as in Venezuela after the closing down of RCTV in May 2007, in the early years of the Chavez regime).

Two important findings emerge from comparing equilibrium behaviors across these media environments. Not surprisingly, and in line with a large literature, the voter is always less informed with biased news providers (whether the environment is balanced or not) than with an unbiased media outlet. If officeholders’ behavior were to be kept constant, the electorate would necessarily be hurt by biased media. However, my analysis highlights that everything else is not constant across media environments. In many circumstances, politicians behave differently with biased rather than unbiased news providers. Taking into account these equilibrium effects, my paper uncovers conditions under which voters are better off with biased rather than unbiased media. Therefore, the often advanced claim that media bias is bad for democracy needs to be qualified.

My work also holds some implications for empirical analyses of biased media. To measure the impact of media bias, one needs to compare an outcome of interest (say the re-election rate of incumbent politicians) under an unbiased and under a biased media environment. However, the problem researchers face is that they rarely observe a situation with unbiased outlets and they end up using changes in the media environment from balanced to right- or left-wing biased to evaluate the consequences of media bias. My paper shows that (i) unbiased and biased news providers do not provide the same information to voters and (ii) office-holders can behave differently under biased and unbiased news outlets. As a result, estimates obtained using a balanced environment as reference point can over- or under-estimate the impact of biased media.

Returning to the quote used at the beginning of this post, my paper shows that Carney is only partially correct. Media bias does reduce the quality of journalism and foster distrust. However, it is not necessarily bad for democracy. Further, my work suggests that while existing empirical studies of the media measure important quantities, they may not tell us much about the impact of biased news providers vis-a-vis unbiased outlets.

About the Author: Stephane Wolton is an Associate Professor in Political Science in the Department of Government at the London School of Economics. Wolton’s research “Are Biased Media Bad for Democracy?” (https://doi.org/10.1111/ajps.12424) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?

AJPS Author Summary of “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” by Kosuke Imai and In Song Kim

This paper investigates the causal assumptions of unit fixed effects regression models, which many researchers use as their default method for the analysis of longitudinal data. Most importantly, we show that the ability of these models to adjust for unobserved time-invariant confounders comes at the expense of dynamic causal relationships, which are permitted under an alternative selection-on-observables approach. Using a nonparametric directed acyclic graph, we highlight two key causal identification assumptions of unit fixed effects models: past treatments do not directly influence the current outcome, and past outcomes do not affect the current treatment. Furthermore, we introduce a new nonparametric matching framework that elucidates how various unit fixed effects models implicitly compare treated and control observations to draw causal inference. By establishing the equivalence between matching and weighted unit fixed effects estimators, this framework enables a diverse set of identification strategies to adjust for unobservables in the absence of dynamic causal relationships between treatment and outcome variables. We illustrate the proposed methodology through its application to the estimation of GATT membership effects on dyadic trade volume. The open-source software package, wfe: Weighted Linear Fixed Effects Regression Models for Causal Inference, is available at the Comprehensive R Archive Network (CRAN) for implementing the proposed methodology.
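
The role of unit fixed effects in absorbing time-invariant confounders can be seen in a small simulation. The sketch below is illustrative only (written in Python rather than the authors' R package, with simulated data and invented parameter values): demeaning each unit's observations sweeps out an unobserved unit-level confounder that biases pooled OLS.

```python
import numpy as np

# Toy panel: N units observed over T periods, with an unobserved
# time-invariant confounder alpha driving both treatment and outcome.
# All numbers here are illustrative, not from the article.
rng = np.random.default_rng(0)
N, T = 50, 4
alpha = rng.normal(size=N)                                         # unit-level confounder
x = (rng.normal(size=(N, T)) + alpha[:, None] > 0).astype(float)   # treatment
y = 2.0 * x + alpha[:, None] + rng.normal(scale=0.1, size=(N, T))  # true effect = 2

# Pooled OLS ignores alpha and is biased upward here.
xf, yf = x.ravel(), y.ravel()
xc = xf - xf.mean()
b_pooled = (xc * (yf - yf.mean())).sum() / (xc ** 2).sum()

# Unit fixed effects ("within") estimator: demean within each unit,
# which sweeps out alpha and recovers the true effect.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd ** 2).sum()

print(round(b_pooled, 2), round(b_fe, 2))  # b_fe should be close to 2
```

Note that this works only under the identification assumptions the article highlights: if past treatments affected current outcomes, or past outcomes affected current treatment, the within transformation would no longer isolate the causal effect.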

While this article examines regression models with unit fixed effects, our related paper proposes a new matching method for causal inference with time-series cross-sectional data and shows how this method relates to regression models with both unit and time fixed effects. The proposed matching method can be implemented through an open-source R package, PanelMatch: Matching Methods for Causal Inference with Time-Series Cross-Sectional Data.

About the Authors: Kosuke Imai is Professor of Government and of Statistics at Harvard University and also an affiliate of the Institute for Quantitative Social Science. In Song Kim is Associate Professor of Political Science and a Faculty Affiliate of the Institute for Data, Systems, and Society (IDSS) at the Massachusetts Institute of Technology. Their research “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” (https://doi.org/10.1111/ajps.12417) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks

AJPS Author Summary of “Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks” by Jeffrey A. Friedman

The U.S. government spends over $100 billion per year fighting terrorism, a risk that kills about as many Americans as lightning strikes and accidents involving home appliances. President Trump has said that one of his primary objectives is reducing violent crime, even though this problem is at historic lows nationwide. Meanwhile, the looming threat of climate change could cause vast global harm. Extreme weather induced by global warming may already kill more Americans than terrorists do, yet preventing climate change consistently ranks near the bottom of voters’ policy priorities.

What explains Americans’ divergent reactions to risk? In particular, why do Americans’ priorities for reducing risk often seem so uncorrelated with the danger that those risks objectively present? Many scholars believe the answer to this question is that heuristics, biases, and ignorance cause voters to misperceive risk magnitudes. By contrast, I argue in a forthcoming AJPS article that Americans’ risk priorities reflect value judgments regarding the extent to which some victims deserve more protection than others and the degree to which it is appropriate for government to intervene in different areas of social life.

The paper backs this argument with evidence from a survey of 3,000 Americans, using pairwise comparisons to elicit novel measures of how respondents perceive nine dimensions of 100 life-threatening risks. Unlike many studies, which focus on understanding which risks “worry” or “concern” respondents to greater degrees, this survey explicitly distinguished between respondents’ perceptions of how much harm risks cause and their preferences for how much money the government should spend to mitigate those dangers. The survey produced two main findings.
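As a rough illustration of the pairwise-comparison idea, the sketch below turns hypothetical head-to-head judgments into a ranking by win share. The records and the simple win-share rule are invented for exposition; the article's actual elicitation and estimation strategy may differ.

```python
# Toy sketch: each record says which of two risks a hypothetical respondent
# judged higher on some dimension. The data are illustrative only.
from collections import defaultdict

comparisons = [  # (winner, loser)
    ("terrorism", "tornadoes"),
    ("terrorism", "bicycle accidents"),
    ("child abuse", "terrorism"),
    ("tornadoes", "bicycle accidents"),
    ("child abuse", "tornadoes"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in comparisons:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank risks by the share of their comparisons they won
# (ties broken alphabetically for determinism).
ranking = sorted(appearances, key=lambda r: (-wins[r] / appearances[r], r))
print(ranking)
```

With enough comparisons per pair, this kind of aggregation lets a survey rank many items against each other without asking respondents to score all 100 risks directly.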

First, the data show that respondents were well informed about which risks cause more harm than others. The correlation between perceived and actual mortality across the 100 risks in the study was 0.82 – not perfect, but a far cry from voters’ limited grasp of other kinds of politically relevant information. Second, the data show that respondents’ perceptions of how much harm risks cause explained little of the variation in their policy preferences relative to value judgments about the status of victims and the appropriate role of government. Both findings hold regardless of political party, education, and other demographics.
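A perceived-versus-actual comparison of this kind can be computed as a simple Pearson correlation. The sketch below uses entirely hypothetical mortality figures (not the survey's data), and the log transformation is an assumption made here because mortality counts span several orders of magnitude:

```python
import math

# Hypothetical annual deaths (actual) and survey-perceived deaths for a few
# risks. These numbers are illustrative only, not the article's data.
actual = [40000, 3000, 600, 80, 40]
perceived = [35000, 5000, 900, 300, 60]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Correlate on the log scale, since death counts vary by orders of magnitude.
r = pearson([math.log(a) for a in actual],
            [math.log(p) for p in perceived])
print(round(r, 2))
```

A coefficient near 1 would indicate, as in the article's finding of 0.82, that respondents ordered risks by deadliness roughly in line with the actual mortality data.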

For example, even though respondents assigned terrorism the third-highest priority among risks covered by the survey, they did not see this problem as being particularly deadly. On this measure, terrorism ranked 51st out of 100 risks, around the same level as bicycle accidents and tornadoes. Instead, respondents said that terrorism was exceptionally unfair to its victims (ranked #2, behind only child abuse) and that governments have special obligations to protect citizens from this danger (again #2, behind only nuclear war). This reflects a broader pattern seen throughout the survey data: the main reason that voters support spending government funds to reduce risks is not because they think these problems are especially common, but because they say these problems are especially objectionable.

It is important to take these subjective beliefs seriously both in scholarly analyses and in policy debates. When people disagree in setting policy priorities, they often attribute their opponents’ positions to ignorance or misinformation. Thus many Democrats accuse Republicans of exaggerating the risk of terrorism while downplaying the threat of climate change. Republicans, for their part, often accuse Democrats of inflating the risk of gun violence while ignoring threats to national security. But my study indicates that Republicans and Democrats both hold relatively accurate perceptions of which risks cause more harm than others, and that neither party affords those judgments much weight when considering how to allocate public resources. The key to productive discourse on these issues thus likely lies with understanding voters’ values rather than contesting their factual beliefs.

The article also provides foundations for exploring how public opinion shapes government spending. In some cases – as with terrorism – federal expenditures appear to reflect voters’ demands. But that correlation is imperfect. Cancer and heart disease were the top two policy priorities for this survey’s respondents. Air pollution placed sixth. Warfare ranked 24th on respondents’ risk-reduction priorities, beneath prescription drug abuse, diabetes, and HIV/AIDS. Thus to the extent that the U.S. defense budget crowds out government spending on health care, that does not appear to be a straightforward function of voters’ policy preferences. The article is therefore relevant not just to understanding the public’s risk priorities in their own right, but also for analyzing how and why the federal budget reflects some of these priorities more than others.

About the Author: Jeffrey A. Friedman is an Assistant Professor of Government at Dartmouth College and a Visiting Fellow at the Institute for Advanced Study in Toulouse. Friedman’s research “Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks (https://doi.org/10.1111/ajps.12400)” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments

AJPS Author Summary of “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments” by John V. Kane and Jason Barabas

There have been two notable trends within the social sciences in recent decades: (1) the use of experiments to test ideas, and (2) the use of samples collected online (rather than in person or via telephone). These concurrent trends have greatly expanded opportunities to conduct high-quality research, but they raise an important concern: are the people taking these online experiments actually paying attention?

This question is vitally important. Fielding experimental studies is costly and requires substantial preparation, and the validity of an experiment’s outcome hinges upon respondents’ willingness to attend to the information they are given. Specifically, if subjects are not paying attention to the study, this will likely bias experimental effects toward zero, potentially leaving the researcher to conclude that the underlying theory is wrong and/or that the design of the experiment was defective.

Researchers can gain leverage on this problem by including a so-called “manipulation check” (MC). MCs can be used to confirm whether an experimental treatment succeeded in affecting the key causal variable of interest or, more generally, whether respondents were attentive to information featured in a survey. However, in practice, researchers rarely report having implemented an MC in their experiments. Moreover, even when MCs are used, they differ markedly in terms of form, function, and placement within the study.

Our article attempts to clarify how MCs can be used in experimental research. Based upon content analyses of published experiments, we identify three main categories of MCs. We then highlight the merits of one such category: factual manipulation checks (FMCs). FMCs ask respondents factual questions about content featured in an experiment; unlike Instructional Manipulation Checks (IMCs) and (what we refer to as) Subjective Manipulation Checks (SMCs), this enables researchers to identify individuals who were (in)attentive to content in the experimental portion of a study. Such information can help researchers understand the reasons underlying their experimental findings. For example, if a researcher found no significant effects for the experiment, but also found that only a small share of the sample correctly answered the FMC, this would suggest that the result has less to do with the underlying theory and more to do with respondents’ attentiveness (or lack thereof) to the key information in the study.

Replicating a series of published experiments, we then demonstrate how FMCs can be constructed and empirically investigate whether the placement of an FMC (i.e., immediately before versus after the outcome measure) is consequential for (1) treatment effects and (2) answering the FMC correctly. We find little evidence that placing an FMC before an outcome measure significantly distorts treatment effects. Likewise, we find no evidence that placing an FMC immediately after an outcome significantly reduces respondents’ ability to answer the FMC correctly. We therefore conclude that researchers stand to benefit from employing FMCs in their studies, and that placing the FMC immediately after the outcome measure appears to be optimal. Such practices will equip researchers with a greater ability to diagnose their experimental findings, accurately assess respondents’ attentiveness to the experiment, and avoid any possibility of biasing treatment effects.

About the Authors: John V. Kane is an Assistant Professor at the Center for Global Affairs at New York University and Jason Barabas is a Professor in the Department of Political Science at Stony Brook University, Social & Behavioral Sciences. Their research, “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments (https://doi.org/10.1111/ajps.12396)” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Territorial Representation and the Opinion-Policy Linkage

AJPS Author Summary of “Territorial Representation and the Opinion-Policy Linkage” by Christopher Wratil 

A central promise of democracy is that government will follow the wishes and opinions of the people. A large body of literature in American and comparative politics has demonstrated that in many situations governments react to shifts in mean public opinion and enact policies that are supported by a majority of citizens across the country. However, the idea that policy-makers follow country-wide mean opinion to get re-elected is most straightforward in political systems where policy-makers are elected by ‘the people’ as a whole. Many political systems instead elect policy-makers in sub-national constituencies: from U.S. Senators elected only by the citizens of each of the 50 constituent states to national governments in the European Union elected only by the citizens of each of the 28 member states. How do these arrangements of ‘territorial representation’ influence whose preferences will be reflected in policy output when citizens in different states or territories disagree over policy change?

To answer this question, my research uses the case of the EU, where national governments are major policy-makers accountable only to their national publics, which hold varying opinions on EU policies. I argue that governments will focus on achieving policy change on those issues their national citizens at home care intensely about and hold a uniform view on, and potentially make concessions to other governments on issues where their citizens’ opinion is ambivalent and less salient. The analyses show that measures weighting opinion across member states by how much national citizens care about an issue, rather than by population size, better explain EU-level policy change than mean opinion across the EU. Moreover, when a national public views an issue as particularly salient, the probability that EU policy on this issue will be in line with majority opinion in that member state increases the more clear-cut public opinion on the matter is.
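The contrast between the two aggregation rules described above can be made concrete with a small numerical sketch. All figures below are hypothetical and serve only to illustrate the aggregation logic, not the article's data or estimates:

```python
# Contrast two ways of aggregating national publics' support for a policy:
# (a) weighting each member state's opinion by its population, versus
# (b) weighting it by how salient the issue is to that state's citizens.
# All figures are hypothetical.

states = [
    # (population in millions, share supporting change, salience 0-1)
    (83.0, 0.40, 0.2),   # large state, mildly opposed, low salience
    (10.0, 0.80, 0.9),   # small state, strongly supportive, high salience
    (5.0,  0.75, 0.8),   # small state, supportive, high salience
]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

support = [s[1] for s in states]
pop_weighted = weighted_mean(support, [s[0] for s in states])
salience_weighted = weighted_mean(support, [s[2] for s in states])

print(f"population-weighted support: {pop_weighted:.2f}")
print(f"salience-weighted support:   {salience_weighted:.2f}")
```

In this toy example the policy lacks majority support when opinion is weighted by population, but commands a clear majority when weighted by salience, which is the kind of divergence the argument turns on: the salience-weighted measure can predict policy change even where population-weighted mean opinion would not.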

The results not only highlight that political systems that elect key policy-makers territorially, such as the EU or federal systems, may reallocate influence to citizens in certain parts of the political system, depending on how much they care about an issue and how malapportioned the legislative power of policy-makers is relative to voter populations. They also provide the first quantitative assessment of the responsiveness and congruence of EU-level policy outputs with public opinion on specific issues. The findings challenge the widely held belief that the EU system is largely insulated from public opinion. Instead, they pose the question of how exactly we should normatively assess the quality of democracy in systems that may react not to mean opinion but to opinion in different territories, depending on the distributions of salience, opinion, and power.

About the author: Christopher Wratil is a John F. Kennedy Memorial Fellow at the Minda de Gunzburg Center for European Studies at Harvard University. His research, “Territorial Representation and the Opinion-Policy Linkage: Evidence from the European Union (https://doi.org/10.1111/ajps.12403)” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.



The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.