Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland

The forthcoming article “Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland” (https://doi.org/10.1111/ajps.12433) by Jens Hainmueller and Dominik Hangartner is summarized by the authors below.

AJPS Author Summary: Does Direct Democracy Hurt Immigrant Minorities?


What happens to ethnic minorities when policy is decided by a majority of voters rather than elected politicians? Do minorities fare worse under direct democracy than under representative democracy? 

We examine this longstanding question in the context of naturalization applications in Switzerland. Immigrants who seek Swiss citizenship must apply at the municipality in which they reside, and municipalities use different institutions to evaluate the naturalization applications. In the early 1990s, over 80% of municipalities used some form of direct democracy. However, in the early 2000s, following a series of landmark rulings by the Swiss Federal Court, most municipalities switched to representative democracy and delegated naturalization decisions to the elected municipality council.

Using panel data from about 1,400 municipalities for the 1991–2009 period, we found that naturalization rates were about the same under both systems during the four years prior to the switch. After municipalities moved from direct to representative democracy, naturalization rates increased by about 50% in the first year, and by more than 100% in the following years. These results demonstrate that, on average, immigrants fare much better if their naturalization requests are decided by elected officials in the municipality council instead of voters in referendums.
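For readers who want a concrete sense of the panel comparison described above, here is a minimal sketch of a two-way fixed-effects setup in Python. It is not the authors' replication code, and all file and column names ("naturalizations.csv", "municipality", "year", "nat_rate", "rep_democracy") are hypothetical placeholders.

```python
# A minimal sketch (not the authors' code) of the kind of panel comparison
# described above. All names are hypothetical: 'municipality', 'year',
# 'nat_rate' (naturalization rate), and 'rep_democracy' (1 once the
# municipality has switched to council decisions).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("naturalizations.csv").dropna(subset=["nat_rate", "rep_democracy"])

# Event time relative to each municipality's switch year, useful for checking
# that rates were similar in the years before the switch.
switch_year = (
    panel.loc[panel["rep_democracy"] == 1]
    .groupby("municipality")["year"].min()
    .rename("switch_year")
)
panel = panel.join(switch_year, on="municipality")
panel["event_time"] = panel["year"] - panel["switch_year"]
print(panel.groupby("event_time")["nat_rate"].mean())

# Two-way fixed effects: municipality and year dummies absorb time-invariant
# local conditions and nationwide shocks; the coefficient on rep_democracy is
# the average post-switch change in naturalization rates.
fit = smf.ols(
    "nat_rate ~ rep_democracy + C(municipality) + C(year)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["municipality"]})
print(fit.params["rep_democracy"])
```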

What might explain this institutional effect? Voters in referendums face no cost when they arbitrarily reject qualified applicants based on discriminatory preferences. Politicians in the council, by contrast, must formally justify rejections and may be held accountable by judicial review. Consistent with this mechanism, we see that the switch brings a much greater increase in naturalization rates among more marginalized immigrant groups. The switch is also more influential in areas where voters are more xenophobic or where judicial review is more salient.

More broadly, our study provides evidence that, when making exactly the same kind of decision, direct democracy harms minorities more often than representative democracy does.

About the Authors: Jens Hainmueller is a Professor in the Department of Political Science at Stanford University, and Dominik Hangartner is an Associate Professor of Public Policy at ETH Zurich and in the Department of Government at the London School of Economics and Political Science. Their research, “Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland” (https://doi.org/10.1111/ajps.12433), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Verification, Verification

By Jan Leighley, AJPS Interim Lead Editor  

After nine months of referring to the AJPS “replication policy,” or (in writing) “replication/verification” policy, I finally had to admit it was time for a change. As lead editor, I had been invited to various panels and workshops where I noticed that the terms “replication,” “verification,” and “reproducibility” were often used interchangeably (sometimes less awkwardly than others), and to others where there were intense discussions about what each term meant or required.

Spoiler Alert: I have no intention, in the context of this post, with 10 days left in the editorial term, to even begin to clarify the distinctions between reproducibility, replicability, and verifiability—and how these terms apply to data and materials, in both qualitative and quantitative methods.

A bit of digging in the (surprisingly shallow) archives suggested that “replication” and “verification” had often been used interchangeably (if not redundantly) at AJPS. Not surprising, given the diversity of approaches and terminology used in the natural and social sciences more broadly (see “Terminologies for Reproducible Research” at arXiv.org). But in a 2017 Inside Higher Education article, “Should Journals Be Responsible for Reproducibility?”, former editor Bill Jacoby mentioned that the AJPS “Replication and Verification Policy” terminology would soon be adjusted to be consistent with that of the National Science Foundation. From the article: “Replication is using the same processes and methodology with new data to produce similar results, while reproducibility is using the same processes and methodology on the same dataset to produce identical results.”

It made sense to me that a change in names had been in the making, in part due to the important role of the AJPS as a leader in the discipline, the social sciences, and possibly the natural sciences on issues of transparency and reproducibility in scientific research. While I had no plans as interim editor to address this issue, the publication of the journal’s first paper relying on (verified) qualitative research methods required that the editorial team review the policy and its procedures. That review led to a consideration of the similarities and differences in verifying quantitative and qualitative papers for publication in the AJPS—and my decision to finally make the name change “legal” after all this time: the “AJPS Replication & Verification Policy” that we all know and love will now move forward officially as the “AJPS Verification Policy.”

This name change reflects my observation that what we are doing at AJPS currently is verifying what is reported in the papers that we publish, though what we verify differs for qualitative and quantitative approaches. In neither case do we replicate the research of our authors.

Do note that the goals and procedures that we have used to verify the papers we publish will essentially remain the same, subject only to the routine types of changes made as we learn how to improve the process, or to the kinds of adjustments that come with changes of editorial teams. Since the policy was announced in March 2015, the Odum Institute has used the data and materials posted on the AJPS Dataverse to verify the analyses of 195 papers relying on quantitative analyses.

Our experience in verifying qualitative analyses, in contrast, is limited at this point to only one paper, one that the Qualitative Data Repository verified early this spring, although several others are currently under review. As in the case of quantitative papers, the basic procedures and guidelines for verification of qualitative papers have been posted online for several years. We will continue to develop appropriate verification procedures, as we build on our limited experience thus far, and respond to the complexity and heterogeneity of qualitative research methods. Authors of accepted papers (or those who are curious about verification procedures) should check out the guidelines and checklists posted at www.ajps.org to learn more.

For those who care about graphics more than terminology (!), I note that a few changes have been made to the badges awarded to verified articles. I’ve never been a badge person myself, but apparently this is the currency of the realm in open science circles, and some research suggests that awarding these badges makes researchers more likely to follow “open science” practices in their work. AJPS is proud to have our authors’ papers sport these symbols of high standards of research transparency on our Dataverse page and on our published papers. Our badge updates include the addition of the words “peer review” to reflect that our verification policy relies on external reviewers (i.e., Odum, QDR) to document verifiability rather than doing it in-house, which is the most distinctive aspect of the AJPS Verification Policy. The updates also include a new “Protected Access” badge that will signify the verification of data that is available only through application to a protected repository, as identified by the Center for Open Science. As new papers are accepted for publication, you will begin to see more of the new badges, along with revised language that reflects more precisely what those badges represent.

Cheers to replication, verification—and the end of the editorial term!
Jan (Sarah, Mary, Jen, Layna and Rocio)


Citation:
Jacoby, William G., Sophia Lafferty-Hess, and Thu-Mai Christian. 2017. “Should Journals Be Responsible for Reproducibility?” Inside Higher Education [blog], July 17.

AJPS Author Summary: “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China”

The following AJPS Author Summary of “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” has been provided by Mark Buntaine:

Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China


One of the most challenging aspects of governance in China is that policy is made centrally, but implementation is the responsibility of local governments. For the management of pollution — a national priority in recent years — the “implementation gap” that arises when local governments fail to oversee industry and other high-polluting activities has caused a public health crisis of global proportions.

Non-governmental organizations might usefully monitor and reveal the performance of local governments, thereby extending the ability of the center to oversee local governments. Although NGOs face many restrictions on the activities they can pursue, particularly activities that are critical of the state, the central government has more recently encouraged NGOs to monitor local governments in order to improve environmental performance.

In a national-scale field experiment that involved monitoring fifty municipal governments for their compliance with rules to make information about the management of pollution available to the public, we show that NGOs can play an important role in increasing the compliance of local governments with national mandates. When the Institute of Public and Environmental Affairs publicly disclosed a rating of the compliance of 25 treated municipalities with rules to be transparent, these local governments increased mandated disclosures over two years, as compared to a group of 25 municipalities not assigned to the publication of their rating. However, the same rating did not increase public discussions of pollution in treated municipalities, as compared to control municipalities.
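As an illustration of the treated-versus-control comparison just described, here is a minimal sketch in Python. It is not the study's replication code, and the file and column names ("municipal_disclosure.csv", "treated", "disclosures_pre", "disclosures_post") are assumptions.

```python
# A hedged sketch of the comparison described above: 25 municipalities whose
# transparency rating was publicly disclosed versus 25 controls. Names are
# hypothetical, not taken from the study's materials.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("municipal_disclosure.csv")  # one row per municipality

# Simple difference in mean post-treatment disclosures.
print(df.groupby("treated")["disclosures_post"].mean())

# The same comparison as a regression, adjusting for baseline disclosures and
# using heteroskedasticity-robust standard errors.
fit = smf.ols("disclosures_post ~ treated + disclosures_pre", data=df).fit(
    cov_type="HC1"
)
print(fit.summary().tables[1])
```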

This result highlights that NGOs can play an important role in improving authoritarian governance by disclosing the non-compliance of local governments in ways that help the center with oversight. They can play this role as long as they do not increase public discontent. We explain how this is an emerging mode of governance in several authoritarian political systems, where NGOs are helping to improve governance by addressing the information needs of the central state for oversight of local governments.

About the Authors of the Research: Sarah E. Anderson is Associate Professor of Environmental Politics at the University of California, Santa Barbara; Mark T. Buntaine is Assistant Professor of Environmental Institutions and Governance at the University of California, Santa Barbara; Mengdi Liu is a PhD Candidate at Nanjing University; Bing Zhang is Associate Professor at Nanjing University. Their research, “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” (https://doi.org/10.1111/ajps.12428), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

“As the editorial term ends, I’m both looking back and looking forward . . . so, as promised, here’s a post by Allison Carnegie and Austin Carson describing their recent experience with qualitative verification at AJPS . . . and within the next week I’ll be posting an important update to the AJPS “Replication/Verification Policy,” one that will endure past the end of the term on June 1.”
– Jan Leighley, AJPS Interim Editor


Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

By Allison Carnegie of Columbia University and Austin Carson of the University of Chicago

The need for increased transparency for qualitative data has been recognized by political scientists for some time, sparking a lively debate about different ways to accomplish this goal (e.g., Elman, Kapiszewski and Lupia 2018; Moravcsik 2014). As a result of the Data Access and Research Transparency (DA-RT) initiative and the final report of the Qualitative Transparency Deliberations, many leading journals, including the AJPS, adopted such policies. (Follow this link for a critical view of DA-RT.) While the AJPS has had such a policy in place since 2016, ours was the first article to undergo the formal qualitative verification process. We had a very positive experience with this procedure, and want to share how it worked with other scholars who may be considering using qualitative methods as well.

In our paper, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), we argue that states often wish to disclose intelligence about other states’ violations of international rules and laws, but are deterred by concerns about revealing the sources and methods used to collect it. However, we theorize that properly equipped international organizations can mitigate these dilemmas by analyzing and acting on sensitive information while protecting it from wide dissemination. We focus on the case of nuclear proliferation and the IAEA in particular. To evaluate our claims, we couple a formal model with a qualitative analysis of each case of nuclear proliferation, finding that strengthening the IAEA’s intelligence protection capabilities led to greater intelligence sharing and fewer suspected nuclear facilities. This analysis required a variety of qualitative materials, including archival documents, expert interviews, and other primary and secondary sources.

To facilitate the verification of the claims we made using these qualitative methods, we first gathered the raw archival material that we used, along with the relevant excerpts from our interviews, and posted them to a Dataverse location. The AJPS next sent our materials to the Qualitative Data Repository (QDR) at Syracuse University, which reviewed our Readme file, verified the frequency counts in our tables, and reviewed each of our evidence-based arguments related to our theory’s mechanisms (though it did not review the cases in our Supplemental Appendix). (More details on this process can be found in the AJPS Verification and Replication policy, along with its Qualitative Checklist.) QDR then generated a report that identified statements it deemed “supported,” “partially supported,” or “not documented/referenced.” For the third type of statement, we were asked to do one of the following: provide a different source, revise the statement, or clarify whether we felt that QDR had misunderstood our claim. We were free to address the other two types of statements as we saw fit. While some have questioned the feasibility of this process, in our case it took roughly the same amount of time that verification of quantitative data typically does, so it did not delay the publication of our article.

We found the report to be thorough, accurate, and helpful. While we had endeavored to support our claims fully in the original manuscript, we fell short of this goal on several counts, and followed each of QDR’s excellent recommendations. Occasionally, this involved a bit more research, but typically it resulted in our clarifying statements, adding details, or otherwise improving our descriptions of, say, our coding decisions. For example, QDR noted instances in which we made a compound claim but the referenced source only supported one of the claims. In such a case, we added a citation for the other claim as well. We then drafted a memo detailing each change that we made, which QDR reviewed and responded to within a few days.

Overall, we were very pleased with this process. This was in no small part due to the AJPS editorial team, whose patience and guidance in shepherding us through this procedure were greatly appreciated. As a result, we believe that the verification both improved the quality of evidence and better aligned our claims with our evidence. Moreover, it increased our confidence that we had clearly and accurately communicated with readers. Finally, archiving our data will allow other scholars to access our sources and evaluate our claims for themselves, as well as potentially use these materials for future research. We thus came away with the view that qualitative transparency is achievable in a way that is friendly to researchers and can improve the quality of the work.

About the Authors: Allison Carnegie is Assistant Professor at Columbia University and Austin Carson is Assistant Professor at the University of Chicago. Their research, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science. Carnegie can be found on Twitter at @alliecarnegie and Carson at @carsonaust.

References

Elman, Colin, Diana Kapiszewski, and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29–47.

Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47(1):48–53.

AJPS Author Summary: How Getting the Facts Right Can Fuel Partisan Motivated Reasoning

AJPS Author Summary of “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” by Martin Bisgaard

Are citizens able to get the facts right? Ideally, we want them to. If citizens are to punish or reward incumbent politicians for how real-world conditions have changed, citizens need to know whether these conditions have changed for better or for worse. If economic growth stalls, crime rates plummet or unemployment soars, citizens should take due notice and change their perceptions of reality accordingly. But are citizens able—or willing—to do so?

Considerable scholarly discussion revolves around this question. Decades of research suggest that citizens often bend the same facts in ways that are favorable to their own party. In one of the most discussed examples, citizens identifying with the incumbent party tend to view economic conditions much more favorably than citizens identifying with the opposition do. However, more recent work suggests that citizens are not always oblivious to a changing reality. Across both experimental and observational work, researchers have found that partisans sometimes react “in a similar way to changes in the real economy” (De Vries, Hobolt and Tilley 2017, 115); that they “learn slowly toward common truth” (Hill 2017, 1404); and that they “heed the facts, even when doing so forces them to separate from their ideological attachments” (Wood and Porter 2016, 3). Sometimes, even committed partisans can get the facts right.

In my article, however, I develop and test an argument that is overlooked in current discussion. Although citizens of different partisan groups may sometimes accept the same facts, they may just find other ways of making reality fit with what they want to believe. One such way, I demonstrate, is through the selective allocation of credit and blame. 

I conducted four randomized experiments in the United States and Denmark, exposing participants to either negative or positive news about economic growth. Across these experiments, I found that while partisans updated their perceptions of the national economy in the same way, they attributed responsibility in a highly selective fashion, crediting their own party for success and blaming other actors for failure. Furthermore, I exposed citizens to credible arguments about why the incumbent was (or was not) responsible, yet it did little to temper partisan motivated reasoning. Rather, respondents dramatically shifted how they viewed the persuasiveness of the same arguments depending on whether macroeconomic circumstances were portrayed as good or bad. Lastly, using open-ended questions in which respondents were not explicitly prompted to consider the responsibility of the President or government, I found that citizens spontaneously mustered attributional arguments that fit their preferred conclusion. These findings have important implications for the current discussion on fake news and misinformation: correcting people’s factual beliefs may just lead them to find other ways of rationalizing reality.

About the Author: Martin Bisgaard is Assistant Professor in the Department of Political Science at Aarhus University. Bisgaard’s research, “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” (https://doi.org/10.1111/ajps.12432), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Works cited:

De Vries, Catherine E., Sara B. Hobolt, and James Tilley. 2017. “Facing up to the Facts.” Electoral Studies 51: 115–22.

Hill, Seth J. 2017. “Learning Together Slowly.” Journal of Politics 79(4): 1403–18.

Wood, Thomas, and Ethan Porter. Forthcoming. “The Elusive Backfire Effect.” Political Behavior.

On Manuscript Preparation, Salami-Slicing, and Professional Standards

By Jan Leighley, AJPS Interim Lead Editor  

One of the most challenging (and potentially mind-numbing) tasks that occurs in the inner sanctum of the editorial office is the veritable “technical check.” Even mentioning this work might trigger some unpleasant memories for colleagues who previously served as graduate assistants for AJPS editors over the past several decades. It might also remind those who recently submitted manuscripts of the long checklist of required “to-do’s” that, if not met, delays the long-anticipated start of the peer review process.

But the requirements of manuscript preparation focusing on the mechanics (e.g., double-spacing, complete citations, word limits, etc.) are only part of what editors and reviewers depend on authors for. Beyond the detailed items that staff can verify, editors expect that authors follow our “Guidelines for Preparing Manuscripts,” including not submitting manuscripts that are under review elsewhere, that include material that has already been published elsewhere, or that have previously been reviewed at the AJPS. Before submitting your next paper, take a fresh look at the long list of expectations for manuscript preparation and manuscript submissions at www.ajps.org, as that list of requirements seems to grow ever longer with every editorial term—and the new editorial team will likely update that list as they see fit.

One of the submission requirements that we added a few months ago is this: if the paper to be submitted is part of a larger research agenda (e.g., other related papers under review or book manuscripts in development), these details should be identified in the “Author Comments” text box during the manuscript submission process. We added this requirement after we had several reviewers, on different manuscripts, question the original contribution of the papers they were reviewing, as they seemed trivially different from other papers associated with a bigger project. Editors (thank you, John Ishiyama) sometimes refer to this as “salami slicing,” the question being: how thin a slice of the big project can stand as its own independent, substantial contribution? Another reason for asking authors to report on bigger, related projects is that these projects, if they involve a large group of scholars in a subfield who are not authors, might compromise the peer review process. Providing these details, as well as a comprehensive list of the co-authors of all authors of the manuscript being submitted, is incredibly helpful as editors seek to identify appropriate reviewers—including those who might have conflicts of interest with the authors, or those who may base their review on who the author is, rather than the quality of the work.

As a testament to the serious and careful work our reviewers do, over the past few months, we have had to respond to problems with a number of submitted manuscripts that reviewers have suggested violate AJPS’s peer review principles. One reviewer identified a paper that had previously been declined, as he or she had already reviewed it once. Some, but not all, authors have communicated directly with us, asking whether, with substantial revisions to theory, data, and presentation, we would allow a (previously declined) paper to be reviewed as a new manuscript submission. Usually these revised manuscripts do not clear the bar as new submissions. In some senses, if you have to ask, you probably are not going to clear that bar. But we applaud these authors for taking this issue seriously, and communicating with us directly. That is the appropriate, and ethical, way to handle the question.

We’ve had similar problems with manuscripts that include text that has been previously published in another (often specialized subfield or non-political science) journal. Reasonable people, I suppose, might disagree about the “seriousness” or ethics of using paragraphs that have been published elsewhere in a paper under review at AJPS (or elsewhere). The usual response is: how many ways are there to describe a variable, or a data set, or a frequency distribution? To avoid a violation of the “letter of the law,” authors sometimes revert to undergraduate approaches to avoiding plagiarism, changing a word here or there, or substituting different adjectives in every other sentence. The more paragraphs, of course, the more the issues of “text recycling” and “self-plagiarism” come into play.

This sloppiness or laziness, however, pales in contrast to the more egregious violations of shared text between submitted and previously published papers that we have had to deal with. Sometimes we have read the same causal story, or seen analytical approaches augmented with one more variable added to a model, a different measure used to test the same series of hypotheses, or three more countries or ten more years added to the data set. At which point we had to determine whether the manuscript violated journal policies or professional publishing standards.

When faced with these issues, we have followed the recommendations of the Committee on Publication Ethics and directly contacted authors for responses to the issues we raise. I realize that junior faculty (especially) are under incredible pressure to produce more and better research in a limited pre-tenure period; and I recognize that (a handful of?) more senior faculty may have some incentives for padding the c.v. with additional publications for very different reasons.

While there might be grey areas, I admit to having little sympathy for authors “forgetting” to cite their own work, using “author anonymity” as an excuse for not citing relevant work, or cutting and pasting text from one paper to another. This is not to say that the issues are simple, or that the appropriate editorial response is obvious. But it is discouraging to have to spend editorial time on issues such as these. And as a discipline, we can do better, by explicitly teaching our students principles of openness, honesty, and integrity, and by holding colleagues accountable to them. Read the guidelines. Do the work. Write well. Identify issues before you submit. And don’t try to slide by.

The discipline—its scholarship, publishing outlets, its editorial operations, and professional standards—has certainly changed a lot, and in many good ways since the last time I edited. What has not changed is the critical importance of expecting our students and colleagues to respect shared ethical principles. Our editorial team has made some of those issues more explicit in the submission process, asking about editorial conflicts of interest, IRB approvals, and potential reviewer conflicts of interest. While this requires more work of our authors, we think it is work that is well worth the effort, and we thank our authors and reviewers for helping us maintain the highest of professional standards at the AJPS.

Paths of Recruitment: Rational Social Prospecting in Petition Canvassing

In the following blog post, the authors summarize their American Journal of Political Science article titled “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing”:

 When U.S. Representative Gwen Moore (D-WI) prepared her 2008 reelection bid as Wisconsin’s first black member of Congress (representing Milwaukee’s 4th Congressional District), her campaign faced the task of gathering nominating paper signatures for submission to Wisconsin’s Government Accountability Board.  While this might have been an opportunity to travel throughout the largely Democratic district performing campaign outreach and mobilization, the canvassers working on Moore’s behalf took a different approach: they went primarily to her most supportive neighborhoods, which also happened to be the part of the congressional district that Moore had represented in the State Senate until 2004.  Unsurprisingly, canvassers focused their attention on majority-black neighborhoods throughout Northwest Milwaukee.  As time passed, the canvassers relied increasingly on signatures gathered from Moore’s core constituency.

The geographically and socially bounded canvassing carried out by Moore’s campaign is suggestive of a broader trend in how political recruiters search for support, and it holds lessons that expand upon prevailing models of political recruitment. Political recruiters do not only seek out supporters who share common attributes and socioeconomic backgrounds. They also act in response to their geographic milieu, and they update their choices in light of experience.

In our paper, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing,” we develop these insights while elaborating a new model of political recruitment that draws lessons from the experiences of petition canvassers in multiple geographic and historical contexts. We test our model using original data we gathered in the form of geocoded signatory lists from a 2005–2006 anti-Iraq War initiative in Wisconsin and an 1839 antislavery campaign in New York City. Examining the sequence of signatures recorded in these petitions, we have been able to reconstruct canvassers’ recruitment methods – whether they walked the petition door-to-door or went to a central location or meeting place to gather signatures – as well as the path they travelled when they did go door-to-door. We find that canvassers were substantially more likely to go walking in search of signatures in neighborhoods where residents’ demographic characteristics were similar to their own. In the case of the middle-class, predominantly white Wisconsin anti-war canvassers, this meant staying in predominantly white and middle-class neighborhoods when going door-to-door. Furthermore, the act of canvassing appeared to follow a rational process in which canvassers displayed sensitivity to their costs. For example, in areas where canvassers struggled to find signatures, they were more likely to quit searching.
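To make the use of ordered, geocoded signature lists more concrete, here is a minimal illustration in Python of one way such sequences could be summarized. It is not the authors' actual reconstruction procedure, and the file and column names ("petition_signatures.csv", "sheet_id", "signature_order", "lat", "lon") are hypothetical.

```python
# A minimal illustration (not the authors' procedure) of summarizing a
# geocoded, ordered signature list: the straight-line distance between
# consecutive signatures on a petition sheet gives a rough sense of whether
# signatures were gathered door-to-door (many short hops) or at a single
# central location (tightly clustered points).
import math
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

sigs = pd.read_csv("petition_signatures.csv")  # hypothetical geocoded list
sigs = sigs.sort_values(["sheet_id", "signature_order"])

# Distance from each signature to the previous one on the same sheet.
sigs["prev_lat"] = sigs.groupby("sheet_id")["lat"].shift()
sigs["prev_lon"] = sigs.groupby("sheet_id")["lon"].shift()
sigs["hop_km"] = sigs.apply(
    lambda row: haversine_km(row["prev_lat"], row["prev_lon"], row["lat"], row["lon"])
    if pd.notna(row["prev_lat"]) else float("nan"),
    axis=1,
)

# Per-sheet summaries that could feed a door-to-door vs. central-location
# classification and a measure of how far canvassers travelled.
print(sigs.groupby("sheet_id")["hop_km"].agg(["median", "sum"]))
```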

Understanding how political recruiters find supporters for a political candidate or cause is crucial because recruitment determines who participates in politics. If canvasser strategies reach only a limited set of recruits, then swathes of Americans may be less likely to participate.  Our paper sheds new light on the campaign dynamics that feed this inequality.

About the Authors: Clayton Nall is an Assistant Professor of Political Science at Stanford University. Benjamin Schneer is an Assistant Professor in the Department of Political Science at Florida State University. Daniel Carpenter is Allie S. Freed Professor of Government in the Faculty of Arts and Sciences, and Director of Social Sciences at the Radcliffe Institute for Advanced Study, at Harvard University. Their paper, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (https://doi.org/10.1111/ajps.12305), appears in the January 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article, “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (https://doi.org/10.1111/ajps.12366), are currently free to access through April 2019.

When Common Identities Decrease Trust: An Experimental Study of Partisan Women


AJPS Author Summary by Samara Klar of the University of Arizona

With a record number of women running for the 2020 Democratic nomination, questions will no doubt arise as to the likelihood that a Democratic woman might entice female Republican voters to support a woman from the opposing party. Each time that a woman has run for national office (for example, Sarah Palin as a vice presidential candidate in 2008 or Hillary Clinton as a presidential candidate in 2008 and 2016), political spectators asked: Will women voters “cross the aisle” to vote for a woman?

Yet, each time, we have seen no evidence that women from either party are willing to do so. Indeed, there is very little evidence at all that women from the mass public form inter-party alliances based on their shared gender identity. This might seem surprising – particularly to those familiar with the Common In-Group Identity Model.

The Common In-Group Identity Model argues that an overarching identity (in this case, being a woman) can unite two competing groups (in this case, Democrats and Republicans). Social psychologists demonstrated these effects in an array of “minimal group settings” and others have found that it appears to hold true in “real-world” settings as well. Why, then, are Democratic women and Republican women reluctant to support one another based on their shared gender identity? I set out to investigate the conditions under which the Common In-Group Identity Model holds and whether it might (or might not) apply to American women who identify as Democrats and Republicans.

A key condition of the Common In-Group Identity Model is that the members of both rival groups must hold a common understanding of what it means to identify with their overarching shared identity. Without this, it simply is not a shared identity at all.

Based on existing work, I expected that Democratic women and Republican women, in fact, hold very different views of what it means to be a woman. To test this, I asked a bipartisan sample of 3000 American women how well the word feminist describes them on a scale ranging from 1 (Extremely well) to 5 (Not at all). Democratic women overwhelmingly identify themselves as feminists: their mean response was 2.47 (somewhere in between Very Well [2] and Somewhat Well [3]). Republican women, on the other hand, do not view themselves as feminists: their mean response was a 3.8 (closer to Not Very Well [4]).

I also asked these women to describe how a typical Republican woman and a typical Democratic woman might view feminism. Women from both sides of the aisle are astonishingly accurate in their estimates of how co-partisan and opposing-partisan women feel about this issue. There is a clear and accurate perception that Democratic women think of themselves as feminists and that Republican women do not. In sum, being “a woman” is not an identity group that Democratic and Republican women can agree on – and they are well aware of this divide.

If Democratic women and Republican women do not share a common understanding of what it means to be a woman, then their gender should not unite them. In fact, as scholars have shown in other settings, they should actually be driven further apart when their gender becomes salient. This is what I set out to test.

With a survey experiment, I randomly assigned a large sample of women to read a vignette about either a woman or a man, who identifies with either their own party or the other party, and who supports either an issue that makes gender salient or one that does not. I then asked respondents to evaluate this fictitious character.

My results show that gender does not unite women from opposing parties but, in fact, increases their mutual distrust when gender is salient. To be more specific, I find that – when gender is salient – women hold more negative views of women from the opposing party than they do of men from the opposing party. When gender was not salient, however, women no longer penalized women more than they penalized men for identifying with the opposing party.
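For readers curious about how such a factorial vignette experiment might be analyzed, here is a minimal sketch in Python. It is not the article's replication code, and the variable names ("trust", "char_female", "gender_salient", "out_party") are hypothetical.

```python
# A hedged sketch of the kind of comparison described above: among respondents
# evaluating an out-party character, does the penalty for a female (vs. male)
# character depend on whether the vignette made gender salient?
import pandas as pd
import statsmodels.formula.api as smf

exp = pd.read_csv("vignette_experiment.csv")  # hypothetical: one row per
# respondent, with 'trust' (evaluation of the character), 'char_female'
# (1 = female character), 'gender_salient' (1 = gender-salient issue), and
# 'out_party' (1 = character from the respondent's opposing party).

out_party = exp[exp["out_party"] == 1]
fit = smf.ols("trust ~ char_female * gender_salient", data=out_party).fit(
    cov_type="HC1"
)
# The interaction term captures the extra penalty for a female out-party
# character when gender is salient, relative to when it is not.
print(fit.params[["char_female", "char_female:gender_salient"]])
```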

My work helps us to understand why we do not tend to find political solidarity among women who identify with opposing parties: not only do they disagree about politics but they also tend to disagree about their gender identity. Making gender salient thus exacerbates the divide.

I hope this study also helps to add nuance to our collective understanding of identity politics. Demographic identity groups are not homogeneous voting blocs. This lesson is not exclusive to women but should be taken into account when we think through the political behavior of any subset of the American public. To an outsider, it might appear that a group of individuals objectively shares a common identity, but if they do not hold a common understanding of what that identity means to them, then they do not share an identity at all. If we wish to understand how identities influence political attitudes and behaviors, we cannot neglect the nuances that exist within identity groups.

About the Author: Samara Klar of the University of Arizona has authored the article “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (https://doi.org/10.1111/ajps.12366), which was published in the July 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winning AJPS Best Article Award article, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (https://doi.org/10.1111/ajps.12305), are currently free to access through April 2019.

Celebrating Verification, Replication, and Qualitative Research Methods at the AJPS

By Jan Leighley, AJPS Interim Lead Editor

I’ve always recommended to junior faculty that they celebrate each step along the way toward publication: Data collection and analysis—done! Rough draft—done! Final draft—done! Paper submitted for review—done! Revisions in response to first rejection—done! Paper submitted for review a second time—done! In that spirit, I’d like to celebrate one of AJPS’s “firsts” today: the first verification, replication, and publication of a paper using qualitative research methods, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426) by Allison Carnegie and Austin Carson.


As with many academic accomplishments, it takes a village—or at least a notable gaggle—to make good things happen. The distant origins of the AJPS replication/verification policy lie in Gary King’s 1995 “Replication, Replication” essay, as well as in the vigorous efforts of Colin Elman, Diana Kapiszewski, and Skip Lupia as part of the DA-RT initiative that began around 2010 (for more details, including others who were involved in these discussions, see https://www.dartstatement.org/events), and of many others in between, especially the editors of the Quarterly Journal of Political Science and Political Analysis. At some point, these journals (and perhaps others?) expected authors to post replication files, but where the files were posted, and whether publication was contingent on posting such files, varied. They also continued the replication discussion that King’s (1995) essay began, as a broader group of political scientists (and editors) started to take notice (Elman, Kapiszewski and Lupia 2018).

In 2012, AJPS editor Rick Wilson required that replication files for all accepted papers be posted to the AJPS Dataverse. Then, in 2015, AJPS editor Bill Jacoby announced the new policy that all papers published in AJPS must first be verified prior to publication. He initially worked most closely with the late Tom Carsey (University of North Carolina; Odum Institute) to develop procedures for external replication of quantitative data analyses. Upon satisfaction of the replication requirement, the published article and associated AJPS Dataverse files are awarded “Open Practices” badges as established by the Center for Open Science. Since then, the staff of the Odum Institute and our authors have worked diligently to assure that each paper meets the highest of research standards; as of last week, we had awarded replication badges to 185 AJPS publications.

In 2016, Jacoby worked with Colin Elman (Syracuse University) and Diana Kapiszewski (Georgetown University), co-directors of the Qualitative Data Repository at Syracuse University, to develop more detailed verification guidelines appropriate for qualitative and multi-method research. This revision of the original verification guidelines acknowledges the diversity of qualitative research traditions and clarifies differences in the verification process necessitated by the distinct features of quantitative and qualitative analyses and by different types of qualitative work. The policy also discusses confidentiality and human subjects protection in greater detail for both types of analysis.

But it is only in our next issue that we will be publishing our first paper (available online today in Early View with free access) that required verification for qualitative data analysis, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426) by Allison Carnegie and Austin Carson. I’m excited to see the AJPS move the discipline along in this important way! To celebrate our first verification of qualitative work, I’ve asked Allison and Austin to share a summary of their experience, which will be posted here in the next few weeks.

As a result of the efforts of those named here (and those I’ve missed, with apologies), today the AJPS is well known in academic publishing circles as taking the lead on replication/verification policies—so much so that in May, Sarah Brooks and I will be representing the AJPS at a roundtable on verification/replication policies at the annual meeting of the Council of Science Editors (CSE), an association of journal editors from the natural and medical sciences. AJPS will be the one and only social science journal represented at the meeting, where we will discuss what we have learned and how better to support authors in this process.

If you have experiences you wish to share about the establishment of the replication/verification policy, or questions you wish to raise, feel free to send them to us at ajps@mpsanet.org. And be sure to celebrate another first!

Cited in post:

King, Gary. 1995. “Replication, Replication.” PS: Political Science and Politics 28(3): 444–452. https://doi.org/10.2307/420301

Elman, Colin, Diana Kapiszewski, and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29–47. https://doi.org/10.1146/annurev-polisci-091515-025429

AJPS Author Summary: Are Biased Media Bad for Democracy?

AJPS Author Summary of “Are Biased Media Bad for Democracy?” by Stephane Wolton

“[N]ews media bias is real. It reduces the quality of journalism, and it fosters distrust among readers and viewers. This is bad for democracy.” (Timothy Carney, New York Times, 2015). It is indeed commonly accepted that media outlets (e.g. newspapers, radio stations, television channels) are ideologically oriented and attempt to manipulate their audience to improve the reputation or electoral chances of their preferred politicians. But if this holds true, aside from the likely detrimental effects of media bias on the quality of journalism, is this bias inevitably bad for democracy?

In my paper, I study a game-theoretical framework to provide one answer to this question. I use a political agency model in which the electorate faces the problem of both selecting and controlling polarized politicians. I focus on the actions of office-holders, the information available to voters, and the resulting welfare under four different media environments. In the first, a representative voter obtains information from a media outlet that exactly matches her policy preference. I use the term “unbiased” to describe this environment. In the second, the voter receives news reports from two biased media outlets, one on the right and one on the left of the policy spectrum. I define this environment as “balanced” (as in most states in the United States). In the last two cases, the voter’s information comes either from a single right-wing outlet (a “right-wing biased environment,” as in Italy after Berlusconi’s 1994 electoral victory) or from a single left-wing outlet (a “left-wing biased environment,” as in Venezuela after the closing down of RCTV in May 2007, in the early years of the Chavez regime).

Two important findings emerge from comparing equilibrium behaviors across these media environments. Not surprisingly, and in line with a large literature, the voter is always less informed with biased news providers (whether the environment is balanced or not) than with an unbiased media outlet. If officeholders’ behavior were to be kept constant, the electorate would necessarily be hurt by biased media. However, my analysis highlights that everything else is not constant across media environments. In many circumstances, politicians behave differently with biased rather than unbiased news providers. Taking into account these equilibrium effects, my paper uncovers conditions under which voters are better off with biased rather than unbiased media. Therefore, the often advanced claim that media bias is bad for democracy needs to be qualified.

My work also holds some implications for empirical analyses of biased media. To measure the impact of media bias, one needs to compare an outcome of interest (say the re-election rate of incumbent politicians) under an unbiased and under a biased media environment. However, the problem researchers face is that they rarely observe a situation with unbiased outlets and they end up using changes in the media environment from balanced to right- or left-wing biased to evaluate the consequences of media bias. My paper shows that (i) unbiased and biased news providers do not provide the same information to voters and (ii) office-holders can behave differently under biased and unbiased news outlets. As a result, estimates obtained using a balanced environment as reference point can over- or under-estimate the impact of biased media.

Returning to the quote used at the beginning of this post, my paper shows that Carney is only partially correct. Media bias does reduce the quality of journalism and foster distrust. However, it is not necessarily bad for democracy. Further, my work suggests that while existing empirical studies of the media measure important quantities, they may not tell us much about the impact of biased news providers vis-a-vis unbiased outlets.

About the Author: Stephane Wolton is an Associate Professor in Political Science in the Department of Government at the London School of Economics. Wolton’s research, “Are Biased Media Bad for Democracy?” (https://doi.org/10.1111/ajps.12424), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

 

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.