How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It

The AJPS Workshop article “How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It” (https://doi.org/10.1111/ajps.12357) by Jacob Montgomery, Brendan Nyhan, and Michelle Torres is summarized by the authors below. 

Identifying the causal effect of a treatment on an outcome is not a trivial task. It requires knowledge and measurement of all the factors that cause both the treatment and the outcome, which are known as confounders. Political scientists increasingly rely on experimental studies because they allow researchers to obtain unbiased estimates of causal effects without having to identify all such confounders or engage in complex statistical modeling.

The key characteristic of experiments is that treatment status is randomly assigned. When this is true, the difference between the average outcomes of observations that received the treatment and those that did not is an unbiased estimate of the treatment’s causal effect.

Of course, this description of experiments is idealized. In the real world, things get messy. Some participants ignore stimuli or fail to receive their assigned treatment. Researchers may also wish to understand the mechanism that produced an experimental effect or to rule out alternative explanations.

Unfortunately, researchers seeking to address these types of concerns often resort to common but problematic practices: dropping participants who fail manipulation checks, controlling for variables measured after the treatment (such as potential mediators), or subsetting samples based on variables measured after the treatment is applied, which are known as post-treatment variables. Many applied scholars seem unaware that these post-treatment conditioning practices can ruin experiments and should be avoided.

Though the dangers of post-treatment bias have long been recognized in the fields of statistics, econometrics, and political methodology, there is still significant confusion in the wider discipline about its sources and consequences. In fact, we find that 46.7% of the experimental articles published between 2012 and 2014 in the American Journal of Political Science, American Political Science Review, or Journal of Politics engage in post-treatment conditioning.

As we show in our article, these practices contaminate experimental analyses and distort treatment effect estimates. Post-treatment bias can affect our estimates in any direction and can be of any size. Moreover, there is often no way to provide finite bounds or eliminate it absent strong assumptions that are unlikely to hold in real-world settings. We therefore provide guidance on how to address practical challenges in experimental research without inducing post-treatment bias. Changing our research practices to avoid conditioning on post-treatment variables is one of the most important ways we can improve experimental practice in political science.
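
The intuition behind post-treatment bias is easy to demonstrate with a short simulation. The sketch below is ours, not the authors’ (the effect size, variable names, and the manipulation-check story are illustrative): because treatment is randomized, a simple difference in means recovers the true effect, but dropping participants who fail a check that depends on both the treatment and an unobserved trait destroys the comparability that randomization created.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 100_000, 1.0            # sample size and true treatment effect

t = rng.binomial(1, 0.5, n)      # randomly assigned treatment
u = rng.normal(size=n)           # unobserved trait (e.g., attentiveness)
# post-treatment variable: "passing" a manipulation check depends on
# both the treatment and the unobserved trait
passed = (0.5 * t + u + rng.normal(size=n)) > 0
y = tau * t + u + rng.normal(size=n)   # outcome

def diff_in_means(y, t):
    """Average treated outcome minus average control outcome."""
    return y[t == 1].mean() - y[t == 0].mean()

print(f"full sample:  {diff_in_means(y, t):+.2f}")                  # ~ +1.00, unbiased
print(f"passers only: {diff_in_means(y[passed], t[passed]):+.2f}")  # well below +1.00
```

Among the retained “passers,” treated units needed less of the unobserved trait to clear the threshold than control units did, so the two groups are no longer exchangeable and the estimate drifts away from the true effect.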


About the Authors: Jacob M. Montgomery is an Associate Professor in the Department of Political Science at Washington University in St. Louis; Brendan Nyhan is a Professor in the Ford School of Public Policy at the University of Michigan; and Michelle Torres is an incoming Assistant Professor in the Department of Political Science at Rice University. Their research “How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It” (https://doi.org/10.1111/ajps.12357) appeared in the July 2018 issue of the American Journal of Political Science and is currently available with Free Access.

 

The New AJPS Editorial Team Starts Today! Here Are Our Four Central Goals

By AJPS Co-Editors Kathy Dolan and Jennifer Lawless

Today marks the day! The new editorial team at AJPS is up and running. We’re honored to serve the discipline this way and we’re excited about what the next four years have in store. Before anything else, we want to introduce the new team:

Co-Editors-in-Chief:

Kathleen Dolan, University of Wisconsin Milwaukee
Jennifer Lawless, University of Virginia

Associate Editors:
Elizabeth Cohen, Syracuse University
Rose McDermott, Brown University
Graeme Robertson, University of North Carolina
Jonathan Woon, University of Pittsburgh

You can take a look at the new Editorial Board here. We are thrilled that such an impressive, well-rounded, diverse group of scholars agreed to serve.

Over the course of the coming days and weeks, we’ll use this blog to call your attention to new policies and procedures. (Don’t worry – for the most part, processes won’t change!) But we want to take a few minutes now to highlight four central goals for our term.

STABILITY: AJPS has undergone a lot of transitions in a short period of time. And we’re grateful to the interim team for stepping up on short notice last year and working tirelessly to ensure that the journal would continue to thrive. But now we’ve got a permanent team in place for the next four years and are eager to provide the stability the journal needs.

TRANSPARENCY: We’re committed to managing a process that maintains transparency and academic rigor. We will accomplish this, in part, by maintaining the current system of data verification and the professional and personal conflict of interest policy. We will also require authors of work based on human subjects to confirm institutional IRB approval of their projects at the time a manuscript is submitted for consideration. And we’ll be vigilant about ensuring that authors are authorized to use – at the time of submission – all data included in their manuscripts.

DIVERSITY: As scholars of gender politics, we are well aware of the ways in which top journals do not always represent the diversity of a discipline. In putting together our team of Associate Editors and our Editorial Board, we have intentionally worked to represent race, sex, subfield, rank, institutional, and methodological diversity. It is our hope that the presence and work of these leaders send a message to the discipline that we value all work and the work of all. We want to be as clear as possible, though, that our plan to diversify the works and the scholars represented in the journal in no way compromises our commitment to identifying and publishing the best political science research. Indeed, we believe that attempts at diversification will actually increase the odds of identifying the best and most creative work.

OPEN COMMUNICATION: The journal’s success is contingent on the editorial team, authors, reviewers, and the user-community working together. In that vein, we value open communication. Undoubtedly, you won’t love everything we do. Maybe you’ll be upset, disappointed, or troubled by a decision we make. Perhaps you’ll disagree with a new policy or procedure. Please contact us and let us know. We can likely address any concerns better through direct communication than by discerning what you mean in an angry tweet. We get that those tweets will still happen. But we hope you’ll feel comfortable contacting us directly before your blood begins to boil.

Before we sign off, we want to let you know that we’re aware that, for some people, earlier frustration with the MPSA had bled over into AJPS. We ask for your continued support and patience as the new MPSA leadership addresses issues of concern and seeks to rebuild your trust. We ask that you not take your frustrations out on the journal by refusing to submit or review. A journal can only function if the community is invested in it.

Thanks in advance for tolerating the transition bumps and bruises that are sure to occur. We’ll try to minimize them; we promise.

Kathy and Jen

When Diversity Works: The Effects of Coalition Composition on the Success of Lobbying Coalitions

The forthcoming article “When Diversity Works: The Effects of Coalition Composition on the Success of Lobbying Coalitions” (https://doi.org/10.1111/ajps.12437) by Wiebke Marie Junk is summarized by the author below.

Teaming up with unlike coalition partners in a lobbying campaign comes with collective action costs and reputational risks. At the same time, coalitions between ‘strange bedfellows’, such as business associations and non-governmental organizations (NGOs), can signal to policymakers that different socio-economic interests have found consensus. Such diverse coalitions might therefore pay off despite their costs and, at the same time, play an important role in interest mediation in democratic politics. From both a normative and a practical perspective, it is important to ask whether and when policymakers are actually responsive to diverse coalitions that signal a broad support base, instead of allowing policy capture by a single type of interest.

In this article, I speak to these questions by probing the conditions for the success of lobbying coalitions in attaining their policy preferences on a diverse set of 50 political issues in five European countries (Denmark, Germany, the Netherlands, Sweden and the United Kingdom). I propose the theory that the appeal of a coalition to policymakers depends on its composition, especially the societal and economic interests it represents. In addition, I expect that both the responsiveness of policymakers to coalition diversity and the contributions of coalition participants to the common effort vary depending on the policy issue at stake. I argue that advocacy salience is a crucial conditioning factor in both regards: when an issue is salient in the lobbying community, policymakers will be more wary of the political repercussions of policy outcomes that lack broad support. Furthermore, when outside pressures are high because many advocates compete on an issue, members of the diverse coalition face stronger incentives to overcome cooperation problems and lobby together more efficiently.

The results, based on analysing a set of 122 distinct coalitions in the five countries, support these expectations: on salient issues, more diverse coalitions have significantly higher lobbying success than less diverse coalitions. On issues with low salience, however, coalition diversity is negatively related to lobbying success. These findings show that diversity in the types of interests united in a coalition for a common cause is no panacea: whether diverse coalitions attract higher costs or benefits compared to homogeneous coalitions is highly context dependent.

This pattern is highly consequential for understanding decisions in democratic politics, which might primarily be responsive to signals of diverse support when there are high levels of mobilization on an issue but, perhaps worryingly, not on issues that attract less attention. By focussing on active lobbying coalitions on specific issues, the article addresses long-standing questions on the responsiveness of policymakers to different types of interests in a novel way. It is relevant for scholars of policy processes, interest representation and lobbying success, as well as the general public interested in the effect of lobbying on policymaking. Crucially, the findings provide evidence that policymakers reward diversity in mobilization, yet that differences between issues strongly affect the costs and benefits associated with uniting support from different types of societal groups.

About the Author: Wiebke Marie Junk is a Postdoc in the Department of Political Science at the University of Copenhagen. Her research “When Diversity Works: The Effects of Coalition Composition on the Success of Lobbying Coalitions” (https://doi.org/10.1111/ajps.12437) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland

The forthcoming article “Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland” (https://doi.org/10.1111/ajps.12433) by Jens Hainmueller and Dominik Hangartner is summarized by the authors below.

What happens to ethnic minorities when policy is decided by a majority of voters rather than elected politicians? Do minorities fare worse under direct democracy than under representative democracy? 

We examine this longstanding question in the context of naturalization applications in Switzerland. Immigrants who seek Swiss citizenship must apply at the municipality in which they reside, and municipalities use different institutions to evaluate the naturalization applications. In the early 1990s, over 80% of municipalities used some form of direct democracy. However, in the early 2000s, following a series of landmark rulings by the Swiss Federal Court, most municipalities switched to representative democracy and delegated naturalization decisions to the elected municipality council.

Using panel data from about 1,400 municipalities for the 1991–2009 period, we found that naturalization rates were about the same under both systems during the four years prior to the switch. After municipalities moved from direct to representative democracy, naturalization rates increased by about 50% in the first year, and by more than 100% in the following years. These results demonstrate that, on average, immigrants fare much better if their naturalization requests are decided by elected officials in the municipality council instead of voters in referendums.
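
In design terms, this is a difference-in-differences analysis of a municipality-year panel. For readers who want to see the shape of such an estimation, here is a minimal sketch on simulated data; it is not the authors’ code, the numbers are invented, and it assumes the linearmodels package:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Toy panel: 100 municipalities observed 1991-2009. Switch years, rates,
# and the size of the jump are made up for illustration.
rng = np.random.default_rng(1)
rows = []
for m in range(100):
    switch_year = rng.choice([2002, 2004, 2006, 9999])  # 9999 = never switches
    for year in range(1991, 2010):
        rep_dem = int(year >= switch_year)              # council now decides?
        nat_rate = 2.0 + 0.1 * (year - 1991) + 1.0 * rep_dem + rng.normal(0, 0.5)
        rows.append((m, year, rep_dem, nat_rate))
df = pd.DataFrame(rows, columns=["municipality", "year", "rep_dem", "nat_rate"])
df = df.set_index(["municipality", "year"])

# Two-way fixed effects: the rep_dem coefficient is the within-municipality
# change in naturalization rates after the switch, net of common year shocks.
fit = PanelOLS.from_formula(
    "nat_rate ~ rep_dem + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(fit.params["rep_dem"])   # recovers the simulated jump of ~1.0
```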

What might explain this institutional effect? Voters in referendums face no cost when they arbitrarily reject qualified applicants based on discriminatory preferences. Politicians in the council, by contrast, must formally justify rejections and may be held accountable by judicial review. Consistent with this mechanism, we see that the switch brings a much greater increase in naturalization rates among more marginalized immigrant groups. The switch is also more influential in areas where voters are more xenophobic or where judicial review is more salient.

More broadly, our study provides evidence that, when taking up exactly the same kind of decision, direct democracy harms minorities more often than representative democracy.

About the Authors: Jens Hainmueller is a Professor in the Department of Political Science at Stanford University and Dominik Hangartner is an Associate Professor of Public Policy at ETH Zurich and in the Department of Government at the London School of Economics and Political Science. Their research “Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland” (https://doi.org/10.1111/ajps.12433) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Verification, Verification

By Jan Leighley, AJPS Interim Lead Editor  

After nine months of referring to the AJPS “replication policy,” or (in writing) “replication/verification” policy, I finally had to admit it was time for a change. As lead editor, I had been invited to various panels and workshops where I noticed that the terms “replication”, “verification”, and “reproducibility” were often used interchangeably (sometimes less awkwardly than others), and others where there were intense discussions about what each term meant or required.

Spoiler Alert: I have no intention, in the context of this post, with 10 days left in the editorial term, to even begin to clarify the distinctions between reproducibility, replicability, and verifiability—and how these terms apply to data and materials, in both qualitative and quantitative methods.

A bit of digging in the (surprisingly shallow) archives suggested that “replication” and “verification” had often been used interchangeably (if not redundantly) at AJPS. Not surprising, given the diversity of approaches and terminology used in the natural and social sciences more broadly (see “Terminologies for Reproducible Research” at arXiv.org). But in a 2017 Inside Higher Ed article, “Should Journals Be Responsible for Reproducibility?”, former editor Bill Jacoby mentioned that the AJPS “Replication and Verification Policy” terminology would soon be adjusted to be consistent with that of the National Science Foundation. From the article: “Replication is using the same processes and methodology with new data to produce similar results, while reproducibility is using the same processes and methodology on the same dataset to produce identical results.”

It made sense to me that a change in names had been in the making, in part due to the important role of the AJPS as a leader in the discipline, the social sciences, and possibly the natural sciences on issues of transparency and reproducibility in scientific research. While I had no plans as interim editor to address this issue, the publication of the journal’s first paper relying on (verified) qualitative research methods required that the editorial team review the policy and its procedures. That review led to a consideration of the similarities and differences in verifying quantitative and qualitative papers for publication in the AJPS, and to my decision to finally make the name change “legal” after all this time: the “AJPS Replication & Verification Policy” that we all know and love will now officially move forward as the “AJPS Verification Policy.”

This name change reflects my observation that what we are doing at AJPS currently is verifying what is reported in the papers that we publish, though what we verify differs for qualitative and quantitative approaches. In neither case do we replicate the research of our authors.

Do note that the goals and procedures that we have used to verify the papers we publish will essentially remain the same, subject only to the routine types of changes made as we learn how to improve the process, or to the kinds of adjustments that come with changes of editorial teams. Since the policy was announced in March 2015, the Odum Institute has used the data and materials posted on the AJPS Dataverse to verify the analyses of 195 papers relying on quantitative analyses.

Our experience in verifying qualitative analyses, in contrast, is limited at this point to only one paper, one that the Qualitative Data Repository verified early this spring, although several others are currently under review. As in the case of quantitative papers, the basic procedures and guidelines for verification of qualitative papers have been posted online for several years. We will continue to develop appropriate verification procedures, as we build on our limited experience thus far, and respond to the complexity and heterogeneity of qualitative research methods. Authors of accepted papers (or those who are curious about verification procedures) should check out the guidelines and checklists posted at www.ajps.org to learn more.

For those who care about graphics more than terminology (!), I note that a few changes have been made to the badges awarded to verified articles. I’ve never been a badge person myself, but apparently this is the currency of the realm in open science circles, and some research suggests that awarding these badges makes researchers more likely to follow “open science” practices in their work. AJPS is proud to have our authors’ papers sport these symbols of high standards of transparency in the research process on our Dataverse page and on our published papers. Our badge updates include the addition of the words “peer review” to reflect that our verification policy relies on external reviewers (i.e., Odum, QDR) to document verifiability rather than doing it in-house, the most distinctive aspect of the AJPS Verification Policy. They also include a new “Protected Access” badge that signifies the verification of data available only through application to a protected repository, as identified by the Center for Open Science. As new papers are accepted for publication, you will begin to see more of the new badges, along with revised language that reflects more precisely what those badges represent.

Cheers to replication, verification—and the end of the editorial term!
Jan (Sarah, Mary, Jen, Layna and Rocio)


Citation:
Jacoby, William G., Sophia Lafferty-Hess, and Thu-Mai Christian. 2017. “Should Journals Be Responsible for Reproducibility?” Inside Higher Ed [blog], July 17.

AJPS Author Summary: “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China”

The following AJPS Author Summary of “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” has been provided by Mark Buntaine:

One of the most challenging aspects of governance in China is that policy is made centrally, but implementation is the responsibility of local governments. For the management of pollution — a national priority in recent years — the “implementation gap” that arises when local governments fail to oversee industry and other high-polluting activities has caused a public health crisis of global proportions.

Non-governmental organizations might usefully monitor and reveal the performance of local governments, thereby extending the ability of the center to oversee them. Although NGOs face many restrictions on the activities they can pursue, particularly those that are critical of the state, the central government has more recently encouraged NGOs to monitor local governments in order to improve environmental performance.

In a national-scale field experiment that involved monitoring fifty municipal governments for their compliance with rules to make information about the management of pollution available to the public, we show that NGOs can play an important role in increasing the compliance of local governments with national mandates. When the Institute of Public and Environmental Affairs publicly disclosed a rating of the compliance of 25 treated municipalities with transparency rules, these local governments increased mandated disclosures over two years, as compared to a group of 25 municipalities whose ratings were not published. However, the same rating did not increase public discussion of pollution in treated municipalities, as compared to control municipalities.

This result highlights that NGOs can play an important role in improving authoritarian governance by disclosing the non-compliance of local governments in ways that help the center with oversight. They can play this role as long as they do not increase public discontent. We explain how this is an emerging mode of governance in several authoritarian political systems, where NGOs help improve governance by addressing the information needs of the central state for oversight of local governments.

About the Authors of the Research: Sarah E. Anderson is Associate Professor of Environmental Politics at the University of California, Santa Barbara; Mark T. Buntaine is Assistant Professor of Environmental Institutions and Governance at the University of California, Santa Barbara; Mengdi Liu is a PhD Candidate at Nanjing University; Bing Zhang is Associate Professor at Nanjing University. Their research, “Non‐Governmental Monitoring of Local Governments Increases Compliance with Central Mandates: A National‐Scale Field Experiment in China” (https://doi.org/10.1111/ajps.12428), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

“As the editorial term ends, I’m both looking back and looking forward . . . so, as promised, here’s a post by Allison Carnegie and Austin Carson describing their recent experience with qualitative verification at AJPS . . . and within the next week I’ll be posting an important update to the AJPS “Replication/Verification Policy,” one that will endure past the end of the term on June 1.”
– Jan Leighley, AJPS Interim Editor


Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

By Allison Carnegie of Columbia University and Austin Carson of the University of Chicago

The need for increased transparency for qualitative data has been recognized by political scientists for some time, sparking a lively debate about different ways to accomplish this goal (e.g., Elman, Kapiszewski, and Lupia 2018; Moravcsik 2014). As a result of the Data Access and Research Transparency (DA-RT) initiative and the final report of the Qualitative Transparency Deliberations, many leading journals, including the AJPS, adopted such policies. (Follow this link for a critical view of DA-RT.) While the AJPS has had such a policy in place since 2016, ours was the first article to undergo the formal qualitative verification process. We had a very positive experience with this procedure and want to share how it worked with other scholars who may be considering using qualitative methods as well.

In our paper, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), we argue that states often wish to disclose intelligence about other states’ violations of international rules and laws but are deterred by concerns about revealing the sources and methods used to collect it. However, we theorize that properly equipped international organizations can mitigate these dilemmas by analyzing and acting on sensitive information while protecting it from wide dissemination. We focus on the case of nuclear proliferation and the IAEA in particular. To evaluate our claims, we couple a formal model with a qualitative analysis of each case of nuclear proliferation, finding that strengthening the IAEA’s intelligence protection capabilities led to greater intelligence sharing and fewer suspected nuclear facilities. This analysis required a variety of qualitative materials, including archival documents, expert interviews, and other primary and secondary sources.

To facilitate the verification of the claims we made using these qualitative methods, we first gathered the raw archival material that we used, along with the relevant excerpts from our interviews, and posted them to a Dataverse location. The AJPS next sent our materials to the Qualitative Data Repository (QDR) at Syracuse University, which reviewed our Readme file, verified the frequency counts in our tables, and reviewed each of our evidence-based arguments related to our theory’s mechanisms (though it did not review the cases in our Supplemental Appendix). (More details on this process can be found in the AJPS Verification and Replication policy, along with its Qualitative Checklist.) QDR then generated a report that identified statements it deemed “supported,” “partially supported,” or “not documented/referenced.” For the third type of statement, we were asked to do one of the following: provide a different source, revise the statement, or clarify whether we felt that QDR misunderstood our claim. We were free to address the other two types of statements as we saw fit. While some have questioned the feasibility of this process, in our case it took roughly the same amount of time that verification processes for quantitative data typically do, so it did not delay the publication of our article.

We found the report to be thorough, accurate, and helpful. While we had endeavored to support our claims fully in the original manuscript, we fell short of this goal on several counts, and followed each of QDR’s excellent recommendations. Occasionally, this involved a bit more research, but typically this resulted in us clarifying statements, adding details, or otherwise improving our descriptions of, say, our coding decisions. For example, QDR noted instances in which we made a compound claim but the referenced source only supported one of the claims. In such a case, we added a citation for the other claim as well. We then drafted a memo detailing each change that we made, which QDR then reviewed and responded to within a few days.

Overall, we were very pleased with this process. This was in no small part due to the AJPS editorial team, whose patience and guidance in shepherding us through this procedure were greatly appreciated. As a result, we believe that the verification both improved the quality of evidence and better aligned our claims with our evidence. Moreover, it increased our confidence that we had clearly and accurately communicated with readers. Finally, archiving our data will allow other scholars to access our sources and evaluate our claims for themselves, as well as potentially use these materials for future research. We thus came away with the view that qualitative transparency is achievable in a way that is friendly to researchers and can improve the quality of the work.

About the Authors: Allison Carnegie is Assistant Professor at Columbia University and Austin Carson is Assistant Professor at the University of Chicago. Their research, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science. Carnegie can be found on Twitter at @alliecarnegie and Carson at @carsonaust.

References

Elman, Colin, Diana Kapiszewski, and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29–47.

Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47(1):48–53.

AJPS Author Summary: How Getting the Facts Right Can Fuel Partisan Motivated Reasoning

AJPS Author Summary of “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” by Martin Bisgaard

Are citizens able to get the facts right? Ideally, we want them to. If citizens are to punish or reward incumbent politicians for how real-world conditions have changed, citizens need to know whether these conditions have changed for better or for worse. If economic growth stalls, crime rates plummet or unemployment soars, citizens should take due notice and change their perceptions of reality accordingly. But are citizens able—or willing—to do so?

Considerable scholarly discussion revolves around this question. Decades of research suggest that citizens often bend the same facts in ways that are favorable to their own party. In one of the most discussed examples, citizens identifying with the incumbent party tend to view economic conditions much more favorably than citizens identifying with the opposition do. However, more recent work suggests that citizens are not always oblivious to a changing reality. Across both experimental and observational work, researchers have found that partisans sometimes react “in a similar way to changes in the real economy” (De Vries, Hobolt and Tilley 2017, 115); that they “learn slowly toward common truth” (Hill 2017, 1404); and that they “heed the facts, even when doing so forces them to separate from their ideological attachments” (Wood and Porter 2016, 3). Sometimes, even committed partisans can get the facts right.

In my article, however, I develop and test an argument that is overlooked in current discussion. Although citizens of different partisan groups may sometimes accept the same facts, they may just find other ways of making reality fit with what they want to believe. One such way, I demonstrate, is through the selective allocation of credit and blame. 

I conducted four randomized experiments in the United States and Denmark, exposing participants to either negative or positive news about economic growth. Across these experiments, I found that while partisans updated their perceptions of the national economy in the same way, they attributed responsibility in a highly selective fashion, crediting their own party for success and blaming other actors for failure. Furthermore, I exposed citizens to credible arguments about why (not) the incumbent was responsible, yet this did little to temper partisan motivated reasoning. Rather, respondents dramatically shifted how they viewed the persuasiveness of the same arguments depending on whether macroeconomic circumstances were portrayed as good or bad. Lastly, using open-ended questions in which respondents were not explicitly prompted to consider the responsibility of the President or government, I found that citizens spontaneously mustered up attributional arguments that fit their preferred conclusion. These findings have important implications for the current discussion on fake news and misinformation: correcting people’s factual beliefs may just lead them to find other ways of rationalizing reality.

About the Author: Martin Bisgaard is Assistant Professor in the Department of Political Science at Aarhus University. Bisgaard’s research “How Getting the Facts Right Can Fuel Partisan Motivated Reasoning” (https://doi.org/10.1111/ajps.12432) is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Works cited:

De Vries, Catherine E., Sara B. Hobolt, and James Tilley. 2017. “Facing up to the Facts.” Electoral Studies 51: 115–22.

Hill, Seth J. 2017. “Learning Together Slowly.” Journal of Politics 79(4): 1403–18.

Wood, Thomas, and Ethan Porter. Forthcoming. “The Elusive Backfire Effect.” Political Behavior.

On Manuscript Preparation, Salami-Slicing, and Professional Standards

By Jan Leighley, AJPS Interim Lead Editor  

One of the most challenging (and potentially mind-numbing) tasks that occurs in the inner sanctum of the editorial office is the veritable “technical check.” Even mentioning this work might trigger some unpleasant memories for colleagues who previously served as graduate assistants for AJPS editors over the past several decades. It might also remind those who recently submitted manuscripts of the long checklist of required “to-do’s” that, if not met, delays the long-anticipated start of the peer review process.

But the requirements of manuscript preparation focusing on the mechanics (e.g., double-spacing, complete citations, word limits) are only part of what editors and reviewers depend on authors for. Beyond the detailed items that staff can verify, editors expect authors to follow our “Guidelines for Preparing Manuscripts,” which include not submitting manuscripts that are under review elsewhere, not including material that has already been published elsewhere, and not resubmitting manuscripts that have previously been reviewed at the AJPS. Before submitting your next paper, take a fresh look at the long list of expectations for manuscript preparation and submission at www.ajps.org, as that list of requirements seems to grow longer with every editorial term, and the new editorial team will likely update it as they see fit.

One of the submission requirements that we added a few months ago is: if the paper to be submitted is part of a larger research agenda (e.g., other related papers under review or book manuscripts in development), these details should be identified in the “Author Comments” text box during the manuscript submission process. We added this requirement after several reviewers, on different manuscripts, questioned the original contribution of the papers they were reviewing, as those papers seemed trivially different from other papers associated with a bigger project. Editors (thank you, John Ishiyama) sometimes refer to this as “salami slicing,” the question being: how thin a slice of the big project can stand as its own independent, substantial contribution? Another reason for asking authors to report on bigger, related projects is that such projects, if they involve a large group of scholars in a subfield who are not authors, might compromise the peer review process. Providing these details, along with a comprehensive list of the co-authors of all authors of the submitted manuscript, is incredibly helpful as editors seek to identify appropriate reviewers, including those who might have conflicts of interest with the authors, or those who may base their review on who the author is rather than on the quality of the work.

As a testament to the serious and careful work our reviewers do, over the past few months we have had to respond to problems with a number of submitted manuscripts that reviewers have flagged as violating AJPS’s peer review principles. One reviewer identified a paper that had previously been declined, as he or she had already reviewed it once. Some, but not all, authors have communicated directly with us, asking whether, with substantial revisions to theory, data, and presentation, we would allow a (previously declined) paper to be reviewed as a new manuscript submission. Usually these revised manuscripts do not clear the bar as new submissions. In some sense, if you have to ask, you probably are not going to clear that bar. But we applaud these authors for taking this issue seriously and communicating with us directly. That is the appropriate, and ethical, way to handle the question.

We’ve had similar problems with manuscripts that include text that has been previously published in another (often specialized subfield or non-political science) journal. Reasonable people, I suppose, might disagree about the “seriousness” or ethics of using paragraphs that have been published elsewhere in a paper under review at AJPS (or elsewhere). The usual response is: how many ways are there to describe a variable, or a data set, or a frequency distribution? To avoid violating the “letter of the law,” authors sometimes revert to undergraduate approaches to avoiding plagiarism, changing a word here or there, or substituting different adjectives in every other sentence. The more paragraphs involved, of course, the more squarely the issues of “text recycling” and “self-plagiarism” come into play.

This sloppiness or laziness, however, pales in comparison to the more egregious violations of shared text between submitted and previously published papers that we have had to deal with. Sometimes we have read the same causal story, or seen analytical approaches augmented with one more variable added to a model, a different measure used to test the same series of hypotheses, or three more countries or ten more years added to the data set. At that point we had to determine whether the manuscript violates journal policies or professional publishing standards.

When faced with these issues, we have followed the recommendations of the Committee on Publication Ethics and directly contacted authors for responses to the issues we raise. I realize that junior faculty (especially) are under incredible pressure to produce more and better research in a limited pre-tenure period, and I recognize that (a handful of?) more senior faculty may have incentives to pad the c.v. with additional publications for very different reasons.

While there might be grey areas, I admit to having little sympathy for authors “forgetting” to cite their own work, using “author anonymity” as an excuse for not citing relevant work, or cutting and pasting text from one paper to another. This is not to say that the issues are simple, or that the appropriate editorial response is obvious. But it is discouraging to have to spend editorial time on issues such as these. And as a discipline, we can do better, by explicitly teaching our students principles of openness, honesty, and integrity, and by holding colleagues accountable to them. Read the guidelines. Do the work. Write well. Identify issues before you submit. And don’t try to slide by.

The discipline—its scholarship, publishing outlets, its editorial operations, and professional standards—has certainly changed a lot, and in many good ways since the last time I edited. What has not changed is the critical importance of expecting our students and colleagues to respect shared ethical principles. Our editorial team has made some of those issues more explicit in the submission process, asking about editorial conflicts of interest, IRB approvals, and potential reviewer conflicts of interest. While this requires more work of our authors, we think it is work that is well worth the effort, and we thank our authors and reviewers for helping us maintain the highest of professional standards at the AJPS.

Paths of Recruitment: Rational Social Prospecting in Petition Canvassing

In the following blog post, the authors summarize their American Journal of Political Science article, “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing”:

 When U.S. Representative Gwen Moore (D-WI) prepared her 2008 reelection bid as Wisconsin’s first black member of Congress (representing Milwaukee’s 4th Congressional District), her campaign faced the task of gathering nominating paper signatures for submission to Wisconsin’s Government Accountability Board.  While this might have been an opportunity to travel throughout the largely Democratic district performing campaign outreach and mobilization, the canvassers working on Moore’s behalf took a different approach: they went primarily to her most supportive neighborhoods, which also happened to be the part of the congressional district that Moore had represented in the State Senate until 2004.  Unsurprisingly, canvassers focused their attention on majority-black neighborhoods throughout Northwest Milwaukee.  As time passed, the canvassers relied increasingly on signatures gathered from Moore’s core constituency.

The geographically and socially bounded canvassing carried out by Moore’s campaign is suggestive of a broader trend in how political recruiters search for support, and it holds lessons that expand upon prevailing models of political recruitment. Political recruiters do not only seek out supporters who share common attributes and socioeconomic backgrounds. They also act in response to their geographic milieu, and they update their choices in light of experience.

In our paper “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing,” we develop these insights while elaborating a new model of political recruitment that draws lessons from the experiences of petition canvassers in multiple geographic and historical contexts. We test our model using original data we gathered in the form of geocoded signatory lists from a 2005-2006 anti-Iraq War initiative in Wisconsin and an 1839 antislavery campaign in New York City. Examining the sequence of signatures recorded in these petitions, we have been able to reconstruct canvassers’ recruitment methods – whether they walked the petition door-to-door or gathered signatures at a central location or meeting place – as well as the path they travelled when they did go door-to-door. We find that canvassers were substantially more likely to go walking in search of signatures in neighborhoods where residents’ demographic characteristics were similar to their own. In the case of the middle-class, predominantly white Wisconsin anti-war canvassers, this meant staying in predominantly white and middle-class neighborhoods when going door-to-door. Furthermore, the act of canvassing appeared to follow a rational process in which canvassers displayed sensitivity to their costs. For example, in areas where canvassers struggled to find signatures, they were more likely to quit searching.
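
To give a feel for how signing order can reveal canvassing method, here is a stylized sketch; it is our toy illustration, not the paper’s actual measurement strategy, and the function names, threshold, and coordinates are hypothetical. Short hops between consecutive signatories suggest a canvasser walking door-to-door; scattered ones suggest a central signing location.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between pairs of points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def classify_sheet(lats, lons, walk_hop_km=0.15):
    """Label a signature sheet by the typical distance between
    consecutive signatories, taken in signing order."""
    hops = haversine_km(lats[:-1], lons[:-1], lats[1:], lons[1:])
    return "door-to-door" if np.median(hops) < walk_hop_km else "central location"

# Hypothetical geocoded signatures from one sheet, in signing order.
lats = np.array([43.0600, 43.0610, 43.0615, 43.0620])
lons = np.array([-87.9100, -87.9105, -87.9110, -87.9115])
print(classify_sheet(lats, lons))   # -> door-to-door
```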

Understanding how political recruiters find supporters for a political candidate or cause is crucial because recruitment determines who participates in politics. If canvasser strategies reach only a limited set of recruits, then swathes of Americans may be less likely to participate.  Our paper sheds new light on the campaign dynamics that feed this inequality.

About the Authors: Clayton Nall is an Assistant Professor of Political Science at Stanford University. Benjamin Schneer is an Assistant Professor in the Department of Political Science at Florida State University. Daniel Carpenter is Allie S. Freed Professor of Government in the Faculty of Arts and Sciences, and Director of Social Sciences at the Radcliffe Institute for Advanced Study at Harvard University. Their paper “Paths of Recruitment: Rational Social Prospecting in Petition Canvassing” (/doi/10.1111/ajps.12305) appears in the January 2018 issue of the American Journal of Political Science and will be awarded the AJPS Best Article Award at the 2019 MPSA Conference.

Both this article and the co-winner of the AJPS Best Article Award, “When Common Identities Decrease Trust: An Experimental Study of Partisan Women” (/doi/10.1111/ajps.12366), are currently free to access through April 2019.

 

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.