AJPS Editor’s Blog

Covid-19 has thrown everything off kilter, even academic journals.  Here at AJPS, we have seen two patterns over the past two or three weeks compared with the same period last year: a 27 percent increase in manuscript submissions AND a 54 percent decline in accepted review invitations.  While AJPS reviewers have terrific turnaround time, we realize that people may be delayed in returning reviews this semester. So these figures suggest that manuscript processing might take a bit longer from start to finish for this “Covid-19 cohort.” As a result, we ask authors to be patient with, and grateful for, the colleagues doing this work.

AJPS Editor’s Blog

Covid-19 Update:

The AJPS continues to process manuscripts.  We understand that people have many things going on during this time of crisis, so please know that we are happy to be flexible with deadlines for reviews and manuscript revisions.  If you receive a request to review and can’t accept, we understand. If you can review, but need more than the usual time frame, just ask for an extension. We ask authors for patience and empathy during this time as we continue to work.

AJPS Editor’s Blog

December 1st marked our first six months as co-editors of AJPS. We want to thank our associate editors, board members, authors, and reviewers for a very smooth transition. Now that we have some experience under our belts, we thought we’d offer some thoughts about several topics that might be helpful to reviewers and authors alike. So, on the occasional Tuesday, we’ll post a short entry about some aspect of the journal submission, review, or publication process that we’ve had to address over the course of the last six months. While these issues are probably relevant to most journals, we only speak for ourselves and our expectations for AJPS.

Proposing and Opposing Reviewers

AJPS, like many other journals, gives authors an opportunity to suggest appropriate reviewers as well as to identify scholars they would like us to avoid. Over the course of the last few months, it’s become clear that a few dos and don’ts might be helpful as you complete the proposing and opposing reviewers boxes in Editorial Manager.

Here are three tips when proposing reviewers:

  • Avoid common conflicts of interest – your department colleagues, recent co-authors, members of your dissertation committee, etc. are not appropriate reviewers. In fact, screening for those people is part of our technical check process.
  • Please don’t recommend people who have previously read and commented on the manuscript, whether as a conference discussant or as a more informal collegial favor. Whenever possible, we prefer people to come to a manuscript with a fresh set of eyes.
  • Identify subfield experts, experts in your particular methodology, or younger scholars who might not yet be a part of our reviewer database.

Opposing reviewers is a bit trickier. Without suggesting that academics would ever engage in petty personal squabbles or harbor territorial interests around subject areas, we understand that conflicts between scholars can color a reviewer’s assessment. If you believe that a likely reviewer is not well suited to assess the manuscript objectively – perhaps you’ve had personal disputes, maybe the person behaved unprofessionally on a conference panel, we could go on – then please provide a brief statement as to why we should avoid this scholar. Simply noting that the person comes at your question from a different perspective is insufficient. Indeed, these are the very people who should be reading and evaluating your work.

As editors, we take these suggestions into account, although we do not guarantee that we will follow them. But providing specific reasons for the reviewers you propose and those you’d like us to avoid will help us evaluate the request.

AJPS Editor’s Blog

December 1st marked our first six months as co-editors of AJPS. We want to thank our associate editors, board members, authors, and reviewers for a very smooth transition. Now that we have some experience under our belts, we thought we’d offer some thoughts about several topics that might be helpful to reviewers and authors alike. So on the occasional Tuesday, we’ll post a short entry about some aspect of the journal submission, review, or publication process that we’ve had to address over the course of the last six months. While these issues are probably relevant to most journals, we only speak for ourselves and our expectations for AJPS.

Passing Verification with Flying Colors

For nearly five years now, AJPS has verified quantitative manuscripts. And last year, we verified our first qualitative paper as well. As most of you know, this doesn’t mean that we replicate authors’ analyses. It “simply” means that we reproduce – using the authors’ data, code, and/or interview transcripts – the results presented in the paper. Notice that simply is in quotes. That’s because most papers don’t pass verification by the Odum Institute in the first round. Sometimes the data set won’t open. Sometimes the computing clusters where the authors conducted the analyses can’t “talk” to the cluster where the analyses are being verified. Sometimes the code includes typos or other minor errors. Sometimes it’s all of the above. Regardless of the problem, though, when a paper can’t be verified, it goes back to the author for revision and resubmission. Multiple rounds of back and forth are, unfortunately, not uncommon.

Given that authors now conduct increasingly complex analyses that regularly rely on multiple data sets, we figured that this would be a good time to offer three helpful hints that can make the verification process less cumbersome and more efficient:

  1. Verify your own analyses first. Before uploading your data set and code to Dataverse, run the code and check that you get exactly the same results you report in the paper (see the sketch after this list). Sounds obvious, right? Well, it turns out that most authors actually don’t do this. But those who do have reported a relatively seamless verification process – they’ve caught glitches, typos, and bugs. If you can perform this check on a different computer than the one you typically use, even better. That increases the probability that the code will run when Odum begins the verification.
  2. Keep the code as slim as possible. After a paper has been verified, any change that touches the verified results sends the whole thing through the process again. Let’s say you want to modify a figure’s title and source note when you receive the page proofs. Well, if you generated the title and note in Stata or R and they’re part of the code, then the whole paper needs to go back to Odum. To avoid unnecessary delays, keep in the code only what is needed to produce the actual results, and handle cosmetic elements outside of it. If you can do it in Word, take it out of the code.
  3. Talk to the IT person at your institution. If your analyses have caused you any trouble or required somewhat unusual accommodations – like a super-computer or major technical assistance merging data sets – then prepare a document that summarizes how you solved these problems and the specifications of the computing environment you used. That short memo will go a long way toward helping Odum determine the best way to verify the analyses without having to troubleshoot all of the issues you already confronted.
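
For authors wondering what the first tip looks like in practice, here is a minimal sketch of a self-check script. It is written in Python purely for illustration – the same idea applies to Stata do-files or R scripts – and the file name, model formula, and “reported” coefficient values are hypothetical placeholders, not anything AJPS prescribes.

    # Hypothetical self-check: re-run the main model from the archived data and code,
    # then confirm that the estimates match what the paper reports. The file name,
    # formula, and reported values below are illustrative placeholders only.
    import pandas as pd
    import statsmodels.formula.api as smf

    REPORTED = {"Intercept": 1.07, "treatment": 0.42}  # coefficients as printed in Table 1
    TOLERANCE = 0.005                                  # allow only rounding differences

    df = pd.read_csv("replication_data.csv")           # the exact file you will post to Dataverse
    fit = smf.ols("outcome ~ treatment + covariate", data=df).fit()

    for term, reported in REPORTED.items():
        estimate = fit.params[term]
        assert abs(estimate - reported) < TOLERANCE, (
            f"{term}: code produces {estimate:.3f}, paper reports {reported:.3f}"
        )

    print("All reported coefficients reproduced within rounding tolerance.")

Running a script like this on a clean machine, or at least in a fresh session, is about as close as you can get to previewing what Odum will see when the verification begins.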

We know that the verification process is clunky. But taking these three steps will undoubtedly make it a lot smoother and much faster. It’ll also make it that much easier when you want to verify someone else’s results.

AJPS Editor’s Blog

December 1st marked our first six months as co-editors of AJPS. We want to thank our associate editors, board members, authors, and reviewers for a very smooth transition. Now that we have some experience under our belts, we thought we’d offer some thoughts about several topics that might be helpful to reviewers and authors alike. So, on occasional Tuesdays, we’ll post a short entry about some aspect of the journal submission, review, or publication process that we’ve had to address over the course of the last six months. While these issues are probably relevant to most journals, we only speak for ourselves and our expectations for AJPS.

Appendices, Supporting Information, Supplemental Materials, You Get the Picture

In the old days – like the early 2000s – most articles appeared exclusively in print. Authors struggled with word counts because submissions had to include all relevant material, including appendices. Online supplemental appendices now allow authors to focus the body of the text on telling the main story. Details about survey questions, experimental treatments, alternative model specifications, robustness checks, and additional analyses can be relegated to the appendix. The upside is that articles themselves can be shorter, crisper, and more straightforward, but readers can still find clarifying information in the appendix. The downside is that some authors have taken a “more is better” and “better safe than sorry” approach to appendix compilations. In our six months on the job, we have received 10,000-word manuscripts that are “supported” by 50, 75, even 100-page appendices. Most appendices aren’t this long, but almost every manuscript now comes with significant supplemental materials.

We understand why authors do this. Why not preempt every concern a reviewer might raise, provide every alternative model specification possible, and share every detail about the research design and protocol? The problem is that while appendix space may seem “free” to authors, it comes with a substantial cost to reviewers, who are now often faced with a 10,000-word manuscript and an equally long or longer appendix. Anything that increases the burden on reviewers makes an overworked system even more precarious.

At AJPS, we limit supplemental appendices to 20 pages. We believe that this gives authors sufficient space to provide additional information that might not belong in the body of a manuscript but is still important to the paper’s central contribution. In enforcing this limit, we ask authors to think carefully about what they really need to include in an appendix verbatim versus what they can summarize. If you run three experiments with identical treatments, you only need to offer the script of the treatment once. If you’re providing alternative analyses, you don’t have to include every model you ever ran or think a reviewer might request. If the additional material doesn’t merit some discussion in the main paper, then a more elaborate discussion doesn’t belong in the appendix either. As a general rule, we believe that a manuscript must be able to stand on its own: a reader must be able to understand it and find it convincing even without the appendix. The appendix, in other words, should be a place for “housekeeping” details, not a back door for thousands of words you couldn’t fit in the paper itself.

We know that limiting appendix pages can be anxiety-inducing for authors. That’s probably why so many of you request exemptions. But we’ve found that requiring authors to distinguish between what’s essential and what might be extraneous improves the quality of the manuscript and makes the task of reviewing that much easier and more reasonable – something every author appreciates when wearing the hat of a reviewer.

The New AJPS Editorial Team Starts Today! Here Are Our Four Central Goals

By AJPS Co-Editors Kathy Dolan and Jennifer Lawless

Today marks the day! The new editorial team at AJPS is up and running. We’re honored to serve the discipline this way and we’re excited about what the next four years have in store. Before anything else, we want to introduce the new team:

Co-Editors-in-Chief:

Kathleen Dolan, University of Wisconsin Milwaukee
Jennifer Lawless, University of Virginia

Associate Editors:
Elizabeth Cohen, Syracuse University
Rose McDermott, Brown University
Graeme Robertson, University of North Carolina
Jonathan Woon, University of Pittsburgh

You can take a look at the new Editorial Board here. We are thrilled that such an impressive, well-rounded, diverse group of scholars agreed to serve.

Over the course of the coming days and weeks, we’ll use this blog to call your attention to new policies and procedures. (Don’t worry – for the most part, processes won’t change!) But we want to take a few minutes now to highlight four central goals for our term.

STABILITY: AJPS has undergone a lot of transitions in a short period of time. And we’re grateful to the interim team for stepping up on short notice last year and working tirelessly to ensure that the journal would continue to thrive. But now we’ve got a permanent team in place for the next four years and are eager to provide the stability the journal needs.

TRANSPARENCY: We’re committed to managing a process that maintains transparency and academic rigor. We will accomplish this, in part, by maintaining the current system of data verification and the professional and personal conflict of interest policy. We will also require authors of work based on human subjects to confirm institutional IRB approval of their projects at the time a manuscript is submitted for consideration. And we’ll be vigilant about ensuring that authors are authorized to use – at the time of submission – all data included in their manuscripts.

DIVERSITY: As scholars of gender politics, we are well aware of the ways in which top journals do not always represent the diversity of a discipline. In putting together our team of Associate Editors and our Editorial Board, we have intentionally worked to represent race, sex, subfield, rank, institutional, and methodological diversity. It is our hope that the presence and work of these leaders sends a message to the discipline that we value all work and the work of all.  We want to be as clear as possible, though, that our plan to diversify the works and the scholars represented in the journal in no way compromises our commitment to identifying and publishing the best political science research. Indeed, we believe that attempts at diversification will actually increase the odds of identifying the best and most creative work.

OPEN COMMUNICATION: The journal’s success is contingent on the editorial team, authors, reviewers, and the user-community working together. In that vein, we value open communication. Undoubtedly, you won’t love everything we do. Maybe you’ll be upset, disappointed, or troubled by a decision we make. Perhaps you’ll disagree with a new policy or procedure. Please contact us and let us know. We can likely address any concerns better through direct communication than by discerning what you mean in an angry tweet. We get that those tweets will still happen. But we hope you’ll feel comfortable contacting us directly before your blood begins to boil.

Before we sign off, we want to let you know that we’re aware that, for some people, earlier frustration with the MPSA had bled over into AJPS. We ask for your continued support and patience as the new MPSA leadership addresses issues of concern and seeks to rebuild your trust. We ask that you not take your frustrations out on the journal by refusing to submit or review. A journal can only function if the community is invested in it.

Thanks in advance for tolerating the transition bumps and bruises that are sure to occur. We’ll try to minimize them; we promise.

Kathy and Jen

Verification, Verification

By Jan Leighley, AJPS Interim Lead Editor  

After nine months of referring to the AJPS “replication policy,” or (in writing) “replication/verification” policy, I finally had to admit it was time for a change. As lead editor, I had been invited to various panels and workshops where I noticed that the terms “replication”, “verification”, and “reproducibility” were often used interchangeably (sometimes less awkwardly than others), and others where there were intense discussions about what each term meant or required.

Spoiler Alert: I have no intention, in the context of this post, with 10 days left in the editorial term, to even begin to clarify the distinctions between reproducibility, replicability, and verifiability—and how these terms apply to data and materials, in both qualitative and quantitative methods.

A bit of digging in the (surprisingly shallow) archives suggested that “replication” and “verification” had often been used interchangeably (if not redundantly) at AJPS. Not surprising, given the diversity of approaches and terminology used in the natural and social sciences more broadly (see “Terminologies for Reproducible Research” at arXiv.org). But in a 2017 Inside Higher Education article, “Should Journals Be Responsible for Reproducibility?”, former editor Bill Jacoby mentioned that the AJPS “Replication and Verification Policy” terminology would soon be adjusted to be consistent with that of the National Science Foundation. From the article: “Replication is using the same processes and methodology with new data to produce similar results, while reproducibility is using the same processes and methodology on the same dataset to produce identical results.”

It made sense to me that a change in names had been in the making, in part due to the important role of the AJPS as a leader in the discipline, the social sciences, and possibly the natural sciences on issues of transparency and reproducibility in scientific research. While I had no plans as interim editor to address this issue, the publication of the journal’s first paper relying on (verified) qualitative research methods required that the editorial team review the policy and its procedures. That review led to a consideration of the similarities and differences in verifying quantitative and qualitative papers for publication in the AJPS—and to my decision to finally make the name change “legal” after all this time: the “AJPS Replication & Verification Policy” that we all know and love will now officially move forward in name as the “AJPS Verification Policy.”

This name change reflects my observation that what we are doing at AJPS currently is verifying what is reported in the papers that we publish, though what we verify differs for qualitative and quantitative approaches. In neither case do we replicate the research of our authors.

Do note that the goals and procedures that we have used to verify the papers we publish will essentially remain the same, subject only to the routine types of changes made as we learn how to improve the process, or to the kind of adjustments that come with changes of editorial teams. Since the policy was announced in March 2015, the Odum Institute has used the data and materials posted on the AJPS Dataverse to verify the analyses of 195 quantitative papers.

Our experience in verifying qualitative analyses, in contrast, is limited at this point to only one paper, one that the Qualitative Data Repository verified early this spring, although several others are currently under review. As in the case of quantitative papers, the basic procedures and guidelines for verification of qualitative papers have been posted online for several years. We will continue to develop appropriate verification procedures, as we build on our limited experience thus far, and respond to the complexity and heterogeneity of qualitative research methods. Authors of accepted papers (or those who are curious about verification procedures) should check out the guidelines and checklists posted at www.ajps.org to learn more.

For those who care about graphics more than terminology (!), I note that a few changes have been made to the badges awarded to verified articles. I’ve never been a badge person myself, but apparently this is the currency of the realm in open science circles, and some research suggests that awarding these badges makes researchers more likely to follow “open science” practices in their work. AJPS is proud to have our authors’ papers sport these symbols of high standards of transparency in the research process on our Dataverse page and on our published papers. Our badge updates include the addition of the words “peer review” to reflect that our verification policy relies on external reviewers (i.e., Odum, QDR) to document verifiability rather than doing it in-house – the most distinctive aspect of the AJPS Verification Policy. They also include a new “Protected Access” badge that will signify the verification of data that is available only through application to a protected repository, as identified by the Center for Open Science. As new papers are accepted for publication, you will begin to see more of the new badges, along with revised language that reflects more precisely what those badges represent.

Cheers to replication, verification—and the end of the editorial term!
Jan (Sarah, Mary, Jen, Layna and Rocio)


Citation:
Jacoby, William G., Sophia Lafferty-Hess, and Thu-Mai Christian. 2017. “Should Journals Be Responsible for Reproducibility?” Inside Higher Education [blog], July 17.

Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

“As the editorial term ends, I’m both looking back and looking forward . . . so, as promised, here’s a post by Allison Carnegie and Austin Carson describing their recent experience with qualitative verification at AJPS . . . and within the next week I’ll be posting an important update to the AJPS “Replication/Verification Policy,” one that will endure past the end of the term on June 1.”
– Jan Leighley, AJPS Interim Editor


Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

By Allison Carnegie of Columbia University and Austin Carson of the University of Chicago

The need for increased transparency for qualitative data has been recognized by political scientists for some time, sparking a lively debate about different ways to accomplish this goal (e.g., Elman, Kapiszewski and Lupia 2018; Moravcsik 2014). As a result of the Data Access and Research Transparency (DA-RT) initiative and the final report of the Qualitative Transparency Deliberations, many leading journals, including the AJPS, adopted such policies. (Follow this link for a critical view of DA-RT.) While the AJPS has had such a policy in place since 2016, ours was the first article to undergo the formal qualitative verification process. We had a very positive experience with this procedure and want to share how it worked with other scholars who may be considering using qualitative methods as well.

In our paper, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations (https://doi.org/10.1111/ajps.12426),” we argue that states often wish to disclose intelligence about other states’ violations of international rules and laws, but are deterred by concerns about revealing the sources and methods used to collect it. However, we theorize that properly equipped international organizations can mitigate these dilemmas by analyzing and acting on sensitive information while protecting it from wide dissemination. We focus on the case of nuclear proliferation and the IAEA in particular. To evaluate our claims, we couple a formal model with a qualitative analysis using each case of nuclear proliferation, finding that strengthening the IAEA’s intelligence protection capabilities led to greater intelligence sharing and fewer suspected nuclear facilities. This analysis required a variety of qualitative materials including archival documents, expert interviews, and other primary and secondary sources.

To facilitate the verification of the claims we made using these qualitative methods, we first gathered the raw archival material that we used, along with the relevant excerpts from our interviews, and posted them to a Dataverse location. The AJPS next sent our materials to the Qualitative Data Repository (QDR) at Syracuse University, which reviewed our Readme file, verified the frequency counts in our tables, and reviewed each of our evidence-based arguments related to our theory’s mechanisms (though it did not review the cases in our Supplemental Appendix). (More details on this process can be found in the AJPS Verification and Replication policy, along with its Qualitative Checklist.) QDR then generated a report which identified statements that it deemed “supported,” “partially supported,” or “not documented/referenced.” For the third type of statement, we were asked to do one of the following: provide a different source, revise the statement, or clarify whether we felt that QDR misunderstood our claim. We were free to address the other two types of statements as we saw fit. While some have questioned the feasibility of this process, in our case it took roughly the same amount of time that verification of quantitative data typically does, so it did not delay the publication of our article.

We found the report to be thorough, accurate, and helpful. While we had endeavored to support our claims fully in the original manuscript, we fell short of this goal on several counts, and followed each of QDR’s excellent recommendations. Occasionally, this involved a bit more research, but typically this resulted in us clarifying statements, adding details, or otherwise improving our descriptions of, say, our coding decisions. For example, QDR noted instances in which we made a compound claim but the referenced source only supported one of the claims. In such a case, we added a citation for the other claim as well. We then drafted a memo detailing each change that we made, which QDR then reviewed and responded to within a few days.

Overall, we were very pleased with this process. This was in no small part due to the AJPS editorial team, whose patience and guidance in shepherding us through this procedure were greatly appreciated. As a result, we believe that the verification both improved the quality of evidence and better aligned our claims with our evidence. Moreover, it increased our confidence that we had clearly and accurately communicated with readers. Finally, archiving our data will allow other scholars to access our sources and evaluate our claims for themselves, as well as potentially use these materials for future research. We thus came away with the view that qualitative transparency is achievable in a way that is friendly to researchers and can improve the quality of the work.

About the Authors: Allison Carnegie is Assistant Professor at Columbia University and Austin Carson is Assistant Professor at the University of Chicago. Their research, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations (https://doi.org/10.1111/ajps.12426),” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science. Carnegie can be found on Twitter at @alliecarnegie and Carson at @carsonaust.

References

Elman, Colin, Diana Kapiszewski and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21:29–47.

Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47(1):48–53.

On Manuscript Preparation, Salami-Slicing, and Professional Standards

By Jan Leighley, AJPS Interim Lead Editor  

One of the most challenging (and potentially mind-numbing) tasks that occurs in the inner sanctum of the editorial office is the veritable “technical check.” Even mentioning this work might trigger some unpleasant memories for colleagues who previously served as graduate assistants for AJPS editors over the past several decades. It might also remind those who recently submitted manuscripts of the long checklist of required “to-do’s” that, if not met, delays the long-anticipated start of the peer review process.

But the requirements of manuscript preparation focusing on the mechanics (e.g., double-spacing, complete citations, word limits, etc.) are only part of what editors and reviewers depend on authors for. Beyond the detailed items that staff can verify, editors expect authors to follow our “Guidelines for Preparing Manuscripts,” including not submitting manuscripts that are under review elsewhere, that include material already published elsewhere, or that have previously been reviewed at the AJPS. Before submitting your next paper, take a fresh look at the long list of expectations for manuscript preparation and manuscript submissions at www.ajps.org, as that list of requirements seems to grow ever longer with every editorial term—and the new editorial team will likely update it as they see fit.

One of the submission requirements that we added a few months ago is this: if the paper to be submitted is part of a larger research agenda (e.g., other related papers under review or book manuscripts in development), these details should be identified in the “Author Comments” text box during the manuscript submission process. We added this requirement after several reviewers, on different manuscripts, questioned the original contribution of the papers they were reviewing, as they seemed trivially different from other papers associated with a bigger project. Editors (thank you, John Ishiyama) sometimes refer to this as “salami slicing,” with the question being: how thin a slice of the big project can stand as its own independent, substantial contribution? Another reason for asking authors to report on bigger, related projects is that such projects sometimes involve a large group of scholars in a subfield who are not authors of the submitted manuscript, which can compromise the peer review process. Providing these details, as well as a comprehensive list of the co-authors of all authors of the manuscript being submitted, is incredibly helpful as editors seek to identify appropriate reviewers—including those who might have conflicts of interest with the authors, or those who may base their review on who the author is rather than on the quality of the work.

As a testament to the serious and careful work our reviewers do, over the past few months, we have had to respond to problems with a number of submitted manuscripts that reviewers have suggested violate AJPS’s peer review principles. One reviewer identified a paper that had previously been declined, as he or she had already reviewed it once. Some, but not all, authors have communicated directly with us, asking whether, with substantial revisions to theory, data, and presentation, we would allow a (previously declined) paper to be reviewed as a new manuscript submission. Usually these revised manuscripts do not clear the bar as new submissions. In some senses, if you have to ask, you probably are not going to clear that bar. But we applaud these authors for taking this issue seriously, and communicating with us directly. That is the appropriate, and ethical, way to handle the question.

We’ve had similar problems with manuscripts that include text that has been previously published in another (often specialized subfield or non-political science) journal. Reasonable people, I suppose, might disagree about the “seriousness” or ethics of using paragraphs that have been published elsewhere in a paper under review at AJPS (or elsewhere). The usual response is: how many ways are there to describe a variable, or a data set, or a frequency distribution? To avoid violating the “letter of the law,” authors sometimes revert to undergraduate approaches to avoiding plagiarism, changing a word here or there, or substituting different adjectives in every other sentence. The more paragraphs involved, of course, the more the issues of “text recycling” and “self-plagiarism” come into play.

This sloppiness or laziness, however, pales in contrast to the more egregious violations of shared text between submitted and previously published papers that we have had to deal with. Sometimes we have read the same causal story, or seen analytical approaches augmented with one more variable added to a model, or a different measure used to test a series of the same hypotheses, or three more countries or ten more years added to the data set. At that point we had to determine whether the manuscript violates journal policies or professional publishing standards.

When faced with these issues, we have followed the recommendations of the Committee on Publication Ethics and directly contacted authors for responses to the issues we raise. I realize that junior faculty (especially) are under incredible pressure to produce more and better research in a limited pre-tenure period, and I recognize that (a handful of?) more senior faculty may have some incentives for padding the c.v. with additional publications for very different reasons.

While there might be grey areas, I admit to having little sympathy for authors “forgetting” to cite their own work; using “author anonymity” as an excuse for not citing relevant work; or cutting and pasting text from one paper to another. This is not to say that the issues are simple, or that the appropriate editorial response is obvious. But it is discouraging to have to spend editorial time on issues such as these. And as a discipline, we can do better, by explicitly teaching our students, and holding colleagues accountable, to principles of openness, honesty, and integrity. Read the guidelines. Do the work. Write well. Identify issues before you submit. And don’t try to slide by.

The discipline—its scholarship, publishing outlets, its editorial operations, and professional standards—has certainly changed a lot, and in many good ways since the last time I edited. What has not changed is the critical importance of expecting our students and colleagues to respect shared ethical principles. Our editorial team has made some of those issues more explicit in the submission process, asking about editorial conflicts of interest, IRB approvals, and potential reviewer conflicts of interest. While this requires more work of our authors, we think it is work that is well worth the effort, and we thank our authors and reviewers for helping us maintain the highest of professional standards at the AJPS.

Celebrating Verification, Replication, and Qualitative Research Methods at the AJPS

By Jan Leighley, AJPS Interim Lead Editor

I’ve always recommended to junior faculty that they celebrate each step along the way toward publication: Data collection and analysis—done! Rough draft—done! Final draft—done! Paper submitted for review—done! Revisions in response to first rejection—done! Paper submitted for review a second time—done! In that spirit, I’d like to celebrate one of AJPS’s “firsts” today: the first verification, replication, and publication of a paper using qualitative research methods, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations (https://doi.org/10.1111/ajps.12426)” by Allison Carnegie and Austin Carson.


As with many academic accomplishments, it takes a village—or at least a notable gaggle—to make good things happen. The distant origins of the AJPS replication/verification policy lie in Gary King’s 1995 “Replication, Replication” essay, as well as in the vigorous efforts of Colin Elman, Diana Kapiszewski, and Skip Lupia as part of the DA-RT initiative that began around 2010 (for more details, including others who were involved in these discussions, see https://www.dartstatement.org/events), and of many others in between, especially the editors of the Quarterly Journal of Political Science and Political Analysis. At some point, these journals (and perhaps others?) expected authors to post replication files, but where the files were posted, and whether publication was contingent on posting such files, varied. They also continued the replication discussion that King’s (1995) essay began, as a broader group of political scientists (and editors) started to take notice (Elman, Kapiszewski and Lupia 2018).

In 2012, AJPS editor Rick Wilson required that replication files for all accepted papers be posted to the AJPS Dataverse. Then, in 2015, AJPS editor Bill Jacoby announced the new policy that all papers published in AJPS must first be verified prior to publication. He initially worked most closely with the late Tom Carsey (University of North Carolina; Odum Institute) to develop procedures for external replication of quantitative data analyses. Upon satisfaction of the replication requirement, the published article and associated AJPS Dataverse files are awarded “Open Practices” badges as established by the Center for Open Science. Since then, the staff of the Odum Institute and our authors have worked diligently to assure that each paper meets the highest of research standards; as of last week, we had awarded replication badges to 185 AJPS publications.

In 2016, Jacoby worked with Colin Elman (Syracuse University) and Diana Kapiszewski (Georgetown University), co-directors of the Qualitative Data Repository at Syracuse University, to develop more detailed verification guidelines appropriate for qualitative and multi-method research. This revision of the original verification guidelines acknowledges the diversity of qualitative research traditions and clarifies the differences in the verification process necessitated by the distinct features of quantitative and qualitative analyses, as well as by different types of qualitative work. The policy also discusses confidentiality and human subjects protection in greater detail for both types of analysis.

But it is only in our next issue that we will be publishing our first paper (available online today in Early View with free access) that required verification for qualitative data analysis, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations (https://doi.org/10.1111/ajps.12426)” by Allison Carnegie and Austin Carson.  I’m excited to see the AJPS move the discipline along in this important way! To celebrate our first verification of qualitative work, I’ve asked Allison and Austin to share a summary of their experience, which will be posted here in the next few weeks.

Thanks to the efforts of those named here (and those I’ve missed, with apologies), today the AJPS is well known in academic publishing circles for taking the lead on replication/verification policies—so much so that in May, Sarah Brooks and I will be representing the AJPS at a roundtable on verification/replication policies at the annual meeting of the Council of Science Editors (CSE), an association of journal editors from the natural and medical sciences. AJPS will be the one and only social science journal represented at the meeting, where we will discuss what we have learned and how better to support authors in this process.

If you have experiences you wish to share about the establishment of the replication/verification policy, or questions you wish to raise, feel free to send them to us at ajps@mpsanet.org. And be sure to celebrate another first!

Cited in post:

King, Gary. 1995. “Replication, Replication.” PS: Political Science and Politics 28(3):444–452. https://doi.org/10.2307/420301

Elman, Colin, Diana Kapiszewski and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21:29–47. https://doi.org/10.1146/annurev-polisci-091515-025429

 

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.