It Takes a Submission: Gendered Patterns in the Pages of AJPS

Kathleen Dolan and Jennifer L. Lawless

When we became editors of the American Journal of Political Science on June 1, 2019, we stated that one of our goals was to understand the patterns of submission and publication by authors from underrepresented groups. We begin that examination by presenting data on submission and publication rates of women and men. We focus on manuscripts submitted to the journal between January 1, 2017 and October 31, 2019. This time period spans three different editors/editorial teams: Bill Jacoby served as editor from January 2017 until April 2018; Jan Leighley from April 2018 through May 2019; and we have been co-editors since June 2019. Although our editorial team was in place for only the last five months of this period, we wanted to examine a long enough time span to get a good sense of any gendered patterns that exist in the pages of AJPS.

We view these data as contributing to recent conversations about the representation of women as authors and as cited authorities in political science journals. Michelle Dion and Sara Mitchell, for example, recently published a piece in PS about the citation gap in political science articles.[1] They compare the gender composition of membership in several APSA organized sections with the gender balance in citations published by each section’s official journal. Dawn Teele and Kathleen Thelen document that the percentage of female authors in 10 political science journals is lower than women’s share of the overall profession.[2]

We take a different approach. Because we have AJPS submission data, we can examine the link between gender gaps in submission rates and subsequent publication rates. After all, women and men can be under- or over-represented in the pool of published articles only in proportion to their presence in the pool of submitted manuscripts. We believe that attention to the appropriate denominator offers a clearer picture of authorship patterns.

Submissions
During the period under examination, 4,916 authors submitted manuscripts and received final decisions from AJPS. Women accounted for 1,210 (or 25%) of the submitting authors.

At the manuscript level, the gender disparity was less substantial. Of the 2,672 manuscripts on which an editor issued a final decision, 945 (or 35%) had at least one female author.

The lion’s share of the manuscripts that included a female author, however, also included at least one male co-author (see Figure 1). Indeed, we processed four and a half times as many manuscripts written only by a man or men (65%) as we did those authored only by a woman or women (14%).

Homing in on the 1,238 solo-authored manuscripts, we find that 962 came from men. Women, in other words, accounted for just 22% of the solo-authored submissions we received.

Figure 1. Composition of Authors for Manuscripts Submitted to AJPS
Figure 1
Notes: Bars represent the percentage of manuscripts that fall into each category. The analysis is based on the 2,672 manuscripts for which we issued a final decision (accept or decline) from January 2017 – October 2019.

Decisions
Whereas striking gender disparities emerge during the submission process, we find no significant gender differences when it comes to manuscript decisions. During this time period, we accepted roughly 6% of submitted manuscripts. Those submissions included a total of 307 authors, 75 of whom were women. Thus, women comprised 24% of accepted authors – this is statistically indistinguishable from the 25% of female submitting authors.[3] Notice, too, that our rates of acceptance are consistent across the composition of authors. Regardless of how many women or men author a piece, only about 6% are accepted for publication. None of the differences across categories in Figure 2 is statistically significant.
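
For readers who want to check the arithmetic behind that comparison, the counts reported above are all that is needed. The snippet below is purely illustrative (footnote 3 does not report which test underlies the comparison, and accepted authors are of course a subset of submitting authors, so treat it as a rough check rather than the definitive procedure); it simply confirms that 75 of 307 and 1,210 of 4,916 are statistically indistinguishable shares.

  # Illustrative only: a two-sample test of equal proportions using the counts
  # reported above (75 female authors of 307 accepted authors vs. 1,210 female
  # authors of 4,916 submitting authors). R's built-in prop.test() is used here;
  # it treats the two groups as independent samples, which is an approximation.
  prop.test(x = c(75, 1210), n = c(307, 4916))
  # The p-value is far above conventional significance thresholds, consistent
  # with the claim that the 24% and 25% shares are indistinguishable.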

Given the comparable acceptance rates across author composition, it’s no surprise that the share of published articles with at least one female author is roughly the same as the share of submitted manuscripts with at least one female author (35%). Of course, given that most of the manuscripts submitted by women also include at least one male co-author, 84% of the articles published during this time had at least one male author.

Figure 2. Manuscript Acceptance Rates at AJPS, by Composition of Authors
Figure 2
Notes: Bars represent the acceptance rate for manuscripts in each category. The analysis is based on the 2,672 manuscripts for which we issued a final decision (accept or decline) from January 2017 – October 2019.

A COVID-19 Caveat
Over the course of the last several weeks, submissions at AJPS have picked up substantially compared to the same period last year. It’s impossible to know whether the uptick reflects MPSA conference papers that are no longer awaiting conference feedback, more time at home for authors, different teaching commitments, or something else. But we examined the 108 manuscripts we received from March 15th through April 19th to assess whether the patterns from the larger data set have been exacerbated amid COVID-19. After all, women are still more likely than men – even among high-level professionals – to shoulder the majority of the household labor and childcare or elder care responsibilities. It wouldn’t be surprising if the gender gap in manuscript submissions grew during this time.

The data reveal that it hasn’t. The 108 manuscripts we processed in this month-long period included 54 female and 108 male authors. So, women comprised 33% of submitting authors, which is actually somewhat higher than usual (remember that women comprised 25% of the authors in the 2017 – 2019 data set).

At the manuscript level, 41 of the 108 papers had at least one female author. That’s 38% of the total, which is again a slightly greater share than the 35% of manuscripts with at least one female author in the larger data set.

This doesn’t mean that COVID-19 hasn’t taken a toll on female authors, though. Women submitted only 8 of the 46 solo-authored papers during this time. Their share of 17% is down from 22% in the larger data set, which is a substantial percentage change. Even if women’s overall submission rates are up, they seem to have less time than men do to submit solo-authored work amid the crisis.

Conclusions
In examining the gendered patterns in submission and publication at AJPS over the past three years, we see two different realities. In terms of “supply,” there is a large disparity: women constitute just one-quarter of submitting authors, and their names appear on only one-third of submitted manuscripts. But when it comes to “demand,” there is no evidence of clear bias in the review or publication process. Women’s representation on the printed page is indistinguishable from their representation in the submission pool. As long as women remain less likely than men to submit manuscripts to AJPS, the gender disparities in publication rates will persist.

Given these findings, and the work we do, we would be remiss not to draw a comparison to the political arena. We’ve known for decades now that when women run for office, they do as well as men. They win at equal rates, raise as much money, and even garner similar media coverage. Yet women remain significantly under-represented in U.S. political institutions. Why? Because they look at a political arena where they are significantly under-represented and assume (rationally) that widespread bias and systematic discrimination are keeping them out. Because they think that in order to be qualified to run for office, they need to be twice as good to get half as far. Because they’re less likely than men to receive encouragement to throw their hats into the ring.

But we also know that when women are encouraged to run for office, they’re more likely to think they’re qualified and they’re more likely to give it a shot.

So as a discipline, it’s incumbent upon us to encourage female scholars to submit their work to AJPS and other top journals. It’s our responsibility to let them know that their work is just as competent and just as important as that of their male colleagues. We are not so naïve as to believe that encouragement is all it takes to close the gender gap in submission rates. That women and men are still not similarly situated with respect to important resources and constraints (tenure-track jobs, research support, family obligations) poses obstacles that encouragement alone cannot surmount. But while the discipline continues to address these gaps, we can change the face of tables of contents by dispelling the myth that women do not succeed when they submit their work.

[1] Dion, Michelle L. and Sara M. Mitchell. 2020. “How Many Citations to Women Is ‘Enough?’ Estimates of Gender Representation in Political Science.” PS: Political Science & Politics 53(1):107-13.

[2] Teele, Dawn Langan and Kathleen Thelen. 2017. “Gender in the Journals: Publication Patterns in Political Science.” PS: Political Science & Politics 50(2):433-47.

[3] These results are consistent with a 2018 symposium on gender in the American Political Science Association’s journals. See “Gender in the Journals, Continued: Evidence from Five Political Science Journals.” PS: Political Science & Politics 51(4).

AJPS Editor’s Blog

Covid-19 has thrown everything off kilter, even academic journals. Here at AJPS, we have seen two patterns over the past two or three weeks, relative to the same period last year: a 27 percent increase in manuscript submissions and a 54 percent decline in accepted review invitations. While AJPS reviewers have terrific turnaround times, we realize that people may be delayed in returning reviews this semester. These figures suggest that manuscript processing might take a bit longer from start to finish for this “Covid-19 cohort.” As a result, we ask authors to be patient with, and grateful to, the colleagues doing this work.

AJPS Editor’s Blog

Covid-19 Update:

The AJPS continues to process manuscripts.  We understand that people have many things going on during this time of crisis, so please know that we are happy to be flexible with deadlines for reviews and manuscript revisions.  If you receive a request to review and can’t accept, we understand. If you can review, but need more than the usual time frame, just ask for an extension. We ask authors for patience and empathy during this time as we continue to work.

AJPS Editor’s Blog

December 1st marked our first six months as co-editors of AJPS. We want to thank our associate editors, board members, authors, and reviewers for a very smooth transition. Now that we have some experience under our belts, we thought we’d offer some thoughts about several topics that might be helpful to reviewers and authors alike. So, on the occasional Tuesday, we’ll post a short entry about some aspect of the journal submission, review, or publication process that we’ve had to address over the course of the last six months. While these issues are probably relevant to most journals, we only speak for ourselves and our expectations for AJPS.

Proposing and Opposing Reviewers

AJPS, like many other journals, gives authors an opportunity to suggest appropriate reviewers, as well as to identify scholars they would like us to avoid. Over the course of the last few months, it’s become clear that a few dos and don’ts might be helpful as you complete the proposing and opposing reviewers boxes in Editorial Manager.

Here are three tips when proposing reviewers:

  • Avoid common conflicts of interest – your department colleagues, recent co-authors, members of your dissertation committee, etc. are not appropriate reviewers. In fact, screening for those people is part of our technical check process.
  • Please don’t recommend people who have previously read and commented on the manuscript, whether as a conference discussant or as a more informal collegial favor. Whenever possible, we prefer people to come to a manuscript with a fresh set of eyes.
  • Identify subfield experts, experts in your particular methodology, or younger scholars who might not yet be a part of our reviewer database.

Opposing reviewers is a bit trickier. Without suggesting that academics ever engage in petty personal squabbles or have territorial interests around subject areas, we understand that conflicts between scholars can color a reviewer’s assessment. If you believe that a likely reviewer is not well-suited to assess the manuscript objectively – perhaps you’ve had personal disputes, maybe the person behaved unprofessionally on a conference panel, we could go on – then please provide a brief statement explaining why we should avoid this scholar. Simply noting that the person approaches your question from a different perspective is insufficient. Indeed, these are the very people who should be reading and evaluating your work.

As editors, we take these suggestions into account, although we do not guarantee that we will follow them. But providing us with specific reasons for the reviewers you propose and those you would like us to avoid will help us evaluate the request.

AJPS Editor’s Blog


Passing Verification with Flying Colors

For nearly five years now, AJPS has verified quantitative manuscripts. And last year, we verified our first qualitative paper as well. As most of you know, this doesn’t mean that we replicate authors’ analyses. It “simply” means that we reproduce – using the authors’ data, code, and/or interview transcripts – the results presented in the paper. Notice that simply is in quotes. That’s because most papers don’t pass verification by the Odum Institute in the first round. Sometimes the data set won’t open. Sometimes the computing clusters where the authors conducted the analyses can’t “talk” to the cluster where the analyses are being verified. Sometimes the code includes typos or other minor errors. Sometimes it’s all of the above. Regardless of the problem, though, when a paper can’t be verified, it goes back to the author for revision and resubmission. Multiple rounds of back and forth are, unfortunately, not uncommon.

Given that authors now conduct increasingly complex analyses that regularly rely on multiple data sets, we figured that this would be a good time to offer three helpful hints that can make the verification process less cumbersome and more efficient:

  1. Verify your own analyses first. Before uploading your data set and code to Dataverse, run the code and confirm that you get exactly the results you report in the paper. Sounds obvious, right? Well, it turns out that most authors don’t actually do this. But those who do have reported a relatively seamless verification process – they’ve caught glitches, typos, and bugs themselves. If you can perform this task on a different computer than the one you typically use, even better; that increases the probability that the code will run when Odum begins the verification. (A minimal sketch of this step appears after this list.)
  2. Keep the code as slim as possible. After a paper has been verified, if you make any changes that involve the verified results, then the whole thing must go through the process again. Let’s say you want to modify a figure’s title and source note when you receive the page proofs. Well, if you generated the title and note in Stata or R and it’s part of the code, then the whole paper needs to go back to Odum. To avoid unnecessary delays, try to remove from the code anything that’s not part of the actual results or formatting of the figure itself. If you can do it in Word, take it out of the code.
  3. Talk to the IT person at your institution. If you know that your analyses have caused you any trouble or required somewhat unusual accommodations – like a super-computer or major technical assistance merging data sets – then prepare a document that summarizes how you solved these problems and the specifications of the computing environment you used. That short memo will go a long way toward helping Odum determine the best way to verify the analyses without having to troubleshoot all of the issues you already confronted.
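
To make the first tip concrete, here is a minimal sketch of what a self-verification pass might look like in R. Every specific name in it is hypothetical – the file names, the model1 object, the tolerance – and nothing about it is an AJPS or Odum requirement; Stata users can accomplish the same thing by re-running a fresh do-file and comparing the log to the tables in the paper.

  # A minimal, hypothetical sketch of the self-verification step in tip 1.
  # "analysis.R", "model1_coefs.rds", and model1 are placeholders, not an AJPS
  # or Odum convention: the point is simply to re-run the code you plan to
  # archive from a clean session and confirm that it reproduces the numbers
  # reported in the paper.

  rm(list = ls())                            # start from an empty workspace, as a verifier would
  source("analysis.R")                       # the replication script you plan to post to Dataverse

  reported   <- readRDS("model1_coefs.rds")  # the coefficients exactly as reported in the paper
  reproduced <- coef(model1)                 # the same quantities produced by analysis.R

  stopifnot(isTRUE(all.equal(reported, reproduced, tolerance = 1e-8)))
  message("Table 1 coefficients reproduced.")

If the script errors out, cannot find a file, or produces different numbers on a second machine, that is exactly what the verifiers will encounter – and it is far easier to fix before submission than during multiple rounds of back and forth.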

We know that the verification process is clunky. But taking these three steps will undoubtedly make it a lot smoother and much faster. It’ll also make it that much easier when you want to verify someone else’s results.

AJPS Editor’s Blog


Appendices, Supporting Information, Supplemental Materials, You Get the Picture

In the old days – like the early 2000s – most articles appeared exclusively in print. Authors struggled with word counts because submissions had to include all relevant material, including appendices. Online supplemental appendices now allow authors to focus the body of the text on telling the main story. Details about survey questions, experimental treatments, alternative model specifications, robustness checks, and additional analyses can be relegated to the appendix. The upside is that articles themselves can be shorter, crisper, and more straightforward, but readers can still find clarifying information in the appendix. The downside is that some authors have taken a “more is better” and “better safe than sorry” approach to appendix compilations. In our six months on the job, we have received 10,000-word manuscripts that are “supported” by 50, 75, even 100-page appendices. Most appendices aren’t this long, but almost every manuscript now comes with significant supplemental materials.

We understand why authors do this. Why not preempt any concern a reviewer might raise, provide every alternative model specification possible, and share every detail about the research design and protocol? The problem is that while appendix space may seem “free” to authors, it comes with a substantial cost to reviewers, who are now often faced with a 10,000-word manuscript and an equally long or longer appendix. Anything that increases the burden on reviewers makes an overworked system even more precarious.

At AJPS, we limit supplemental appendices to 20 pages. We believe that this gives authors sufficient space to provide additional information that might not belong in the body of a manuscript but is still important to the paper’s central contribution. In enforcing this limit, we ask authors to think carefully about what they really need to include in an appendix verbatim versus what they can summarize. If you run three experiments with identical treatments, you only need to offer the script of the treatment once. If you’re providing alternative analyses, you don’t have to provide every model you ever ran or every specification you think a reviewer might request. If the additional material doesn’t merit some discussion in the main paper, then the more elaborate discussion doesn’t belong in the appendix either. As a general rule, we believe that a manuscript must be able to stand on its own. A reader must be able to understand it and find it convincing even without the appendix. The appendix, in other words, should be a place to provide information about “housekeeping” details, not a back door for thousands of words you couldn’t fit into the paper itself.

We know that limiting appendix pages can be anxiety-inducing for authors. That’s probably why so many of you request exemptions. But we’ve found that requiring authors to distinguish between what’s essential and what might be extraneous improves the quality of the manuscript and makes the task of reviewing that much easier and more reasonable – something every author appreciates when wearing the hat of a reviewer.

The New AJPS Editorial Team Starts Today! Here Are Our Four Central Goals

By AJPS Co-Editors Kathy Dolan and Jennifer Lawless

Today marks the day! The new editorial team at AJPS is up and running. We’re honored to serve the discipline this way and we’re excited about what the next four years have in store. Before anything else, we want to introduce the new team:

Co-Editors-in-Chief:

Kathleen Dolan, University of Wisconsin Milwaukee
Jennifer Lawless, University of Virginia

Associate Editors:
Elizabeth Cohen, Syracuse University
Rose McDermott, Brown University
Graeme Robertson, University of North Carolina
Jonathan Woon, University of Pittsburgh

You can take a look at the new Editorial Board here. We are thrilled that such an impressive, well-rounded, diverse group of scholars agreed to serve.

Over the course of the coming days and weeks, we’ll use this blog to call your attention to new policies and procedures. (Don’t worry – for the most part, processes won’t change!) But we want to take a few minutes now to highlight four central goals for our term.

STABILITY: AJPS has undergone a lot of transitions in a short period of time. And we’re grateful to the interim team for stepping up on short notice last year and working tirelessly to ensure that the journal would continue to thrive. But now we’ve got a permanent team in place for the next four years and are eager to provide the stability the journal needs.

TRANSPARENCY: We’re committed to managing a process that maintains transparency and academic rigor. We will accomplish this, in part, by maintaining the current system of data verification and the professional and personal conflict of interest policy. We will also require authors of work based on human subjects to confirm institutional IRB approval of their projects at the time a manuscript is submitted for consideration. And we’ll be vigilant about ensuring that authors are authorized to use – at the time of submission – all data included in their manuscripts.

DIVERSITY: As scholars of gender politics, we are well aware of the ways in which top journals do not always represent the diversity of a discipline. In putting together our team of Associate Editors and our Editorial Board, we have intentionally worked to represent race, sex, subfield, rank, institutional, and methodological diversity. It is our hope that the presence and work of these leaders sends a message to the discipline that we value all work and the work of all.  We want to be as clear as possible, though, that our plan to diversify the works and the scholars represented in the journal in no way compromises our commitment to identifying and publishing the best political science research. Indeed, we believe that attempts at diversification will actually increase the odds of identifying the best and most creative work.

OPEN COMMUNICATION: The journal’s success is contingent on the editorial team, authors, reviewers, and the user-community working together. In that vein, we value open communication. Undoubtedly, you won’t love everything we do. Maybe you’ll be upset, disappointed, or troubled by a decision we make. Perhaps you’ll disagree with a new policy or procedure. Please contact us and let us know. We can likely address any concerns better through direct communication than by discerning what you mean in an angry tweet. We get that those tweets will still happen. But we hope you’ll feel comfortable contacting us directly before your blood begins to boil.

Before we sign off, we want to let you know that we’re aware that, for some people, earlier frustration with the MPSA had bled over into AJPS. We ask for your continued support and patience as the new MPSA leadership addresses issues of concern and seeks to rebuild your trust. We ask that you not take your frustrations out on the journal by refusing to submit or review. A journal can only function if the community is invested in it.

Thanks in advance for tolerating the transition bumps and bruises that are sure to occur. We’ll try to minimize them; we promise.

Kathy and Jen

Verification, Verification

By Jan Leighley, AJPS Interim Lead Editor  

After nine months of referring to the AJPS “replication policy,” or (in writing) “replication/verification” policy, I finally had to admit it was time for a change. As lead editor, I had been invited to various panels and workshops where I noticed that the terms “replication,” “verification,” and “reproducibility” were often used interchangeably (sometimes less awkwardly than others), and to others where there were intense discussions about what each term meant or required.

Spoiler Alert: I have no intention, in the context of this post, with 10 days left in the editorial term, to even begin to clarify the distinctions between reproducibility, replicability, and verifiability—and how these terms apply to data and materials, in both qualitative and quantitative methods.

A bit of digging in the (surprisingly shallow) archives suggested that “replication” and “verification” had often been used interchangeably (if not redundantly) at AJPS. Not surprising, given the diversity of approaches and terminology used in the natural and social sciences more broadly (see “Terminologies for Reproducible Research” at arXiv.org). But in a 2017 Inside Higher Education article, “Should Journals Be Responsible for Reproducibility?”, former editor Bill Jacoby mentioned that the AJPS “Replication and Verification Policy” terminology would soon be adjusted to be consistent with that of the National Science Foundation. From the article: “Replication is using the same processes and methodology with new data to produce similar results, while reproducibility is using the same processes and methodology on the same dataset to produce identical results.”

It made sense to me that a change in names had been in the making, in part due to the important role of the AJPS as a leader in the discipline, the social sciences, and possibly the natural sciences on issues of transparency and reproducibility in scientific research. While I had no plans as interim editor to address this issue, the publication of the journal’s first paper relying on (verified) qualitative research methods required that the editorial team review the policy and its procedures. That review led to a consideration of the similarities and differences in verifying quantitative and qualitative papers for publication in the AJPS, and to my decision to finally make the name change “legal” after all this time: the “AJPS Replication & Verification Policy” that we all know and love will now officially move forward in name as the “AJPS Verification Policy.”

This name change reflects my observation that what we are doing at AJPS currently is verifying what is reported in the papers that we publish, though what we verify differs for qualitative and quantitative approaches. In neither case do we replicate the research of our authors.

Do note that the goals and procedures that we have used to verify the papers we publish will essentially remain the same, subject only to the routine types of changes made as we learn how to improve the process, or to the kinds of adjustments that come with changes of editorial teams. Since the policy was announced in March 2015, the Odum Institute has used the data and materials posted on the AJPS Dataverse to verify the analyses of 195 papers relying on quantitative analyses.

Our experience in verifying qualitative analyses, in contrast, is limited at this point to only one paper, one that the Qualitative Data Repository verified early this spring, although several others are currently under review. As in the case of quantitative papers, the basic procedures and guidelines for verification of qualitative papers have been posted online for several years. We will continue to develop appropriate verification procedures, as we build on our limited experience thus far, and respond to the complexity and heterogeneity of qualitative research methods. Authors of accepted papers (or those who are curious about verification procedures) should check out the guidelines and checklists posted at www.ajps.org to learn more.

For those who care about graphics more than terminology (!), I note that a few changes have been made to the badges awarded to verified articles. I’ve never been a badge person myself, but apparently this is the currency of the realm in open science circles, and some research suggests that awarding these badges makes researchers more likely to follow “open science” practices in their work. AJPS is proud to have our authors’ papers sport these symbols of high standards of transparency in the research process on our Dataverse page and on our published papers. Our badge updates include the addition of the words “peer review” to reflect that our verification policy relies on external reviewers (i.e., Odum, QDR) to document verifiability rather than doing it in-house, which is the most distinctive aspect of the AJPS Verification Policy. The updates also include a new “Protected Access” badge that will signify the verification of data that are available only through application to a protected repository, as identified by the Center for Open Science. As new papers are accepted for publication, you will begin to see more of the new badges, along with revised language that reflects more precisely what those badges represent.

Cheers to replication, verification—and the end of the editorial term!
Jan (Sarah, Mary, Jen, Layna and Rocio)


Citation:
Jacoby, William G., Sophia Lafferty-Hess, and Thu-Mai Christian. 2017. “Should Journals Be Responsible for Reproducibility?” Inside Higher Education [blog], July 17.

Our Experience with the AJPS Transparency and Verification Process for Qualitative Research

“As the editorial term ends, I’m both looking back and looking forward . . . so, as promised, here’s a post by Allison Carnegie and Austin Carson describing their recent experience with qualitative verification at AJPS . . . and within the next week I’ll be posting an important update to the AJPS “Replication/Verification Policy,” one that will endure past the end of the term on June 1.”
– Jan Leighley, AJPS Interim Editor



By Allison Carnegie of Columbia University and Austin Carson of the University of Chicago

The need for increased transparency for qualitative data has been recognized by political scientists for some time, sparking a lively debate about different ways to accomplish this goal (e.g., Elman, Kapiszewski and Lupia 2018; Moravcsik 2014). As a result of the Data Access and Research Transparency (DA-RT) initiative and the final report of the Qualitative Transparency Deliberations, many leading journals, including the AJPS, adopted such policies. (Follow this link for a critical view of DA-RT.) While the AJPS has had such a policy in place since 2016, ours was the first article to undergo the formal qualitative verification process. We had a very positive experience with this procedure, and we want to share how it worked with other scholars who may be considering using qualitative methods as well.

In our paper, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), we argue that states often wish to disclose intelligence about other states’ violations of international rules and laws, but are deterred by concerns about revealing the sources and methods used to collect it. However, we theorize that properly equipped international organizations can mitigate these dilemmas by analyzing and acting on sensitive information while protecting it from wide dissemination. We focus on the case of nuclear proliferation and the IAEA in particular. To evaluate our claims, we couple a formal model with a qualitative analysis of each case of nuclear proliferation, finding that strengthening the IAEA’s intelligence protection capabilities led to greater intelligence sharing and fewer suspected nuclear facilities. This analysis required a variety of qualitative materials, including archival documents, expert interviews, and other primary and secondary sources.

To facilitate the verification of the claims we made using these qualitative methods, we first gathered the raw archival material that we used, along with the relevant excerpts from our interviews, and posted them to a Dataverse location. The AJPS next sent our materials to the Qualitative Data Repository (QDR) at Syracuse University, which reviewed our Readme file, verified the frequency counts in our tables, and reviewed each of our evidence-based arguments related to our theory’s mechanisms (though it did not review the cases in our Supplemental Appendix). (More details on this process can be found in the AJPS Verification and Replication policy, along with its Qualitative Checklist.) QDR then generated a report that identified statements it deemed “supported,” “partially supported,” or “not documented/referenced.” For the third type of statement, we were asked to do one of the following: provide a different source, revise the statement, or clarify whether we felt that QDR had misunderstood our claim. We were free to address the other two types of statements as we saw fit. While some have questioned the feasibility of this process, in our case it took roughly the same amount of time that verification of quantitative data typically does, so it did not delay the publication of our article.

We found the report to be thorough, accurate, and helpful. While we had endeavored to support our claims fully in the original manuscript, we fell short of this goal on several counts, and we followed each of QDR’s excellent recommendations. Occasionally, this involved a bit more research, but typically it resulted in us clarifying statements, adding details, or otherwise improving our descriptions of, say, our coding decisions. For example, QDR noted instances in which we made a compound claim but the referenced source only supported one of the claims. In such cases, we added a citation for the other claim as well. We then drafted a memo detailing each change that we made, which QDR reviewed and responded to within a few days.

Overall, we were very pleased with this process. This was in no small part due to the AJPS editorial team, whose patience and guidance in shepherding us through this procedure were greatly appreciated. As a result, we believe that the verification both improved the quality of evidence and better aligned our claims with our evidence. Moreover, it increased our confidence that we had clearly and accurately communicated with readers. Finally, archiving our data will allow other scholars to access our sources and evaluate our claims for themselves, as well as potentially use these materials for future research. We thus came away with the view that qualitative transparency is achievable in a way that is friendly to researchers and can improve the quality of the work.

About the Authors: Allison Carnegie is Assistant Professor at Columbia University and Austin Carson is Assistant Professor at the University of Chicago. Their research, “The Disclosure Dilemma: Nuclear Intelligence and International Organizations” (https://doi.org/10.1111/ajps.12426), is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science. Carnegie can be found on Twitter at @alliecarnegie and Carson at @carsonaust.

References

Elman, Colin, Diana Kapiszewski and Arthur Lupia. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21:29–47.

Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47(1):48–53.

On Manuscript Preparation, Salami-Slicing, and Professional Standards

By Jan Leighley, AJPS Interim Lead Editor  

One of the most challenging (and potentially mind-numbing) tasks that occurs in the inner sanctum of the editorial office is the veritable “technical check.” Even mentioning this work might trigger some unpleasant memories for colleagues who previously served as graduate assistants for AJPS editors over the past several decades. It might also remind those who recently submitted manuscripts of the long checklist of required “to-do’s” that, if not met, delays the long-anticipated start of the peer review process.

But the requirements of manuscript preparation that focus on the mechanics (e.g., double-spacing, complete citations, word limits) are only part of what editors and reviewers depend on authors for. Beyond the detailed items that staff can verify, editors expect authors to follow our “Guidelines for Preparing Manuscripts,” including not submitting manuscripts that are under review elsewhere; not including material that has already been published elsewhere; and not resubmitting manuscripts that have previously been reviewed at the AJPS. Before submitting your next paper, take a fresh look at the long list of expectations for manuscript preparation and manuscript submission at www.ajps.org, as that list of requirements seems to grow ever longer with every editorial term, and the new editorial team will likely update it as they see fit.

One of the submission requirements that we added a few months ago is this: if the paper to be submitted is part of a larger research agenda (e.g., other related papers under review or book manuscripts in development), these details should be identified in the “Author Comments” text box during the manuscript submission process. We added this requirement after several reviewers, on different manuscripts, questioned the original contribution of the papers they were reviewing, as the papers seemed trivially different from other papers associated with a bigger project. Editors (thank you, John Ishiyama) sometimes refer to this as “salami slicing,” with the question being: how thin a slice of the big project can stand as its own independent, substantial contribution? Another reason for asking authors to report on bigger, related projects is that such projects, if they involve a large group of scholars in a subfield who are not authors of the submitted manuscript, might compromise the peer review process. Providing these details, as well as a comprehensive list of the co-authors of all authors of the manuscript being submitted, is incredibly helpful as editors seek to identify appropriate reviewers, including those who might have conflicts of interest with the authors or those who may base their review on who the author is rather than on the quality of the work.

As a testament to the serious and careful work our reviewers do, over the past few months, we have had to respond to problems with a number of submitted manuscripts that reviewers have suggested violate AJPS’s peer review principles. One reviewer identified a paper that had previously been declined, as he or she had already reviewed it once. Some, but not all, authors have communicated directly with us, asking whether, with substantial revisions to theory, data, and presentation, we would allow a (previously declined) paper to be reviewed as a new manuscript submission. Usually these revised manuscripts do not clear the bar as new submissions. In some senses, if you have to ask, you probably are not going to clear that bar. But we applaud these authors for taking this issue seriously, and communicating with us directly. That is the appropriate, and ethical, way to handle the question.

We’ve had similar problems with manuscripts that include text that has been previously published in another (often specialized subfield or non-political science) journal. Reasonable people, I suppose, might disagree about the “seriousness” or ethics of using paragraphs that have been published elsewhere in a paper under review at AJPS (or elsewhere). The usual response is: How many ways are there to describe a variable, or a data set, or a frequency distribution? To avoid violating the “letter of the law,” authors sometimes revert to undergraduate approaches to avoiding plagiarism, changing a word here or there or substituting different adjectives in every other sentence. The more paragraphs involved, of course, the more the issues of “text recycling” and “self-plagiarism” come into play.

This sloppiness or laziness, however, pales in comparison with the more egregious violations involving shared text between submitted and previously published papers that we have had to deal with. Sometimes we have read the same causal story, or seen an analytical approach augmented with one more variable added to a model, a different measure used to test the same series of hypotheses, or three more countries or ten more years added to the data set. At that point, we had to determine whether the manuscript violated journal policies or professional publishing standards.

When faced with these issues, we have followed the recommendations of the Committee on Publication Ethics and directly contacted authors for responses to the issues we raise. I realize that junior faculty (especially) are under incredible pressure to produce more and better research in a limited pre-tenure period, and I recognize that (a handful of?) more senior faculty may have incentives to pad the c.v. with additional publications for very different reasons.

While there might be grey areas, I admit to having little sympathy for authors “forgetting” to cite their own work; using “author anonymity” as an excuse for not citing relevant work; or cutting and pasting text from one paper to another. This is not to say that the issues are simple, or that the appropriate editorial response is obvious. But it is discouraging to have to spend editorial time on issues such as these. And as a discipline, we can do better by explicitly teaching our students the principles of openness, honesty, and integrity, and by holding colleagues accountable to them. Read the guidelines. Do the work. Write well. Identify issues before you submit. And don’t try to slide by.

The discipline—its scholarship, publishing outlets, its editorial operations, and professional standards—has certainly changed a lot, and in many good ways since the last time I edited. What has not changed is the critical importance of expecting our students and colleagues to respect shared ethical principles. Our editorial team has made some of those issues more explicit in the submission process, asking about editorial conflicts of interest, IRB approvals, and potential reviewer conflicts of interest. While this requires more work of our authors, we think it is work that is well worth the effort, and we thank our authors and reviewers for helping us maintain the highest of professional standards at the AJPS.

 

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.