STUDY: ‘no clear evidence’ of gender bias in poli sci journals
A new study has found “no clear evidence” that political science journals discriminate against female scholars despite concerns over the so-called “gender publication gap.”
The work, titled “Gender in the Journals, Continued: Evidence from Five Political Science Journals,” was inspired by a 2017 study, “Gender and Editorial Outcomes,” that found a strong “gender gap” in publication rates of peer-reviewed academic articles.
“The source of female under-representation at top journals is in the pool of submissions.”
Simply counting authors by gender, however, is a flawed way of assessing discrimination against women, suggest the authors of the new study, Purdue University Professor Nadia E. Brown and University of Minnesota Professor David Samuels.
On Monday, Samuels and Brown announced the findings of self-audits undertaken by five leading political science journals, including Comparative Political Studies, Political Behavior, and the American Political Science Review.
“[T]he results across journals were remarkably similar,” the researchers wrote. “Even though the journals differ in terms of substantive focus, management/ownership, as well [as] editorial structure and process, none found evidence of systematic gender bias in editorial decisions.”
Instead of simply analyzing the gender of published authors, the editors of these journals also assessed the gender of those who submitted articles to the publications.
Two striking patterns emerged.
First, across all publications, women were significantly less likely to submit research for publication. At Political Behavior, for example, only 10.8 percent of manuscripts submitted between 2015 and 2017 came from individual women, whereas 34.8 percent of manuscripts submitted during the same period came from individual men.
Second, the editors determined that women are also much less likely to work on teams. Exactly 20 percent of articles submitted to Comparative Political Studies between 2013 and 2016 were co-authored by all-male teams, while female-only teams accounted for just 4.1 percent of submissions.
“Research collaborations predict success,” lead author David Samuels told Campus Reform. “However, women are far less likely to be part of research teams than men. So, they submit far fewer collaborative papers, and thus publish fewer papers.”
As far as Samuels can tell, the “source of female under-representation at top journals is in the pool of submissions,” and not—as many feminist academics have theorized—due to gender discrimination during the review process itself.
According to the scholar, the perception of bias may be a bigger factor discouraging women from submitting their work to academic journals for review.
“Perceptions can become reality—that is, if (some) women believe they don't get a fair shake at some journals (for whatever reason, and there's no point in arguing with anyone about whether the perceptions are justified or not), they won't submit to those journals, perpetuating the perception that the journal (and by extension the discipline) is not fair to women, and helping perpetuate the dominance of men in top positions in the discipline,” he explained.
Samuels, however, was quick to point out that the study doesn’t rule out the possibility of gender bias elsewhere in the field, since it examines only publication rates of research by gender.
Teaching evaluations, in particular, can contribute to the “pipeline” problem of women in the field, he said.
Samuels said that gender disparities “start with teaching evaluations, which are so obviously biased against women [that] many of us dismiss their utility entirely when considering a scholar for tenure, for example. And evaluations are just scratching the surface.”
Because the new study relied on self-audits, Campus Reform also asked Samuels about the possibility that the findings were selectively presented to obscure real discrimination against female researchers.
Samuels, however, dismissed this suggestion, saying there is a strong incentive for editors to make sure they are reporting accurate data.
“None of us have permanent positions as editors—for the American Political Science Review, for example, the term is only three years,” he explained. “Editors are likely to continue to do these audits, so everyone has an incentive to ‘get the numbers right’ so as not to be questioned by a later editor’s work.”
Follow the author of this article on Twitter: @Toni_Airaksinen