Research and Accessibility

Much of our best basic research seems esoteric and is rarely approachable to those outside our own specialization. But this need not be the case. Some disciplines are excellent at promoting their work and getting the word out. Consider the search for the Higgs boson and the hoopla when it was found. Most of us don’t know what the Higgs boson is or why it matters (much less be able to see it). Yet we all know it was important and a remarkable scientific achievement. The physics community did a great job making their work accessible.

How can political scientists make their work more accessible? The question is how to balance the rigor of our science with making clear to non-specialists what we found and why it is important. Rather than complaining that we never make the effort, I thought I would try my hand at short, cartoonish interpretations of articles that I have recently read and liked. My first effort focuses on a forthcoming paper in the American Journal of Political Science by Kris Kanthak and Jon Woon entitled “Women Don’t Run? Election Aversion and Candidate Entry.” I liked this paper the first time I heard it presented, and it has only gotten better. You can see my take on it on YouTube under my channel Politricks.

I am going to try to do more of these over time. Who knows whether they will get much attention. Still, I see it as breaking out of the usual mold in which we write papers, cite the work, and try to teach it to our students. Perhaps it will inspire others.

Others who have done similar work in the social sciences have inspired me. The first I remember seeing was featured in The Monkey Cage. The cartoon was remarkable for being short and exactly on the mark: the article it translated was a dense piece of formal theory, and the cartoon got it exactly right. More recently I was impressed by a very short animation that points to a problem in decision theory regarding queuing. It is immediately understandable because we have all been there.

When I teach an Introduction to American Government class, I often use this to explain the problems inherent in “first past the post” electoral systems. While a little long, it is clear, and the students get it quickly.

There are plenty of other examples and I’ll post things I like as I find them.

Publishing, the Gender Gap and the American Journal of Political Science

Over a year ago I wrote a short piece concerning whether women were getting a fair shake at the AJPS. I thought so, and I reported some statistics that reflected that opinion. However, I thought I could do better than simply report the percentage of published articles with women as authors or co-authors. What was glaringly absent was a benchmark: I did not even know what proportion of manuscripts submitted to AJPS came from women. I decided to rectify this (and my colleague Ashley Leeds urged me to stick with it).

Compiling a list of all manuscripts submitted while I was editor was easy: the information can be retrieved from the electronic manuscript manager that I used. However, getting information about the characteristics of the corresponding author is nearly impossible, and for co-authors it is impossible. No information is collected about the gender, race, age, or other characteristics of authors, and the same is true of reviewers. I downloaded all of the manuscript data and all of the reviewers tied to each manuscript, and then had two research assistants code each author, co-author, and reviewer for gender. Altogether, 2,835 unique manuscripts arrived at the AJPS offices from January 1, 2010 through December 31, 2013, with a total of 5,064 authors. Of course, these are not unique authors, since more than a few submitted multiple manuscripts over the four years I was with the journal. On the reviewer side, a total of 10,984 reviewers were initially solicited; of that set, 6,158 completed their review.

Authors.

In the Monkey Cage post I noted that 19.8% of AJPS articles had a woman as lead author and 34.8% had at least one woman as an author. The latter percentage accounts for articles that are co-authored. The problem with these counts is that there is no useful denominator: they are percentages of all published articles and do not take into account the percentage of articles submitted by women.

I now have the distributions for manuscripts submitted to AJPS. It turns out that women are publishing at about the same rate as they submit. While I do not have submission data identifying the lead author, if I take solo-authored articles, women submitted 21.4% of them – slightly more than the 19.8% of AJPS articles with a female lead author. Of course, these are not exactly comparable. On the other hand, 31.96% of submitted articles had at least one female author, while 34.8% of accepted articles did. In a sense my earlier count was not far off.
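For readers curious about the mechanics, the two benchmark shares above are simple tabulations once each manuscript's authors have been coded for gender. Here is a minimal sketch with made-up records (the data structure and the toy numbers are my own assumptions, not the actual AJPS data):

```python
# Hypothetical coded submissions: each manuscript records its authors'
# genders ("F"/"M") and whether it is solo-authored.
manuscripts = [
    {"authors": ["F"], "solo": True},
    {"authors": ["M", "F"], "solo": False},
    {"authors": ["M"], "solo": True},
    {"authors": ["M", "M"], "solo": False},
]

# Share of solo-authored submissions written by women.
solo = [m for m in manuscripts if m["solo"]]
pct_solo_female = 100 * sum(m["authors"] == ["F"] for m in solo) / len(solo)

# Share of all submissions with at least one female author.
pct_any_female = 100 * sum("F" in m["authors"] for m in manuscripts) / len(manuscripts)

print(f"{pct_solo_female:.1f}% of solo-authored submissions by women")
print(f"{pct_any_female:.1f}% of submissions with at least one female author")
```

With real submission data, the same two numbers computed over accepted manuscripts give the published-side percentages for comparison.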

The table below looks at basic decisions: desk rejections, declines on first review, and manuscripts invited back or accepted. As a rule, on my first reading I tended to blind myself to the author(s). Apparently I desk rejected manuscripts with only male authors more frequently than manuscripts with a female author (by about 5 percentage points). This evens out following review: manuscripts with only male authors were declined just over 50 percent of the time, while manuscripts with a female author were declined almost 55 percent of the time. There is no appreciable difference in R&Rs or first accepts between the two groups.

Editorial Decisions

Decision                  Male authors only    At least one female author
Desk Reject               38.31% (739)         33.22% (301)
Reviewed and Declined     50.29% (970)         54.75% (496)
R&R and/or Accept         11.41% (220)         11.76% (109)

It appears that, once manuscripts enter the review process, the probability of receiving an R&R or acceptance is not correlated with the sex of the author. The only thing that is certain is that if you do not send in a manuscript, you will not get published.

Reviewers.

I worked hard to eliminate biases at the initial stage of review (the decision whether to desk reject a manuscript). It could be that biases emerged at the second critical stage, when reviewers are assigned to manuscripts.

A large number of reviewers were initially contacted. Of the 10,984 reviewer solicitations, 24.14 percent went to women (2,652). This is slightly more than the 21.25% of authors who were female (1,076). In part this may be because I asked my Editorial Assistants to expand our reviewer database beyond the usual suspects.

These are aggregate numbers and count the same reviewer multiple times. I tried very hard not to call on the same reviewer more than twice a year, but it is possible that women were used disproportionately. Of the 5,133 unique reviewers used, 25.85% were female – above the proportion of women submitting manuscripts to the AJPS.

It may be that women are more conscientious, and so I called on them more often. However, this does not seem to be the case: male reviewers completed their reviews 55.3% of the time, compared with 54.5% for female reviewers.
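The completion-rate comparison is again a straightforward tabulation over the solicitation log. A sketch with invented records (the log format here is assumed, not the manuscript manager's actual export):

```python
from collections import defaultdict

# Hypothetical solicitation log: (reviewer gender, review completed?).
solicitations = [
    ("M", True), ("M", False), ("F", True), ("F", True),
    ("M", True), ("F", False), ("M", False), ("F", True),
]

# gender -> [completed, solicited]
counts = defaultdict(lambda: [0, 0])
for gender, completed in solicitations:
    counts[gender][1] += 1
    counts[gender][0] += int(completed)

for gender, (done, total) in sorted(counts.items()):
    print(f"{gender}: {100 * done / total:.1f}% of {total} reviews completed")
```

Deduplicating reviewers before counting (e.g., keying on a reviewer ID) would give the unique-reviewer shares discussed above instead of the aggregate ones.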

It appears that, at the margin, I called on women to review at a rate disproportionate to their numbers among submitting authors. The percentage differences are not large. I do not have data to indicate whether this was part of my deliberate outreach to junior faculty and some advanced graduate students.

The Bottom Line

Journals should be transparent in what they do, and providing these kinds of data is a start. They allow the community to check for any biases that may creep into decisions. Editorial boards and interested communities have every incentive to monitor the decisions made by Editors.

These data are also useful to Editors for double-checking what they are doing during the course of their tenure. I wish that I had done this in the middle of my tenure rather than after I stopped being Editor.

These data are very hard to collect; I used a lot of student coding time to pull them together. Associations and Editors should press their electronic manuscript managers to add a handful of required fields for authors and co-authors. The burden on those submitting a manuscript should be minimal. Getting reviewers to enter additional information, however, may be difficult. I review for a lot of different journals, and I am positive that I have failed to enter much personal information – I’m usually overwhelmed with other things that need to be done, and finishing and submitting a review is about all I’m interested in doing. I doubt that I am alone in this feeling. Despite my reluctance, I see that such information is useful, and I will change my bad habits.