What Should Journals Do?

While attending the 11th meeting of EGAP (Experiments in Governance and Politics) this weekend, I sat in on a session on transparency and replication. The discussion was fascinating. For the panel, Don Green, Macartan Humphreys and Jenny Smith presented a paper entitled "Read it, understand it, believe it, use it: Principles and proposals for a more credible research publication." The paper presents a challenge to the academy and to journals in particular; its goal is nothing less than defining "best practices" for social science journals. I see the authors as challenging business as usual and pressing for a response from professional associations and journal editors alike. I hold no position of power in a professional association, nor am I currently an editor. But I have some experience with the latter and thought I would share some of my own thoughts.

The paper begins from the premise that fraud, deceit and cheating are serious concerns, especially for the social sciences. No evidence is mustered on whether these problems are widespread; the more general point is that even a few bad apples spoil the rest of us. I discount this basis for their argument. They also argue that the proposed changes will force the social sciences to be more careful with inference and lead us to commit fewer false positives. With this I wholeheartedly agree. At heart are two concerns: aligning the incentives of researchers and journals, and aligning the incentives of researchers and their discipline. I'll take each in turn.

Aligning the incentives of researchers and journals.

Table 2 from their paper (which I've reproduced here) summarizes the 14 proposed innovations and details the costs and risks attached to each, as well as whether coordination is needed across journals. The assumption is that if journals implement these, the incentives for researchers will change and science will improve. Many of these changes are worth considering as a challenge to the current way we run journals. But I have some concerns.

[Table 2, reproduced from Green, Humphreys and Smith]

Innovation #4 asks for open access. This same point is being pushed by Congress, especially for research funded by federal grants. The point is reasonable, and as Green, Humphreys and Smith recognize, it will require a new business model for journals. While the general journals in the social sciences are supported by professional associations, this is not true for all. Many journals are run on a shoestring and depend on publishers; revenue streams are uneven, and most journals could not survive if publishers abandoned them. Journals could easily forgo printing hard copies of each issue and put everything online. However, the cost of printing an issue is minor compared with the basic costs of production: staffing (most journals, even the general ones, run with minimal staff), support for electronic submission and review systems, copyediting, and numerous other expenses. Some of these costs could be recouped through submission fees, but most social scientists are averse to them. Realistically, submission fees at general journals could run as high as $500 per manuscript under such a model.

A number of the proposed innovations carry minimal cost, at least to authors. Numbers 7 and 8 are great ideas: they ensure data quality before a manuscript goes to reviewers by asking editors and in-house staff to vet replication files prior to review. While authors bear no costs, the journals will. Where will these resources come from? Not from publishers, if open access is imposed. If I had it to do all over again, I would probably ask for support for an in-house statistician. To make the best use of that person's skills, I would require that all manuscripts given an R&R submit all of their files before the revision is reviewed. This only partially gets at what is suggested, but it is feasible for a general journal. For subfield journals, it is more difficult.
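As an aside, part of that vetting step could be automated before a statistician ever looks at the files. Below is a minimal sketch of what such a pre-review check might look like; the expected archive layout and file names are my own assumptions, not anything proposed in the paper or used by any journal.

```python
# Hypothetical pre-review check of a replication archive. The required
# files and layout are illustrative assumptions, not an actual journal policy.
import sys
from pathlib import Path

REQUIRED = ["README.txt", "data", "code"]  # assumed minimum contents

def vet_archive(archive_dir: str) -> list:
    """Return a list of problems found in the submitted replication archive."""
    root = Path(archive_dir)
    problems = [f"missing: {name}" for name in REQUIRED if not (root / name).exists()]
    # Flag empty data or code directories as well.
    for sub in ("data", "code"):
        d = root / sub
        if d.is_dir() and not any(d.iterdir()):
            problems.append(f"empty directory: {sub}")
    return problems

if __name__ == "__main__":
    issues = vet_archive(sys.argv[1])
    print("OK" if not issues else "\n".join(issues))
```

A check like this only confirms that the materials exist; the substantive vetting still requires a person to run the code against the reported results.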

Innovation 10 is a great check on the robustness of a finding. It asks that publication be withheld for a year: a manuscript that has been tentatively accepted, with its data vetted, the review process survived, and the editor satisfied, should then be put out for public comment. So far, so good. But this further delays publication and puts junior faculty at risk. Getting good science is probably far more important than the careers of faculty, yet there is a question of whether delay serves science. Published work does attract interest, especially if there are flaws in that work, and good science may be worth the wait. Still, who moderates the period of public comment? The editor? She is likely busy with a myriad of other things and cannot diligently attend to every posting as it comes through. The author certainly shouldn't be trusted with moderating. Public comment could be left unmoderated, but will it then devolve into some version of the "rumors" postings? That would do little to serve the interests of science.

Innovation 2 is intriguing. Authors would submit their manuscript fully written, but without results. The aim is to get editors and reviewers to focus on the hypotheses, constructs and research design without reference to the significance of the findings. This might serve to welcome null results and nudge findings away from the p-just-under-.05 phenomenon. As with many things, however, it will increase the waiting time for authors: passing the bar for a data-blind review may entail a second review before an R&R is decided. It will also be impossible to gauge effect sizes from a data-blind review. Admittedly, this would press an additional standard on researchers: specifying a priori what constitutes an important effect. In the best of all possible worlds, such a manuscript would be identical to a registered design, but it will be impossible to know whether the researcher "peeked" at the data before sending in the data-blind manuscript (and decided on effect sizes and the analysis plan post hoc).
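Specifying ahead of time what counts as an important effect is straightforward to operationalize. Here is a minimal sketch of a pre-specified power calculation for a simple two-group comparison using Python's statsmodels; the effect size, alpha and power targets are illustrative assumptions, not numbers from the paper.

```python
# Minimal power calculation for committing, a priori, to the smallest
# effect worth caring about. The numbers are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

smallest_effect_of_interest = 0.30   # Cohen's d the researcher commits to in advance
alpha = 0.05                         # conventional significance level
target_power = 0.80                  # desired probability of detecting that effect

n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect_of_interest,
    alpha=alpha,
    power=target_power,
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

Writing this down before seeing any data is exactly the commitment a data-blind review would ask reviewers to evaluate.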

Innovations related to open data and materials are critical. I see no reason that all journals shouldn't mandate this. I discovered that authors are very willing to comply when asked to provide such materials contingent on final acceptance. Of course, such material should be stored on a publicly accessible platform; there are many, and expecting researchers to store and maintain materials on their own websites is not a long-term solution. Many of the innovations proposed in this paper will require a huge change. The investment of time and energy by editors and reviewers will be substantial. It is important to remember that both give their time to create public goods; science operates on the basis of scientists making sacrifices in their own work to provide them. I worry that many of these innovations will crowd out the provision of those public goods, to the detriment of science.

Aligning the incentives of researchers and the discipline.

Green, Humphreys and Smith pay less attention to this side of the alignment problem, yet it is important. If nothing else, it is important to change the incentives disciplines create for scholars. First, we should encourage replication of findings in different contexts. At present there is too much emphasis on pursuing novelty at the cost of credibility, and little incentive for researchers to replicate a novel finding in a different setting. This is different from replicating with the same data: the latter ensures a finding is not due to a mistake, while replicating in a different environment or with different data builds confidence in the veracity of a finding. The complaint is that journals are not interested in replication. That is perhaps true of the general journals, but plenty of field-specific journals are open to replications with new data. More importantly, senior faculty should reward junior faculty for producing corroborating findings.

Second, the community needs to support norms for scholars who pre-register their designs and analysis plans. There are very few subfields in political science that wouldn't benefit from pre-registration. Even observational studies can detail their hypotheses, research design, variable construction and analysis plan well before touching data. Most people realize that research is rarely carried out in this manner (many of us go fishing once we have the data), but it is also the case that most of us do some version of pre-registration when we embark on our research: we have a good idea of the model focusing the research, a clear design in mind, the data we want to use, and a plan for analysis. We just don't write it down. Doing so is valuable. This is not to say that what we find unfolds exactly as expected. But failing to find what I was looking for is very instructive; I learn a lot from what is unexpected, and registration is simply a way of reminding me when something is unexpected.
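One low-tech way to "write it down" is to commit the analysis itself to a script before any data exist. A minimal, hypothetical sketch follows: the hypothesis, variable names and model formula are invented for illustration and do not come from the paper.

```python
# A hypothetical pre-analysis script, written and timestamped before any
# data are collected. The variable names, model formula, and planned test
# are illustrative assumptions, not from the paper under discussion.
import pandas as pd
import statsmodels.formula.api as smf

# H1 (pre-registered): exposure to the treatment increases turnout.
PLANNED_MODEL = "turnout ~ treatment + age + education"
PLANNED_TEST = "t-test on the treatment coefficient, alpha = 0.05"

def run_preregistered_analysis(csv_path):
    """Run exactly the analysis specified above, nothing more."""
    data = pd.read_csv(csv_path)
    model = smf.ols(PLANNED_MODEL, data=data).fit()
    return model.summary()

if __name__ == "__main__":
    print("Pre-registered model:", PLANNED_MODEL)
    print("Pre-registered test:", PLANNED_TEST)
    # The data file does not exist yet; the plan is committed first.
```

Anything run later that is not in the script is, by construction, exploratory, which is exactly the reminder registration is meant to provide.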

Third, we need to change the way we teach our graduate students. In my first-year graduate seminar and in my experimental design seminar I force students to write a pre-analysis paper. I want them to focus on the model, the hypotheses, the constructs and the design. I do not want them hunting for someone else's data set from which they can cobble together bad proxies for the things they want to measure; I would rather they be creative about their research design and their outcome variables. Of course, this means graduate students will not get the jump on their peers by churning out research right from the beginning. But I would rather they learn how to do research properly in the first place than learn by trial and error.

Finally, we need to move the focus away from the "big 3." We put enormous pressure on junior faculty to get their work into the general journals, but the reality is that space there is limited. Researchers who spend their time crafting all of their work to "hit" the big journals will be spinning their wheels. I would much rather see us reward junior colleagues for producing a full portfolio of work focused on a carefully defined research program. If that means producing a number of papers that carefully replicate and corroborate an interesting "fact," then fine: it will give us greater confidence in our knowledge. Isn't that what we're supposed to be doing?


One thought on "What Should Journals Do?"

  1. mdwardlab

    In house replication assumes that analysis is available at the push of a button. Many procedures are no longer deterministic, and even with fast computers may take several days to run. The result will not necessarily be exact, but if done 100s of times will yield correct averages. I favor the archive approach rather than active monitoring.

