Transparency, Openness and Replication

It is ironic that I am writing this post today. On May 19 Don Green asked that an article he recently co-authored in Science be retracted. The article purported to show that minimal contact with an out-group member (in this case, someone noting that he was gay) had a long-term effect on attitudes. As it turns out, the data appear to be a complete fabrication (see the analysis by Broockman, Kalla and Aronow). The irony stems from the fact that I have been sending letters to the editors of political science journals, asking them to commit to the Transparency and Openness Promotion (TOP) guidelines. These guidelines make recommendations for the permanent archiving of data and code, for documenting research designs and analytic methods, and for the pre-registration of studies. Don Green is a signatory to the letter, and he was instrumental in pushing forward many of the standards.

The furor over the LaCour and Green retraction (and the recent rulings on the Montana field experiments) has forced me to think a bit more sharply about ethics. There are four lessons to be learned here.

First, science works. Will Moore makes this point quite nicely. If someone has a new and interesting finding, it should never be taken as the last word. Science requires skepticism. While I teach my students that they should be their own worst critics, this is not enough. The process of peer review (much as it might be disparaged) provides some opportunity for skepticism. The most important source of skepticism, however, should come from the research community. A finding, especially a novel one, needs to be replicated (more on that below). Andrew Gelman makes this point on the Monkey Cage. We should be cautious when we see a finding that stands out. It should attract research attention and be “stress-tested” by others. The positive outcome of many different researchers focusing on a problem is that it allows us to calibrate the value of a finding, and it should deter misconduct.

Second, the complete fabrication of findings is a rare event. There have been few instances of outright fraud in political science. This is not so much because of close monitoring by the community or the threat of sanctions; it seems that most of us do a good job of transmitting ethics to our students. We stress the importance of scientific integrity. I suspect that this case will serve as a cautionary tale. Michael LaCour had a promising career ahead of him. I’ve seen him present several papers, and I thought all of them were innovative and tackled hugely important questions. Now, however, I do not trust anything I have seen or heard. My guess is that his career is destroyed. While we stress that our students should adopt ethical norms of scientific integrity, it is equally important to enforce those norms when they are violated. I assume that will happen in this case. This case also raises questions about the roles of LaCour’s co-author and advisors in monitoring the work. All of us who have co-authors trust what they have done. At the same time, co-authors serve as an important check on our work. I know that my co-authors constantly question what I have done and ask for additional tests to ensure that a finding is robust. I do the same when I see something produced by a co-author in which I had no direct involvement. This is a useful check on findings. Of course, it will not prevent outright fraud. In a different vein, students are apprentices. Our role as advisors is to closely supervise their work. Whether this role is sufficient to prevent outright fraud is an open question.

Third, there is enormous value in replication. These days there is little incentive to replicate findings, but it is important. The team of Broockman and Kalla was attempting an extension of the LaCour and Green piece. And why not? The field experiment the latter designed seemed to be a good starting point for additional research. This quasi-replication quickly exposed some of the problems with the LaCour and Green study, and the Broockman, Kalla and Aronow analysis has proven to be persuasive. I worry, however, that this will spur replications that resemble “gotcha” journalism. I certainly encourage replications and extensions that demonstrate the implausibility of a finding. However, I also hope that corroborating replications and extensions get their due. We need to encourage replications that allow us to assess our stock of knowledge. The Journal of Experimental Political Science openly welcomes replications, and the same is true of the new Journal of the Economic Science Association. There is a movement afoot among experimental economists to take major papers published in the past year and subject their key findings to replication. Psychology has mounted “The Reproducibility Project,” in which 270 authors are conducting replications of 100 different studies. While this may be easier for those of us who use lab experiments, we should address this issue more generally.

Fourth, the incentives for scholars are a bit perverse. Getting a paper published in Science or Nature is a big hit. Getting media attention for a novel finding is valuable for transmitting our findings (but see the retraction by This American Life). We put an enormous amount of pressure on junior faculty to produce in the Big 3. By doing so, we ignore how important it is for junior faculty (and for senior faculty as well) to build a research program that tackles and answers questions through sustained work. Incentivizing big “hits” reduces sustained work in a subfield. Of course, major Journals (I’ll capitalize this so that you know I’m referring to the general journals in our disciplines) are often accused of sensationalizing science. The Journals are thought to prioritize novel findings. This is true. While editor at the AJPS, I wanted articles that pushed the frontiers of their subfields and challenged the conventional wisdom. My view is that the Journals are part of a conversation about science, not a repository of accepted “truth.” Articles published in top Journals ought to challenge the community and spur further research.

In the end, I am not depressed by this episode. It is spurring me to push journals to adopt standards that ensure transparency, openness and reproducibility, and it revives my confidence in the discipline and the care that many scholars take with their work.