DA-RT, TOP and Rolling Back Transparency

I am more than a little dismayed by efforts to roll back transparency and openness in political science. The “movement” began in mid-August with emails from the Executive Council of the Interpretive Methodologies and Methods Conference Group to the editors of political science journals that had signed on to DA-RT (Data Access and Research Transparency). It was followed this month by a petition to delay DA-RT implementation. Of course, whom the petition is aimed at and what it demands are open questions.

Personally, I am inclined to sign the DA-RT delay petition because DA-RT does not go far enough. In June 2015, I joined with people from across the social sciences in proposing a set of guidelines for Transparency and Openness Promotion (TOP). The TOP guidelines detail best practices and are aimed at journals in the social sciences. These guidelines focus on quantitative analysis, computational analysis and formal theory. Because qualitative research involves more complicated issues, TOP has left it for the future and for input from the community.

I find it puzzling that there is resistance to making it clear how one reaches a conclusion. Suppose I naively divide research into two types: interpretative and empirical. Both make claims and should be taken seriously by scholars. Both should be held to high standards. Interpretative research often derives conclusions from impressions gleaned from listening, immersing, reading and carrying out thought experiments. Those conclusions are valuable for providing insight into complex processes. A published (peer-reviewed) article provides a complete chain of reasoning so that a reader can reconstruct the author’s logic – or at least it should. In this sense I see little difference between a carefully crafted hermeneutic article and a game-theoretic article. Both offer insight, and the evidence for the conclusion is embedded in the article. Given that the chain of reasoning in the article is the “evidence” for the conclusion, it would be absurd to mandate stockpiling impressionistic data in some data warehouse.

What I am calling empirical work has a different set of problems. I acknowledge that such work focuses heavily on measurement and instrumentation that is socially constructed. Research communities build around their favorite tools and methods and, as such, instantiate norms about how information is collected and processed. Those communities appeal to TOP (or DA-RT) for standards by which to judge empirical claims. I see little harm in making certain that, when someone offers an empirical claim, I am given the basis on which that claim rests. Being transparent about the process by which data are collected, manipulated, processed and interpreted is critical for me to draw any conclusion about the merit of the finding. Note that both interpretative and empirical research (as I have naively labeled them) interpret their data. The difference is that the latter can more easily hide behind a wall of research decisions and statistical manipulations that are glossed over in an article. This material deserves to be in the public domain and subject to scrutiny. An empirical article rarely lays out the same chain of logic that I can read in an interpretative article.

Two points stand out in the resistance to TOP or DA-RT. First is the issue of privileging empirical work. I agree that there is some danger here. If journals adopt TOP (or DA-RT) and insist that empirical work live up to those standards, this may deter some authors from submitting their work to those journals. This does not mean that authors working in the interpretative tradition should be fearful. Neither DA-RT nor TOP mandates data archiving (see the useful discussion in Political Science Replication). As I note above, it would be ridiculous to insist on this. However, “beliefs” about the motives of editors are a common barrier to publication. When I edited AJPS I was occasionally asked why more interpretative work was not published. The simple answer was that not much was ever sent my way. I treated such work just like any other. If a manuscript looked important, I tried to find the very best reviewers to give me advice. Alas, rejection rates for general journals are very high, no matter the flavor of research. The barriers to entry are largely in the author’s head.

Second, there is the sticky problem of replication. Many critics of DA-RT complain that replication is difficult, if not impossible. The claim is that this is especially true for interpretative work, where the information collected is unique. I have sympathy for that position. While it might be nice to see field notes and the like, I am less concerned with checking to see whether a researcher has “made it all up” than with learning how the researcher did the work. Again, the interpretative tradition is usually pretty good at detailing how conclusions were reached.

I am also less interested in seeing a “manipulated” data set so that I can get the same results as the author (though as the recent AJPS policy shows, this can be useful in ensuring that the record is clear). I would much rather see the steps that the author took to reach a conclusion. For empirical work this generally means a clearly defined protocol, the instrumentation strategy and the code used for the analysis.

I am interested in a researcher providing as much information as possible about how claims were reached. This would allow me, in principle, to see whether I could reach similar conclusions. The stronger the claim, the more I want to know just how robust it might be. To do so, I need to see how the work was done. All good science is about elucidating the process by which one reaches a conclusion.

In the end I hope the discipline continues to stand up for science. I certainly hope that the move to delay DA-RT is due to the community deciding it has clearer standards in mind. If not, then I’m afraid the movement is about fighting for a piece of the pie.