Setting the Bar

I was hopeful that the kerfuffle around DA-RT (Data Access and Research Transparency) would be a boon for discussions about the philosophy of social science. After all, at the core of DA-RT is defining a minimal threshold for what constitutes evidence for an empirical claim. I have been disappointed.

Many years ago (1978 to be specific) I took a required graduate course at Indiana University on Scope and Methods taught by John Gillespie. That course covered elemental philosophy of science and forays into standards for the social sciences. That course continues to haunt me, forcing me to think about what standards I value in my own work and the work of others. I have spent a large bit of my career at Rice raising the same issues with graduate students, hopefully forcing them to consider what standards they use when evaluating their own work. Sadly, I wish that I had a clear set of principles, but I don’t. Whenever I meet someone who is a philosopher of science, I immediately ask what they are reading and who they think is making inroads. I haven’t been bowled over.

I thought Jeff Isaac’s editorial (and subsequent blog posts) might provide me a principled discussion of what constitutes a knowledge claim. After all, here is someone exercising editorial discretion to weigh in on evidentiary standards. I remain disappointed. Here is the crux of his claims:

  • Interpretivism needs a “safe space.” Much of Jeff’s discussion on this point retells the story of the Perestroika movement in political science and worries that its energy has been dissipated. In part he fears that there is a “resurgent neo-positivism” that is taking over publication outlets in the discipline. While this is a useful commentary on the history of science, it hardly addresses fundamentals.
  • DA-RT solves no central problem of concern to the social sciences. As he notes, no one objects to transparency. Consequently, do we need rules to mandate a stalking horse for methodological rigor? We need standards in order to judge our own work and the work of others. Even if there is no imminent danger of wholesale fraud in the discipline, we still owe it to ourselves to articulate what we value.
  • Jeff contends, and this comes out most forcefully in a blog post, that “a discipline that was serious about public relevance, credibility, and accessibility would be less and not more obsessed with methodological purity.” Herein lies a semi-normative claim that methodological infatuation leads to irrelevance, lack of credibility and inaccessibility. However, it seems that many discussions about evidentiary standards are precisely about credible claims and making the basis of those claims more accessible.
  • The “standard method of hypothesis-testing” should not become the normative standard for the discipline. This is the most promising direction, and Jeff’s set of standards (which, as he notes, seems very impressionistic) involves research “that center on ideas that are interesting and important.” Three points are noteworthy here.

First, the conception of the “standard method” is one that has been challenged since the 1950s and is hardly in the ascendant. Of course, Jeff is really concerned with evolving quantitative norms and worried that a one-size-fits-all view of those norms will disadvantage qualitative work. Elsewhere I’ve commented that this is a false distinction. Neither interpretative nor game theoretic work gets a free pass when making causal or outcome claims.

The second point is the implication that quantitative methods adhering to evidentiary standards somehow cannot be interesting and important. This seems odd given that Jeff’s ex-colleague, Elinor Ostrom, spent decades tackling key issues concerning climate change and the self-governance of common resources in a very public manner, all while relying on methodological rigor.

The third point is that there are no boundaries on which ideas count as interesting and important. This is akin to “knowing it when I see it.” Of course, Jeff argues that an Editor calls on the expertise of reviewers and pushes them to think “outside their comfort zones.” But, like any Editor, Jeff knows how easy it is to stack the deck when selecting reviewers, and he also knows how easily editorial judgment can overlook aspects of reviews that do not fit one’s own views.

Overall, the arguments have more to do with staking out a political position than with tackling fundamental normative claims. In today’s public environment, where citizens and public leaders alike dismiss evidence-based claims, we need to think seriously about the standards we set for our colleagues and ourselves. I keep hoping for someone better than I to come along and articulate a set of normative principles by which I can evaluate good work.

Perhaps Feyerabend’s nihilism is good enough for the social sciences. Yet we make claims about our findings and project what they mean for setting public policy. Policy makers, however, remain skeptical of what we claim or sometimes belittle our findings because they don’t line up with their own opinions. Letting a thousand flowers bloom is not going to make the discipline relevant to policy makers.