The Prevalence of Hypotheses of "No Effect"
I've been actively arguing for a while now that political scientists often want to test hypotheses of "no effect" or "no meaningful effect" but don't do so in a compelling manner. (See blog posts here, here, here, and here. Also see this project page, this poster, these slides, and this working paper.)
Today, I'd like to briefly argue that these hypotheses are indeed common.
When first learning statistics as both an undergraduate and graduate student (and even in some of my advanced classes in the statistics department!), I was taught that "null results" were a bad thing. As political scientists, we were supposed to look for effects. Much to my surprise, at the very first academic presentation I ever attended, the speaker presented a hypothesis of no effect. This was not some hypothesis off to the side; it was central to the empirical argument.
I've been noticing researchers hypothesizing "no effect" ever since, even in good journals, but I wanted to do a more rigorous study. I gathered all the empirical research articles published in 2011 and 2012 in both the American Political Science Review and the American Journal of Political Science that presented explicit hypotheses. Of the articles in these top journals, 30% (18/61) presented a hypothesis of no meaningful effect!
Unfortunately, the authors do not make strong empirical arguments for these hypotheses. Each uses a lack of statistical significance (i.e., insignificance) as evidence for the claim. But the absence of evidence for an effect does not imply no effect, and it does not imply that the effect is too small to be meaningful: a statistically insignificant estimate often comes with a confidence interval that includes zero *and* substantively large effects, so the data are consistent with both "no effect" and "a meaningful effect."
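To make the distinction concrete, here's a minimal sketch (in Python, with made-up numbers for the coefficient, standard error, and the equivalence bound `delta`) of how one standard approach from the equivalence-testing literature, the two one-sided tests (TOST) procedure, would evaluate a hypothesis of "no meaningful effect" alongside the usual test against zero. This isn't the method used in any of the papers I counted; it's just one way to test the claim directly.

```python
# A minimal, hypothetical sketch: contrast the usual significance test
# with a two one-sided tests (TOST) equivalence test. The estimate,
# standard error, degrees of freedom, and the equivalence bound `delta`
# (the largest effect we'd call "not meaningful") are all made up.

from scipy import stats

b, se, df = 0.10, 0.20, 500   # hypothetical coefficient, std. error, d.o.f.
delta = 0.05                  # hypothetical bound on a "meaningful" effect

# Usual test of H0: b = 0. Insignificance here says nothing about size.
t0 = b / se
p_zero = 2 * stats.t.sf(abs(t0), df)

# TOST: reject both H0: b <= -delta and H0: b >= +delta to conclude
# the effect lies inside (-delta, +delta), i.e., is not meaningful.
p_lower = stats.t.sf((b + delta) / se, df)   # one-sided test of H1: b > -delta
p_upper = stats.t.cdf((b - delta) / se, df)  # one-sided test of H1: b < +delta
p_tost = max(p_lower, p_upper)

print(f"p, test of no effect:          {p_zero:.3f}")  # approx. 0.62
print(f"p, test of equivalence (TOST): {p_tost:.3f}")  # approx. 0.60
```

In this hypothetical, both p-values are large: the usual test finds no evidence of an effect, and the equivalence test finds no evidence of "no meaningful effect" either. The data are simply uninformative, which is exactly the distinction that statistical insignificance alone cannot draw.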
These hypotheses are both important and common. Political scientists should start thinking carefully about how to evaluate these claims.