For a while now, political scientists have been trying to convince journalists that campaigns don't matter much, or that they matter only in subtle ways, but journalists don't seem to be persuaded, so political scientists keep trying. Judging by the longevity of the dispute, journalists find the notion that campaigns don't matter much hard to swallow. I find it hard to swallow as well.
I think the confusion stems at least partly from differences in political scientists' and journalists' training. Political scientists are trained to think about "effects" in a very precise and non-obvious manner. Unfortunately, this thinking is often implicit, and political scientists fail to communicate their reasoning to journalists. This leads to a good deal of discussion that seems to accomplish nothing. For a few examples, see here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here.
I think that journalists' intuition that campaigns have large effects is correct. I also think that political scientists' research suggesting that campaigns have small effects is correct. How can campaigns have both small and large effects? I think that journalists and political scientists are using different counterfactuals to define "effect."
I think that confusion about the counterfactual leads to much of the disagreement and confusion about the size of campaign effects. (Click here to continue reading.)
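To preview the counterfactual point with a toy example: the vote shares below are hypothetical numbers chosen only for illustration, not estimates from any study. Against the counterfactual "my candidate goes dark while the opponent keeps campaigning," the campaign's effect looks large; against the counterfactual "neither side campaigns," the two efforts offset and the effect looks small.

```python
# Hypothetical vote shares for one candidate (illustration only).
both_campaign = 0.50       # observed world: both candidates campaign hard
neither_campaigns = 0.50   # symmetric world: the two efforts offset
only_you_go_dark = 0.42    # your side stops; the opponent keeps campaigning

# Journalist's implicit counterfactual: what if this campaign had not happened?
effect_one_sided = both_campaign - only_you_go_dark

# Political scientist's implicit counterfactual: both-campaign vs. no-campaign worlds.
effect_offsetting = both_campaign - neither_campaigns

print(f"Effect vs. 'my campaign goes dark':  {effect_one_sided:+.2f}")
print(f"Effect vs. 'neither side campaigns': {effect_offsetting:+.2f}")
```

Same observed world, two different baselines, two very different "effects" of the campaign.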
Over the last few days, I've written a couple of posts (here, here) about creating coefficient plots. I like coefficient plots much better than regression tables; I don't really see a need for tables, and in fact I think they can be misleading. In this post, I explain why (for the most part) I do not include regression tables in my papers and what I use in their place. (Click here to continue reading.)
It is quite common in political science for researchers to run statistical models, find that a coefficient for a variable is not statistically significant, and then claim that the variable "has no effect." This is equivalent to proposing a research hypothesis, failing to reject the null, and then claiming that the null hypothesis is true (or discussing results as though the null hypothesis is true). This is a terrible idea. Even if you believe the null, you shouldn't use p > 0.05 as evidence for your claim. In this post, I illustrate why. (Click here to continue reading.)
During the recent Republican debate, Rick Perry was asked if he struggled to sleep at night after executing 234 people as governor of Texas. While the question was being asked, the Republican crowd broke into spontaneous applause. Gov. Perry replied that he had never struggled with it at all, and, commenting on the applause, he noted that the vast majority of Americans support capital punishment for the most heinous of crimes. (See the video here.) In a separate, more challenging interview, Gov. Perry refused to back down from his position on the death penalty. (See the video here.) (Click here to continue reading.)
A large literature in political science argues that proportional rules increase voter turnout. This result is primarily empirical, with some post-hoc theoretical rationale. One explanation (of several) suggests that proportional rules eliminate non-competitive districts, giving parties an incentive to mobilize everywhere. A recent paper by Karp, Banducci, and Bowler finds empirically that parties contact more voters in disproportional systems. I elaborate on why this might be the case in a paper, but offer a brief theoretical explanation of their specific finding below. Using game theory, I argue that mobilization is an obvious strategy in winner-take-all systems, but the proper strategy is less clear under proportional rules. (Click here to continue reading.)
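The intuition behind that argument can be sketched with a toy payoff comparison. This is my own illustration of the general logic, not the model from the paper: under winner-take-all rules, a small mobilization push that moves a party from just below to just above 50% flips the entire seat, while under pure proportionality the same push yields only a tiny marginal gain in seat share.

```python
def seat_share_wta(vote_share):
    """Winner-take-all: the majority winner takes the whole seat."""
    return 1.0 if vote_share > 0.5 else 0.0

def seat_share_pr(vote_share):
    """Pure proportionality: seat share equals vote share."""
    return vote_share

# A mobilization push moves the party from 49% to 51% of the vote.
before, after = 0.49, 0.51

gain_wta = seat_share_wta(after) - seat_share_wta(before)  # flips the whole seat
gain_pr = seat_share_pr(after) - seat_share_pr(before)     # small marginal gain

print(f"Payoff from mobilizing, winner-take-all: {gain_wta:.2f}")
print(f"Payoff from mobilizing, proportional:    {gain_pr:.2f}")
```

In the winner-take-all case the payoff from mobilizing in a competitive district is large and obvious; in the proportional case the marginal payoff is small and the best strategy depends on what the other parties do, which is the strategic ambiguity the post takes up.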