Jason Barabas, Jenn Jerit, Will Pollock, and I have a paper coming out in the APSR that examines the types of facts one might ask about when measuring political knowledge. We categorize facts by whether they came into being in the recent or distant past (i.e., "surveillance" or "static" facts) and by whether the question deals with policy-relevant or general facts (e.g., the people and players of government). This yields what we term the "temporal-topical space." We focus conceptually on the four "quadrants," or types of facts that people might learn about politics, and discuss the implications of focusing on particular quadrants for particular research questions.
Aside from individual, environmental, and procedural determinants, two question-level characteristics influence how and whether a particular fact is learned. The first factor has to do with how recently the fact came into being (the “temporal dimension”). The second characteristic pertains to the type of fact—in particular, whether the question has to do with public policy concerns or the institutions and people/players of government (the “topical dimension”). Figure 1 illustrates that the two dimensions can be crossed to yield four theoretically relevant types of knowledge questions.
On the relationship between education and political knowledge, for example, we write:
...the information transmitted in primary and secondary schools does not relate uniformly to subjects in the temporal-topical space. Indeed, Delli Carpini and Keeter (1996, 190) observe that schools “teach particular aspects of politics, most notably the institutions and processes of government.” It is noteworthy that although dozens of studies have documented the association between education and political knowledge, the empirical analyses often focus on questions at the top of Figure 1’s vertical dimension—namely, items measuring general political knowledge (e.g., Bennett 1988, 1989; Bennett and Bennett 1993; Elo and Rapeli 2010; Mondak 2000; Lambert et al. 1988). There is emerging evidence that education is associated with greater levels of policy-specific knowledge (e.g., Jerit, Barabas, and Bolsen 2006 or Barabas and Jerit 2009), but the evidentiary basis for this claim is sparse.
Moreover, the logic of the ability-opportunity-motivation framework suggests that although the relationship between years of formal education and political knowledge may be evident across all four quadrants, the strength of that association will vary. For questions having to do with general political knowledge, the relationship should be strong because these topics comprise the curriculum of primary and secondary schools (Delli Carpini and Keeter 1996, 190). By contrast, policy-specific knowledge may exhibit a weaker relationship with education if particular issue publics (e.g., the elderly, African-Americans) are motivated to seek out information despite being otherwise disadvantaged in terms of level of education or the gains reaped from formal schooling (e.g., Dow 2009).
The key point is that the effect of education varies across the quadrants of the temporal-topical space. This is exactly what we find in a large data set of over 300 knowledge questions.
In the figure below, the grey lines represent the estimated relationship between education and political knowledge for each quadrant (for an otherwise "typical" respondent asked an otherwise "typical" question). The black lines represent the average effects across questions in each quadrant. Notice that the effects are about twice as large for the general questions.
Perhaps most importantly, we also think our typology can improve the way scholars link political knowledge and democratic theory.
Additionally, our typology may reshape how scholars, including some critics, think about the relevance of knowledge. For example, Lupia (2006, 219) observes that "Most political knowledge questions are not derived from a replicable or transparent logic about how their answers bear on a voter's ability to make decisions of a particular quality in the voting booth." Developing such a rationale may come more naturally when thinking about analytical types (e.g., surveillance-general facts) rather than the overarching concept (e.g., political knowledge). Above all, the temporal-topical framework provides a more differentiated conceptualization of political knowledge, which may create interesting possibilities for future research and, ideally, a better understanding of the causes and consequences of this important political resource.
For a while now, political scientists have been trying to convince journalists that campaigns don't matter much, or that they matter only in subtle ways, but journalists don't seem to be persuaded, so political scientists keep trying. Judging by the longevity of the dispute, journalists find the notion that campaigns don't matter much hard to swallow. I find it hard to swallow as well.
I think the confusion stems at least partly from differences in political scientists' and journalists' training. Political scientists are trained to think about "effects" in a very precise and non-obvious manner. Unfortunately, this thinking is often implicit and political scientists fail to communicate their reasoning to journalists. This leads to a good deal of discussion that seems to accomplish nothing. For example, see here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here, just to take a few examples.
I think that journalists' intuition that campaigns have large effects is correct. I also think that political scientists' research suggesting that campaigns have small effects is correct. How can campaigns have both small and large effects? I think that journalists and political scientists are using different counterfactuals to define "effect."
I think that confusion about the counterfactual leads to much of the disagreement and confusion about the size of campaign effects. (Click here to continue reading.)
Over the last few days, I've written a couple of posts (here, here) about creating coefficient plots. I like coefficient plots far better than regression tables; in fact, I don't really see a need for tables at all, and I think they can be misleading. In this post, I explain why (for the most part) I do not include regression tables in my papers and what I use in their place. (Click here to continue reading.)
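To make the idea concrete, here is a minimal sketch of the ingredients of a coefficient plot: point estimates plus 95% confidence intervals, computed by hand with NumPy on simulated data. The data, variable names, and true coefficients are all hypothetical, invented for illustration; this is not the analysis from the posts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: outcome as a linear function of two predictors plus noise.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([1.0, 0.5, -0.25])
y = X @ true_beta + rng.normal(size=n)

# OLS estimates and conventional standard errors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# The estimate and interval for each coefficient -- the raw material of the plot.
labels = ["Intercept", "x1", "x2"]
for name, b, s in zip(labels, beta, se):
    lo, hi = b - 1.96 * s, b + 1.96 * s
    print(f"{name:>9}: {b:6.2f}  [{lo:6.2f}, {hi:6.2f}]")
```

From here, drawing the plot itself takes only a few lines (e.g., matplotlib's `errorbar` with one row per coefficient); the point is that a reader sees each estimate and its uncertainty at a glance instead of scanning a table.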
It is quite common in political science for researchers to run statistical models, find that a coefficient for a variable is not statistically significant, and then claim that the variable "has no effect." This is equivalent to proposing a research hypothesis, failing to reject the null, and then claiming that the null hypothesis is true (or discussing results as though the null hypothesis is true). This is a terrible idea. Even if you believe the null, you shouldn't use p > 0.05 as evidence for your claim. In this post, I illustrate why. (Click here to continue reading.)
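As a quick sketch of why p > 0.05 is weak evidence for the null (my own illustration, not the post's analysis): in an underpowered study, even a real effect routinely fails to reach significance. The simulation below uses hypothetical numbers, a true effect of 0.3 standard deviations and 30 observations per group, and counts how often a researcher would wrongly conclude the variable "has no effect."

```python
import numpy as np

rng = np.random.default_rng(42)

def not_significant(n=30, effect=0.3):
    """Simulate one underpowered study of a real (nonzero) effect.

    Returns True when the two-sample test fails to reject the null
    (p > 0.05), even though the true effect is not zero.
    """
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    return abs(diff / se) < 1.96

n_sims = 5000
share = sum(not_significant() for _ in range(n_sims)) / n_sims
print(f"Share of studies with p > 0.05 despite a real effect: {share:.2f}")
```

With these assumed parameters, most simulated studies come back "not significant" even though the null is false, which is exactly why a nonsignificant coefficient does not license the claim that a variable has no effect.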
During the recent Republican debate, Rick Perry was asked if he struggled to sleep at night after executing 234 people as governor of Texas. While the question was being asked, the Republican crowd broke into spontaneous applause. Gov. Perry replied that he had never struggled with it at all, and, commenting on the spontaneous applause, he noted that the vast majority of Americans are supportive of capital punishment for the most heinous of crimes. (See the video here.) In a separate, more challenging interview, Gov. Perry refused to back down from his position on the death penalty. (See the video here.) (Click here to continue reading.)