The Temporal-Topical Space

Jason Barabas, Jenn Jerit, Will Pollock, and I have a paper coming out in the APSR that looks at the types of facts that one might ask questions about in measuring political knowledge. We categorize facts by whether they came to be in the recent or distant past (i.e., "surveillance" or "static" facts, respectively) and by whether the question deals with policy-relevant or general facts (e.g., people and players). This leads to what we term the "temporal-topical space." We focus conceptually on the four "quadrants," or types of facts that people might learn about politics, and discuss the implications of focusing on particular quadrants for particular research questions.

We write:

Aside from individual, environmental, and procedural determinants, two question-level characteristics influence how and whether a particular fact is learned. The first factor has to do with how recently the fact came into being (the “temporal dimension”). The second characteristic pertains to the type of fact—in particular, whether the question has to do with public policy concerns or the institutions and people/players of government (the “topical dimension”). Figure 1 illustrates that the two dimensions can be crossed to yield four theoretically relevant types of knowledge questions.

[Figure 1: the temporal-topical space]

Regarding the effect of education on political knowledge, we write:

...the information transmitted in primary and secondary schools does not relate uniformly to subjects in the temporal-topical space. Indeed, Delli Carpini and Keeter (1996, 190) observe that schools “teach particular aspects of politics, most notably the institutions and processes of government.” It is noteworthy that although dozens of studies have documented the association between education and political knowledge, the empirical analyses often focus on questions at the top of Figure 1’s vertical dimension—namely, items measuring general political knowledge (e.g., Bennett 1988, 1989; Bennett and Bennett 1993; Elo and Rapeli 2010; Mondak 2000; Lambert et al. 1988). There is emerging evidence that education is associated with greater levels of policy-specific knowledge (e.g., Jerit, Barabas, and Bolsen 2006 or Barabas and Jerit 2009), but the evidentiary basis for this claim is sparse.

Moreover, the logic of the ability-opportunity-motivation framework suggests that although the relationship between years of formal education and political knowledge may be evident across all four quadrants, the strength of that association will vary. For questions having to do with general political knowledge, the relationship should be strong because these topics comprise the curriculum of primary and secondary schools (Delli Carpini and Keeter 1996, 190). By contrast, policy-specific knowledge may exhibit a weaker relationship with education if particular issue publics (e.g., the elderly, African-Americans) are motivated to seek out information despite being otherwise disadvantaged in terms of level of education or the gains reaped from formal schooling (e.g., Dow 2009).

The key point is that the effect of education varies across the quadrants of the temporal-topical space. This is exactly what we find in a large data set with over 300 knowledge questions.

In the figure below, the grey lines represent the estimated relationship between education and political knowledge for each quadrant (for a "typical" respondent asked a "typical" question). The black lines represent the average effects across questions in each quadrant. Notice that the effects are about twice as large for the general questions.

[Figure: the estimated effect of education on political knowledge within each quadrant of the temporal-topical space]

Perhaps most importantly, we also think our typology can improve the way scholars link political knowledge and democratic theory.

Additionally, our typology may reshape scholarly thinking about the relevance of knowledge among some critics. For example, Lupia (2006, 219) observes that “Most political knowledge questions are not derived from a replicable or transparent logic about how their answers bear on a voter’s ability to make decisions of a particular quality in the voting booth.” Developing such a rationale may come more naturally when thinking about analytical types (e.g., surveillance-general facts) rather than the overarching concept (e.g., political knowledge). Above all, the temporal-topical framework provides a more differentiated conceptualization of political knowledge, which may create interesting possibilities for future research and, ideally, a better understanding of the causes and consequences of this important political resource.


The Front-End of Methods Training

Based on my own experience and interactions with other professors and students, most methods training in political science starts with a “baby stats” course, continues into a more detailed course on linear models, and finishes with a fairly rigorous course on the generalized linear model that includes a grab bag of the latest and greatest methods. In my experience, the detail and breadth of these courses increase as students go along. Relatedly, departments (with limited methods-oriented faculty) tend to devote their methodologists to the more advanced courses and, if necessary, use more substantively oriented faculty for the introductory courses. My experience in the statistics department at Florida State suggests a slightly different approach might train students more effectively. While a course on the GLM is crucial (I think I’ve used logit in every paper I’ve ever written), a thorough course in probability seems just as important to me. So what are the key ideas that students should learn in an introductory methods course?

  1. point estimation
  2. interval estimation
  3. hypothesis testing
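
To make these three ideas concrete from day one, here's a minimal sketch in R (the data are simulated and the group labels are made up for illustration) that hits all three in a differences-in-means setting:

```r
# simulated data for two hypothetical groups (n = 50 each)
set.seed(42)
treatment <- rnorm(50, mean = 1, sd = 2)
control   <- rnorm(50, mean = 0, sd = 2)

mean(treatment) - mean(control)  # 1. point estimation: the difference in means
tt <- t.test(treatment, control)
tt$conf.int                      # 2. interval estimation: a 95% confidence interval
tt$p.value                       # 3. hypothesis testing: evidence against "no difference"
```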

As the sketch above suggests, this could be done in the context of differences-in-means and a simple linear regression with a single explanatory variable (or even multiple regression). I’ve never used a chi-square test in an actual application, and I’ve certainly never done one by hand, so I don’t really see the point of doing several by hand as part of an applied methods class. Methods training in political science falls short of its potential because early methods classes fail to deal head-on with these key concepts and then try to build on a nonexistent foundation. To really get a handle on the three key ideas of point estimation, interval estimation, and hypothesis testing, students need to be familiar with some basic principles of probability theory.

  1. probability distributions and random variables (pdfs/pmfs, cdfs, computer simulation)
  2. Bayes’ rule for discrete and continuous events
  3. mean and variance (of a random variable, not the sample mean and variance)
  4. conditional expectation
  5. central limit theorem
  6. sampling distributions
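
The last two items lend themselves especially well to the computer simulation mentioned in item 1. A minimal sketch in R, using a made-up exponential population:

```r
# approximate the sampling distribution of the sample mean by simulation
set.seed(7)
n <- 30
sample_means <- replicate(5000, mean(rexp(n, rate = 1)))

hist(sample_means, breaks = 50, freq = FALSE,
     main = "Sampling distribution of the sample mean")
# the central limit theorem says this is approximately N(1, 1/n),
# since an exponential with rate 1 has mean 1 and variance 1
curve(dnorm(x, mean = 1, sd = 1/sqrt(n)), add = TRUE)
```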

I’d start the class with a scatterplot of two theoretically related variables, such as the incumbent party’s presidential vote share and change in GDP. I’d ask students to think about how these two things might be related. Based on simply inspecting the scatterplot, I’d ask them three specific questions.

  1. For every percentage point increase in the GDP growth rate, by how many percentage points does the incumbent party’s vote share increase? Don’t worry about being exactly correct; just come up with a “good estimate.” Call this quantity the “effect.”
  2. Choose two values that you are “fairly confident” lie above and below the actual effect.
  3. Are you “fairly confident” that the actual effect is greater than zero?

I’d then set out to tackle these questions throughout the class. They imply other questions as well, such as what makes an estimate a “good estimate” and the technical meaning of “fairly confident.” I’d note that to answer these questions, we need a statistical model, so I’d suggest \(y_i \sim N(\mu_i, \sigma^2)\), where \(\mu_i = \beta_{cons} + \beta_{x}x_i\). I could then note that this and similar models serve as powerful tools for answering these types of questions and that it’s really important to understand the details. I’d jump in with the normal distribution, expand to other distributions, and work my way down the list, always coming back to the fundamental concepts of point estimation, interval estimation, and hypothesis testing, being very careful with the details and not shying away from the mathematical background.
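
To see how the three opening questions map onto the model, here's a minimal sketch in R, with simulated data standing in for the actual election series (the numbers are made up for illustration):

```r
# made-up data loosely mimicking the vote-share example
set.seed(1234)
n <- 25
gdp_growth <- rnorm(n, mean = 2, sd = 2)            # GDP growth rate (%)
vote_share <- 52 + 2*gdp_growth + rnorm(n, sd = 4)  # incumbent party's vote share (%)

# fit y_i ~ N(mu_i, sigma^2) with mu_i = beta_cons + beta_x * x_i
fit <- lm(vote_share ~ gdp_growth)

coef(fit)["gdp_growth"]       # question 1: a "good estimate" of the effect
confint(fit)["gdp_growth", ]  # question 2: bounds we can be "fairly confident" about
summary(fit)$coefficients["gdp_growth", "Pr(>|t|)"]  # question 3: two-sided p-value
```

I don’t know what sort of textbook would be appropriate for this style of class. My favorite is Casella and Berger, but that’s much too advanced for an introductory class for political science graduate students. I haven’t spent a lot of time with it, but DeGroot and Schervish seems promising. These are just some initial ideas, so let me know what you think, especially if you disagree.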


Advice for Teaching Undergraduate Methods

A friend writes:

I'm wondering if you have any materials and/or advice you'd be willing to share for teaching undergrad research methods.

I have four bits of advice.

  1. Use a textbook. I haven't found one that I love, but choose something and follow it closely. You don't necessarily need to assign readings from it, but you need something to follow. I once tried to teach a class "off the cuff" so that I could adjust to students' needs, progress, and interest. That was a terrible idea. Get a book and follow it. However, don't feel obligated to reach the end of the book.
  2. Do lots of examples. You cannot do too many examples, especially for the calculations. Depending on the amount of math you require them to do "by hand," this might take a whole class period. That's okay. Do lots of examples. One or two is not enough.
  3. Do lots of in-class exercises. Introduce a concept, discuss it in some detail, and then let students work with it. There is much more opportunity for this in methods classes than in substantive political science classes. Here are three examples that might be helpful:
    1. I usually talk about concepts and measurement a lot throughout the class. I'll take three abstract political concepts such as war, income inequality, and partisanship and ask students to carefully define the underlying concepts and then develop a concrete, plausible way to measure these concepts. This usually leads to a long and interesting class discussion.
    2. Scatterplots and regression usually take up a substantial chunk as well. I always give students a scatterplot with just a few data points and ask them to draw the line that "best fits" the data. I tell them that we will see who can draw the best line. We then find the slope and intercept and use them to compute the residuals and then the sum of squared errors. I award an honorary title of their choosing to whoever has the lowest sum of squares (see the first sketch after this list).
    3. I spend a lot of time talking about p-values as well. I usually reenact the lady tasting tea with Pepsi versus Pepsi Max (see the second sketch after this list). I think this is a nice example for working through the convoluted logic of hypothesis testing.
  4. Quiz often. Depending on your preferences, you may want to include graded quizzes as part of the class. If not, then I recommend doing self-assessments at the beginning of each class, just so students can see if they don't quite understand the material. I think it will help you to do these often. It also probably makes sense to discuss the questions with the students after the quiz.
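
For the line-drawing contest, here's a minimal sketch in R of the scoring (toy data; the guessed slope and intercept stand in for a student's hand-drawn line):

```r
# toy data for the line-drawing contest
x <- c(1, 2, 3, 4, 5)
y <- c(2.1, 3.9, 6.2, 7.8, 10.1)

# a student's hand-drawn line, read off as an intercept and a slope
guess_intercept <- 0.5
guess_slope     <- 1.8

resids <- y - (guess_intercept + guess_slope*x)
sum(resids^2)            # the student's sum of squared errors

sum(resid(lm(y ~ x))^2)  # the least-squares benchmark to beat
```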
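
And for the Pepsi version of the lady tasting tea, a simplified sketch (binomial guessing rather than Fisher's original hypergeometric design; the counts are hypothetical):

```r
# if the taster is just guessing (50-50 on each cup), how surprising
# are seven correct calls out of eight?
binom.test(7, 8, p = 0.5, alternative = "greater")$p.value  # about 0.035
```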

I am indifferent toward software--I've included it and I have excluded it. This semester, I'm going to try doing a little R.


Is OLS BLUE or JUNK?

One of my favorite pontificators in political science is Fernando Martel Garcia. I got to know him at a replication panel at ISA, where he quite vigorously opposed the APSR's policy of auto-rejecting replication papers. Fernando recently posted this gem to the PolMeth mailing list:

In the real world computers do not work alone but at the behest of the researcher operating them.  And the problem is that the latter are often trying to solve a different minimization problem. Namely, choosing regressors, samples, time periods, functional forms, measures, proxies, etc. that minimize the p-value of interest. Thus, in the context of research practice, or how scientists go about doing science, it might be more appropriate to say that most OLS estimates are JUNK rather than BLUE.  And so educators ought to do a much better job of teaching research practice and good research design, over and above OLS.

Many in the methodology community hate p-values, but I actually like them. In fact, I'm growing more and more frequentist in my thinking. However, if researchers use p-values as their optimization criterion, then we are in rough shape. But what can we expect, when journals seem to use p-values as a rejection criterion?
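
Fernando's point is easy to demonstrate with a quick, stylized simulation (not from his post; the sample size and number of candidate regressors are made up): regress pure noise on twenty junk regressors, one at a time, and keep only the smallest p-value.

```r
# if a researcher tries 20 specifications on pure noise and reports
# only the smallest p-value, how often does something look "significant"?
set.seed(2014)
one_search <- function(n = 100, k = 20) {
  y <- rnorm(n)                  # the outcome is pure noise
  X <- matrix(rnorm(n*k), n, k)  # k candidate regressors, all junk
  min(apply(X, 2, function(x) summary(lm(y ~ x))$coefficients[2, 4]))
}
mean(replicate(1000, one_search()) < 0.05)  # roughly 0.64, not the nominal 0.05
```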


What I'm Up to at APSA

I'm doing a paper and a poster at APSA this year. These are a couple of projects that I'm excited about, so I'm looking forward to talking about them.

Friday, 8/29, 10:15-12:00, Marriott Exhibit Hall B North.

You should stop by and chat with me on Friday morning. I'm presenting a poster discussing the nuances of product terms, interaction, and logit models. The key point of the paper is that you need product terms in order to draw confident conclusions about interaction. If you like, you can go ahead and preview the poster, read the paper, and get the code and data from GitHub.
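
This isn't from the paper itself, but for a flavor of the setup, here's a minimal sketch in R (simulated data, hypothetical variable names) of the kind of specification the poster is about:

```r
# simulated data with a genuine interaction (hypothetical variables)
set.seed(1)
n <- 1000
x <- rbinom(n, 1, 0.5)
z <- rbinom(n, 1, 0.5)
y <- rbinom(n, 1, plogis(-1 + 0.5*x + 0.5*z + 1.0*x*z))

m1 <- glm(y ~ x + z, family = binomial)  # no product term
m2 <- glm(y ~ x*z, family = binomial)    # with the product term
summary(m2)$coefficients["x:z", ]        # direct evidence about the interaction
```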

You can click here to add it to your calendar.

Friday, 8/29, 4:15-6:00, Hilton Columbia 4

On Friday afternoon, I'm presenting at a panel on Representation and Electoral Systems. We've got several interesting papers and a couple of great discussants, so it should be fun. You can go ahead and read my paper, get the code from GitHub, and preview my slides.

You can click here to add it to your calendar.

