This summer, I presented a poster of my paper on product terms in logit models at PolMeth and APSA. I've never felt entirely comfortable making posters, but this time I used Apple's Pages and it worked great. Here's a quick summary of the software I've tried and why I love or hate it. I've also added a few links to helpful resources near the bottom.
- Inkscape: I did my first poster in Inkscape. It worked fairly well, though it has a steep learning curve; I cleared that hurdle a while ago because I already use it for post-editing R graphics. However, Inkscape does not do a good job of laying out text. For example, I had to wrap my sentences by hand. It also lacks spell-check, which I badly need--you'll notice a couple of ugly typos in my first poster. On the other hand, it offers a lot of control, which I love. Adobe Illustrator is a similar, proprietary tool. I've read some nice things about it, but I don't have access to it. It might resolve some of these difficulties while offering similar control.
- LaTeX: Many of my friends use LaTeX, but I just don't like the look of a LaTeX poster. The best I've found is from Nathaniel Johnston, but it is still too busy for me. While I love LaTeX for writing papers, I prefer WYSIWYG for posters (and presentation slides). LaTeX makes inserting equations really easy and perfectly justifies the text, but it doesn't give me the amount of flexibility and control that I want (or need).
- PowerPoint: I couldn't include .pdf graphics, so I didn't give it serious consideration.
- Keynote: Keynote makes beautiful presentations, so I thought it would make beautiful posters. However, I couldn't get the text to wrap around the images the way that I really wanted.
- Pages: The best software I've tried. Equations are not easy, but I use this site to export the few equations that I need to .pdfs and include these as images in the poster. Pages makes it easy to set the size of the poster to something unusual (e.g., 48" x 36"). Pages also handles .pdf graphics nicely. There's a copy of the poster that I made here in case you are interested.
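If you'd rather not depend on a website, any standard LaTeX installation can produce the same kind of cropped .pdf equation with the `standalone` document class. A minimal sketch (the particular logit equation here is just an illustration in the spirit of the paper, not one from the poster):

```latex
% equation.tex -- compile with: pdflatex equation.tex
% The standalone class crops the output .pdf tightly to the equation,
% so it drops into Pages as an ordinary image.
\documentclass[border=2pt]{standalone}
\usepackage{amsmath}
\begin{document}
$\Pr(y_i = 1) = \operatorname{logit}^{-1}(\beta_0 + \beta_1 x_i + \beta_2 z_i + \beta_3 x_i z_i)$
\end{document}
```

Running `pdflatex equation.tex` yields a tightly cropped equation.pdf that Pages handles like any other .pdf graphic.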
Here are several links to information that I've found helpful in designing my own posters.
- The Better Posters Blog has lots of interesting tips and even critiques of others' posters.
- Colin Purrington gives lots of useful tips for designing and presenting posters.
- GradHacker gives five tips for a better poster. My favorite: "Never underestimate the value of blank space." Most posters are too busy for my tastes.
If you have a different take on any of the software I discussed above, or have experience with other software, let me know in the comments or on Twitter (@carlislerainey). Also, if you have any favorite resources that might help me or others improve our posters, let me know.
I'm always trying to convince my graduate students that it takes a long time to publish a paper. Motivated by this post by Nathan Jensen, I thought I'd share the timeline for my paper "Arguing for a Negligible Effect" that just came out in AJPS.
Started fourth year of graduate school.
2011-10-05: Finished first draft.
One year passed. Rewrote, defended prospectus, received comments, etc.
2012-10-07: Submitted to AJPS.
2012-10-09: Failed technical check, but successfully resubmitted.
Six months passed. Got a job at UB in the meantime.
2013-04-23: Received invitation to revise and resubmit. Accepted invitation.
Four months passed. Started new job at UB. The paper got a lot better during this time, thanks mostly to several great anonymous reviewers.
Three months passed.
2013-11-21: Accepted unconditionally.
2013-12-10: Submitted final version. Received another technical reject. Resubmitted successfully.
2013-12-30: Final version accepted and sent to press.
2014-02-11: Received proof from Wiley.
2014-02-18: Sent corrections back to Wiley.
2014-03-07: Published online.
Seven months passed.
2014-10-13: Published in print.
This was a very smooth process. It never faced any major obstacles and it was accepted at the first journal I sent it to. It still took three years to go from first draft to in-print and 13 months to go from submission to acceptance.
Takeaway point to graduate students: I had a rough draft of a great idea at the start of my fourth year in graduate school. It wasn't accepted until after I got a job. It is the papers that you write in your second and third years that make you attractive on the job market. Make your second and third years count.
I use the phrases "statistical insignificance" and "statistically insignificant" often, but I was recently informed that these terms are not correct. Instead, I was told to say something like "non-statistically significant." In light of this, I'm careful to say "not statistically significant" or "a lack of statistical significance" in my forthcoming AJPS article.
Since then, though, I've been paying attention and have noticed that researchers smarter than me use both, so I'm not too worried about the distinction.
In Bayesian Methods and "The Insignificance of Null Hypothesis Significance Testing", Jeff Gill uses the phrase "non-statistically significant."
I think I prefer "statistically insignificant," since the negation more clearly attaches to "significant." "Non-statistically significant" makes it sound like we're talking about some other kind of significance, such as substantive significance.
The danger is that the word "insignificant" implies there is "no effect."
I'd be curious to know what others think.
I taught the graduate course in linear models at UB last semester and a major portion of the course was a replication project. Here are a few quick observations.
- Building the course around a replication project has made organizing the course a lot easier. After all, my ultimate goal at the end of the semester is that students be able to run their own regression. Since students are working on their replication projects and asking questions, I have a pretty good sense of what I should be talking about next.
- Based on my discussions with some of the students in the class, the replication project gives the in-class discussions and readings a sense of purpose. I usually try to set up the readings during class, have the students do the necessary reading, and then apply those ideas to their projects. I've given them a clear target to reach by the end of the semester (a high-quality quantitative analysis) and they can see how each topic we discuss helps them get closer to that goal. It's still early, but I think it's been effective so far.
- I'm encouraged by the availability of data--it seems like researchers are doing a better job of that. Under Rick Wilson's editorship, AJPS has become an example for other journals to follow.
- We've had a couple of cases, though, in which the posted data was incomplete. In each case, a variable was missing.
- I'm a little surprised at how difficult it is to understand what is going on in an analysis from the paper alone. I just try to imagine how hard it would be to replicate someone's results if they did not provide the computer code. I encouraged students to focus their efforts on papers that provided data and code. In two cases, we still weren't able to replicate due to missing data. In a separate case, the students didn't have a script and were totally lost. They had a complete and well-documented data source, but we eventually had to e-mail the authors for a Stata .do file. With this, we were able to replicate the results.
In the end, these replications were very popular with the students and they managed to write a few great papers. I'm so satisfied with the outcome that I'm doing it again this semester in my advanced methods class.
Jason Barabas, Jenn Jerit, Will Pollock, and I have a paper coming out in the APSR that looks at the types of facts that one might ask questions about in measuring political knowledge. We categorize facts by whether they came to be in the recent or distant past (i.e., "surveillance" or "static" facts) and by whether the question deals with policy-relevant or general facts (e.g., people and players). This leads to what we term the "temporal-topical space." We focus conceptually on the four "quadrants" or types of facts that people might learn about politics and discuss the implications of focusing on particular quadrants for particular research questions.
Aside from individual, environmental, and procedural determinants, two question-level characteristics influence how and whether a particular fact is learned. The first factor has to do with how recently the fact came into being (the “temporal dimension”). The second characteristic pertains to the type of fact—in particular, whether the question has to do with public policy concerns or the institutions and people/players of government (the “topical dimension”). Figure 1 illustrates that the two dimensions can be crossed to yield four theoretically relevant types of knowledge questions.
For the example of the effect of education on political knowledge, we write:
...the information transmitted in primary and secondary schools does not relate uniformly to subjects in the temporal-topical space. Indeed, Delli Carpini and Keeter (1996, 190) observe that schools “teach particular aspects of politics, most notably the institutions and processes of government.” It is noteworthy that although dozens of studies have documented the association between education and political knowledge, the empirical analyses often focus on questions at the top of Figure 1’s vertical dimension—namely, items measuring general political knowledge (e.g., Bennett 1988, 1989; Bennett and Bennett 1993; Elo and Rapeli 2010; Mondak 2000; Lambert et al. 1988). There is emerging evidence that education is associated with greater levels of policy-specific knowledge (e.g., Jerit, Barabas, and Bolsen 2006 or Barabas and Jerit 2009), but the evidentiary basis for this claim is sparse.
Moreover, the logic of the ability-opportunity-motivation framework suggests that although the relationship between years of formal education and political knowledge may be evident across all four quadrants, the strength of that association will vary. For questions having to do with general political knowledge, the relationship should be strong because these topics comprise the curriculum of primary and secondary schools (Delli Carpini and Keeter 1996, 190). By contrast, policy-specific knowledge may exhibit a weaker relationship with education if particular issue publics (e.g., the elderly, African-Americans) are motivated to seek out information despite being otherwise disadvantaged in terms of level of education or the gains reaped from formal schooling (e.g., Dow 2009).
The key point is that the effect of education varies across the quadrants of the temporal-topical space. This is exactly what we find in a large data set with over 300 knowledge questions.
In the figure below, the grey lines represent the estimated relationship between education and political knowledge for each quadrant (for an otherwise "typical" respondent asked an otherwise "typical" question). The black lines represent the average effects across questions in each quadrant. Notice that the effects are about twice as large for the general questions.
Perhaps most importantly, we also think our typology can improve the way scholars link political knowledge and democratic theory.
Additionally, our typology may reshape scholarly thinking about the relevance of knowledge among some critics. For example, Lupia (2006, 219) observes that “Most political knowledge questions are not derived from a replicable or transparent logic about how their answers bear on a voter’s ability to make decisions of a particular quality in the voting booth.” Developing such a rationale may come more naturally when thinking about analytical types (e.g., surveillance-general facts) rather than the overarching concept (e.g., political knowledge). Above all, the temporal-topical framework provides a more differentiated conceptualization of political knowledge, which may create interesting possibilities for future research and, ideally, a better understanding of the causes and consequences of this important political resource.