What I'm Up to at APSA

I'm presenting a paper and a poster at APSA this year. Both are projects I'm excited about, so I'm looking forward to talking about them.

Friday, 8/29, 10:15-12:00, Marriott Exhibit Hall B North.

You should stop by and chat with me on Friday morning. I'm presenting a poster discussing the nuances of product terms, interaction, and logit models. The key point of the paper is that you need product terms in order to draw confident conclusions about interaction. If you like, you can go ahead and preview the poster, read the paper, and get the code and data from GitHub.
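If you'd like a preview of the core idea, here's a minimal R sketch, with a hypothetical binary outcome y, covariates x and z, and data frame d: to draw confident conclusions about whether the effect of x depends on z, the specification should include the product term.

    # Hypothetical illustration: two logit specifications.
    # The restricted model includes no product term, so it cannot
    # directly support claims about interaction between x and z.
    m_restricted <- glm(y ~ x + z, family = binomial, data = d)

    # The interactive model includes the product term x:z, whose
    # coefficient speaks directly to whether the effect of x varies with z.
    m_interactive <- glm(y ~ x * z, family = binomial, data = d)
    summary(m_interactive)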

You can click here to add it to your calendar.

Friday, 8/29, 4:15-6:00, Hilton Columbia 4.

On Friday afternoon, I'm presenting at a panel on Representation and Electoral Systems. We've got several interesting papers and a couple of great discussants, so it should be fun. You can go ahead and read my paper, get the code from GitHub, and preview my slides.

You can click here to add it to your calendar.


Thoughts on Giving a Great Conference Presentation

Political science conference presentations are typically boring. The presenter mumbles past their time limit about some vague experiment, the audience asks off-topic questions, and I'm off in the interesting discussion (the one happening simultaneously on Twitter, unrelated to the panel). I care about efficiency more than most people. If I believe you are wasting my time, I will tune you out fast. I carry everything I need to work in my pocket, so I am hardly a captive audience. And because I usually sit in the back and can see a number of laptop screens, I can assure you that others feel similarly.

So, just in time for APSA, below is a list of suggestions to help you break out of this tiresome, terrible mold, presented roughly in order of importance.

  1. Make your point early and often. I think a great way to start a presentation is "Today, I'm going to try to convince you that..." Be simple and direct from the very beginning. At no point after the first 30 seconds of the talk should anyone need to ask you what your point is.
  2. Never go over your allotted time. If the chair allots 12 minutes, finish in 10. Going over your allotted time is disrespectful to the audience and the other panelists.
  3. Practice, practice, practice. Own it. I think about 10 run-throughs is a minimum. The first 30 seconds are the most important part to polish.
  4. Start with some sort of hook. You have 30 seconds to earn your audience's attention for 12 minutes. You can find plenty of suggestions for this using Google.
  5. Include little text on your presentation slides. Recognize that your audience cannot read and listen at the same time. If you put a large chunk of text on a slide, you must give your audience time to read it before you resume talking. And if you put all your thoughts on the slides, you might as well simply email them around and skip the talk--it is not doing anyone any good. Instead, use the slides for short statements that orient your audience and for graphs. As an example, have a look at some slides I've used in the past.
  6. Pause often throughout the talk. Give your audience a chance to catch up. Periods, paragraphs, section headings, and chapters all signal to readers that a transition is happening. Likewise, you need to pause at the end of each thought to give your audience a chance to digest the point, gather themselves, and get ready for the next one. What seems like an eternity to you as a presenter feels like a cool summer breeze to your audience. Pauses are incredibly powerful--it sometimes takes people a while to wrap their heads around a point and collect their thoughts.
  7. Give pointers often throughout the talk. "Before jumping into why I think that [your point], let me explain why this is an important point to make." "Now that I've explained why I think that [your point] from a theoretical perspective, I'd like to show you some data that support my point as well." Pointers like these pair nicely with the pauses above.
  8. Have notes. Look at them--not your slides. No one will freak out if you stop talking and look at your notes. In fact, they'll appreciate the breather.
  9. Choose carefully what goes into your talk. Your job is not to walk through everything in the paper. It is to state the main point of the paper and give a brief argument for it. This might mean that you talk about only one of the twelve hypotheses. It might mean you talk only about the theoretical model or only about the empirical results. It might mean that you skimp on one or the other. For example, here's a 12-minute presentation I'm giving about this paper at APSA. The paper has a formal model and an empirical analysis. I don't feel that 12 minutes is enough time for both, so the presentation makes only a passing mention of the formal model. Instead, I focus on (1) the theoretical intuition and (2) plots of the data.
  10. Never apologize to start a presentation--own it. Never start with administrative stuff--make your point. If you need to say something like "please interrupt with questions," do it after you've earned the audience's attention. And if you want people to hold questions until the end, at least don't tell them so.
  11. Connect with people. Look them in the eye. I struggle with this more than anything.

I have strong views on a lot of things, and you should feel free to take these seriously or not. I hope, however, that you'll find them useful.


New Format for Blog Posts

In the past, I have updated the blog quite sporadically, usually to make a point that I felt particularly strongly about or highlight a paper I'd written. I'm trying to change that, because I think my blog can serve as a useful medium for promoting my ideas.

For the fall semester, I'm going to post every Thursday at 9 am. I've planned three types of posts.

  1. Highlights of key findings from my working and forthcoming papers. As papers work their way through the review process, I'll post updates as well.
  2. Quick thoughts on professionalization topics, such as writing, productivity, and presentations.
  3. Occasional longer posts on a variety of topics.

I'm kicking things off with a post about presentations, just in time for APSA.

If you are interested, you can receive posts automatically by subscribing via RSS.


Another Benefit of Publicly Version-Controlled Research

I've been thinking quite a bit lately about why and how political scientists should publicly version control their research projects. By research projects, I mean the data, manuscript, and code. And by publicly version control, I mean using Git to track the project's history and posting it in a public GitHub repository from the beginning of the project, so that other researchers are free to follow and borrow as needed. Below I quickly summarize some of the benefits of version control and discuss another benefit that Git and GitHub have brought to my own research.
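Concretely, starting a publicly version-controlled project might look like the following sketch (the folder names and repository URL here are hypothetical):

    # begin tracking the project from day one
    git init
    git add data/ code/ manuscript/
    git commit -m "Begin project"

    # publish it in a public GitHub repository
    git remote add origin https://github.com/username/project.git
    git push -u origin master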

My thinking about public version control for research projects began with Zach Jones' discussion of the idea in a recent issue of The Political Methodologist. Teaching the linear models class here at UB last semester solidified its importance in my mind. We used Dropbox for version control and sharing, but Git and GitHub are better.

Several recent articles and posts outline why researchers (as opposed to programmers) might use Git and GitHub. Here's a brief summary:

  1. A paper by Karthik Ram.
  2. A short essay by Zach Jones.
  3. A short essay by Christopher Gandrud.

If I've missed something, please let me know.

There are a lot of good reasons to do this:

  • History. We all use version control. Most of us do it poorly. Using Git/GitHub, I'm learning to do it better. Git/GitHub offers a formal way to easily maintain a complete history of a project. In general, it's good to avoid filenames like new_final_carlisle_v3c_updated.docx. A recent comic makes this point clear. We need a method of updating files while keeping track of the old versions so that we can go back if needed. But the approach of giving different filenames to different versions is inefficient at best, and my approach of keeping "old files" in a designated folder is no better. Git/GitHub solves these issues. Git also allows you to tag a project at certain stages, such as "Initial submission to AJPS." After getting an invitation to revise and resubmit and making the required changes, I can compare the current version of the project to the (now several months old) version I initially submitted, which makes writing response memos much easier (see the sketch after this list).
  • Transparency. Zach Jones most clearly makes the point that Git/GitHub increases transparency in the context of political science research. Git/GitHub essentially allows others to actively monitor the progress of a project or study its past development. Related to this open use of GitHub is the idea of an "open notebook"; Carl Boettiger is one of the most well-known proponents of open notebooks. This kind of openness provides a wonderful opportunity to receive comments and suggestions from a wide audience. It allows others to catch errors that might otherwise go unnoticed, and it gives readers a formal avenue to make suggestions, not to mention keeping a complete history of the suggestions and any subsequent discussion. GitHub allows the public to open issues, which is a wonderful way to receive and organize feedback on a paper.
  • Accessibility. Christopher Gandrud makes this point clearly in a recent edition of The Political Methodologist, though he discusses accessibility purely in the context of building data sets; similar arguments could be made for code. I recently had a graduate student express interest in some MRP estimates of state-level opinion on the Affordable Care Act. I told her that I had spent some time collecting surveys and writing code to produce the estimates, and I noted that, ideally, she would not duplicate my work but, if possible, build on it. I was able to point her to the GitHub repository for the project, which I hope she'll find useful as a starting point for her own work. From my experience supervising replication projects in graduate methods classes, and from my own experience with replication data, the clean, final versions of the data that researchers typically post publicly do not allow future researchers to easily build on previous work. If authors posted the raw data and all the (possibly long and messy) code to do the cleaning and recoding, it would be much easier for future researchers to build on past contributions. Indeed, research shows that making data and code freely available lowers the barriers to reuse and increases citations.
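As a concrete sketch of the tagging workflow mentioned under History (the tag name and file name here are hypothetical):

    # tag the project at a milestone
    git tag -a ajps-submission -m "Initial submission to AJPS"

    # after revising, compare the current manuscript to the
    # several-months-old version that was initially submitted
    git diff ajps-submission -- manuscript.tex

    # list tags to review the project's milestones
    git tag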

These are the commonly cited reasons for using Git and GitHub. But in my own practice, I've found another reason, perhaps more important than the ones above.

One thing that I first noticed in my students, but now see that I'm just as guilty of, is "the race to a regression." That is, I devote the absolute minimum required effort (or less) to everything leading up to the regression. My attitude is usually that I'll go back later and clean everything up, double-checking along the way, if the line of investigation "proves useful" (i.e., provides stars). I rarely go back later. I find that the script let_me_just_try_this_really_quickly.R quickly becomes a part of analysis.R. This is careless, bad practice.

Instead of racing to the regression, Git encourages me to develop a project a little more carefully: in tiny steps, each to be made public, and each done right and summarized in a clear commit message. The care in my research has noticeably improved. I think about how to do something better, do it better, and explain it in a commit message that I can refer to later.
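In practice, that workflow looks something like this sketch (the file name and commit message are hypothetical):

    # stage and describe one small, discrete change at a time
    git add recode-surveys.R
    git commit -m "Recode ACA items onto a common scale"

    # the log then reads as a running summary of the project's development
    git log --oneline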

In my view, project development in Git/GitHub works best when users make small, discrete changes to a project. This takes some thought and discipline, but it is the best way to go. I'm guilty of coming back from a conference and making dozens of small changes to an existing project, incorporating all the suggestions in a single update. I just did it after the State Politics and Policy Conference. It is a poor way to develop a project and a poor way to keep track of things. It is a poor strategy, but I'm learning.


Publishing Replication Studies

I was at a replication panel at ISA on Friday, where the editor of the APSR, John Ishiyama, offered his thoughts on publishing replications in the APSR. I tweeted about it, which generated quite a discussion (relative to my other tweets, at least). John discusses these thoughts in more detail in this paper, which was previously discussed on the Political Science Replication Blog:

    The editors of the APSR have been discussing this issue for some time. In many ways this was prompted by several recent exchanges we had with a scholar who had obtained the replication data from the authors of a manuscript that had appeared in an earlier issue of the Review (in 2010, prior to the University of North Texas’ team taking the reins of the journal). After obtaining the replication data from the authors of the original piece (with the editors’ help) they proceeded to attempt to replicate the results, but were unable to do so. The authors notified us and asked where to publish such a replication study. Our policy at the APSR (which was also the policy of all of our predecessors and the policy of most major journals in the social sciences as well) is not to publish works that are only replication studies because they do not represent the kind of original work we publish in the Review.

    There are very good reasons for APSR’s policy, and we strongly believe in continuing it. We do believe, however, that a very good point was made. A venue for the publication of replication studies is necessary, especially if the discipline aspires to raise the degree of scientific rigor in the field. However, as editors of the APSR we are also reluctant to publish such studies in the Review, because this would open up a “cheap” way for authors to have their work published in the APSR, and every Tom, Dick, and Harriet (pardon the expression) could potentially seek to replicate some study, just to get published in the Review. Most all other major journals in the field, we believe, do not publish solely replication studies (certainly this is true of APSR, AJPS and JOP, as well as the major international relations journals).

I feel that all journals (including the APSR) should evaluate articles on the strength of the contribution, not the time spent working on the paper (I guess this is what John means by "cheap"). In my mind, if an article makes a claim that is deemed a substantial contribution worthy of the APSR, then an article that refutes this claim has made a similar contribution. 

Further, I don't think that most replication studies start out as witch hunts, with "Tom, Dick, and Harriet" looking for a "cheap" publication in the APSR. Every replication that I've been a part of, or that my students have worked on, started as an extension trying to build on the original research.

I can certainly speak for myself: I'm never "out to get" the original authors, and I never start out trying to destroy someone else's work. I replicate studies for two reasons. First, I'm trying to build on the work. Second, I'm trying to make a methodological point that strengthens future work. Along the way, I've found many "mistakes" (or at least things that people citing or believing the key findings should be aware of). In one case, the results changed dramatically when I slightly changed the model specification--the specification the author reports seemed to be the only one with the appropriate stars. In another case, a single observation drove all of the authors' key findings. (In both cases, I swept the mistakes under the rug and proceeded as though everything was hunky-dory.) A summary of either set of findings deserves publication in a good journal, but I don't expect either to ever see daylight.

Finally, I don't think it would be such a bad thing if the APSR took responsibility for its publications and published papers by "every Tom, Dick, and Harriet" informing us that some of the major results we're all citing in the APSR are (or might be) wrong. It might be "cheap," but perhaps that makes it a good investment.


For more posts, see the Archives.