Publishing Replication Studies

I was at a replication panel at ISA on Friday and the editor of the APSR, John Ishiyama, offered his thoughts on publishing replications in the APSR. I tweeted about it, which generated quite a discussion (relative to my other tweets, at least). John lays out these thoughts in more detail in this paper, which was previously discussed on the Political Science Replication Blog.

The editors of the APSR have been discussing this issue for some time. In many ways this was prompted by several recent exchanges we had with a scholar who had obtained the replication data from the authors of a manuscript that had appeared in an earlier issue of the Review (in 2010, prior to the University of North Texas’ team taking the reins of the journal). After obtaining the replication data from the authors of the original piece (with the editors’ help) they proceeded to attempt to replicate the results, but were unable to do so. The scholar notified us and asked where to publish such a replication study. Our policy at the APSR (which was also the policy of all of our predecessors and the policy of most major journals in the social sciences as well) is not to publish works that are only replication studies because they do not represent the kind of original work we publish in the Review.

There are very good reasons for APSR’s policy, and we strongly believe in continuing it. We do believe, however, that a very good point was made. A venue for the publication of replication studies is necessary, especially if the discipline aspires to raise the degree of scientific rigor in the field. However, as editors of the APSR we are also reluctant to publish such studies in the Review, because this would open up a “cheap” way for authors to have their work published in the APSR, and every Tom, Dick, and Harriet (pardon the expression) could potentially seek to replicate some study, just to get published in the Review. Most other major journals in the field, we believe, do not publish solely replication studies (certainly this is true of APSR, AJPS and JOP, as well as the major international relations journals).

I feel that all journals (including the APSR) should evaluate articles on the strength of the contribution, not the time spent working on the paper (I guess this is what John means by "cheap"). In my mind, if an article makes a claim that is deemed a substantial contribution worthy of the APSR, then an article that refutes this claim has made a similar contribution. 

Further, I don't think that most replication studies start out as witch hunts with "Tom, Dick, and Harriet" looking for a "cheap" publication in the APSR. Every replication that I've been a part of, and that my students have worked on, started as an extension trying to build on the original research.

I can certainly speak for myself. I'm never "out to get" the original authors. I never start out trying to destroy someone else's work. I replicate studies for two reasons. First, I'm trying to build on their work. Second, I'm trying to make a methodological point to help strengthen future work. Along the way, I've found many "mistakes" (or at least things that people citing/believing the key findings should be aware of). In one case, the results changed dramatically when I slightly changed the model specification--the specification the author reports seemed to be the only one with the appropriate stars. In another example, a single case drove all of the authors' key findings. (In neither case did I sweep the mistakes under the rug and proceed as though everything was hunky-dory.) In both cases, a summary of the findings deserves publication in good journals, but I don't expect either to ever see daylight.

Finally, I don't think it would be such a bad thing if the APSR took responsibility for its publications and published papers by "every Tom, Dick, and Harriet" informing us that some of the major results we're all citing in the APSR are (or might be) wrong. It might be "cheap," but perhaps that makes it a good investment.


More on Product Terms and Interaction in Logistic Regression Models

I noticed that Bill Berry, Justin Esarey, and Jackie DeMeritt's (BDE) long-time R&R'ed paper at AJPS is finally forthcoming. I really like seeing highly applied, but rigorous, work like this being published at top journals. You should definitely have a look at their paper if you use logit or probit models to argue for interaction.

First, the background. BDE published a paper back in 2010 that examines whether researchers need to include a product term in order to argue for interaction. This first paper examines the situation in which the researcher expects interaction due to compression (when the researcher expects changes in predicted probabilities to be smaller as the probabilities approach zero and one). BDE argue that the logit model with no product term is able to capture this type of interaction and, therefore, no product term is needed in this situation. I've previously discussed this paper here and mentioned that I have a paper arguing that one should include a product term even when interaction is expected on the basis of compression alone. While I disagree on this particular point, the rest of the paper is fantastic. In particular, I've found their advice about hypothesizing interaction in terms of \(Pr(Y)\) or \(Y^*\) especially valuable.

In the forthcoming paper, BDE extend their analysis to the situation in which researchers expect effects (i.e., changes in predicted probabilities) to vary, but do not have a theoretically motivated specification. They refer to this as "specification ambiguity." In this situation, I was delighted to read that BDE recommend always including a product term. They find that excluding the product term biases the researcher toward finding interaction. This is the same reason I disagree with their recommendation to exclude the product term when the researcher has strong theory. With the publication of this new paper, the literature is almost where I'd like it to be, with the exception of the tiny point I mentioned above.
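To make this concrete, here is a minimal sketch in R with simulated data and made-up variable names (x, z, y); it is my own illustration of the general idea, not code from either BDE paper. It includes the product term and then assesses interaction on the probability scale by comparing the effect of x at a low and a high value of z (a second difference).

# Minimal sketch with simulated data (x, z, y are made up for illustration):
# include the product term, then assess interaction on the probability scale.
set.seed(1234)
n <- 1000
x <- rnorm(n)
z <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 0.8*x + 0.4*z + 0.5*x*z))

# logit model with the product term included
fit <- glm(y ~ x*z, family = binomial)

# first difference: change in Pr(y = 1) as x moves from 0 to 1, at a fixed z
pr <- function(x, z) predict(fit, newdata = data.frame(x = x, z = z), type = "response")
fd.low  <- pr(1, -1) - pr(0, -1)   # at z = -1
fd.high <- pr(1,  1) - pr(0,  1)   # at z = +1

# second difference: interaction on the probability scale
fd.high - fd.low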


Creating Marginal Effect Plots for Linear Regression Models in R

Suppose we estimate the model \(E(y) = \beta_0 + \beta_xx + \beta_zz + \beta_{xz}xz\) and want to calculate the marginal effect of \(x\) on \(E(y)\) as \(z\) varies \(\left(\dfrac{\partial E(y)}{\partial x}\right)\) and its 90% confidence interval. Brambor, Clark, and Golder describe exactly how to do this and provide Stata code. Below is a bit of R code to do something similar. (Click here to continue reading.)
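For readers who want a quick sense of the idea before clicking through, here is a minimal sketch with simulated data (my own illustration, not the code from the linked post). The marginal effect is \(\hat{\beta}_x + \hat{\beta}_{xz}z\), and its standard error follows from the coefficient variance-covariance matrix.

# Minimal sketch with simulated data (not the code from the linked post):
# marginal effect of x on E(y) as z varies, with a 90% confidence interval.
set.seed(1234)
n <- 500
x <- rnorm(n)
z <- rnorm(n)
y <- 1 + 0.5*x - 0.3*z + 0.4*x*z + rnorm(n)

fit <- lm(y ~ x*z)
b <- coef(fit)
V <- vcov(fit)

# marginal effect of x at each value of z, with its standard error
z0 <- seq(min(z), max(z), length.out = 100)
me <- b["x"] + b["x:z"]*z0
se <- sqrt(V["x", "x"] + z0^2*V["x:z", "x:z"] + 2*z0*V["x", "x:z"])

# 90% confidence interval and a simple plot
lwr <- me - qt(0.95, df.residual(fit))*se
upr <- me + qt(0.95, df.residual(fit))*se
plot(z0, me, type = "l", ylim = range(lwr, upr),
     xlab = "z", ylab = "marginal effect of x on E(y)")
lines(z0, lwr, lty = 2)
lines(z0, upr, lty = 2)
abline(h = 0, lty = 3)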


compactr is now on CRAN

I've been working on a package called compactr that helps create nice-looking plots in R, and it is now up on CRAN.

You can get it by typing

install.packages("compactr")

directly into the command line in R. See how it works by typing example(eplot) or reading the details here. Below I describe the basic structure and functions. (Click here to continue reading.)


Arguing for Negligible Effects

I just uploaded a newer version of my re-titled paper "Arguing for a Negligible Effect." You can find the latest version here. It has a "revise and resubmit" at AJPS and I'm sending it back on August 15 (when they re-open after the summer break), so I'd certainly appreciate any suggestions. The abstract is below:

Political scientists often theorize that explanatory variables should have "no effect" and support these claims by demonstrating that the estimated effects are not statistically significant. These empirical arguments are not particularly compelling, but I introduce applied researchers to simple, powerful tools that can strengthen their arguments for these hypotheses. With several supporting examples, I illustrate that researchers can use 90% confidence intervals to argue against meaningful effects and provide persuasive evidence for their hypotheses. 
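To give a rough sense of the logic (my own minimal sketch, not code from the paper): the researcher chooses m, the largest effect that would still count as negligible, and then checks whether the 90% confidence interval falls entirely inside (-m, m).

# Minimal sketch of the logic (m and the data are hypothetical): the effect is
# plausibly negligible if its 90% confidence interval lies entirely inside (-m, m).
set.seed(1234)
n <- 1000
x <- rnorm(n)
y <- 2 + 0.01*x + rnorm(n)        # true effect of x is tiny

fit <- lm(y ~ x)
m <- 0.1                          # largest effect we would still call negligible
ci90 <- confint(fit, "x", level = 0.90)
ci90
all(ci90 > -m & ci90 < m)         # TRUE suggests evidence for a negligible effect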


For more posts, see the Archives.