# Innumeracy and the Evidence for the Higgs Boson

##### Posted: 07.07.2012

Last week, scientists presented evidence for a "Higgs-like" particle, and this got science reporters really excited. Unfortunately, they got really excited about explaining the particle itself, but not about interpreting the evidence for it. In this post, I show that reporters are misinterpreting the evidence that scientists have for a Higgs-like particle and discuss why it matters.

# The Scientists Get It Right

The logic of inference used by the scientists works as follows (read carefully--lots of double negatives):

1. Assume that a Higgs-like particle does not exist.
2. Calculate the probability of observing results at least as extreme as those obtained in the experiment if no Higgs-like particle exists. This statistic is called a p-value.
3. If this probability is small, then use the following reasoning:
   1. If the data are extremely unlikely under the assumption that a Higgs-like particle does not exist, then the assumption must be false.
   2. If the assumption that a Higgs-like particle does not exist is false, then a Higgs-like particle must exist.
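Step 2 above can be sketched in a few lines of Python. Assuming the background fluctuations are Gaussian (an idealization for illustration), the one-sided tail probability beyond $z$ standard deviations ("sigmas") is the p-value:

```python
from math import erfc, sqrt

def p_value(z_sigma):
    """One-sided tail probability of a standard normal beyond z_sigma:
    Pr(result at least z_sigma above background | no Higgs-like particle)."""
    return 0.5 * erfc(z_sigma / sqrt(2.0))

print(p_value(5.0))  # roughly 3e-7, i.e. about 1 in 3.5 million
```

This is where the famous "five sigma" threshold comes from: the tail probability at $5\sigma$ is about 0.0000003.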

Notice the interpretation of the p-value. It is the probability of observing results as or more extreme than those obtained in the experiment if there were no Higgs-like particle. David Spiegelhalter at Understanding Uncertainty notes that the scientists got it right in the initial report:

The significance of an excess is quantified by the probability (p0) that a background-only experiment is more signal-like than that observed.

# The Reporters Get It Wrong

I have been reading through some of the news articles reporting on this and was surprised to see (okay, not so surprised to see) that many good science writers get this wrong and interpret a p-value as the probability of a false positive result. Some examples:

What the scientists leading the Higgs boson hunt did was calculate a probability of a false positive... - An article at the Wall Street Journal

In particle physics parlance, strong evidence means that the probability of an observation being attributable to a statistical fluctuation is less than one per cent. - An article on Business Insider

The 5-sigma level is equivalent to 99.99994 percent confidence. - An article from MSNBC

...the likelihood that their signal was a result of a chance fluctuation was less than one chance in 3.5 million, “five sigma,” which is the gold standard in physics for a discovery. - An article from the New York Times

Scientists at CERN say they've found a new particle consistent with the Standard Model Higgs boson with 5-sigma certainty — a false positive probability of about 1 in 9 trillion. - The very first sentence of an article posted on The Verge

# But Does It Matter?

But why is this a problem? Let's examine it using Bayes' Rule.

Suppose we have two mutually exclusive and exhaustive hypotheses $H_R$ (that a Higgs-like particle exists) and $H_0$ (that a Higgs-like particle does not exist). Because the two hypotheses are mutually exclusive and exhaustive, the probability that $H_R$ is true is simply one minus the probability that $H_0$ is true. Because of this, we can restrict the discussion to $H_0$.

We also have a set of data, denoted by $D$. We are hoping to find a set of data $D$ that is "more extreme" than some (hypothetical) critical data set $D^*$. If $D > D^*$, then we conclude the Higgs-like particle must exist.

The physicists chose $D^*$ such that $Pr(D > D^* | H_0)$ is extremely small (the $5\sigma$ level, about 0.0000003). Using the traditional hypothesis testing logic, they concluded that if they observed an event that is extremely rare under $H_0$, then $H_0$ must be false and $H_R$ must be true.

What science reporters are discussing is the probability of there being no Higgs-like particle given that we've observed these data. That is, the probability of $H_0$ given that $D > D^*$, which is denoted by $Pr(H_0 | D > D^*)$. Bayes' Rule tells us how to calculate this probability.

However, $Pr(D > D^* | H_0)$ is not the probability of a false positive error. It is the probability of a false positive error if the null were true. So what is the probability of a false positive error? Well, there are two potential meanings.

## The Probability of a False Positive Error (Before Observing Any Data)

Let $FP$ denote a false positive error. Going into the experiment (before observing $D$), we can use Bayes' Rule to conclude that

$$Pr(FP) = Pr(D > D^* \cap H_0) = Pr(D > D^* | H_0)Pr(H_0) \leq Pr(D > D^* | H_0) \approx 0.0000003.$$

Thus, going into the experiment, we can conclude that the probability of making a false positive error is at most 0.0000003. But this is not what reporters are interested in or talking about.
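The difference between the joint probability $Pr(D > D^* \cap H_0)$ and the conditional probability $Pr(D > D^* | H_0)$ can be made concrete with a toy Monte Carlo sketch. The numbers here are exaggerated illustrations (a rejection probability of 0.05 rather than 0.0000003, and made-up values for the power and the prior), chosen only so that false positives actually occur in a feasible number of simulated experiments:

```python
import random

random.seed(1)

# Toy simulation contrasting
#   Pr(reject AND null true)  with  Pr(reject | null true).
# alpha, power, and prior_null are illustrative assumptions, not CERN's values.
alpha, power, prior_null = 0.05, 0.8, 0.5
trials = 100_000

false_pos = null_true = 0
for _ in range(trials):
    h0_true = random.random() < prior_null     # is the null true in this world?
    if h0_true:
        null_true += 1
        reject = random.random() < alpha       # Pr(D > D* | H0)
    else:
        reject = random.random() < power       # Pr(D > D* | HR)
    if h0_true and reject:
        false_pos += 1

print(false_pos / trials)     # near alpha * Pr(H0) = 0.025
print(false_pos / null_true)  # near alpha = 0.05
```

The first ratio estimates the unconditional probability of a false positive; the second estimates the conditional probability that reporters kept quoting. They are not the same number.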

## The Probability of a False Positive Error (After Observing "Extreme" Data)

Almost all of the reporters above are discussing the probability of a false positive error, conditional on the data. We are most interested in the probability of making a false positive error given that we observed data more extreme than the critical data set. This probability is just $Pr(H_0 | D > D^*)$ and we know that

$$Pr(H_0 | D > D^*) = \frac{Pr(D > D^* | H_0)Pr(H_0)}{Pr(D > D^* | H_0)Pr(H_0) + Pr(D > D^* | H_R)Pr(H_R)}.$$

### The Objectivist View

Under the objectivist view that takes probabilities as real or physical properties of objects, there are two possibilities: the Higgs-like particle exists ($Pr(H_R) = 1$) or it does not ($Pr(H_0) = 1$). If the Higgs-like particle exists, then the probability of a false positive error is zero. After all, we can't make a false positive error if the particle exists. However, if the particle does not exist, then the probability of a false positive error (after observing $D > D^*$) is one. Indeed, we can't help but make a false positive error if we conclude there is a Higgs-like particle when there actually is not.

### The Subjectivist Position

If we adopt a subjectivist perspective and view probabilities as beliefs rather than as physical properties, we can assign values to $Pr(H_0)$ and $Pr(H_R)$ other than zero or one. In this case, the calculation of the $Pr(H_0 | D > D^*)$ becomes more difficult (because we have to compute $Pr(D > D^* | H_R)Pr(H_R)$, which probably involves integrating over our beliefs about values consistent with a Higgs-like particle). However, it is clear that this probability ranges between zero and one.
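As a toy illustration of how the posterior depends on those beliefs, here is a minimal Python sketch of the Bayes' Rule calculation. A single made-up number (0.5) stands in for the integral giving $Pr(D > D^* | H_R)$, and the priors swept below are illustrative assumptions, not anything the physicists reported:

```python
def posterior_null(prior_null, tail_null, tail_alt):
    """Pr(H0 | D > D*) by Bayes' Rule.

    prior_null : Pr(H0), a subjective prior belief
    tail_null  : Pr(D > D* | H0), the p-value threshold
    tail_alt   : Pr(D > D* | HR), assumed tail probability if the particle exists
    """
    numerator = tail_null * prior_null
    denominator = numerator + tail_alt * (1.0 - prior_null)
    return numerator / denominator

# Sweep the prior to see how much the posterior depends on it:
for prior in (0.5, 0.99, 0.999999):
    print(prior, posterior_null(prior, tail_null=3e-7, tail_alt=0.5))
```

With an even-odds prior the posterior probability of "no particle" is tiny, but a sufficiently skeptical prior keeps it non-negligible: the data alone do not pin the number down.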

# Without Assigning Prior Beliefs, We Can't Conclude Anything about the Probability of a False Positive Error

What can we conclude from this? If we adopt the common objectivist view of probability, then the probability of a false positive error, given the data, is either one or zero, though we can't be sure which. If we adopt a more reasonable subjectivist position, then the probability of a false positive error, given the data, lies somewhere between zero and one, and the exact value depends on the individual's beliefs about the hypotheses.

The distinction is subtle, but important. These data do not tell us the probability of making a false positive error. As a parting illustration, one commentator almost got this right, but then slipped into the common error.

What precisely the five-sigma mark means, however, wasn't always clearly explained in the coverage of a ground-breaking development that could explain how particles have mass and, by extension, why planets and all other objects exist at all.

That is partly because the five-sigma concept is somewhat counterintuitive. It has to do with a one-in-3.5-million probability. That is not the probability that the Higgs boson doesn't exist. It is, rather, the inverse: If the particle doesn't exist, one in 3.5 million is the chance an experiment just like the one announced this week would nevertheless come up with a result appearing to confirm it does exist.

This looks really good so far, but the commentator continues...

In other words, one in 3.5 million is the likelihood of finding a false positive—a fluke produced by random statistical fluctuation—that seems as definitive as the findings released by two teams of researchers at the CERN laboratory in Geneva.

Oh well. Innumeracy abounds.

• arin

From wikipedia http://en.wikipedia.org/wiki/Type_I_and_type_II_errors:
A type I error, also known as an error of the first kind, occurs when the null hypothesis (H0) is true, but is rejected. It is asserting something that is absent, a false hit.
A false positive error is a Type I error

This denies what you write: However, Pr(D>D∗|H0) is not the probability of a false positive error. It is the probability of a false positive error if the null were true.

Maybe your statement is a false positive error.

• Carlisle Rainey

Perhaps my language was ambiguous. I was trying to say that the probability of a false positive is not Pr(D > D* | Ho). Rather, the probability of a false positive is Pr(D > D* | Ho)Pr(Ho). Imagine if the null hypothesis is false, then doesn't the probability of making a false positive error have to be zero?

[edited: 7/8/12, thanks arin]

This distinction hinges on recognizing the difference between Pr(reject null AND null is true) and Pr(reject null WHEN null is true).

• arin

Indeed, the false positive rate is universally known as Pr(D > D* | Ho).
And no, if the null hypothesis is true, then the false positive rate should not be zero, since it is always possible to have a false positive.

• Carlisle Rainey

I modified my comment. I definitely should not have used the phrase "false positive rate," since, as you point out, that has a technical meaning. I hope your second point was the result of a typo, but I fixed it nonetheless.

• lovatto

Let me see if I understand this.

To give a concrete example, let's assume my hypothesis is that whenever a particular chef cooks food he burns it.

The null hypothesis would be that at least one time the chef cooks food he doesn't burn it.

So I run the experiment 100 times and examine what comes out of the kitchen.

I see that all 100 times the food was burnt.

The probability that the food was burnt because of natural events (a fire, a magnifying glass left in an unfortunate place, etc.) producing these 100 results would be my sigma value?

Does this mean that using the null hypothesis value you can generate the number of times an experiment has to be run to get 5 sigma?

Is that a correct understanding of sigma values, the probability that the results you see are explained by the null hypothesis(s)?

• Diego

Maybe you made a mistake here:
Pr(FP)=Pr(D>D∗∪H0)=Pr(D>D∗|H0)Pr(H0)
I think you put a "union symbol" instead of an "intersection symbol".

• Carlisle Rainey

Good catch.