False Positives

There was an interesting article on the BBC website this week that, for once, contains a reasonable discussion of statistics in the mass media. I’m indebted to my friend Anton for pointing it out to me. I’ve filed it along with examples of Bad Statistics because the issue is usually so poorly explained, not because the article itself is bad. In fact, it’s rather good.

The question is all about cancer screening, specifically for breast cancer, but the lesson could apply to a host of other situations. In the original context, the question goes as follows:

Say that routine screening is 90% accurate, i.e. the test gives the correct result 90% of the time whether or not cancer is present. Say you have a positive test. What’s the chance that your positive test is correct and you really have cancer?

Presumably there will be many of you who think the answer is 90%. Hands up if you think this!

If you don’t think it’s 90% then what do you think it is?

The correct answer is that you have no idea. I haven’t given you enough information.

To see why, imagine that the prevalence of cancer in the population is such that 1% of a randomly selected sample will have it. Out of a thousand people one would expect that, on average, ten would have cancer. If the test is 90% accurate then nine of these will test positive and one will not (a false negative).

However, 990 people out of the original thousand don’t have cancer. If the test is only 90% accurate then 10% of them, i.e. 99 people, will show a false positive.

Thus the total number of positive tests is 9 + 99 = 108, and only 9 of the individuals concerned actually have cancer. The probability is therefore 9/108 ≈ 8.3%: only about a 1-in-12 chance that you have cancer.
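
Here is a minimal Python sketch of that counting argument (the 1% prevalence and 90% accuracy are just the assumptions above; any other figures would do):

    # Counting argument: 1,000 people, 1% prevalence, 90% accurate test.
    population = 1000
    prevalence = 0.01     # assumed fraction of the population with cancer
    accuracy = 0.9        # test gives the correct result 90% of the time

    with_cancer = population * prevalence               # 10 people
    without_cancer = population - with_cancer           # 990 people

    true_positives = with_cancer * accuracy             # 9 correct detections
    false_positives = without_cancer * (1 - accuracy)   # 99 false alarms

    total_positives = true_positives + false_positives  # 108 positive tests
    print(true_positives / total_positives)             # 0.0833..., about 1 in 12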

But that depends on my assumption about the overall rate of cancer in the population. If that number is different it changes the answer. Without this information, the problem is ill-posed.

The more general way of looking at this is in terms of conditional probabilities. What you are given is that P(positive test | cancer) = P(+|C) = 0.9 and P(negative test | no cancer) = P(-|N) = 0.9, while P(negative test | cancer) = P(-|C) = 0.1 and P(positive test | no cancer) = P(+|N) = 0.1. What you want to know is P(cancer | positive test) = P(C|+). This can be obtained from Bayes’ Theorem, but only if you also know the prior probability P(cancer) = P(C) = 1 - P(N), since people either have cancer or they don’t.

The answer is given by P(C|+) = P(C)P(+|C) / [P(C)P(+|C) + P(N)P(+|N)]. For the numbers I gave above this is 0.01 × 0.9 / [0.01 × 0.9 + 0.99 × 0.1] = 0.009/0.108 ≈ 0.083, which gives the same answer as before.
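
Wrapping the same calculation in a function makes the previous point concrete: the answer depends strongly on the assumed prevalence. This is only a sketch; apart from the 1% prevalence and the 90% figures quoted above, the numbers are purely illustrative:

    def p_cancer_given_positive(prevalence, sensitivity=0.9, specificity=0.9):
        """Bayes' Theorem: P(C|+) = P(C)P(+|C) / [P(C)P(+|C) + P(N)P(+|N)]."""
        p_c = prevalence                   # P(C)
        p_n = 1 - prevalence               # P(N)
        p_pos_given_c = sensitivity        # P(+|C)
        p_pos_given_n = 1 - specificity    # P(+|N)
        return p_c * p_pos_given_c / (p_c * p_pos_given_c + p_n * p_pos_given_n)

    # The answer changes dramatically with the assumed prevalence:
    for prev in (0.001, 0.01, 0.1, 0.5):
        print(f"prevalence {prev:>5}: P(C|+) = {p_cancer_given_positive(prev):.3f}")
    # prevalence 0.001: P(C|+) = 0.009
    # prevalence  0.01: P(C|+) = 0.083
    # prevalence   0.1: P(C|+) = 0.500
    # prevalence   0.5: P(C|+) = 0.900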

So the moral is that you shouldn’t panic if you get a positive result from a screening test of this type. As long as the condition being tested for is rarer than the test’s error rate, the chances are high that you’ve got nothing to worry about. But of course, you should follow up with more detailed tests.

The Bayesian way is the easy way!


4 Responses to “False Positives”

  1. I think it’s worth emphasizing that whilst it’s still unlikely that one has cancer after testing positive (given the various odds you’ve quoted), it is significantly more likely than before you took the test: the probability has increased from about 1% to about 8% (roughly 1-in-12). Not time to panic, but probably time to get more tests done.

    Another thought: these examples are usually given with the probabilities of the two sorts of error (a positive result for a patient without cancer; a negative for someone with cancer) being the same. But is this actually the case for real tests? I could imagine plenty of reasons why the two error probabilities would be almost independent of one another. Anybody know?

  2. telescoper Says:

    Good point. I imagine such tests would be tuned in an attempt to reduce the probability of a failure to detect real conditions even if the false positives increase, but I’ve no idea what the numbers are like in real situations.

  3. I am not an expert, but as you imply, there’s generally a trade-off between sensitivity and specificity.

  4. Anton Garrett Says:

    Bayes’ theorem can easily handle differing probabilities of false positives and false negatives. But a further issue is whether false positives and false negatives are systematic or “random” (a word that might be better abolished). In other words, if the test is repeated on a person, will the results be consistent?

    Probability is only one part of the issue. The other is decision theory – to operate or not to operate? And how to explain *that* to the patient, who should be the one to make the decision? The patient is certainly going to die one day, so some say that expected years of life should be the criterion…

    Anton
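
To illustrate Anton’s point that Bayes’ Theorem copes easily with differing false-positive and false-negative rates, here is a sketch with purely illustrative numbers: a test tuned for high sensitivity at the expense of specificity, as speculated in the thread above.

    # Asymmetric error rates (illustrative numbers, not from any real test).
    prevalence = 0.01
    sensitivity = 0.99   # P(+|C): tuned to miss very few real cases
    specificity = 0.85   # P(-|N): at the cost of more false alarms

    p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    p_c_given_pos = prevalence * sensitivity / p_positive
    print(f"P(C|+) = {p_c_given_pos:.3f}")
    # About 0.06: even with a very sensitive test, most positives are still false.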
