Archive for Bayes

More fine structure silliness …

Posted in Bad Statistics, The Universe and Stuff on March 17, 2016 by telescoper

Wondering what had happened to claims of a spatial variation of the fine-structure constant?

Well, they’re still around, but there’s very little convincing evidence to support them, as this post explains…


Bunn on Bayes

Posted in Bad Statistics on June 17, 2013 by telescoper

Just a quickie to advertise a nice blog post by Ted Bunn in which he takes down an article in Science by Bradley Efron about frequentist statistics. I’ll leave it to you to read his piece, and the offending article, but couldn’t resist nicking his little graphic that sums up the matter for me:

[Graphic from Ted Bunn’s post]

The point is that as scientists we are interested in the probability of a model (or hypothesis) given the evidence (or data) arising from an experiment (or observation). This requires inverse, or inductive, reasoning and it is therefore explicitly Bayesian. Frequentists focus on a different question, about the probability of the data given the model, which is not the same thing at all, and is not what scientists actually need. There are examples in which a frequentist method accidentally gives the correct (i.e. Bayesian) answer, but they are nevertheless still answering the wrong question.
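In symbols, the distinction is between $P(\mathrm{model}\mid\mathrm{data})$, which is what we want, and $P(\mathrm{data}\mid\mathrm{model})$, which is what frequentist methods compute. Bayes’ theorem connects the two only once a prior is supplied:

$$P(\mathrm{model}\mid\mathrm{data}) = \frac{P(\mathrm{data}\mid\mathrm{model})\,P(\mathrm{model})}{P(\mathrm{data})},$$

which makes it plain that the posterior on the left cannot be obtained from the likelihood on the right alone.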

I will make one further comment arising from the following excerpt from the Efron piece.

Bayes’ 1763 paper was an impeccable exercise in probability theory. The trouble and the subsequent busts came from overenthusiastic application of the theorem in the absence of genuine prior information, with Pierre-Simon Laplace as a prime violator.

I think this is completely wrong. There is always prior information, even if it is minimal, but the point is that frequentist methods always ignore it even if it is “genuine” (whatever that means). It’s not always easy to encode this information in a properly defined prior probability of course, but at least a Bayesian will not deliberately answer the wrong question in order to avoid thinking about it.
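As a standard illustration of how prior information can be encoded: for a binomial experiment with $k$ successes in $n$ trials and a Beta prior on the success probability $p$, the posterior is again a Beta distribution,

$$p \sim \mathrm{Beta}(\alpha,\beta) \quad\Longrightarrow\quad p\mid k \sim \mathrm{Beta}(\alpha+k,\ \beta+n-k);$$

a uniform prior corresponds to $\alpha=\beta=1$, while genuine prior knowledge is expressed through other choices of $\alpha$ and $\beta$.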

It is ironic that the pioneers of probability theory, such as Laplace, adopted a Bayesian rather than a frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ and which, in my opinion, have added nothing but confusion to the scientific analysis of statistical data.

Guest Post – Bayesian Book Review

Posted in Bad Statistics, Books, Talks and Reviews on May 30, 2011 by telescoper

My regular commenter Anton circulated this book review by email yesterday and it stimulated quite a lot of reaction. I haven’t read the book myself, but I thought it would be fun to post his review on here to see whether it provokes similar responses. You can find the book on Amazon here (UK) or here (USA). If you’re not completely au fait with Bayesian probability and the controversy around it, you might try reading one of my earlier posts about it, e.g. this one. I hope I can persuade some of the email commenters to upload their contributions through the box below!

-0-

The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy

by Sharon Bertsch McGrayne

I found reading this book, which is a history of Bayes’ theorem written for the layman, to be deeply frustrating. The author does not really understand what probability IS – which is the key to all cogent writing on the subject. She never mentions the sum and product rules, or that Bayes’ theorem is an easy consequence of them. She notes, correctly, that Bayesian methods or something equivalent to them have been rediscovered advantageously again and again in an amazing variety of practical applications, and says that this is because they are pragmatically better than frequentist sampling theory – but she never asks the question: why do they work better, and what deeper rationale explains this? RT Cox is not mentioned. Ed Jaynes is mentioned only in passing as someone whose Bayesian fervour supposedly put people off.
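For readers who have not met them, the sum and product rules in question are simply

$$P(A\mid C)+P(\bar{A}\mid C)=1, \qquad P(A,B\mid C)=P(A\mid B,C)\,P(B\mid C)=P(B\mid A,C)\,P(A\mid C),$$

and dividing the last equality through by $P(B\mid C)$ yields Bayes’ theorem at once – which is the sense in which it is an easy consequence of them.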

The author is correct that computer applications have catalysed the Bayesian revolution, but in the pages on image processing and other general inverse problems (pp. 218–21) she manages to miss the key work through the 1980s of Steve Gull and John Skilling, and you will not find “Maximum entropy” in the index. She does, however, get the key role of Markov Chain Monte Carlo methods in the computer implementation of Bayesian methods. But I can’t find Dave MacKay either, who deserves to be in the relevant section about modern applications.
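For anyone wondering what the computer implementation of Bayesian methods via Markov Chain Monte Carlo actually involves, here is a minimal sketch in Python (purely illustrative, not taken from the book): a random-walk Metropolis sampler for the mean of some Gaussian data, with the data, prior and step size chosen solely for the example.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)        # illustrative data, known sd = 1

def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2               # Gaussian prior on mu: mean 0, sd 10
    log_like = -0.5 * np.sum((data - mu) ** 2)        # Gaussian likelihood, up to a constant
    return log_prior + log_like

samples, mu = [], 0.0
current = log_posterior(mu)
for _ in range(10000):
    proposal = mu + rng.normal(scale=0.3)             # symmetric random-walk proposal
    candidate = log_posterior(proposal)
    if np.log(rng.uniform()) < candidate - current:   # Metropolis acceptance rule
        mu, current = proposal, candidate
    samples.append(mu)

posterior = np.array(samples[2000:])                  # discard burn-in
print("posterior mean ~ %.2f, sd ~ %.2f" % (posterior.mean(), posterior.std()))

The surviving samples approximate draws from the posterior for the mean, so summaries such as its mean and standard deviation come straight from those draws.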

On the other hand, as a historian of Bayesianism from Bayes himself to about 1960, she is full of superb anecdotes and information about people who are to us merely names on the top of papers, or whose personalities are mentioned tantalisingly briefly in Jaynes’ writing. For this material alone I recommend the book to Bayesians of our sort and am glad that I bought it.