## Bayes, Bridge and the Brain

I was having a chat over coffee yesterday with some members of the Mathematics Department here at the University of Cape Town, one of whom happens to be an expert at Bridge, actually representing South Africa in international competitions. That’s a much higher level than I could ever aspire to so I was a bit nervous about mentioning my interest in the game, but in the end I explained that I have in the past used Bridge (and other card games) to describe how Bayesian probability works; see this rather lengthy post for more details. The point is that as cards are played, one’s calculation of the probabilities of where the important cards lie changes in the light of information revealed. It makes much more sense to play Bridge according to a Bayesian interpretation, in which probability represents one’s state of knowledge, rather than what would happen over an ensemble of “random” realisations.
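The updating described above can be made concrete with a toy Python sketch (mine, not from the post), using the "vacant places" idea familiar to Bridge players: the probability that a particular defender holds a specific missing card is proportional to the number of unknown cards that defender still has, and it shifts as the play reveals information.

```python
# Toy sketch of Bayesian updating at the Bridge table (illustrative
# numbers only).  The probability that East holds one specific missing
# card -- say the queen of spades -- is taken proportional to East's
# "vacant places", i.e. the number of cards East holds whose identity
# is still unknown to us.

def p_east_holds(vacant_east, vacant_west):
    """P(East holds the missing card), by the vacant-places argument."""
    return vacant_east / (vacant_east + vacant_west)

# Before a card is played each defender has 13 unknown cards:
# our state of knowledge gives even odds.
print(p_east_holds(13, 13))  # 0.5

# Suppose the play reveals that West started with 5 hearts and East
# with 2.  West now has 13 - 5 = 8 vacant places, East 13 - 2 = 11,
# and the posterior shifts towards East holding the queen.
print(p_east_holds(11, 8))   # 11/19, roughly 0.579
```

The point of the sketch is exactly the one made in the paragraph: the probability is a statement about our knowledge of this particular deal, recalculated as each card is revealed, not a frequency over imagined replays of the hand.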

This particular topic – and Bayesian inference in general – is also discussed in my book *From Cosmos to Chaos* (which is, incidentally, now available in paperback). On my arrival in Cape Town I gave a copy of this book to my genial host, George Ellis, and our discussion of Bridge prompted him to say that he thought I had missed a trick in the book by not mentioning the connections between Bayesian probability and neuroscience. I hadn’t written about this because I didn’t know anything about it, so George happily enlightened me by sending a few review articles, such as this:

I can’t post it all, for fear of copyright infringement, but you get the idea. Here’s another one:

And another…

*Nature Reviews Neuroscience 11, 605 (August 2010) | doi:10.1038/nrn2787-c1*

**A neurocentric approach to Bayesian inference** Christopher D. Fiorillo

**Abstract** A primary function of the brain is to infer the state of the world in order to determine which motor behaviours will best promote adaptive fitness. Bayesian probability theory formally describes how rational inferences ought to be made, and it has been used with great success in recent years to explain a range of perceptual and sensorimotor phenomena.

As a non-expert in neuroscience, I find these very interesting. I’ve long been convinced that from the point of view of formal reasoning, the Bayesian approach to probability is the only way that makes sense, but until reading these I’ve not been aware that there was serious work being done on the possibility that it also describes how the brain works in situations where there is insufficient information to be sure what is the correct approach. Except, of course, for players of Bridge who know it very well.

There’s just a chance that I may have readers out there who know more about this Bayes-Brain connection. If so, please enlighten me further through the comments box!

April 12, 2012 at 10:02 am

Not really my field either, but I don’t think that Bayes is hard-wired; we *learn* to be Bayesian, ie to reason correctly.

April 12, 2012 at 10:05 am

I suspect that is true, but the question is how do we learn to reason that way?

Some don’t, of course.

April 12, 2012 at 10:13 am

Bertrand Russell said: some people would rather die than think. In fact, they do.

April 14, 2012 at 7:29 pm

Thinking on it more, after a hectic week, if the question is simply how do we learn to reason correctly then the Bayesian stuff enters only *after* the word ‘correctly’ has been defined (ie Bayesianly) – and even then only in the inductive rather than the pure-syllogistic case. If we learn to reason at all then we learn to reason Bayesianly (plus the generation of propositions to toss into Bayesian reasoning).

Bridge of course requires decision theory, the tack-on to (Bayesian) probability theory. But the inductive reasoning involved is made an order of magnitude harder by the bidding aspect, which, like poker, imparts further information that may or may not be bluff.
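The point about possibly-deceptive bids is itself a Bayesian update. A toy Python sketch (mine, with made-up illustrative probabilities, not taken from the comment) shows how a bid shifts one's belief that the opponent is bluffing:

```python
# Toy sketch of updating on a possibly-deceptive signal, via Bayes'
# theorem.  All numbers below are invented for illustration.

def posterior_bluff(p_bluff, p_bid_given_bluff, p_bid_given_honest):
    """P(bluff | bid) = P(bid | bluff) P(bluff) / P(bid)."""
    num = p_bid_given_bluff * p_bluff
    den = num + p_bid_given_honest * (1.0 - p_bluff)
    return num / den

# Prior: this opponent bluffs 10% of the time.  Suppose an aggressive
# bid is made 80% of the time when bluffing but only 30% when honest.
print(posterior_bluff(0.10, 0.80, 0.30))  # 0.08/0.35, roughly 0.23
```

The bid raises the probability of a bluff above the prior, but because honest aggressive bids are far more common overall, the posterior stays well below certainty; the hard part in practice is that the likelihoods themselves must be inferred from the opponent's past behaviour.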

April 12, 2012 at 2:57 pm

A few of us have been working on the Bayes-heart connection for many years – the trick is to think up a prior that makes your favourite model more probable than merely indicated by the data!

Warning! Wind-up – plz ignore.

April 12, 2012 at 2:59 pm

Such as a prior on H_0 that peaks at 15?

April 12, 2012 at 3:48 pm

I suppose so – if the prior gave more weight to beauty, simplicity and Occam’s razor then that might be the result. Maybe you’ve converted me from my rather unproductive frequentist obsession with the experimental data!

April 12, 2012 at 4:07 pm

Actually, one can get a good prior on the Hubble constant from essentially no direct observational data. Watch this space. (It might take a few months, so you don’t have to check that often.)

April 12, 2012 at 4:18 pm

Phillip – I did that years ago…

April 13, 2012 at 3:33 pm

Yes, but you were wrong.

September 2, 2013 at 11:38 am

Ah! Coles, Shanks and Helbig and the topic is (still???) H_0! Good to see astronomers discussing Friston, though.

I found Fiorillo’s counter a bit off the mark – Friston doesn’t argue about implementation (wet brains) but about how the problem is framed (minimisation).

As to Bayes being hard-wired, Friston’s work suggests it is: evolution takes care of that if the requirement for survival is a machine that minimises surprise in dynamic hostile environments (so Russell’s observation is very pertinent).

I can also argue that we don’t learn to be Bayesian, only to justify a Bayesian approach. Babies learn that hot hurts long before they can say “prior”.

But back to astronomical references: have you also seen “Free-energy minimization and the dark-room problem”? http://www.fil.ion.ucl.ac.uk/~karl/ We are working on a response that starts “A philosopher, a theorist and a physicist go into a dark room. How do they escape?”