Archive for Bayesian probability

Literary Bayesianism

Posted in Bad Statistics on July 3, 2018 by telescoper

I’m a bit busy today doing job interviews and other things, so I’ve just got time for a quick post to point out that there’s a nice polemical piece by David Papineau, recently published in the online version of the Times Literary Supplement, called Thomas Bayes and the crisis in science. I get the print version of the TLS every week, largely for the crossword, but I think the online version of Papineau’s piece is public (i.e. there’s no paywall).

The piece touches on a number of themes I’ve covered on this blog over the years, in particular the widespread use of dodgy statistical methods in science. Here’s a little taster:

One of the great scandals of modern intellectual life is the way generations of statistics students have been indoctrinated into the farrago of significance testing.

I couldn’t agree more!


Hubble Constant Catch-Up

Posted in Bad Statistics, The Universe and Stuff on May 2, 2018 by telescoper

Last week when I wrote about the 2nd Data Release from Gaia, somebody emailed me to ask whether the new results said anything about the cosmological distance ladder and hence the Hubble Constant. As far as I could see, no scientific papers were released on this topic at the time and I thought there probably wasn’t anything definitive at this stage. However, it turns out that there is a paper now, by Riess et al., which focuses on the likely impact of Gaia on the Cepheid distance scale. Here is the abstract:

We present HST photometry of a selected sample of 50 long-period, low-extinction Milky Way Cepheids measured on the same WFC3 F555W, F814W, and F160W-band photometric system as extragalactic Cepheids in SN Ia hosts. These bright Cepheids were observed with the WFC3 spatial scanning mode in the optical and near-infrared to mitigate saturation and reduce pixel-to-pixel calibration errors to reach a mean photometric error of 5 millimags per observation. We use the new Gaia DR2 parallaxes and HST photometry to simultaneously constrain the cosmic distance scale and to measure the DR2 parallax zeropoint offset appropriate for Cepheids. We find a value for the zeropoint offset of -46 +/- 13 muas or +/- 6 muas for a fixed distance scale, higher than found from quasars, as expected, for these brighter and redder sources. The precision of the distance scale from DR2 has been reduced by a factor of 2.5 due to the need to independently determine the parallax offset. The best fit distance scale is 1.006 +/- 0.033, relative to the scale from Riess et al 2016 with H0=73.24 km/s/Mpc used to predict the parallaxes photometrically, and is inconsistent with the scale needed to match the Planck 2016 CMB data combined with LCDM at the 2.9 sigma confidence level (99.6%). At 96.5% confidence we find that the formal DR2 errors may be underestimated as indicated. We identify additional error associated with the use of augmented Cepheid samples utilizing ground-based photometry and discuss their likely origins. Including the DR2 parallaxes with all prior distance ladder data raises the current tension between the late and early Universe route to the Hubble constant to 3.8 sigma (99.99 %). With the final expected precision from Gaia, the sample of 50 Cepheids with HST photometry will limit to 0.5% the contribution of the first rung of the distance ladder to the uncertainty in the Hubble constant.

So, nothing definitive yet, but potentially very interesting for the future. This group, led by Adam Riess, is now claiming a 3.8σ tension between determinations of the Hubble constant from the cosmic microwave background and from traditional `distance ladder’ approaches, though to my mind this is based on some rather subjective judgements.

The appearance of that paper reminded me that I forgot to post about a paper by Bernal & Peacock that appeared a couple of months ago. Here is the abstract of that one:

When combining data sets to perform parameter inference, the results will be unreliable if there are unknown systematics in data or models. Here we introduce a flexible methodology, BACCUS: BAyesian Conservative Constraints and Unknown Systematics, which deals in a conservative way with the problem of data combination, for any degree of tension between experiments. We introduce hyperparameters that describe a bias in each model parameter for each class of experiments. A conservative posterior for the model parameters is then obtained by marginalization both over these unknown shifts and over the width of their prior. We contrast this approach with an existing hyperparameter method in which each individual likelihood is scaled, comparing the performance of each approach and their combination in application to some idealized models. Using only these rescaling hyperparameters is not a suitable approach for the current observational situation, in which internal null tests of the errors are passed, and yet different experiments prefer models that are in poor agreement. The possible existence of large shift systematics cannot be constrained with a small number of data sets, leading to extended tails on the conservative posterior distributions. We illustrate our method with the case of the H0 tension between results from the cosmic distance ladder and physical measurements that rely on the standard cosmological model.

This paper addresses the long-running issue of apparent tension in different measurements of the Hubble constant that I’ve blogged about before (e.g. here) by putting the treatment of possible systematic errors into a more rigorous and consistent (i.e. Bayesian) form. It says what I think most people in the community privately think about this issue, i.e. that it’s probably down to some sort of unidentified systematic rather than exotic physics.
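Just to illustrate the flavour of the shift-hyperparameter idea, here is a minimal toy sketch of my own (not the authors’ BACCUS code; the grid ranges and the flat prior on the shift width are arbitrary choices) that combines two discrepant measurements of H0 while marginalising over an unknown Gaussian shift applied to each:

# Toy sketch of conservative data combination with shift hyperparameters.
# NOT the BACCUS implementation: priors and grids are illustrative only.
import numpy as np

h0_data = [67.8, 73.52]      # CMB-like and distance-ladder-like values (km/s/Mpc)
h0_errs = [0.9, 1.62]        # quoted 1-sigma uncertainties

h0_grid = np.linspace(60.0, 80.0, 2001)          # grid over the Hubble constant
sigma_shift_grid = np.linspace(0.0, 10.0, 501)   # grid over the unknown shift width

posterior = np.zeros_like(h0_grid)
for sigma_shift in sigma_shift_grid:             # flat prior on the shift width
    like = np.ones_like(h0_grid)
    for d, s in zip(h0_data, h0_errs):
        var = s**2 + sigma_shift**2              # Gaussian shift marginalised analytically
        like *= np.exp(-0.5 * (d - h0_grid)**2 / var) / np.sqrt(var)
    posterior += like

posterior /= np.trapz(posterior, h0_grid)        # normalise over the grid
mean = np.trapz(h0_grid * posterior, h0_grid)
sd = np.sqrt(np.trapz((h0_grid - mean)**2 * posterior, h0_grid))
print(f"conservative H0: {mean:.1f} +/- {sd:.1f} km/s/Mpc")

The point is simply that allowing for possible unknown shifts broadens the combined posterior well beyond what a naive inverse-variance combination would give, which is exactly the conservative behaviour the paper argues for.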

The title of the paper includes the phrase `Conservative Cosmology’, but I think that’s a bit of a misnomer: `Sensible Cosmology’ would be more apt, since current events suggest that `conservative’ and `sensible’ have opposite meanings. You can find a popular account of it here, from which I have stolen this illustration of the tension:

The illustration shows the two differing results for the Hubble constant, the expansion rate of the Universe (in km/s/Mpc):

Cosmic microwave background: 67.8 ± 0.9
Cosmic distance ladder: 73.52 ± 1.62
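For what it’s worth, treating those two quoted errors as independent and Gaussian gives a naive estimate of the significance of the discrepancy (a back-of-the-envelope figure I’ve added here, not one taken from either paper):

\[
  \frac{73.52 - 67.8}{\sqrt{0.9^{2} + 1.62^{2}}} \simeq \frac{5.72}{1.85} \approx 3.1\sigma ,
\]

which is in the same ballpark as the numbers quoted above; the 3.8σ figure in the Riess et al. abstract comes from folding the DR2 parallaxes in with all the prior distance-ladder data, not just from these two summary numbers.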

Anyway, I have a poll that has been going on for some time about whether this tension is anything to be excited about, so why not use this opportunity to cast your vote?

Have you got a proper posterior?

Posted in Bad Statistics, The Universe and Stuff on December 12, 2017 by telescoper

There’s an interesting paper on the arXiv today by Tak et al. with the title `How proper are Bayesian models in the astronomical literature?’ The title isn’t all that appropriate, because the problem is not really with `models’, but with the choice of prior (which should be implied by the model and other information known or assumed to be true). Moreover, I’m not sure whether the word `Bayesian’ applies to the model in any meaningful way.

Anyway, the abstract is as follows:

The well-known Bayes theorem assumes that a posterior distribution is a probability distribution. However, the posterior distribution may no longer be a probability distribution if an improper prior distribution (non-probability measure) such as an unbounded uniform prior is used. Improper priors are often used in the astronomical literature to reflect on a lack of prior knowledge, but checking whether the resulting posterior is a probability distribution is sometimes neglected. It turns out that 24 articles out of 75 articles (32%) published online in two renowned astronomy journals (ApJ and MNRAS) between Jan 1, 2017 and Oct 15, 2017 make use of Bayesian analyses without rigorously establishing posterior propriety. A disturbing aspect is that a Gibbs-type Markov chain Monte Carlo (MCMC) method can produce a seemingly reasonable posterior sample even when the posterior is not a probability distribution (Hobert and Casella, 1996). In such cases, researchers may erroneously make probabilistic inferences without noticing that the MCMC sample is from a non-existent probability distribution. We review why checking posterior propriety is fundamental in Bayesian analyses when improper priors are used and discuss how we can set up scientifically motivated proper priors to avoid the pitfalls of using improper priors.

This paper makes a point that I have wondered about on a number of occasions. One of the problems, in my opinion, is that astrophysicists don’t think enough about their choice of prior. An improper prior is basically a statement of ignorance about the result one expects in advance of incoming data. However, very often we know more than we think we do. I’ve lost track of the number of papers I’ve seen in which the authors blithely assume a flat prior when that makes no sense whatsoever on the basis of what information is available and, indeed, on the structure of the model within which the data are to be interpreted. I discuss a simple example here.

In my opinion the prior is not (as some frequentists contend) some kind of aberration. It plays a clear logical role in Bayesian inference. It can build into the analysis constraints that are implied by the choice of model framework. Even if it is used as a subjective statement of prejudice, the Bayesian approach at least requires one to put that prejudice on the table where it can be seen.

There are undoubtedly situations where we don’t know enough to assign a proper prior. That’s not necessarily a problem. Improper priors can – and do – lead to proper posterior distributions if (and it’s an important if) they include, or the likelihood subsequently imposes, a cutoff on the prior space. The onus should be on the authors of a paper to show that their likelihood does this and produces a posterior which is a well-defined probability measure (specifically, that it is normalisable, i.e. can be made to integrate to unity). It seems that astronomers don’t always do this!
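To make this concrete, here is a minimal numerical sketch (a toy example of my own, not taken from the paper): a Poisson likelihood with zero observed counts combined with two different improper priors on the rate λ. With a flat prior the unnormalised posterior integrates to a finite number, so the posterior is proper; with a 1/λ prior the integral grows without limit as the lower cutoff shrinks, so there is no probability distribution to sample from, however happily an MCMC code might appear to run.

# Toy numerical check of posterior propriety (illustrative only, not from the paper).
# Model: a Poisson likelihood with n = 0 observed counts for a rate lam > 0,
# combined with two improper priors: flat on (0, inf) and proportional to 1/lam.
import numpy as np
from scipy.integrate import quad

def unnorm_posterior(lam, prior):
    return prior(lam) * np.exp(-lam)   # Poisson likelihood for n = 0 is exp(-lam)

priors = {"flat": lambda lam: 1.0, "1/lambda": lambda lam: 1.0 / lam}

for name, prior in priors.items():
    for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
        integral, _ = quad(unnorm_posterior, eps, 50.0, args=(prior,))
        print(f"{name:9s} lower cutoff {eps:.0e}: integral = {integral:.3f}")

# The flat-prior integral settles near 1, so that posterior is proper; the
# 1/lambda integral grows roughly like log(1/eps), so that "posterior" can
# never be normalised, however well-behaved an MCMC chain might look.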

The Bayesian Second Law of Thermodynamics

Posted in The Universe and Stuff on April 3, 2017 by telescoper

I post occasionally about Bayesian probability, particularly with respect to Bayesian inference, and related applications to physics and other things, such as thermodynamics, so in that light here’s a paper I stumbled across yesterday. It’s not a brand new paper – it came out on the arXiv in 2015 – but it’s of sufficiently long-term interest to warrant sharing on here.

You can download the full paper here. There’s also an accessible commentary by one of the authors here.

The interface between thermodynamics, statistical mechanics, information theory and probability is a fascinating one, but too often important conceptual questions remain unanswered, or indeed unasked, while the field absorbs itself in detailed calculations. Refreshingly, this paper takes the opposite approach.

Cosmology: A Bayesian Perspective

Posted in Talks and Reviews, The Universe and Stuff on July 14, 2016 by telescoper

For those of you who are interested, here are the slides I used in my invited talk at MaxEnt 2016 (Maximum Entropy and Bayesian Methods in Science and Engineering), yesterday (13th July 2016) in Ghent (Belgium).

Falsifiability versus Testability in Cosmology

Posted in Bad Statistics, The Universe and Stuff on July 24, 2015 by telescoper

A paper came out a few weeks ago on the arXiv that’s ruffled a few feathers here and there so I thought I would make a few inflammatory comments about it on this blog. The article concerned, by Gubitosi et al., has the abstract:

(The abstract of the Gubitosi et al. paper appears as an image in the original post.)

I have to be a little careful as one of the authors is a good friend of mine. Also, there’s already been a critique of some of the claims in this paper here. For the record, I agree with the critique and disagree with the original paper: I don’t think the claim below can be justified.

…we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor.

If I get a bit of time I’ll write a more technical post explaining why I think that. However, for the purposes of this post I want to take issue with a more fundamental problem I have with the philosophy of this paper, namely the way it adopts “falsifiability” as a required characteristic for a theory to be scientific. The adoption of this criterion can be traced back to the influence of Karl Popper and particularly his insistence that science is deductive rather than inductive. Part of Popper’s claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. As a non-deductivist I’ll frame my argument in the language of Bayesian (inductive) inference.

Popper rejects the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so. There is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
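The standard textbook illustration of this (my addition here, not anything from Popper) is a single Gaussian measurement x, with known error σ, of a location parameter μ given a flat improper prior over the whole real line:

\[
  p(\mu) \propto \mathrm{const}, \qquad
  p(x \mid \mu) = \frac{1}{\sqrt{2\pi}\,\sigma}
                  \exp\!\left[-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right]
  \quad\Longrightarrow\quad
  p(\mu \mid x) = \frac{1}{\sqrt{2\pi}\,\sigma}
                  \exp\!\left[-\frac{(\mu-x)^{2}}{2\sigma^{2}}\right].
\]

The posterior integrates to unity over μ even though the prior cannot be normalised at all: a single datum is enough to leave the improper prior far behind.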

I believe that deductivism fails to describe how science actually works in practice and is a dangerous road to start out on. It is indeed a very short ride, philosophically speaking, from deductivism (as espoused by, e.g., David Hume) to irrationalism (as espoused by, e.g., Paul Feyerabend).

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. The detection of primordial B-mode polarization in the cosmic microwave background claimed by BICEP2 was held up by some as “proof” of cosmic inflation, which it wouldn’t have been even if it hadn’t subsequently been shown not to be a cosmological signal at all. What we now know to be the failure of BICEP2 to detect primordial B-mode polarization doesn’t disprove inflation either.

Theories are simply more probable or less probable than the alternatives available on the market at a given time. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. The disparaging implication that scientists live only to prove themselves wrong comes from concentrating exclusively on the possibility that a theory might be found to be less probable than a challenger. In fact, evidence neither proves nor disproves a theory; it either makes the theory more probable (supports it) or makes it less probable (undermines it). For a theory to be scientific it must be capable of having its probability influenced in this way, i.e. it must be amenable to being altered by incoming data (i.e. evidence). The right criterion for a scientific theory is therefore not falsifiability but testability. It follows straightforwardly from Bayes’ theorem that a testable theory will not predict all things with equal facility. Scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable penumbra that we need to supply to make it comprehensible to us. But whatever can be tested can be regarded as scientific.
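To spell out the Bayes’ theorem point (just a restatement of the theorem itself, nothing specific to the paper under discussion): for a theory T and data D,

\[
  P(T \mid D) = \frac{P(D \mid T)\,P(T)}{P(D)} ,
\]

so if T is so accommodating that \(P(D \mid T) = P(D)\) for every conceivable datum D, then \(P(T \mid D) = P(T)\) no matter what is observed, and the theory is untestable. A testable theory is one that assigns appreciably different likelihoods to different possible data, so that some observations would raise its probability and others would lower it.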

So I think the Gubitosi et al. paper starts on the wrong foot by focussing exclusively on “falsifiability”. The issue of whether a theory is testable is complicated in the context of inflation because prior probabilities for most observables are difficult to determine with any confidence, since we know next to nothing about either (a) the conditions prevailing in the early Universe prior to the onset of inflation or (b) how properly to define a measure on the space of inflationary models. Even restricting consideration to the simplest models with a single scalar field, initial data are required for the scalar field (and its time derivative) and there is also a potential whose functional form is not known. It is therefore a far from trivial task to assign meaningful prior probabilities to inflationary models and thus extremely difficult to determine the relative probabilities of observables and how these probabilities may or may not be influenced by interactions with data. Moreover, the Bayesian approach involves comparing probabilities of competing theories, so we also have the issue of what to compare inflation with…

The question of whether cosmic inflation (whether as a general concept or in the form of a specific model) is testable or not seems to me to boil down to whether it predicts all possible values of relevant observables with equal ease. A theory might be testable in principle, but not testable at a given time if the available technology at that time is not able to make measurements that can distinguish between that theory and another. Most theories have to wait some time before experiments can be designed and built to test them. On the other hand, a theory might be untestable even in principle, if it is constructed in such a way that its probability can’t be changed at all by any amount of experimental data. As long as a theory is testable in principle, however, it has the right to be called scientific. If the currently available evidence can’t test it we need to do better experiments. In other words, there’s a problem with the evidence, not the theory.

Gubitosi et al. are correct in identifying the important distinction between the inflationary paradigm, which encompasses a large set of specific models each formulated in a different way, and an individual member of that set. I also agree – in contrast to many of my colleagues – that it is actually difficult to argue that the inflationary paradigm is currently testable. But that doesn’t necessarily mean that it isn’t scientific. A theory doesn’t have to have been tested in order to be testable.

Kuhn the Irrationalist

Posted in Bad Statistics, The Universe and Stuff on August 19, 2012 by telescoper

There’s an article in today’s Observer marking the 50th anniversary of the publication of Thomas Kuhn’s book The Structure of Scientific Revolutions.  John Naughton, who wrote the piece, claims that this book “changed the way we look at science”. I don’t agree with this view at all, actually. There’s little in Kuhn’s book that isn’t implicit in the writings of Karl Popper and little in Popper’s work that isn’t implicit in the work of a far more important figure in the development of the philosophy of science, David Hume. The key point about all these authors is that they failed to understand the central role played by probability and inductive logic in scientific research. In the following I’ll try to explain how I think it all went wrong. It might help the uninitiated to read an earlier post of mine about the Bayesian interpretation of probability.

It is ironic that the pioneers of probability theory and its application to scientific research, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of their probabilities. Frequentism arose during the nineteenth century and held sway until relatively recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the other frequentist-inspired techniques that many modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of the philosophy of science, because I believe science itself has a strong element of inverse reasoning, or inductivism, in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th Century. Much later, the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face, because in reality there are hosts of auxiliary assumptions involved in making the comparison; this is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
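In symbols, this is just the usual marginalisation rule, written here with θ for the parameters of interest and ν for the auxiliary (nuisance) quantities:

\[
  p(\theta \mid D) = \int p(\theta, \nu \mid D)\, \mathrm{d}\nu
  \;\propto\; \int p(D \mid \theta, \nu)\, p(\theta, \nu)\, \mathrm{d}\nu .
\]

The auxiliary assumptions that worry Duhem enter as the nuisance parameters ν, each with its own prior, and are integrated out rather than swept under the carpet.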

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these thinkers have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form) but, on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Initially a physicist, Kuhn undoubtedly became a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a “final” theory, and scientific truths are consequently far from absolute, but that doesn’t mean that there is no progress.