Deductivism and Irrationalism

Looking at my stats I find that my recent introductory post about Bayesian probability has proved surprisingly popular with readers, so I thought I’d follow it up with a brief discussion of some of the philosophical issues surrounding it.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of their probabilities. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of the philosophy of science, because science itself, I believe, has a strong element of inverse reasoning, or inductivism, in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th century. Much later, the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction; most modern anti-inductivist positions can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face, because in reality there is a host of auxiliary assumptions involved in making any such comparison. This is nowadays called the Duhem-Quine thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate the joint posterior probability and then integrate over those parameters that aren’t of direct interest – the nuisance parameters. This is just an expanded version of the idea of marginalization, explained here.
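
To make that concrete, here is a toy sketch in Python. The model, the prior and all the numbers are invented purely for illustration: suppose we measure y = a + b, care only about a, and treat the offset b as a nuisance parameter with its own prior.

    import numpy as np

    # Invented toy model: one measurement of y = a + b + noise. We want a;
    # b is a nuisance parameter (an unknown calibration offset, say).
    y_obs, sigma = 3.2, 0.5
    a = np.linspace(-5.0, 10.0, 401)
    b = np.linspace(-4.0, 4.0, 241)
    A, B = np.meshgrid(a, b, indexing="ij")

    prior = np.exp(-0.5 * B**2)                # flat in a, N(0, 1) in b
    likelihood = np.exp(-0.5 * ((y_obs - (A + B)) / sigma) ** 2)
    joint = prior * likelihood                 # joint posterior, unnormalized

    # Marginalization: integrate the joint posterior over the nuisance axis.
    # (A crude Riemann sum on the grid is plenty for a toy example.)
    da, db = a[1] - a[0], b[1] - b[0]
    post_a = joint.sum(axis=1) * db
    post_a /= post_a.sum() * da                # normalize
    print("posterior mean of a:", round((a * post_a).sum() * da, 3))

Our uncertainty about b propagates automatically into the width of the marginal posterior for a, which is the whole point of the exercise.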

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these three have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above, and of their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of the philosophy of science with his Logik der Forschung, published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work that claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form) but, on the other, to reject induction. I therefore find it very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data are likely to be useful. But data can be used to update the probability of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion: it is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. In his view, every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or at least there can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, so that any individual theory is formally assigned zero prior probability. This is the problem of improper priors. But it is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data, the cycle of experiment, measurement and updating of probability assignments usually soon leaves the prior far behind. Data usually count in the end.
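
To see how this can happen, here is a toy illustration with entirely invented numbers: a flat (and hence unnormalizable) prior on the mean of a Gaussian over the whole real line yields a perfectly proper posterior after a handful of measurements.

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=1.5, scale=1.0, size=5)   # five invented measurements
    sigma = 1.0                                     # noise level, assumed known

    mu = np.linspace(-10.0, 10.0, 2001)             # stand-in for the real line
    log_like = -0.5 * ((data[:, None] - mu[None, :]) / sigma) ** 2
    post = np.exp(log_like.sum(axis=0))   # flat prior: posterior tracks likelihood

    dmu = mu[1] - mu[0]
    post /= post.sum() * dmu                        # normalizable, hence proper
    print("posterior mean:", round((mu * post).sum() * dmu, 3))
    print("total probability:", round(post.sum() * dmu, 3))   # comes out at 1.0

The prior carries infinite total weight, but the data tame it: the posterior integrates to one and is centred where the measurements say it should be.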

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is said to be scientific only if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire lives simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.
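
Here, for what it's worth, is a toy illustration of testability in this sense (every number invented): two rival theories assign different probabilities to the outcome of an experiment, and each repetition of the experiment shifts the posterior probability from one to the other.

    # Invented example: theory T1 says the outcome we keep observing has
    # probability 0.8 per trial; its rival T2 says 0.4.
    p1, p2 = 0.5, 0.5          # prior probabilities of T1 and T2
    like1, like2 = 0.8, 0.4    # P(observed outcome | theory)

    for trial in range(1, 6):
        # Bayes' theorem: posterior is prior times likelihood, renormalized.
        p1, p2 = p1 * like1, p2 * like2
        p1, p2 = p1 / (p1 + p2), p2 / (p1 + p2)
        print(f"after trial {trial}: P(T1) = {p1:.3f}, P(T2) = {p2:.3f}")

Neither theory is ever “proved” or “falsified”; the data simply make one steadily more probable than the other, which is all that testability requires.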

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn is undoubtedly a first-rate historian of science, and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclical: it begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed all theories to be equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice of which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds about which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and New Age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything else.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules….



34 Responses to “Deductivism and Irrationalism”

  1. While not going quite as far as Popper in the direction of ‘all data are theory-laden’, it should be clear that almost all the data we have to deal with are conditioned on why and how they were gathered and selected.

    But this mainly means you should be careful how you use them; it doesn’t raise any deep question of falsification or testability.

  2. Anton Garrett Says:

    The Wikipedia page on Karl Popper says that in the late 1930s he quit Austria (for obvious reasons) for a position in a New Zealand university. What it doesn’t say is that he was turned down for a Chair at another antipodean university.

    As for his idea of falsification, who ever believes that the height of a scientist’s ambition is to see his theory proved WRONG?

    Incidentally, David Stove’s fine book referred to above (which dissects Popper’s rhetorical tricks) now goes under the title Scientific Irrationalism.

    Anton

    • telescoper Says:

      I have a very old edition. I guess the new one is the reason Amazon is only selling second-hand copies of the one I’ve got…

    • I see what Anton’s getting at, but I’d quite like to have a theory so good that proving it wrong would be a big deal…..

      Mark

    • Anton Garrett Says:

      Mark: It is trivial to come up with a theory that you can scarcely prove wrong. Just put 5000 free parameters in it, whose values are to be estimated from the data. In which regard, and thinking of Calabi-Yau spaces, you might try string theory, described in the title of Peter Woit’s critical book (after a comment by either Pauli or Landau) as “Not Even Wrong”.

    • Ha. True.

      But I stand by my original statement.

      Also, I wouldn’t mind being the person credited with inventing string theory either…..

    • Anton Garrett Says:

      “You’re missing the point. The point is that a good theory has to be falsifiable in principle.”

      No, the point is that a good theory has to be *testable*, meaning that its probability is capable of being shifted by experimental data. In practice, one theory will tend toward a probability of unity in any given era, but even that is relative to its rivals. Pit Newtonian mechanics against Aristotelian (force proportional to velocity, not acceleration) and Newton wins. But include Einsteinian relativistic mechanics in the hypothesis space and Newtonian mechanics, which formerly had probability (1 – delta), now has probability delta’, and it is Einstein’s which has probability (1 – delta”).
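
      To put invented numbers on that renormalization, here is a toy sketch in Python (the likelihoods are made up; only the shape of the argument matters):

          # Made-up values of P(data | theory) for some fixed body of data:
          likelihood = {"Aristotle": 1e-9, "Newton": 1.0}

          def posterior(likes):
              # Flat prior over whatever theories are currently on the market.
              total = sum(likes.values())
              return {name: p / total for name, p in likes.items()}

          print(posterior(likelihood))   # Newton gets probability 1 - delta

          # Now admit relativistic mechanics, which fits the data better still
          # (the perihelion of Mercury and all that), and renormalize:
          likelihood["Einstein"] = 1e4
          print(posterior(likelihood))   # Newton drops to delta'; Einstein
                                         # takes probability 1 - delta''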

      All this is far beyond Popper’s stuff. As for his inversion of the truth, so that the height of a scientist’s ambition is supposedly to see his theory proved wrong, I stand by my comment that this is typical Popper disingenuousness. Please read David Stove for a detailed exposé of that.

      Anton

    • Anton Garrett Says:

      Well Phillip, read Stove and make your own decision on that.

  3. “This process could go on forever. There may never be a final theory. ”

    Well, I have compiled a theory. It is complete and consistent, reified, universal, and incommensurable with current paradigms. Based upon my humble read of the scientific, philosophical, and theological literature, it’s a good candidate for the final model.

    I’ve been blogging about it for about 5 months now. I welcome commentary from individuals interested in such matters.

    Peace,

    Ik

  4. Prof P has a theory that Bayesian priors are scientifically meaningful. In fact, he has had such success with the Bayesian approach that in advance of seeking data to test his theory he feels he must assign a prior of probability one that the theory is correct and probability zero that it isn’t. After many years of taking new data he notes with satisfaction that the updated probability that his theory is correct is still exactly one. He concludes that Bayesian priors are scientific. Has he therefore scientifically proven that Bayesian priors are scientific? Discuss!

    • Anton Garrett Says:

      Tom: He is not free to set a Bayesian prior of unity. Bayesians do not know how to assign priors in every situation, but this is an incomplete science rather than as-you-like-it.

      You are of course correct that a proposition/hypothesis that is assigned a probability of one (or zero) can never move from there. It would be weird if that were not the case, in fact. The lesson is that we seldom start from certainty.
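
      The arithmetic is immediate. A few lines of Python make the point (the likelihoods are invented, and deliberately chosen to favour the rival hypothesis):

          p = 1.0                        # prior P(H) pinned at certainty
          like_H, like_notH = 0.2, 0.9   # P(data | H) and P(data | not-H)
          p = p * like_H / (p * like_H + (1.0 - p) * like_notH)
          print(p)                       # still exactly 1.0: no data can move it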

      Genius in physics lies not in comparing theories, using probability/inductive logic, but in creating theories that are consistent with all observations that have gone before while explaining those that are consistently anomalous according to the prevailing theory.

    • Tom,

      I don’t know who Prof P is, but he sounds a lot like Prof T who assigned a prior of unity to a very low value of the Hubble constant and consequently ignored all evidence to the contrary.

      Peter

    • Peter,
      Bit unfair to Prof T, this. Maybe things would have gone better if he had followed the Bayesian route!
      Tom

    • Anton Garrett Says:

      As a Bayesian expert but not an astrophysicist I’m greatly enjoying the cryptic professorial references here.

    • Anton,

      Talking of cryptic references, I think you’ll like this clue from today’s Grauniad crossword:

      Devilish team unchristened “The Red Devils” (10,6)

      Peter

  5. Anton Garrett Says:

    PS If you want to know why such a theory is crap, you need what Bayesians call the Ockham analysis. It is far beyond the level of sophistication that Popper attained.

  6. Nice wee essay, Peter. I thought I was the only astronomer who had read Stove’s book. Surprised you didn’t cover Lakatos too. Has anybody done anything interesting since him?

    The popularity of Popper is indeed interesting. Brit scientists like being hard-nosed. I think it’s the “put your money where your mouth is” aspect that appeals. If that’s right, Americans should like it even more.

    • Anton Garrett Says:

      Lakatos talked rubbish only about mathematics. Popper, Kuhn and Feyerabend did so about physics. The line doesn’t end there nowadays either; it is common for postmodernist philosophers to write in such a way as to suggest that the laws of physics are merely cultural constructions, while never quite saying so explicitly. The disjunction between their heads and their lives is alarming: they know well enough what effect the law of gravity would have if they jumped out of a window…

      • Similarly, some in the social sciences may dismiss the existence of physical addiction in favour of cultural entrainment. For them, the human psyche or mind is all a blank slate and everything is conditioned or moulded by culture or habituation.

    • Anton Garrett Says:

      The correct response to the postmodernists who succeeded even Feyerabend is probably action like the Sokal hoax, which I hope Peter will blog about sometime.

  7. John Peacock Says:

    Peter: when you post on Ockham, I hope you will criticise the tendency in Bayesian model selection to misuse the “O word”. What this ought to mean is that we assign a much smaller prior probability to a model with 5000 free parameters than to a model with one or none. But actually, Bayesians tend to ignore the prior (implicitly treating such models as equally likely), and refer to an Ockham-esque penalty if one or other of the models has only a small part of its parameter space consistent with the data. I think this is poor usage. If the 5000 parameters have only a weak impact on observables, then this common “evidence ratio” approach will not discriminate between the 1-parameter and 5000-parameter models. That is clearly loony.
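
    (A deliberately crude, entirely invented illustration of the point: one datum y, one extra parameter theta whose effect on the observable is scaled by a factor eps. When eps is tiny, the evidence ratio simply cannot tell the models apart, however many such parameters you pile in.)

        import numpy as np

        y_obs, sigma = 0.3, 1.0                 # invented datum and noise level
        theta = np.linspace(-10.0, 10.0, 4001)
        dtheta = theta[1] - theta[0]
        prior = np.exp(-0.5 * theta**2) / np.sqrt(2.0 * np.pi)   # N(0, 1) prior

        def evidence(eps):
            # Model: y ~ N(eps * theta, sigma). eps sets how strongly theta
            # affects the observable; eps = 0 recovers the parameter-free model.
            like = np.exp(-0.5 * ((y_obs - eps * theta) / sigma) ** 2)
            like /= sigma * np.sqrt(2.0 * np.pi)
            return (prior * like).sum() * dtheta    # crude Riemann sum

        E0 = evidence(0.0)                      # effectively no free parameter
        for eps in (0.01, 1.0, 10.0):
            print(f"eps = {eps:5.2f}: evidence ratio = {evidence(eps) / E0:.3f}")

    Only when eps is large does the extra parameter get punished: the “Occam factor” in the evidence measures wasted likelihood, not complexity per se.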

    OK, so this is getting your retaliation in first… I’ll be interested to see what you have to say on the issue.

    • telescoper Says:

      John

      Without spoiling my forthcoming definitive discussion of this matter (*cough*) I would agree that (a) there has been a lot of misleading stuff (written by people who claim to be Bayesian) about model selection and (b) too many people who claim to be Bayesian mindlessly adopt flat priors for everything…

      Peter

  8. Peter – I’m an amateur in this area, but reading the definitions of deduction, induction and abduction on http://en.wikipedia.org/wiki/Logical_reasoning I would be inclined to say that the Bayesian approach is basically abductive rather than inductive.

    Or are you using “induction” in a different sense?

  9. telescoper Says:

    I think I’m using the sense of induction described in:

    http://en.wikipedia.org/wiki/Inductive_reasoning

    I’m not familiar with abduction, unless it’s from a Seraglio and is an opera written by Mozart.

  10. I think it’s helpful to distinguish between induction and other forms of non-deductive reasoning. Induction says that the fact that X has always happened in the past suggests that X will always happen in the future. Abduction (which I think is the same as inference to the best explanation) says that the observation of X gives us motivation for believing that Y is true, where Y is an elegant explanation for X.

    So one could make a strong attack on inductive reasoning without posing any threat to Bayesian methods of inference, which are neither inductive nor deductive.

  11. Apparently Popper defined inductive inference as “inference from repeatedly observed instances to as yet unobserved instances”, which would fit with my narrow definition. Maybe the word generally has a broader meaning than the one I am using, but I find the deduction/induction/abduction triad helpful to make some sense of all this.

  12. Anton Garrett Says:

    Philosophers can (and often do) define what they like, but is it self-consistent and does it provide a useful model for human inference?

    I – and to my knowledge Peter – regard probability theory and inductive logic, *provided that both are done correctly*, as identical.

    I need to check a dictionary but I had thought that abduction (adduction?) of a proposition/theory referred to its invention. This is of course the hard part, for which we currently have no model and are dependent on the minds of people like Newton, Einstein, Dirac etc. Testing relativistic vs Newtonian mechanics in (inevitably noisy) experiments is then just an olympic-scale application of hypothesis testing.

    John – I agree with you that Ockham’s Razor has come to have two applications within ‘theory comparison’, and that there is some consequent confusion. I’m sure that Peter will clear this up here eventually.

    Anton

  13. Phillip – who has ever been burned for advocating heliocentrism?

    • Anton Garrett Says:

      Pope Urban VIII personally intervened in Galileo’s inquisition to sanction torture if necessary. Galileo was not tortured, nor is it proven that he was shown the instruments of torture (another popular misbelief), but there is no doubt that he would have been aware of Urban’s appalling statement. Aged 69 or 70, he recanted.

  14. “Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. ”

    Makes no sense. Numbers exist, but they are (including the misleadingly named ‘natural numbers’) entirely an invention of the human mind. They do not ‘exist’ in nature, even if they have been created by some psychological or logical process of abstraction from the observed behaviour of physical objects.

  15. […] a little piece about Bayesian probability. That one and the others that followed it (here and here) proved to be surprisingly popular so I’ve been planning to add a few more posts whenever I […]

  16. […] If all this stuff about significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. So is the notion, which stems from this frequentist formulation, that all a scientist can ever hope to do is refute their null hypothesis. You’ll find this view echoed in the philosophical approach of Karl Popper and it has heavily influenced the way many scientists see the scientific method, unfortunately. […]
