Kuhn the Irrationalist

There’s an article in today’s Observer marking the 50th anniversary of the publication of Thomas Kuhn’s book The Structure of Scientific Revolutions.  John Naughton, who wrote the piece, claims that this book “changed the way we look at science”. I don’t agree with this view at all, actually. There’s little in Kuhn’s book that isn’t implicit in the writings of Karl Popper and little in Popper’s work that isn’t implicit in the work of a far more important figure in the development of the philosophy of science, David Hume. The key point about all these authors is that they failed to understand the central role played by probability and inductive logic in scientific research. In the following I’ll try to explain how I think it all went wrong. It might help the uninitiated to read an earlier post of mine about the Bayesian interpretation of probability.

It is ironic that Laplace, the principal pioneer of probability theory and its application to scientific research, unquestionably adopted a Bayesian rather than frequentist interpretation for his probabilities. Frequentism arose during the nineteenth century and held sway until relatively recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the other frequentist-inspired techniques that many modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of the philosophy of science, given that scientific method has, I believe, a strong element of inverse reasoning, or inductivism, in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th century. Much later the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction, and most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face, because in reality there is always a host of auxiliary assumptions involved in making the comparison; this is nowadays called the Duhem-Quine thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities, and then integrate over those that aren’t of direct interest – the nuisance parameters associated with the auxiliary assumptions. This is just an expanded version of the idea of marginalization, explained here.
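
To make the idea concrete, here is a minimal numerical sketch (an invented example of my own, not anything from the original post): we measure a quantity of interest sitting on top of a background level known only approximately from a calibration, form the joint posterior on a grid, and integrate the background out as a nuisance parameter.

```python
# Minimal sketch of marginalization over a nuisance parameter.
# All numbers are invented for illustration: s is the quantity of
# interest, b is a background known from calibration to be roughly
# 1.0 +/- 0.2, and each measurement is y = s + b + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
s_true, b_true, sigma = 2.0, 1.0, 0.5
y = s_true + b_true + sigma * rng.standard_normal(20)

s = np.linspace(0.0, 5.0, 400)   # grid over the parameter of interest
b = np.linspace(0.0, 5.0, 400)   # grid over the nuisance parameter
S, B = np.meshgrid(s, b, indexing="ij")

# Log-likelihood of all the data for every (s, b) pair, plus the
# calibration prior on b; the prior on s is taken to be flat.
loglike = -0.5 * (((y[None, None, :] - (S + B)[..., None]) / sigma) ** 2).sum(axis=-1)
logpost = loglike - 0.5 * ((B - 1.0) / 0.2) ** 2
post = np.exp(logpost - logpost.max())   # unnormalized joint posterior

# Marginalization: integrate the joint posterior over b.
ds, db = s[1] - s[0], b[1] - b[0]
marginal_s = post.sum(axis=1) * db
marginal_s /= marginal_s.sum() * ds
print("posterior mean of s:", (s * marginal_s).sum() * ds)
```

The auxiliary assumption (here, the background level) never has to be known exactly: it is assigned a prior and integrated away, which is the Bayesian answer to the Duhem-Quine point above.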

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these three have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of the philosophy of science with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form) but, on the other, to reject induction. I therefore find it very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the space of possibilities is theory-laden: it is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data are likely to be useful. But data can be used to update the probability of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. In Popper’s view, every scientific theory begins infinitely improbable and is doomed to remain so.

Now there is a grain of truth in this – or at least there can be, if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to individual prior probabilities which are formally zero. This is the problem of improper priors. But it is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data, the cycle of experiment, measurement and updating of probability assignments usually soon leaves the prior far behind. Data usually count in the end.
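
A standard worked example shows why (a textbook illustration, not from the original post). Assume a Gaussian likelihood with known variance and a flat, improper prior on the location parameter:

\[
p(\mu) \propto 1, \qquad p(x_1,\ldots,x_n \mid \mu) \propto \prod_{i=1}^{n} \exp\left[-\frac{(x_i-\mu)^2}{2\sigma^2}\right].
\]

Although the prior cannot be normalized, Bayes’ theorem still yields

\[
p(\mu \mid x_1,\ldots,x_n) \propto \exp\left[-\frac{n(\mu-\bar{x})^2}{2\sigma^2}\right],
\]

a perfectly proper normal distribution with mean equal to the sample mean and variance equal to the measurement variance divided by n, which narrows steadily as the data accumulate.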

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is scientific only if it is capable of being proved false. In real science, however, certain “falsehood” and certain “truth” are almost never achieved: theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire lives simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable by further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, the untestable baggage we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.
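
To put testability in quantitative terms (my gloss, using standard Bayesian model comparison rather than anything Popper wrote): a test of a theory H1 against a rival H2 updates the odds between them according to

\[
\frac{P(H_1 \mid D)}{P(H_2 \mid D)} = \frac{P(D \mid H_1)}{P(D \mid H_2)} \times \frac{P(H_1)}{P(H_2)},
\]

so a theory is testable if data D can be gathered for which the likelihood ratio differs appreciably from unity. Falsification is just the limiting case in which P(D|H1) vanishes.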

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Initially a physicist, Kuhn undoubtedly became a first-rate historian of science, and his book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic: it begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to turn this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed all theories to be equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice of which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a “final” theory, and scientific truths are consequently far from absolute, but that doesn’t mean that there is no progress.

26 Responses to “Kuhn the Irrationalist”

  1. In my youth I was ashamed of the fact that I couldn’t see what all the fuss was about regarding Kuhn’s Structure of Scientific Revolutions. I got over that a long time ago.

    Kuhn’s quite good as a historian of science. In particular, his book The Copernican Revolution is excellent, in my opinion. And in principle the idea that one should base philosophy of science on a close examination of what scientists actually do, rather than pure theorizing, seems sound to me. So it’s a shame that he bollixed it up the way he did, and a mystery that he’s regarded as such an important figure.

    Popper’s ghost certainly continues to haunt science departments (and not just in physics). A few years ago, I was on a committee at my university to establish criteria by which we would evaluate our general science courses (a largely silly exercise designed to please the accreditation boards that oversee US universities). I had a long argument with the other scientists over whether to include the notion of Popperian falsifiability when we talked about students’ ability to identify scientific hypotheses. I ranted about it at the time here: http://blog.richmond.edu/physicsbunn/2009/01/22/why-i-am-not-a-popperian/ . The main point of my rant is exactly what you’re saying: science is all about testability, which should be understood in a probabilistic sense.

  2. 1_ Falsifiability vs. testability: Doesn’t a test imply two possible outcomes, one that supports and another that falsifies a theory? Where exactly is the difference? “Falsifiability” is just a fancy name that draws attention to the fact that there must be a possible outcome that would deny the theory being tested.

    2_ I completely agree these questions really go way back; we are just re-heating an old lunch. I’m a big fan of Hume, and I was half surprised and half terrified to find out how far back the frequentism/something-else-ism debate goes – it was already raging in the days of statisticians like Fisher and Pearson.

    I think the idea that there is a “Bayesian” interpretation and a “frequentist” interpretation of probability is extremely damaging to the progress of this science. I appreciate some of these metaphysical questions, but in general I see nothing being gained by insisting there is this “yin-yang” in how we interpret the phenomena being studied.

  3. Anton Garrett Says:

    By rejecting inductive logic, which Bayesians understand IS probability theory (at least when both are done correctly), Kuhn had no way to say that one theory was BETTER than another, ie more probable, given the data. Hence all the waffle about paradigms, which are no better in his view than fashions in clothing. I agree that he was a good historian of science – but history is not philosophy…

  4. I think one cannot appreciate the argument of Hume if one is active in science. What Hume meant was that no system (even a scientific system) is provable, and that the inductive system is not a fail-safe proof, however powerful it may be.
    And therefore I can appreciate the statement that science should be seen in the context of other disciplines like astrology, however unsavoury that may sound. I do, however, not agree with that either.
    What all have in common is their unprovable inductive system in the form of axioms. Even religion is based on the same type of axioms science is based on, and scientists are very quick to cling to their own axioms while condemning those same axioms as used in religion.

    • Anton Garrett Says:

      “I think one cannot appreciate the argument of Hume if one is active in science.”

      Please explain why not. What Hume meant has been discussed in detail in other books by David Stove, incidentally.

      • Hi Anton

        Hume referred to the absolute trust scientists and other philosophers have in their theories, whilst all are based on an inductive process which is not fail-safe – in fact very far from it. Therefore Hume struck a red line through all philosophy and science – similar to Wittgenstein somewhat later.
        I did not read Stove. If I want to hear what Hume says, I read Hume.
        I found the debate somewhat futile, arguing between two things which do not exist in reality.

      • Anton Garrett Says:

        I regard induction as pretty much fail-safe, at least now that it has foundations in a reasonably unique generalisation of Boolean logic. If you are always going to question axioms then you will ultimately be driven back to Cogito ergo sum, which is true but cannot be built upon in the absence of any other axiom. So there you will be stuck. I agree that faith is needed to get anywhere, whether faith in what a divinity says or faith in other propositional axioms.

        Also I am not disagreeing that science based on induction is fallible – because we might be a long way from coming up with a decent theory, and theory invention is up to humans. To make this point, suppose that the data points lie close to a straight line with gradient 2, and the only two theories to have been dreamed up have gradients 1.0 and 1.5. You can do a Bayesian inductive comparison between them (and 1.5 will win), but you will still think that both are implausible.
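
        Here is a minimal sketch of that scenario (invented data; the only inputs are the two candidate gradients, the true gradient of 2 and an assumed noise level):

        ```python
        # Compare two fixed-gradient models against data whose true
        # gradient (2.0) was dreamed up by neither theorist.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 10.0, 30)
        sigma = 0.3
        y = 2.0 * x + sigma * rng.standard_normal(x.size)

        def log_likelihood(gradient):
            # Gaussian log-likelihood of the data under y = gradient * x
            r = y - gradient * x
            return -0.5 * np.sum((r / sigma) ** 2) \
                   - x.size * np.log(sigma * np.sqrt(2.0 * np.pi))

        ll_a, ll_b = log_likelihood(1.0), log_likelihood(1.5)
        print("log-odds favouring 1.5 over 1.0:", ll_b - ll_a)
        print("best available log-likelihood:", ll_b)
        ```

        The log-odds come out enormously in favour of gradient 1.5, yet its absolute log-likelihood is still dreadful: the inductive comparison picks a winner without making either theory plausible.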

      • I fully agree with you, and I think if scientists would now and then ponder this, it would make the schism between them and religion smaller.
        I am not so sure about what is meant by falsification. If I understand it correctly, one cannot accept a hypothesis on a 95% probability if 5% of the data proves it false (not merely uncertain).
        In any case, we will never be able to move away from induction, as it is such a powerful tool, taking cognisance of the fact that what we want to know is unprovable in our frame of reference – Gödel’s incompleteness theorem.
        And that is what interested me in this argument.

      • Anton Garrett Says:

        I’m still looking for an explanation of Goedel’s theorem at my own level, ie not the full gory details but some way beyond “intelligent layman” stuff. I suspect also that Jaynes (the outstanding advocate of Bayesianism) was a bit quick to dismiss the predicate calculus as merely the propositional calculus plus some fairly trivial notational sleight of hand. But he wasn’t often wrong, so I don’t know.

  5. Very recently, I have noted that the principle of falsification seems to be derivable from Bayes’ theorem ( http://maximum-entropy-blog.blogspot.nl/2012/08/bayes-theorem-all-you-need-to-know.html ).

    Of course, there are more ways than just falsification to update the probability for a hypothesis, but I agree with Popper that if a theory is not vulnerable to falsification, then it is pseudoscience at best.

    Furthermore, there is an asymmetry between falsification and verification. A theory that achieves negligibly small probability when compared to any other theory is falsified and remains so for all time. But a theory can achieve very high probability within one set of competing models, yet later be superseded as new data come in and new models are constructed. Falsification is irreversible, but verification is not.
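
    In symbols (a one-line gloss on the asymmetry): Bayes’ theorem gives

    \[
    P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},
    \]

    so if a theory assigns the observed data zero probability, the posterior P(H|D) is zero and stays zero under all subsequent updating, because later calculations inherit it as a prior. A high posterior enjoys no such protection: it can always be diluted when new data arrive or a new rival hypothesis enters the space.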

    Agree whole-heartedly with your main point about Kuhn, though. His ideas point to the equivalence of all points of view, no matter how barmy, or how devoid of evidence.

    • Anton Garrett Says:

      You are absolutely right about that asymmetry between falsification and verification. It is helpful to consider the source of that asymmetry: it stems from the introduction of new hypotheses (ie, new scientific theories) into the hypothesis space. And that is not modelled by Bayesian techniques (or indeed non-Bayesian techniques); we are dependent on the human mind for creativity of that sort – minds like Newton, Einstein, Maxwell, Dirac. In any *given* hypothesis space, there is symmetry, of course.

      I don’t think that Popper understood things in this depth at all. His claim of asymmetry was of the Black Swan type. But real science is not like that.

  6. I couldn’t agree more. Kuhn still reigns supreme at Harvard; I got very tired last year of listening to historians and sociologists of science quoting him unquestioningly. Much of the work is not new, and what is new is often questionable.
    For example, I think his thesis of the incommensurability of paradigms is completely out of step with my own experience as an experimentalist. We were taught from day one to consider the data in the light of all models, yes all.
    The older I get, the more I suspect that there are too many people writing about the philosophy of science who have a very simplified understanding of its practice. Perhaps more training in philosophy for science researchers is the answer!

  7. Dan Riley Says:

    I think you (and I suppose Stove) have got Popper almost entirely wrong–you are arguing against “cartoon Popper” (to use Ted’s phrase), not the real Popper.

    For starters, Popper did not reject the epistemological value of inductive reasoning. While Popper did accept Hume’s argument against induction, his response was that “we merely have to realize that our ‘adoption’ of scientific theories can only be tentative; that they always are and will remain guesses or conjectures or hypotheses. […] Yet if we consider the problems they solve, and the criticisms and the tests they have withstood, we may have excellent critical reasons for preferring them to other theories–though only provisionally and tentatively.” (all Popper quotes are from “Realism and the Aim of Science”). So Popper accepts Hume’s argument that induction isn’t logically sound, and therefore can’t be certain; Popper’s response is that it doesn’t have to be certain to have epistemological value. I don’t really see how a Bayesian could find that objectionable.

    Popper’s actual rejection of inductive reasoning was factual, not epistemological. Popper’s position was that almost all learning proceeds through steps of trial and error that are primarily deductive in nature, so the description of science as proceeding through inductive reasoning is, according to Popper, not a factually accurate description of how science is actually done. In this, Popper, like Kuhn, was interested in what scientists actually do, rather than an idealization or normative prescription of how it ought to be done. However, Popper also made it clear that, following Hume, “the factual, psychological, and historical question, ‘How do we come by our theories?’ […] is irrelevant to the logical, methodological, and epistemological question of validity.” You can’t really understand Popper (or Hume) without recognizing the distinction being made there.

    While Popper may be best known for the “dogma of falsification”, that dogma bears very little resemblance to the positions Popper actually held. To understand Popper, it is essential to separate the logical question of whether a theory is, in principle, falsifiable (a question of validity) from the practical, factual question of actually falsifying a theory. With respect to the second sense, Popper recognized, of course, that “it is never possible to prove conclusively that an empirical scientific theory is false. In this sense, such theories are not falsifiable.” Popper’s use of falsification is always in the former sense of validity of a theory; it was never meant to be descriptive of what scientists actually do or prescriptive of what they ought to do.

    With the logical question of falsification, Popper was primarily interested in the problem of demarcation, of telling whether a theory is science or not, whether it was “arguable by means of empirical arguments […], and whether these arguments should be considered as serious tests”. According to Popper, a theory is falsifiable in principle if there exists a set of observations that, if assumed to be entirely true, would falsify the theory. Popper argued that if a theory is not falsifiable in principle, then it is also not testable in practice: “[h]ence I suggested that testability or refutability or falsifiability should be accepted as a criterion of the scientific character of theoretical systems.” Since Popper’s falsifiability is not normative, it is also not exclusive–accepting falsifiability as a criterion of scientific validity does not exclude the possibility of other criteria.

    On Kuhn, I think most of the value in Kuhn is his attempt to schematize how science is actually done; I don’t take him particularly seriously as a philosopher of science (certainly much less so than Popper). However, I do have to note that your claim that “in Kuhn’s view this success counts for nothing” implicitly assumes that moving “closer and closer to the truth” is the only thing that does count, which I don’t believe is an accurate characterization of Kuhn’s views. It’s also worth noting that Popper disagreed with Kuhn on this point–Popper did believe that science generally moves towards truth (appropriately defined), writing that “Kuhn’s views on this fundamental question seem to me affected by relativism; more specifically, by some form of subjectivism”.

    There are plenty of valid grounds for criticizing Popper, especially in his treatment of probabilities, but I think it does him a grave injustice to characterize him as an irrationalist. Popper was very much a rationalist and a realist; he was also a subtle thinker whose philosophy is worth engaging. It’s rather sad that the “cartoon Popper” of the “dogma of falsification” is so much better known than the subtle philosopher.

    • Anton Garrett Says:

      Please read Stove in his own words before criticising him; he includes a detailed analysis not only of Popper’s position but also of Popper’s rhetoric. The current edition of Stove’s book, to which Peter has referred, is called Scientific Irrationalism.

      Somewhere in the scientific process is the human act of hypothesis *creation*, ie invention of a new theory. No model exists for that, and a new theory might be inspired by fresh data – or it might not. But comparing theories is an inductive process that involves, when it is done correctly, Bayes’ theorem. Trouble is, Popper doesn’t like that theorem and is never clear on the relation of induction to probability and just what probability is.

  8. Bryn Jones Says:

    I agree with a lot in Peter’s essay above, but I’ll make a few points here, some of which reinforce what Peter has written.

    It might be worth repeating that old Richard Feynman quote, “The philosophy of science is about as useful to scientists as ornithology is to birds.” I don’t entirely agree with it, but it does express an important point.

    I’m not sure I’d represent the division in outlook discussed in Peter’s essay as really being one between frequentist and Bayesian statistics. Surely the distinction is one between an absolute yes/no certainty (which philosophers historically considered) and a probabilistic outlook? Yes, the Bayesian approach naturally takes a probabilistic approach from the outset, while the frequentist approach builds up to concepts of probability from many yes/no events; but that is not the essential point here. The important issue is that scientists think of truth in terms of the probability of an explanation being true, even in an informal sense, regardless of whether they prefer to think about statistics from a Bayesian or frequentist viewpoint.

    My suspicion is that most scientists assign probabilities of alternative explanations/theories being true, even if many do so subconsciously and in a qualitative (not numerical) sense. What changes as science progresses is that these probabilities change, even if many scientists might not be consciously aware of the process.

    Perhaps it is these ideas of probabilities of different explanations being true that make science very different to much of philosophy, and might explain why some philosophers have failed to understand how science works.

    I’d be surprised if a majority of practising scientists had read Kuhn’s The Structure of Scientific Revolutions.

    I suspect that popular accounts of the history of science put stress on “paradigm shifts” because they are more readable. These accounts tend to emphasise dramatic changes in scientific understanding or conflicts with establishments, such as Copernicus overthrowing the classical view of the Solar System, Galileo’s observational evidence for the heliocentric model alongside his conflict with the Church, and Darwin disproving the literal interpretation of the Old Testament held by some unthinking 19th-century amateur theologians. Broader accounts of the history of science would provide less evidence for “paradigm shifts” as a fundamental process in the development of science.

  9. On reflection, I think we may all be falling foul of our own biases here. Confirmation bias of sorts. If you read Popper with a view to ‘fault finding’ you will find fault, likewise Bayesianism. Overall, both approaches have their merits, and the two viewpoints are probably much more consistent than at loggerheads. I think their solutions to common problems in science might look very similar to an outsider looking in.

    The enemy, as I see it, is really the kind of ‘naive positivism’ that the vast majority of scientists exhibit – believing their results (and pet theories) to always be ‘true’ and unquestionable. I say this because that is what is going on every day and that is the assumption set behind most papers I read (in my field anyway). It is also behind most media portrayals of our work, and how it is interpreted. This is a far greater challenge to science in my view.

    I’ve probably exposed some sort of deep-seated unforgivable ignorance now, but hey ho, it’s early. I need coffee.

    • Anton Garrett Says:

      Enjoy your coffee and then please tell us what you consider to be the faults of Bayesianism. NB please define accurately what you mean by the term, because sadly it is not unambiguous today.

      • No because that’s obviously not the point I was making. I was trying to be reconciliatory not divisive. I wasn’t spoiling for a fight.

        (Did you mean for your comment to sound so patronising / condescending? If so, maybe I misunderstood the aims/scope of the blog, in which case, very sorry)

      • Anton Garrett Says:

        I’m not going to answer questions like whether I’ve stopped beating my wife… “Enjoy your coffee” was meant literally, not as a “last smoke before being shot” type of thing (which I now see it might be taken as). “please tell us what you consider to be the faults of Bayesianism” includes the words “please” out of courtesy and “consider” because I would not be true to myself if I used the phrase “the faults of Bayesianism” – I think that there aren’t any. The sentence was indeed a friendly challenge and I’m sorry if it came across as an unfriendly one. My second sentence makes a point that aims to prevent misunderstanding in the debate.
        Anton

  10. Joining in. In a philosophical debate I am having with a friend, the premise being put forward is that [a] induction does not exist and [b] induction is a fallacy. I disagree with both [a] and [b]. In expressing how I thought induction worked – Observation, Pattern, Tentative Hypothesis, Theory – I have been met with the same comment: it does not exist. Can any of you help my brain to fathom this out better?

    • Anton Garrett Says:

      Inductive logic is a generalisation of Boolean deductive logic. Use of the sum and product rules when all probabilities are 0 or 1 is isomorphic to Boolean algebra. When probabilities lie in between, the sum and product rules allow you to reason inductively. Of course you need a new concept which is absent from Boolean algebra, namely probability. I take p(A|B) to be a number representing how strongly the binary proposition A is true upon supposing that B is true, based on ontological relations known between their referents. This quantity is what you actually want in any real problem involving uncertainty.
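
      A few lines of code make the reduction explicit (a minimal sketch of my own; with certainties, i.e. probabilities of 0 or 1, P(B|A) is just the truth value of B, and the rules reproduce the Boolean truth tables):

      ```python
      # The sum and product rules of probability, restricted to the
      # values 0 and 1, reproduce Boolean NOT, AND and OR exactly.
      from itertools import product

      def p_not(pa):
          return 1 - pa              # sum rule: P(A) + P(not A) = 1

      def p_and(pa, pb_given_a):
          return pa * pb_given_a     # product rule: P(AB) = P(A) P(B|A)

      def p_or(pa, pb, pab):
          return pa + pb - pab       # extended sum rule

      for a, b in product([0, 1], repeat=2):
          assert p_not(a) == (not a)
          assert p_and(a, b) == (a and b)
          assert p_or(a, b, p_and(a, b)) == (a or b)
      print("0/1 probabilities reproduce Boolean logic")
      ```

      In-between values are where the extra expressive power comes in: the same two rules then govern degrees of plausibility rather than bare truth values.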

      As for philosophy of quantitative science, it has several components: invention of a theory/hypothesis, by a bright scientist; deduction of the testable consequences of that theory; inductive comparison of that theory with its rivals in the light of the data.

      In view of those buzzwords, I don’t like the way the war in the philosophy of science is framed as one between those who say science is hypothetico-deductive and those who say it is inductive. In practice I agree with what the latter camp says, but the words used in the debate are deeply and needlessly confusing.
