Archive for philosophy

Does Physics need Philosophy (and vice versa)?

Posted in mathematics, The Universe and Stuff on June 1, 2018 by telescoper

There’s a new paper on the arXiv by Carlo Rovelli entitled Physics Needs Philosophy. Philosophy Needs Physics. Here is the abstract:

Contrary to claims about the irrelevance of philosophy for science, I argue that philosophy has had, and still has, far more influence on physics than is commonly assumed. I maintain that the current anti-philosophical ideology has had damaging effects on the fertility of science. I also suggest that recent important empirical results, such as the detection of the Higgs particle and gravitational waves, and the failure to detect supersymmetry where many expected to find it, question the validity of certain philosophical assumptions common among theoretical physicists, inviting us to engage in a clearer philosophical reflection on scientific method.

Read and discuss.

Cosmology and the Constants of Nature

Posted in The Universe and Stuff on January 20, 2014 by telescoper

Just a brief post to advertise a very interesting meeting coming up in Cambridge:

–o–

Cosmology and the Constants of Nature

DAMTP, University of Cambridge

Monday, 17 March 2014 at 09:00 – Wednesday, 19 March 2014 at 15:00 (GMT)

Cambridge, United Kingdom

The Constants of Nature are quantities whose numerical values we know with the greatest experimental accuracy, yet we have the greatest ignorance about the rationale for those values. We might also ask whether they are indeed constant in space and time, and investigate whether their values arise at random or are uniquely determined by some deep theory.

This mini-series of talks is part of the joint Oxford-Cambridge programme on the Philosophy of Cosmology, which aims to introduce philosophers of physics to fundamental problems in cosmology and associated areas of high-energy physics.

The talks are aimed at philosophers of physics but should also be of interest to a wide range of cosmologists. Speakers will introduce the physical constants that define the standard model of particle physics and cosmology together with the data that determine them, describe observational programmes that test the constancy of traditional 'constants', including the cosmological constant, and discuss how self-consistent theories of varying constants can be formulated.

Speakers:

John Barrow, University of Cambridge

John Ellis, King’s College London

Pedro Ferreira, University of Oxford

Joao Magueijo, Imperial College, London

Thanu Padmanabhan, IUCAA, Pune

Martin Rees, University of Cambridge

John Webb, University of New South Wales, Sydney

Registration is free and includes morning coffee and lunch. Participants are requested to register at the conference website where the detailed programme of talks can be found:

http://www.eventbrite.co.uk/e/cosmology-and-the-constants-of-nature-registration-9356261831

For enquiries about this event please contact Margaret Bull at mmp@maths.cam.ac.uk

Kuhn the Irrationalist

Posted in Bad Statistics, The Universe and Stuff on August 19, 2012 by telescoper

There’s an article in today’s Observer marking the 50th anniversary of the publication of Thomas Kuhn’s book The Structure of Scientific Revolutions.  John Naughton, who wrote the piece, claims that this book “changed the way we look at science”. I don’t agree with this view at all, actually. There’s little in Kuhn’s book that isn’t implicit in the writings of Karl Popper and little in Popper’s work that isn’t implicit in the work of a far more important figure in the development of the philosophy of science, David Hume. The key point about all these authors is that they failed to understand the central role played by probability and inductive logic in scientific research. In the following I’ll try to explain how I think it all went wrong. It might help the uninitiated to read an earlier post of mine about the Bayesian interpretation of probability.

It is ironic that the pioneers of probability theory and its application to scientific research, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until relatively recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the other frequentist-inspired techniques that many modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
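To make the marginalization step concrete, here is a minimal numerical sketch (a toy example of my own, not anything from the post itself): a Gaussian measurement of a parameter of interest contaminated by a nuisance offset, with the posterior computed on a grid and the offset integrated out.

```python
import numpy as np

# Toy illustration of Bayesian marginalization (a sketch, not a standard pipeline):
# data y = mu + b + noise, where mu is the parameter of interest and b is a
# nuisance offset we want to integrate out of the posterior.
rng = np.random.default_rng(42)
true_mu, true_b, sigma = 2.0, 0.5, 1.0
y = true_mu + true_b + sigma * rng.standard_normal(20)

# Grids of candidate values, with (assumed) Gaussian priors on both parameters.
mu = np.linspace(-5.0, 10.0, 300)[:, None]   # parameter of interest
b = np.linspace(-3.0, 3.0, 200)[None, :]     # nuisance parameter
log_prior = -0.5 * (mu / 10.0) ** 2 - 0.5 * (b / 1.0) ** 2

# Log-likelihood summed over the data points, evaluated on the (mu, b) grid.
log_like = sum(-0.5 * ((yi - mu - b) / sigma) ** 2 for yi in y)

# Joint posterior is proportional to prior times likelihood;
# marginalize by summing over the nuisance (b) axis of the grid.
post = np.exp(log_prior + log_like - (log_prior + log_like).max())
post_mu = post.sum(axis=1)                   # "integrate over" the nuisance
dmu = mu.ravel()[1] - mu.ravel()[0]
post_mu /= post_mu.sum() * dmu               # normalize to unit area

print("posterior mean of mu:", (mu.ravel() * post_mu).sum() * dmu)
```

The nuisance direction is disposed of exactly as described above: assign it a prior, form the joint posterior, and sum (integrate) it away.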

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these three have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed to, on the one hand, accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data are likely to be useful. But data can be used to update the probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour: in his view, every scientific theory begins infinitely improbable and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
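As a worked illustration of my own (not in the original post): put a flat, improper prior $p(\mu)\propto 1$ on the mean $\mu$ of a Gaussian of known width $\sigma$, and observe data $d_1,\dots,d_n$. The posterior is then

$$ p(\mu \mid d) \;\propto\; \prod_{i=1}^{n}\exp\left[-\frac{(d_i-\mu)^2}{2\sigma^2}\right] \;\propto\; \exp\left[-\frac{n\,(\mu-\bar{d}\,)^2}{2\sigma^2}\right], $$

a perfectly normalizable Gaussian centred on the sample mean $\bar{d}$ with width $\sigma/\sqrt{n}$: the improper prior cancels out, the data alone fix the answer, and the posterior sharpens as $n$ grows.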

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.
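Testability in this sense can be made quantitative. Here is a toy sketch of my own devising (the coin, the numbers and the function names are all invented for illustration): two rival hypotheses compared via their Bayes factor, which shifts as evidence accumulates – exactly the kind of "more or less probable" updating meant above.

```python
import numpy as np
from math import lgamma

# Toy example of Bayesian testability (my own illustration): two rival
# "theories" of a coin. H0 says it is fair (p = 0.5); H1 says p is unknown,
# with a uniform prior on [0, 1]. New data shift the odds between them.
rng = np.random.default_rng(1)
flips = rng.random(200) < 0.7        # data secretly generated with p = 0.7

def log_evidence_fair(k, n):
    # P(data | H0): each particular sequence of n flips has probability 0.5^n.
    return n * np.log(0.5)

def log_evidence_biased(k, n):
    # P(data | H1) = integral of p^k (1-p)^(n-k) dp = k!(n-k)!/(n+1)!,
    # a Beta-function integral, computed here via log-gamma for stability.
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

for n in (10, 50, 200):
    k = int(flips[:n].sum())
    log_bf = log_evidence_biased(k, n) - log_evidence_fair(k, n)
    print(f"after {n:3d} flips: log Bayes factor (biased vs fair) = {log_bf:+.2f}")
```

Neither hypothesis is ever "proved false"; the data simply render one progressively more probable than the other.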

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Initially a physicist, Kuhn undoubtedly became a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a “final” theory, and scientific truths are consequently far from absolute, but that doesn’t mean that there is no progress.

Research Opportunities in the Philosophy of Cosmology

Posted in The Universe and Stuff on March 16, 2012 by telescoper

I got an email this morning telling me about the following interesting opportunities for research fellowships. They are in quite an unusual area – the philosophy of cosmology – and one I’m quite interested in myself, so I thought the advertisement might achieve wider circulation if I posted it here.

–o–

Applications are invited for two postdoctoral fellowships in the area of philosophy of cosmology, one to be held at Cambridge University and one to be held at Oxford University, starting 1 Jan 2013 and running until 31 Aug 2014. The two positions have similar job descriptions and the deadline for applications is the same: 18 April 2012.

For more details, see here for the Cambridge fellowship and here for the Oxford fellowship.

Applicants are encouraged to apply for both positions. The Oxford group is led by Joe Silk, Simon Saunders and David Wallace, and that at Cambridge by John Barrow and Jeremy Butterfield.

These appointments are part of the initiative ‘establishing the philosophy of cosmology’, involving a consortium of universities in the UK and USA, funded by the John Templeton Foundation. Its aim is to identify, define and explore new foundational questions in cosmology. Key questions already identified concern:

  • The issue of measure, including potential uses of anthropic reasoning
  • Space-time structure, both at very large and very small scales
  • The cosmological constant problem
  • Entropy, time and complexity, in understanding the various arrows of time
  • Symmetries and invariants, and the nature of the description of the universe as a whole

Applicants with philosophical interests in cosmology outside these areas will also be considered.

For more background on the initiative, see here and the project website (still under construction).

Hungry Philosophers

Posted in The Universe and Stuff on January 17, 2012 by telescoper

The Necessity of Atheism

Posted in History, Literature, The Universe and Stuff on February 15, 2011 by telescoper

In the course of doing a crossword at the weekend, I learnt that the poet Percy Bysshe Shelley was sent down from (i.e. kicked out of) Oxford University 200 years ago this month for writing a pamphlet entitled The Necessity of Atheism. He was at University College, in fact. A bit of googling around led me to the full text, which is well worth reading whatever your religious beliefs, as it is a fascinating document. I’ll just quote a few excerpts here.

The main body of the tract begins “There is No God”, but this is followed by

This negation must be understood solely to affect a creative Deity. The hypothesis of a pervading Spirit co-eternal with the universe remains unshaken.

That’s pretty close to my own view, for what that’s worth.

More interestingly, Shelley goes on later in the work to talk about science and how it impacts upon belief. A couple of sections struck me particularly strongly, given my own scientific interests.

In one he tackles arguments for the existence of God based on Reason:

It is urged that man knows that whatever is must either have had a beginning, or have existed from all eternity, he also knows that whatever is not eternal must have had a cause. When this reasoning is applied to the universe, it is necessary to prove that it was created: until that is clearly demonstrated we may reasonably suppose that it has endured from all eternity. We must prove design before we can infer a designer. The only idea which we can form of causation is derivable from the constant conjunction of objects, and the consequent inference of one from the other. In a case where two propositions are diametrically opposite, the mind believes that which is least incomprehensible; — it is easier to suppose that the universe has existed from all eternity than to conceive a being beyond its limits capable of creating it: if the mind sinks beneath the weight of one, is it an alleviation to increase the intolerability of the burthen?

The other argument, which is founded on a Man’s knowledge of his own existence, stands thus. A man knows not only that he now is, but that once he was not; consequently there must have been a cause. But our idea of causation is alone derivable from the constant conjunction of objects and the consequent inference of one from the other; and, reasoning experimentally, we can only infer from effects causes adequate to those effects. But there certainly is a generative power which is effected by certain instruments: we cannot prove that it is inherent in these instruments, nor is the contrary hypothesis capable of demonstration: we admit that the generative power is incomprehensible; but to suppose that the same effect is produced by an eternal, omniscient, omnipotent being leaves the cause in the same obscurity, but renders it more incomprehensible.

He thus reveals himself as an empiricist, a position he later amplifies with a curiously worded double-negative:

I confess that I am one of those who am unable to refuse my assent to the conclusion of those philosophers who assert that nothing exists but as it is perceived.

This is a philosophy I can’t agree with, but his use of words clearly suggests that the young Shelley had been reading David Hume’s analysis of causation.

Later he turns to the mystery of life and the sense of wonder it inspires.

Life and the world, or whatever we call that which we are and feel, is an astonishing thing. The mist of familiarity obscures from us the wonder of our being. We are struck with admiration at some of its transient modifications, but it is itself the great miracle. What are changes of empires, the wreck of dynasties, with the opinions which support them; what is the birth and the extinction of religious and of political systems, to life? What are the revolutions of the globe which we inhabit, and the operations of the elements of which it is composed, compared with life? What is the universe of stars, and suns, of which this inhabited earth is one, and their motions, and their destiny, compared with life? Life, the great miracle, we admire not because it is so miraculous. It is well that we are thus shielded by the familiarity of what is at once so certain and so unfathomable, from an astonishment which would otherwise absorb and overawe the functions of that which is its object.

Finally, I picked the following paragraph for its mention of astronomy:

If any artist, I do not say had executed, but had merely conceived in his mind the system of the sun, and the stars, and planets, they not existing, and had painted to us in words, or upon canvas, the spectacle now afforded by the nightly cope of heaven, and illustrated it by the wisdom of astronomy, great would be our admiration. Or had he imagined the scenery of this earth, the mountains, the seas, and the rivers; the grass, and the flowers, and the variety of the forms and masses of the leaves of the woods, and the colors which attend the setting and the rising sun, and the hues of the atmosphere, turbid or serene, these things not before existing, truly we should have been astonished, and it would not have been a vain boast to have said of such a man, Non merita nome di creatore, se non Iddio ed il Poeta. But now these things are looked on with little wonder, and to be conscious of them with intense delight is esteemed to be the distinguishing mark of a refined and extraordinary person. The multitude of men care not for them.

I think the multitude care just as little 200 years on.

P.S. The quotation is from the 16th Century Italian poet Torquato Tasso; in translation it reads “None deserve the name of Creator except God and the Poet”.



Deductivism and Irrationalism

Posted in Bad Statistics, The Universe and Stuff on December 11, 2010 by telescoper

Looking at my stats I find that my recent introductory post about Bayesian probability has proved surprisingly popular with readers, so I thought I’d follow it up with a brief discussion of some of the philosophical issues surrounding it.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these three have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed to, on the one hand, accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data are likely to be useful. But data can be used to update the probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour: in his view, every scientific theory begins infinitely improbable and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn was undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules…

