Archive for supernovae

Redshift and Distance in Cosmology

Posted in The Universe and Stuff with tags , , , , , on April 29, 2019 by telescoper

I was looking for a copy of this picture this morning and when I found it I thought I’d share it here. It was made by Andy Hamilton and appears in this paper. I used it (with permission) in the textbook I wrote with Francesco Lucchin, which was published in 2003.

I think this is a nice simple illustration of the effect of the density parameter Ω and the cosmological constant Λ on the relationship between redshift and (comoving) distance in the standard cosmological models based on the Friedmann equations.

On the left there is the old standard model (from when I was a lad), in which space is Euclidean and there is a critical density of matter; this is called the Einstein-de Sitter model, in which Λ=0. On the right you can see something much closer to the current standard model of cosmology, with a lower density of matter but with the addition of a cosmological constant. Notice that in the latter case the distance to an object at a given redshift is far larger than in the former. This is, for example, why supernovae at high redshift look much fainter in the latter model than in the former, and why these measurements are so sensitive to the presence of a cosmological constant.

In the middle there is a model with no cosmological constant but a low density of matter; this is an open Universe. Because it decelerates much more slowly than the Einstein-de Sitter model, the distance out to a given redshift is larger (though not quite as large as in the case on the right, which is an accelerating model), but the main property of interest in the open model is that space is not Euclidean, but curved. The effect of this is that an object of fixed physical size at a given redshift subtends a much smaller angle than in the cases on either side. That shows why observations of the pattern of variations in the temperature of the cosmic microwave background across the sky yield so much information about the spatial geometry.
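
The ordering of distances described above is easy to verify numerically. Here is a small illustrative Python sketch (my own, not from the paper or the textbook) that integrates the comoving distance d_C = c ∫ dz′/H(z′) for the three models in the picture; the value of H0 is an arbitrary assumption, and only the relative sizes of the distances matter:

```python
import math

def comoving_distance(z, omega_m, omega_l, h0=70.0, steps=10000):
    """Comoving distance in Mpc: d_C = c * integral_0^z dz' / H(z').
    Curvature enters through omega_k = 1 - omega_m - omega_l."""
    c = 299792.458  # speed of light in km/s
    omega_k = 1.0 - omega_m - omega_l
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz  # midpoint rule
        e = math.sqrt(omega_m * (1 + zi)**3 + omega_k * (1 + zi)**2 + omega_l)
        total += dz / e
    return (c / h0) * total

# The three models in the figure, compared at redshift z = 1:
for label, om, ol in [("Einstein-de Sitter", 1.0, 0.0),
                      ("open, low density ", 0.3, 0.0),
                      ("flat, with Lambda ", 0.3, 0.7)]:
    print(f"{label}: d_C(z=1) = {comoving_distance(1.0, om, ol):.0f} Mpc")
```

The Einstein-de Sitter distance comes out smallest, the open model larger, and the Λ model largest of all, which is exactly the trend in the picture.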

It’s a very instructive picture, I think!


Eight Papers from the Dark Energy Survey

Posted in The Universe and Stuff with tags , , , , on November 9, 2018 by telescoper

Just a quick post to point out the exciting news that this week a clutch of papers on cosmology using Type Ia supernovae has been released by the Dark Energy Survey team. Naturally, all of them are on the arXiv. You can also read them here. For convenience I’ve provided links below to the arXiv versions through their titles:

  1. Steve: A hierarchical Bayesian model for Supernova Cosmology
  2. First Cosmology Results Using Type Ia Supernovae from the Dark Energy Survey: Effects of Chromatic Corrections to Supernova Photometry on Measurements of Cosmological Parameters
  3. First Cosmology Results using Type Ia Supernova from the Dark Energy Survey: Simulations to Correct Supernova Distance Biases
  4. First Cosmology Results Using Type Ia Supernovae From the Dark Energy Survey: Photometric Pipeline and Light Curve Data Release
  5. First Cosmology Results Using Type Ia Supernovae From the Dark Energy Survey: Analysis, Systematic Uncertainties, and Validation
  6. First Cosmological Results using Type Ia Supernovae from the Dark Energy Survey: Measurement of the Hubble Constant
  7. Cosmological Constraints from Multiple Probes in the Dark Energy Survey
  8. First Cosmology Results using Type Ia Supernovae from the Dark Energy Survey: Constraints on Cosmological Parameters

Here’s a plot showing some of the cosmological constraints:

The parameter plotted on the vertical axis is the dark energy equation of state parameter, w, and w=-1 corresponds to a cosmological constant.

For those of you particularly interested in the Hubble constant, the headline value from Paper 6 is H0 = 67.77 +/- 1.30 km s-1 Mpc-1. This is closer to the value obtained from Planck and in tension with other values, as I’ve blogged about before, and gives me an excuse to continue my online poll:

A Non-accelerating Universe?

Posted in Astrohype, The Universe and Stuff with tags , , , , , on October 26, 2016 by telescoper

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis but I’ve skimmed over what they’ve done and as far as I can tell it looks like a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version) in terms of the dark energy (y-axis) and matter (x-axis) density parameters:


Models lying on the line shown in this plane have the correct balance between Ωm and ΩΛ to cancel the decelerating effect of the former against the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours show “1, 2 and 3σ” regions, regarding all other parameters as nuisance parameters. It is true that the line of no acceleration does go inside the 3σ contour, so in that sense it is not entirely inconsistent with the data. On the other hand, the “best fit” (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
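
For concreteness, the line of no acceleration in that plane is where the present-day deceleration parameter q0 = Ωm/2 − ΩΛ vanishes. A minimal sketch of that bookkeeping (my own, not from the paper):

```python
def q0(omega_m, omega_l):
    """Present-day deceleration parameter for a universe containing
    pressureless matter and a cosmological constant:
    q0 = omega_m / 2 - omega_l  (q0 > 0 decelerating, q0 < 0 accelerating)."""
    return omega_m / 2.0 - omega_l

# The Milne model (an entirely empty universe) simply coasts:
assert q0(0.0, 0.0) == 0.0
# The line of no acceleration is omega_l = omega_m / 2:
assert q0(0.6, 0.3) == 0.0
# The paper's best fit lies on the accelerating side of that line:
print(q0(0.341, 0.569))  # negative, i.e. an accelerating universe
```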

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity and the assumption that the Universe is on large-scales is homogeneous and isotropic and with certain kinds of matter and energy then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!

Making Massive Black Hole Binaries Merge

Posted in The Universe and Stuff with tags , , , , , on February 16, 2016 by telescoper

Many fascinating questions remain unanswered by last week’s detection by LIGO of gravitational waves produced by a coalescing binary black hole system (GW150914). One of these is whether the similarity of the component masses (29 and 36 times the mass of the Sun, respectively) is significant.

An interesting paper appeared on the arXiv last week by Marchant et al. that touches on this. Here is the abstract (you can click on it to make it larger):



Although there is some technical jargon, the point is relatively clear. It appears that very massive, very low metallicity binary stars can evolve into black hole binary systems via supernova explosions without disrupting their orbit. The term ‘low metallicity’ characterises stars that form from primordial material (i.e. basically hydrogen and helium) early in the cycle of stellar evolution. Such material has very different opacity properties from material with significant quantities of heavier elements in it, which alters the dynamical evolution considerably.

(Remember that to an astrophysicist, chemistry is extremely simple. Hydrogen and helium make up most of the atomic matter in the Universe; all the rest is called “metals” including carbon, nitrogen, and oxygen…. )

Anyway, this theoretical paper is relevant because the mass ratios produced by this mechanism are expected to be of order unity, as is the case of GW150914.  One observation doesn’t prove much, but it’s definitely Quite Interesting…

Incidentally, it has been reported that another gravitational wave source may have been detected by LIGO, in October last year. This isn’t as clean a signal as the first, so it will require further analysis before a definitive result is claimed, but it too seems to be a black hole binary system with a mass ratio of order unity…

You wait forty years for a gravitational wave signal from a binary black hole merger and then two come along in quick succession…




Ligatures, Diphthongs and Supernovae

Posted in History, Pedantry, The Universe and Stuff with tags , , , , , , , , on January 18, 2016 by telescoper

At the weekend I noticed a nice article by John Butterworth on his Grauniad blog about where Gold comes from. Regular readers of this blog (Sid and Doris Bonkers) know that I am not at all pedantic but my attention was drawn to the plural of supernova in the preamble:


I have to confess that I much prefer the Latin plural “supernovae” to the modernised “supernovas”, although most dictionaries (including the One True Chambers) give both as valid forms. In the interest of full disclosure I will point out that I did five years of Latin at school, and very much enjoyed it…

When I tweeted about my dislike for supernovas and preference for supernovae, some replied that English words should have English plurals, so that supernovas was to be preferred (although I wonder if that logic extends to, e.g., datums and phenomenons). Others said that supernovae was fine among experts but for science communication purposes it was better to say “supernovas” as this more obviously means “more than one supernova”. That’s a reasonable argument, but I have to admit I find it a little condescending to assume that an audience can cope with the idea of a massive star exploding as a consequence of gravitational collapse but be utterly bewildered by a straightforward Latin plural.

One of the reasons I prefer the Latin plural – along with some other forms that may appear archaic, e.g. Nebulae – is that Astronomy is unique among sciences for having such a long history. Many astronomical terms derive from very ancient sources and in my view we should celebrate this fact because it’s part of the subject’s fascination. That’s just my opinion, of course. You are welcome to disagree with that too.

Anyway, you might be interested to know a couple of things. One is that the first use of “super-nova” recorded in the Oxford English Dictionary was in 1932 in a paper by Swedish astronomer Knut Lundmark. This word is however formed from “nova” (which means “new” in Latin) and the first use of this term in an astronomical setting was in a book by Tycho Brahe, published in 1573:

(I’ll leave it as an exercise to the student to translate the full title.)

Nowadays a nova is taken to be a much lower budget feature than a supernova, but the “nova” described in Tycho’s book was actually a supernova, SN1572, which he, along with many others, had observed the previous year. Historical novae were very often supernovae, in fact, because supernovae are much brighter than mere novae. The real difference between these two classes of object wasn’t understood until the 20th Century, however, which is why the term supernova was coined much later than nova.

Anyway, back to pedantry.

A subsequent tweet from Roberto Trotta asserted  that in fact supernovae and supernovas are both wrong; the correct plural should be supernovæ, in which the two letters of the digraph “ae” are replaced with a single glyph known as a ligature. Often, as in this case, a ligature stands for a diphthong, a sort of composite vowel sound made by running two vowels together.   It’s one of the peculiarities of English that there are only five vowels, but these can represent quite different sounds depending on the context (and on the regional accent). This  means that English has many hidden diphthongs. For example,  the “o” in “no” is a diphthong in English. In languages such as Italian, in which the vowels are very pure, “no” is pronounced quite differently from English. The best test of whether a vowel is pure or not is whether your mouth changes shape as you pronounce it: your mouth moves as you say an English “no”, closing the vowel that stays open in the Italian “no”…

So, not all diphthongs are represented by ligatures. It’s also the case that not all ligatures represent diphthongs. Indeed some are composed entirely of consonants. My current employer’s logo features a ligature formed from the letters U and S:


The use of the ligature æ arose in Mediaeval Latin (or should I say Mediæval?). In fact if you look at the frontispiece of the Brahe book shown above you will see a number of examples of it in its upper-case form Æ. I’m by no means an expert in such things but my guess is that the use of such ligatures in printed works was favoured simply to speed up the typesetting process – which was very primitive – by allowing the compositor to use a single piece of type to set two characters. However, it does appear in handwritten documents e.g. in Old English, long before printing was invented so easier typesetting doesn’t explain it all.

Use of the specific ligature in question caught on particularly well in Scandinavia, where it eventually became promoted to a letter in its own right (“aesc”) and is listed as a separate vowel in the modern Danish and Norwegian alphabets. Early word-processing and computer typesetting software generally couldn’t render ligatures because they were just too complicated, so their use fell out of favour in the Eighties, though there are significant exceptions to this rule. LaTeX, for example, always allowed ligatures to be created quite easily. Software – even Microsoft Word – is much more sophisticated than it used to be, so it’s now not so much of a problem to use ligatures in digital text. Maybe they will make a comeback!

Anyway, the use of æ was optional even in Mediaeval Latin so I don’t think it can be argued that supernovæ is really more correct than supernovae, though to go back to a point I made earlier, I do admit that a rambling discussion of ligatures and diphthongs would not add much to a public lecture on exploding stars.


Science, Religion and Henry Gee

Posted in Bad Statistics, Books, Talks and Reviews, Science Politics, The Universe and Stuff with tags , , , , , , , , , on September 23, 2013 by telescoper

Last week a piece appeared on the Grauniad website by Henry Gee who is a Senior Editor at the magazine Nature.  I was prepared to get a bit snarky about the article when I saw the title, as it reminded me of an old  rant about science being just a kind of religion by Simon Jenkins that got me quite annoyed a few years ago. Henry Gee’s article, however, is actually rather more coherent than that and  not really deserving of some of the invective being flung at it.

For example, here’s an excerpt that I almost agree with:

One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the “significance” of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one’s grasp.

I’ve made the point on this blog many times that, although statistical reasoning lies at the heart of the scientific method, we don’t do anywhere near enough  to teach students how to use probability properly; nor do scientists do enough to explain the uncertainties in their results to decision makers and the general public.  I also agree with the concluding thought, that science isn’t about absolute truths. Unfortunately, Gee undermines his credibility by equating statistical reasoning with p-values which, in my opinion, are a frequentist aberration that contributes greatly to the public misunderstanding of science. Worse, he even gets the wrong statistics wrong…
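
To illustrate the point about p-values with a toy example of my own (not Gee’s): under a true null hypothesis the p-value carries no evidence of an effect at all, and by construction a threshold of 0.05 is crossed in about 5% of experiments anyway. The “significance” of that is, as Gee says, a matter of judgement:

```python
import math
import random

def p_value_two_sided_z(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a z-test for the mean of `sample` against
    mu0, with known standard deviation sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) for Z ~ N(0,1)

# Simulate many experiments in which the null hypothesis is TRUE:
rng = random.Random(42)
trials = 2000
false_alarms = sum(
    p_value_two_sided_z([rng.gauss(0.0, 1.0) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
# Roughly 5% of null experiments come out "significant at p < 0.05";
# the threshold itself is pure convention.
print(false_alarms / trials)
```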

But the main thing that bothers me about Gee’s article is that he blames scientists for promulgating the myth of “science-as-religion”. I don’t think that’s fair at all. Most scientists I know are perfectly well aware of the limitations of what they do. It’s really the media that want to portray everything in simple black and white terms. Some scientists play along, of course, as I comment upon below, but most of us are not priests but pragmatists.

Anyway, this episode gives me the excuse to point out  that I ended a book I wrote in 1998 with a discussion of the image of science as a kind of priesthood which it seems apt to repeat here. The book was about the famous eclipse expedition of 1919 that provided some degree of experimental confirmation of Einstein’s general theory of relativity and which I blogged about at some length last year, on its 90th anniversary.

I decided to post the last few paragraphs here to show that I do think there is a valuable point to be made out of the scientist-as-priest idea. It’s to do with the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing  that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions.  The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies  and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. At the same time that science reaches further and further beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that I was voicing doubts about the interpretation of the early results from supernovae in 1998 that suggested the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain sceptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have; I just think we’re probably missing some important pieces of the puzzle. I don’t apologise for that. I think sceptical is what a scientist should be.

Skepsis Revived

Posted in Politics, The Universe and Stuff with tags , , , , , , , , on November 14, 2012 by telescoper

I appear to be in recycling mode this week, so I thought I’d carry on with a rehash of an old post about skepticism. The excuse for this was an item in one of the Guardian science blogs about the distinction between Skeptic and sceptic. I must say I always thought they were simply alternative spellings, the “k” being closer to the original Greek and the “c” being Latinised (via French). The Oxford English Dictionary merely states that “sceptic” is more widespread in the UK and Commonwealth whereas “skeptic” prevails in North America. Somehow, however, this distinction has morphed into one variant meaning a person who has a questioning attitude to, or is simply unconvinced by, what claims to be knowledge in a particular area, and another meaning a “denier”, the latter being an “anti-sceptic” who believes wholeheartedly and often without evidence in whatever is contrary to received wisdom. A scientist should, I think, be the former, but the latter represents a distinctly unscientific attitude.

Anyway, yesterday I blogged a little bit about dark energy as, according to the standard model, this accounts for about 75% of the energy budget of the Universe. It’s also something we don’t understand very well at all. To make a point, take a look at the following picture (credit to the High-z supernova search team).

 What is plotted is the redshift of each supernova (along the x-axis), which relates to the factor by which the universe has expanded since light set out from it. A redshift of 0.5 means the universe was compressed by a factor 1.5 in all dimensions at the time when that particular supernova went bang. The y-axis shows the really hard bit to get right. It’s the estimated distance (in terms of distance modulus) of the supernovae. In effect, this is a measure of how faint the sources are. The theoretical curves show the faintness expected of a standard source observed at a given redshift in various cosmological models. The bottom panel shows these plotted with a reference curve taken out so the trend is easier to see. Actually, this is quite an old plot and there are many more points now but this is the version that convinced most cosmologists when it came out about a decade ago, which is why I show it here.
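
Since the distance modulus may be unfamiliar: it is μ = m − M = 5 log10(d_L/10 pc), so each factor of ten in luminosity distance adds exactly 5 magnitudes of faintness. A quick sketch (my own, for illustration):

```python
import math

def distance_modulus(d_lum_mpc):
    """Distance modulus mu = m - M = 5 log10(d_L / 10 pc);
    with d_L given in Mpc this is 5 log10(d_L) + 25."""
    return 5.0 * math.log10(d_lum_mpc) + 25.0

# A standard candle twice as far away is about 1.5 magnitudes fainter:
delta = distance_modulus(2000.0) - distance_modulus(1000.0)
print(round(delta, 2))  # 5 * log10(2), i.e. 1.51
```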

The argument drawn from these data is that the high-redshift supernovae are fainter than one would expect in models without dark energy (represented by \Omega_{\Lambda} in the diagram). If this is true then it means the luminosity distance of these sources is greater than it would be in a decelerating universe. Their observed properties can be accounted for, however, if the universe’s expansion rate has been accelerating since light set out from the supernovae. In the bog-standard cosmological models we all like to work with, acceleration requires that \rho + 3p/c^2 be negative. The “vacuum” equation of state p=-\rho c^2 provides a simple way of achieving this, but there are many other forms of energy that could do it too, and we don’t know which one is present or why…
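
That acceleration criterion can be written per fluid component: with p = wρc², the combination ρ + 3p/c² is proportional to Σ Ω_i(1 + 3w_i), so a single component accelerates the expansion only if w < −1/3. A minimal sketch of the bookkeeping (my own, illustrative only):

```python
def accelerates(components):
    """Sign test for cosmic acceleration. Each component is a pair
    (omega, w) with equation of state p = w * rho * c^2; the expansion
    accelerates when sum_i omega_i * (1 + 3 * w_i) < 0, i.e. when
    rho + 3p/c^2 is negative overall."""
    return sum(omega * (1.0 + 3.0 * w) for omega, w in components) < 0

# Pressureless matter alone (w = 0) always decelerates:
assert not accelerates([(1.0, 0.0)])
# A vacuum-dominated mixture (w = -1) accelerates:
assert accelerates([(0.3, 0.0), (0.7, -1.0)])
# The balance point, omega_l = omega_m / 2, neither accelerates nor decelerates:
assert not accelerates([(0.6, 0.0), (0.3, -1.0)])
```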

This plot contains the principal evidence that has led to most cosmologists accepting that the Universe is accelerating. However, when I show it to first-year undergraduates (or even to members of the public at popular talks), they tend to stare in disbelief. The errors are huge, they say, and there are so few data points. It just doesn’t look all that convincing. Moreover, there are other possible explanations. Maybe supernovae were different beasties back when the universe was young. Maybe something has absorbed their light, making them look fainter rather than being further away. Maybe we’ve got the cosmological models wrong.

The reason I have shown this diagram is precisely because it isn’t superficially convincing. When they see it, students probably form the opinion that all cosmologists are gullible idiots. I’m actually pleased by that.  In fact, it’s the responsibility of scientists to be skeptical about new discoveries. However, it’s not good enough just to say “it’s not convincing so I think it’s rubbish”. What you have to do is test it, combine it with other evidence, seek alternative explanations and test those. In short you subject it to rigorous scrutiny and debate. It’s called the scientific method.

Some of my colleagues express doubts about me talking as I do about dark energy in first-year lectures when the students haven’t learned general relativity. But I stick to my guns. Too many people think science has to be taught as great stacks of received wisdom, of theories that are unquestionably “right”. Frontier sciences such as cosmology give us the chance to demonstrate the process by which we find out about the answers to big questions, not by believing everything we’re told but by questioning it.

My attitude to dark energy is that, given our limited understanding of the constituents of the universe and the laws of matter, it’s the best explanation we have of what’s going on. There is corroborating evidence of missing energy, from the cosmic microwave background and measurements of galaxy clustering, so it does have explanatory power. I’d say it was quite reasonable to believe in dark energy on the basis of what we know (or think we know) about the Universe.  In other words, as a good Bayesian, I’d say it was the most probable explanation. However, just because it’s the best explanation we have now doesn’t mean it’s a fact. It’s a credible hypothesis that deserves further work, but I wouldn’t bet much against it turning out to be wrong when we learn more.

I have to say that too many cosmologists seem to accept the reality of dark energy  with the unquestioning fervour of a religious zealot.  Influential gurus have turned the dark energy business into an industrial-sized bandwagon that sometimes makes it difficult, especially for younger scientists, to develop independent theories. On the other hand, it is clearly a question of fundamental importance to physics, so I’m not arguing that such projects should be axed. I just wish the culture of skepticism ran a little deeper.

Another context in which the word “skeptic” crops up frequently nowadays is  in connection with climate change although it has come to mean “denier” rather than “doubter”. I’m not an expert on climate change, so I’m not going to pretend that I understand all the details. However, there is an interesting point to be made in comparing climate change with cosmology. To make the point, here’s another figure.

There’s obviously a lot of noise and it’s only the relatively few points at the far right that show a clear increase (just as in the first figure, in fact). However, looking at the graph I’d say that, assuming the historical data points are accurate, it looks very convincing that the global mean temperature is rising with alarming rapidity. Modelling the Earth’s climate is very difficult and we have to leave it to the experts to assess the effects of human activity on this curve. There is a strong consensus from scientific experts, as monitored by the Intergovernmental Panel on Climate Change, that it is “very likely” that the increasing temperatures are due to increased atmospheric concentrations of greenhouse gases.

There is, of course, a bandwagon effect going on in the field of climatology, just as there is in cosmology. This tends to stifle debate, make things difficult for dissenting views to be heard and evaluated rationally,  and generally hinders the proper progress of science. It also leads to accusations of – and no doubt temptations leading to – fiddling of the data to fit the prevailing paradigm. In both fields, though, the general consensus has been established by an honest and rational evaluation of data and theory.

I would say that any scientist worthy of the name should be skeptical about the human-based interpretation of these data and that, as in cosmology (or any scientific discipline), alternative theories should be developed and additional measurements made. However, this situation in climatology is very different to cosmology in one important respect. The Universe will still be here in 100 years time. We might not.

The big issue relating to climate change is not just whether we understand what’s going on in the Earth’s atmosphere, it’s the risk to our civilisation of not doing anything about it. This is a great example where the probability of being right isn’t the sole factor in making a decision. Sure, there’s a chance that humans aren’t responsible for global warming. But if we carry on as we are for decades until we prove conclusively that we are, then it will be too late. The penalty for being wrong will be unbearable. On the other hand, if we tackle climate change by adopting greener technologies, burning fewer fossil fuels, wasting less energy and so on, these changes may cost us a bit of money in the short term, but frankly we’ll be better off anyway whether we did it for the right reasons or not. Of course those whose personal livelihoods depend on the status quo are the ones who challenge the scientific consensus most vociferously. They would, wouldn’t they?

This is a good example of a decision that can be made without a precise judgement of the probability of being right. In that respect, the issue of how likely it is that the scientists are correct on this one is almost irrelevant. Even if you’re a complete disbeliever in science you should know how to respond to this issue, following the logic of Blaise Pascal. He argued that there’s no rational argument for the existence or non-existence of God, but that the consequences of not believing if God does exist (eternal damnation) were much worse than those of behaving as if you believe in God when he doesn’t. For “God” read “climate change” and let Pascal’s wager be your guide….
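
Pascal’s logic here is just an expected-loss argument. A toy decision table, with made-up loss numbers that are purely illustrative (not real climate-economics estimates), shows why the precise probability barely matters once the losses are very asymmetric:

```python
def expected_loss(p_true, loss_if_true, loss_if_false):
    """Expected loss of an action, given probability p_true that the
    hypothesis (here: dangerous human-caused warming) is true."""
    return p_true * loss_if_true + (1 - p_true) * loss_if_false

# Made-up, purely illustrative losses:
#   mitigate:   modest cost of about 1 unit whether or not warming is real
#   do nothing: catastrophic loss of about 100 units if it is real
for p in (0.1, 0.5, 0.9):
    mitigate = expected_loss(p, loss_if_true=1.0, loss_if_false=1.0)
    do_nothing = expected_loss(p, loss_if_true=100.0, loss_if_false=0.0)
    print(f"p={p}: mitigate={mitigate}, do nothing={do_nothing}")
# With these numbers mitigation has the lower expected loss for any
# p > 1/100, so the decision is insensitive to the exact probability.
```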