Archive for the Astrohype Category

Results from the Event Horizon Telescope

Posted in Astrohype, The Universe and Stuff with tags , , on April 10, 2019 by telescoper

Following yesterday’s little teaser, let me point out that there is a press conference taking place today (at 2pm Irish Summer Time, that’s 3pm Brussels) to announce a new result from the Event Horizon Telescope. The announcement will be streamed live here.

Sadly, I’m teaching at the time of the press conference so I won’t be able to watch, but that doesn’t mean that you shouldn’t!

I’ll post pictures and comments when I get back. Watch this space. Or you could watch this video…

UPDATE: Well, there we are. Here is the image of the `shadow’ of the event horizon around the black hole in M87:

The image is about 42 micro arcseconds across. I guess to people brought up on science fiction movies with fancy special effects the image is probably a little underwhelming, but it really is an excellent achievement to get that resolution. Above all, it’s a great example of scientific cooperation – 8 different telescopes all round the world. The sizeable European involvement received a substantial injection of funding from the European Union too!
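For a sense of scale it is straightforward to turn that angular size into a physical one. Taking the commonly quoted distance to M87 of roughly 16.8 Mpc and a black-hole mass of about 6.5 billion solar masses (my own back-of-the-envelope numbers, not lifted from the EHT papers themselves):

$$\theta \approx 42\,\mu\mathrm{as} \approx 2.0\times10^{-10}\,\mathrm{rad}, \qquad d \approx 16.8\,\mathrm{Mpc} \approx 5.2\times10^{23}\,\mathrm{m},$$

$$L \approx \theta d \approx 1.1\times10^{14}\,\mathrm{m} \approx 700\,\mathrm{AU} \approx 0.003\,\mathrm{pc}.$$

For comparison, the Schwarzschild radius of a $6.5\times10^{9}\,M_{\odot}$ black hole is $r_{s}=2GM/c^{2}\approx 1.9\times10^{13}\,\mathrm{m}$, so the image spans roughly $5\,r_{s}$ – in line with the expected shadow diameter of about $\sqrt{27}\,r_{s}\approx 5.2\,r_{s}$ for a Schwarzschild black hole.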

Other parameters are here:

The accompanying EU press release is here. Further information can be found here. The six publications relating to this result can be found here:


BICEP2: Is the Signal Cosmological?

Posted in Astrohype, The Universe and Stuff with tags , , on March 28, 2019 by telescoper

An article in Physics Today has just reminded me that I missed the fifth anniversary of the BICEP2 announcement of `the detection of primordial gravitational waves’. I know I’m a week late, but I thought I’d reblog the post I wrote on March 19th 2014. You will see that I was sceptical…

…and it subsequently turned out that I was right to be so.

In the Dark

I have a short gap in my schedule today so I thought I would use it to post a short note about the BICEP2 results announced to great excitement on Monday.

There has been a great deal of coverage in the popular media about a “Spectacular Cosmic Discovery” and this is mirrored by excitement at a more technical level about the theoretical implications of the BICEP2 results. Having taken a bit of time out last night to go through the discovery paper, I have to say that I think all this excitement is very premature. In that respect I agree with the result of my straw poll.

First of all let me make it clear that the BICEP2 experiment is absolutely superb. It was designed and built by top-class scientists and has clearly functioned brilliantly to improve its sensitivity so much that it has gone so…


The Negative Mass Bug

Posted in Astrohype, Open Access, The Universe and Stuff with tags , , , , , on February 25, 2019 by telescoper

You may have noticed that some time ago I posted about a paper by Jamie Farnes, published in Astronomy & Astrophysics and available on the arXiv here, which suggests that material with negative mass might account for dark energy and/or dark matter.

Here is the abstract of said paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

Well, there’s a new paper just out on the arXiv by Hector Socas-Navarro, with the following abstract:

A recent work by Farnes (2018) proposed an alternative cosmological model in which both dark matter and dark energy are replaced with a single fluid of negative mass. This paper presents a critical review of that model. A number of problems and discrepancies with observations are identified. For instance, the predicted shape and density of galactic dark matter halos are incorrect. Also, halos would need to be less massive than the baryonic component or they would become gravitationally unstable. Perhaps the most challenging problem in this theory is the presence of a large-scale version of the `runaway’ effect, which would result in all galaxies moving in random directions at nearly the speed of light. Other more general issues regarding negative mass in general relativity are discussed, such as the possibility of time-travel paradoxes.

Among other things there is this:

After initially struggling to reproduce the F18 results, a careful inspection of his source code revealed a subtle bug in the computation of the gravitational acceleration. Unfortunately, the simulations in F18 are seriously compromised by this coding error whose effect is that the gravitational force decreases with the inverse of the distance, instead of the distance squared.

Oh dear.
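To illustrate the kind of slip being described – what follows is just a minimal sketch of a direct-summation N-body acceleration routine in Python, not Farnes’s actual code – the difference between the correct inverse-square force and the buggy inverse-distance one comes down to a single exponent:

import numpy as np

def acceleration(pos, masses, G=1.0, eps=0.1):
    # Direct-summation gravitational acceleration on each particle, with Plummer
    # softening: a_i = G * sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dr = pos - pos[i]                      # separation vectors to every particle
        r2 = np.sum(dr**2, axis=1) + eps**2    # softened squared separations
        r2[i] = np.inf                         # exclude self-interaction
        # Correct: divide the separation vector by r^3, giving a force ~ 1/r^2.
        acc[i] = G * np.sum(masses[:, None] * dr / r2[:, None]**1.5, axis=0)
        # The bug described above amounts to using r2[:, None]**1.0 here instead,
        # i.e. dividing by r^2, which makes the force fall off as 1/r.
    return acc

With negative masses included among the m_j, getting the radial dependence wrong changes the behaviour of the simulated haloes completely, which is why the error matters so much for the paper’s conclusions.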

I don’t think I need go any further into this particular case, which would just rub salt into the wounds of Farnes (2018), but I will make a general comment. Peer review is the best form of quality stamp that we have but, as this case demonstrates, it is by no means flawless. The paper by Farnes (2018) was refereed and published, but is now shown to be wrong*. Just as authors can make mistakes, so can referees. I know I’ve screwed up as a referee in the past, so I’m not claiming to be better than anyone in saying this.

*This claim is contested: see the comment below.

I don’t think the lesson is that we should just scrap peer review, but I do think we need to be more imaginative about how it is used than just relying on one or two individuals to do it. This case shows that science eventually works, as the error was found and corrected, but that was only possible because the code used by Farnes (2018) was made available for scrutiny. This is not always what happens. I take this as a vindication of open science, and an example of why scientists should share their code and data to enable others to check the results. I’d like to see a system in which papers are not regarded as `final’ documents but things which can be continuously modified in response to independent scrutiny, but that would require a major upheaval in academic practice and is unlikely to happen any time soon.

In this case, in the time since publication there has been a large amount of hype about the Farnes (2018) paper, and it’s unlikely that any of the media who carried stories about the results therein will ever publish retractions. This episode does therefore illustrate the potentially damaging effect on public trust that the excessive thirst for publicity can have. So how do we balance open science against the likelihood that wrong results will be taken up by the media before the errors are found? I wish I knew!

Negative Mass, Phlogiston and the State of Modern Cosmology

Posted in Astrohype, The Universe and Stuff with tags , , on December 7, 2018 by telescoper

A graphical representation of something or other.

I’ve noticed a modest amount of hype – much of it gibberish – going around about a paper published in Astronomy & Astrophysics but available on the arXiv here which entails a suggestion that material with negative mass might account for dark energy and/or dark matter. Here is the abstract of the paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

For a skeptical commentary on this work, see here.

The idea of negative mass is by no means new, of course. If you had asked a seventeenth-century scientist the question “what happens when something burns?” the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight, implying an increase in the mass of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative mass. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, `levity’. Nowadays we would probably say `anti-gravity’.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.
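For those who like to see that statement spelled out, the relevant bit of textbook cosmology (nothing specific to the paper being discussed here) is the acceleration equation for the scale factor a(t):

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),$$

so accelerated expansion ($\ddot{a}>0$) requires $\rho + 3p/c^{2} < 0$, i.e. an equation of state $w \equiv p/(\rho c^{2}) < -1/3$. The strong energy condition asserts that this combination is non-negative; a cosmological constant, with $w=-1$, violates it, which is what “acting like a source of anti-gravity” amounts to.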

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests that the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous  industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe, as the paper that prompted this piece might be taken to suggest, the dark energy really is something like phlogiston. At least I prefer the name to quintessence. However, I think the author has missed a trick: to create a properly trendy cosmological theory he should include the concept of supersymmetry, according to which there should be a Fermionic counterpart of phlogiston called the phlogistino…

Hawking Points in the CMB Sky?

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , on October 30, 2018 by telescoper

As I wait in Cardiff Airport for a flight back to civilization, I thought I’d briefly mention a paper that appeared on the arXiv this summer. The abstract of this paper (by Daniel An, Krzysztof A. Meissner and Roger Penrose) reads as follows:

This paper presents powerful observational evidence of anomalous individual points in the very early universe that appear to be sources of vast amounts of energy, revealed as specific signals found in the CMB sky. Though seemingly problematic for cosmic inflation, the existence of such anomalous points is an implication of conformal cyclic cosmology (CCC), as what could be the Hawking points of the theory, these being the effects of the final Hawking evaporation of supermassive black holes in the aeon prior to ours. Although of extremely low temperature at emission, in CCC this radiation is enormously concentrated by the conformal compression of the entire future of the black hole, resulting in a single point at the crossover into our current aeon, with the emission of vast numbers of particles, whose effects we appear to be seeing as the observed anomalous points. Remarkably, the B-mode location found by BICEP 2 is at one of these anomalous points.

The presence of Roger Penrose in the author list of this paper is no doubt a factor that contributed to the substantial amount of hype surrounding it, but although he is the originator of the Conformal Cyclic Cosmology I suspect he didn’t have anything to do with the data analysis presented in the paper as, great mathematician though he is, data analysis is not his forte.

I have to admit that I am very skeptical of the claims made in this paper – as I was in the previous case of claims of evidence in favour of the Penrose model. In that case the analysis was flawed because it did not properly calculate the probability of the claimed anomalies arising in the standard model of cosmology. Moreover, the addition of a reference to BICEP2 at the end of the abstract doesn’t strengthen the case. The detection claimed by BICEP2 was (a) in polarization, not in temperature, and (b) is now known to be consistent with galactic foregrounds.
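For what it’s worth, here is the sort of calculation I have in mind – a minimal sketch of my own in Python, emphatically not the pipeline used by An et al. – in which whatever statistic is used to flag “anomalous” points on the real sky is compared with its distribution over simulated Gaussian skies generated from the standard-model power spectrum and searched in exactly the same way:

import numpy as np
import healpy as hp

def hottest_spot(cmb_map, fwhm_deg=1.0):
    # Most extreme smoothed temperature anywhere on the sky; a stand-in for
    # whatever point/ring statistic one actually cares about.
    smoothed = hp.smoothing(cmb_map, fwhm=np.radians(fwhm_deg))
    return np.max(np.abs(smoothed))

def p_value(observed_map, cl_theory, nside=256, nsims=1000, seed=42):
    # Fraction of simulated LCDM skies whose most extreme spot is at least as
    # extreme as the observed one; this guards against being impressed by
    # features that turn up somewhere in any Gaussian sky.
    np.random.seed(seed)
    obs_stat = hottest_spot(observed_map)
    exceed = 0
    for _ in range(nsims):
        sim = hp.synfast(cl_theory, nside)  # Gaussian sky with the LCDM spectrum
        if hottest_spot(sim) >= obs_stat:
            exceed += 1
    return exceed / nsims

Without that final comparison, the quoted significance of any particular `anomalous point’ is very hard to interpret.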

I will, however, hold my tongue on these claims, at least for the time being. I have an MSc student at Maynooth who is going to try to reproduce the analysis (which is not trivial, as the description in the paper is extremely vague). Watch this space.

EDGES and Foregrounds

Posted in Astrohype, The Universe and Stuff with tags , , , on September 3, 2018 by telescoper

Earlier this year I wrote a brief post about a paper by Bowman et al. from the EDGES experiment that had just come out in Nature, reporting the detection of a flattened absorption profile in the sky-averaged radio spectrum, centred at a frequency of 78 megahertz, largely consistent with expectations for the 21-centimetre signal induced by early stars. It caused a lot of excitement at the time; see, e.g., here.
The key plot from the paper is this:

At the time I said that I wasn’t entirely convinced. Although the paper is very good at describing the EDGES experiment, it is far less convincing that all necessary foregrounds and systematics have been properly accounted for. There are many artefacts that could mimic the signal shown in the diagram.

I went on to say

If true, the signal is quite a lot larger in amplitude than standard models predict. That doesn’t mean that it must be wrong – I’ve never gone along with the saying `never trust an experimental result until it is confirmed by theory’ – but it’s way too early to claim that it proves that some new exotic physics is involved. The real explanation may be far more mundane.

There’s been a lot of media hype about this result – reminiscent of the BICEP bubble – and, while I agree that if it is true it is an extremely exciting result, I think it’s far too early to be certain of what it really represents. To my mind there’s a significant chance this could be a false cosmic dawn.

I gather the EDGES team is going to release its data publicly. That will be good, as independent checks of the data analysis would be very valuable.

Well, there’s a follow-up paper that I missed when it appeared on the arXiv in May, the abstract of which reads:

We have re-analyzed the data in which Bowman et al. (2018) identified a feature that could be due to cosmological 21-cm line absorption in the intergalactic medium at redshift z~17. If we use exactly their procedures then we find almost identical results, but the fits imply either non-physical properties for the ionosphere or unexpected structure in the spectrum of foreground emission (or both). Furthermore we find that making reasonable changes to the analysis process, e.g., altering the description of the foregrounds or changing the range of frequencies included in the analysis, gives markedly different results for the properties of the absorption profile. We can in fact get what appears to be a satisfactory fit to the data without any absorption feature if there is a periodic feature with an amplitude of ~0.05 K present in the data. We believe that this calls into question the interpretation of these data as an unambiguous detection of the cosmological 21-cm absorption signature.
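To see why the fits are so slippery, here is a toy sketch of my own in Python – not the EDGES or Hills et al. code – of the basic model that gets fitted: a smooth foreground (a power law multiplied by a low-order polynomial in log-frequency) plus a flattened-Gaussian absorption profile of the general form used by Bowman et al. The two components can trade off against each other, which is exactly the kind of degeneracy the re-analysis highlights.

import numpy as np
from scipy.optimize import curve_fit

def foreground(nu, a0, a1, a2, a3, a4, nu0=75.0):
    # Smooth foreground: a nu^-2.5 power law times a 4th-order polynomial in log(nu/nu0)
    x = nu / nu0
    poly = a0 + a1*np.log(x) + a2*np.log(x)**2 + a3*np.log(x)**3 + a4*np.log(x)**4
    return x**-2.5 * poly

def absorption(nu, amp, nu_c, width, tau):
    # Flattened-Gaussian absorption profile (the general form fitted by Bowman et al. 2018)
    B = (4 * (nu - nu_c)**2 / width**2) * np.log(-np.log((1 + np.exp(-tau)) / 2) / tau)
    return -amp * (1 - np.exp(-tau * np.exp(B))) / (1 - np.exp(-tau))

def model(nu, a0, a1, a2, a3, a4, amp, nu_c, width, tau):
    return foreground(nu, a0, a1, a2, a3, a4) + absorption(nu, amp, nu_c, width, tau)

# Given measured sky temperatures T_sky at frequencies nu one would then do
#   popt, pcov = curve_fit(model, nu, T_sky, p0=initial_guess)
# and find, as Hills et al. do, that changing the polynomial order or the
# frequency range can change the recovered absorption profile substantially.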

You can read the full paper here (PDF). I haven’t kept up with this particular story, so further comments/updates/references are welcome through the box below!

The Dark Matter of Astronomy Hype

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , , , , on April 16, 2018 by telescoper

Just before Easter (and, perhaps more significantly, just before April Fool’s Day) a paper by van Dokkum et al. was published in Nature with the title A Galaxy Lacking Dark Matter. As is often the case with scientific publications presented in Nature, the press machine kicked into action and stories about this mysterious galaxy appeared in print and online all round the world.

So what was the result? Here’s the abstract of the Nature paper:

 

Studies of galaxy surveys in the context of the cold dark matter paradigm have shown that the mass of the dark matter halo and the total stellar mass are coupled through a function that varies smoothly with mass. Their average ratio Mhalo/Mstars has a minimum of about 30 for galaxies with stellar masses near that of the Milky Way (approximately 5 × 10¹⁰ solar masses) and increases both towards lower masses and towards higher masses. The scatter in this relation is not well known; it is generally thought to be less than a factor of two for massive galaxies but much larger for dwarf galaxies. Here we report the radial velocities of ten luminous globular-cluster-like objects in the ultra-diffuse galaxy NGC1052–DF2, which has a stellar mass of approximately 2 × 10⁸ solar masses. We infer that its velocity dispersion is less than 10.5 kilometres per second with 90 per cent confidence, and we determine from this that its total mass within a radius of 7.6 kiloparsecs is less than 3.4 × 10⁸ solar masses. This implies that the ratio Mhalo/Mstars is of order unity (and consistent with zero), a factor of at least 400 lower than expected. NGC1052–DF2 demonstrates that dark matter is not always coupled with baryonic matter on galactic scales.

 

I had a quick look at the paper at the time and wasn’t very impressed by the quality of the data. To see why, look at the main plot, a histogram formed from just ten observations (of globular clusters used as velocity tracers):

I didn’t have time to read the paper thoroughly before the Easter weekend, but I did draft a sceptical blog post on it, only to decide not to publish it as I thought it might be too inflammatory even by my standards! Suffice it to say that I was unconvinced.

Anyway, it turns out I was far from the only astrophysicist to have doubts about this result; you can find a nice summary of the discussion on social media here and here. Fortunately, people more expert than me have found the time to look in more detail at the van Dokkum et al. claim. There’s now a paper on the arXiv by Martin et al., the abstract of which reads:

It was recently proposed that the globular cluster system of the very low surface-brightness galaxy NGC1052-DF2 is dynamically very cold, leading to the conclusion that this dwarf galaxy has little or no dark matter. Here, we show that a robust statistical measure of the velocity dispersion of the tracer globular clusters implies a mundane velocity dispersion and a poorly constrained mass-to-light ratio. Models that include the possibility that some of the tracers are field contaminants do not yield a more constraining inference. We derive only a weak constraint on the mass-to-light ratio of the system within the half-light radius or within the radius of the furthest tracer (M/L_V<8.1 at the 90-percent confidence level). Typical mass-to-light ratios measured for dwarf galaxies of the same stellar mass as NGC1052-DF2 are well within this limit. With this study, we emphasize the need to properly account for measurement uncertainties and to stay as close as possible to the data when determining dynamical masses from very small data sets of tracers.
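The statistical point is easy to illustrate with a toy version of the inference – my own sketch in Python, not the analysis in either paper: with only ten tracer velocities, each with a sizeable measurement error, the likelihood for the intrinsic velocity dispersion (and hence the dynamical mass) is inevitably broad, and the quoted limit depends on exactly how the tail is treated.

import numpy as np

def log_likelihood(sigma, v, verr):
    # Simple Gaussian model: each measured velocity scatters about the mean
    # with variance sigma^2 (intrinsic) + verr^2 (measurement error).
    var = sigma**2 + verr**2
    resid = v - np.mean(v)
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

# Ten mock velocities (km/s) drawn from a true dispersion of 8 km/s, with
# per-object errors broadly comparable to those in the NGC1052-DF2 data.
rng = np.random.default_rng(1)
verr = rng.uniform(5.0, 15.0, size=10)
v = rng.normal(0.0, np.sqrt(8.0**2 + verr**2))

# Posterior on a grid with a flat prior in sigma, and a 90 per cent upper limit.
sigmas = np.linspace(0.1, 40.0, 400)
logp = np.array([log_likelihood(s, v, verr) for s in sigmas])
post = np.exp(logp - logp.max())
post /= np.trapz(post, sigmas)
cdf = np.cumsum(post) * (sigmas[1] - sigmas[0])
print("90%% upper limit on sigma: %.1f km/s" % sigmas[np.searchsorted(cdf, 0.9)])

Re-running a toy like this with a different estimator, or with the most extreme tracer treated as a possible interloper, shifts the limit noticeably, which is essentially the point Martin et al. are making about staying close to the data with such small samples.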

More information about this system has been posted by Pieter van Dokkum on his website here.

Whatever the final analysis of NGC1052-DF2 turns out to show, it is undoubtedly an interesting system. It may indeed turn out to have less dark matter than expected, though I don’t think the evidence available right now warrants such an inference with so much confidence. What worries me most, however, is the way this result was presented in the media, with virtually no regard for the manifest statistical uncertainty inherent in the analysis. This kind of hype can be extremely damaging to science in general, and to explain why I’ll go off on a rant that I’ve indulged in a few times before on this blog.

A few years ago there was an interesting paper  (in Nature of all places), the opening paragraph of which reads:

The past few years have seen a slew of announcements of major discoveries in particle astrophysics and cosmology. The list includes faster-than-light neutrinos; dark-matter particles producing γ-rays; X-rays scattering off nuclei underground; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms; and in my view, that is the probable fate of the rest.

The piece went on to berate physicists for being too trigger-happy in claiming discoveries, the BICEP2 fiasco being a prime example. I agree that this is a problem, but it goes far beyond physics. In fact it’s endemic throughout science. A major cause of it is the abuse of statistical reasoning.

Anyway, I thought I’d take the opportunity to re-iterate why I think statistics and statistical reasoning are so important to science. In fact, I think they lie at the very core of the scientific method, although I am still surprised how few practising scientists are comfortable with even basic statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance it has to question itself. Getting this message wrong – whether by error or on purpose – is immensely dangerous.

Statistical reasoning also applies to many facets of everyday life, including business, commerce, transport, the media, and politics. Probability even plays a role in personal relationships, though mostly at a subconscious level. It is a feature of everyday life that science and technology are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design.

Civilized societies face rigorous challenges in this century. We must confront the threat of climate change and forthcoming energy crises. We must find better ways of resolving conflicts peacefully lest nuclear or chemical or even conventional weapons lead us to global catastrophe. We must stop large-scale pollution or systematic destruction of the biosphere that nurtures us. And we must do all of these things without abandoning the many positive things that science has brought us. Abandoning science and rationality by retreating into religious or political fundamentalism would be a catastrophe for humanity.

Unfortunately, recent decades have seen a wholesale breakdown of trust between scientists and the public at large. This is due partly to the deliberate abuse of science for immoral purposes, and partly to the sheer carelessness with which various agencies have exploited scientific discoveries without proper evaluation of the risks involved. The abuse of statistical arguments has undoubtedly contributed to the suspicion with which many individuals view science.

There is an increasing alienation between scientists and the general public. Many fewer students enrol for courses in physics and chemistry than a few decades ago. Fewer graduates mean fewer qualified science teachers in schools. This is a vicious cycle that threatens our future. It must be broken.

The danger is that the decreasing level of understanding of science in society means that knowledge (as well as its consequent power) becomes concentrated in the minds of a few individuals. This could have dire consequences for the future of our democracy. Even as things stand now, very few Members of Parliament are scientifically literate. How can we expect to control the application of science when the necessary understanding rests with an unelected “priesthood” that is hardly understood by, or represented in, our democratic institutions?

Very few journalists or television producers know enough about science to report sensibly on the latest discoveries or controversies. As a result, important matters that the public needs to know about do not appear at all in the media, or if they do it is in such a garbled fashion that they do more harm than good.

Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.

Some scientists offer the oversimplified version at the outset, of course, and these are the ones that contribute to the image of scientists as priests. Such individuals often believe in their theories in exactly the same way that some people believe religiously. Not with the conditional and possibly temporary belief that characterizes the scientific method, but with the unquestioning fervour of an unthinking zealot. This approach may pay off for the individual in the short term, in popular esteem and media recognition – but when it goes wrong it is science as a whole that suffers. When a result that has been proclaimed certain is later shown to be false, the result is widespread disillusionment.

The worst example of this tendency that I can think of is the constant use of the phrase “Mind of God” by theoretical physicists to describe fundamental theories. This is not only meaningless but also damaging. As scientists we should know better than to use it. Our theories do not represent absolute truths: they are just the best we can do with the available data and the limited powers of the human mind. We believe in our theories, but only to the extent that we need to accept working hypotheses in order to make progress. Our approach is pragmatic rather than idealistic. We should be humble and avoid making extravagant claims that can’t be justified either theoretically or experimentally.

The more that people get used to the image of “scientist as priest” the more dissatisfied they are with real science. Most of the questions asked of scientists simply can’t be answered with “yes” or “no”. This leaves many with the impression that science is very vague and subjective. The public also tend to lose faith in science when it is unable to come up with quick answers. Science is a process, a way of looking at problems, not a list of ready-made answers to impossible problems. Of course it is sometimes vague, but I think it is vague in a rational way and that’s what makes it worthwhile. It is also the reason why science has led to so many objectively measurable advances in our understanding of the World.

I don’t have any easy answers to the question of how to cure this malaise, but do have a few suggestions. It would be easy for a scientist such as myself to blame everything on the media and the education system, but in fact I think the responsibility lies mainly with ourselves. We are usually so obsessed with our own research, and the need to publish specialist papers by the lorry-load in order to advance our own careers that we usually spend very little time explaining what we do to the public or why.

I think every working scientist in the country should be required to spend at least 10% of their time working in schools or with the general media on “outreach”, including writing blogs like this. People in my field – astronomers and cosmologists – do this quite a lot, but these are areas where the public has some empathy with what we do. If only biologists, chemists, nuclear physicists and the rest were viewed in such a friendly light. Doing this sort of thing is not easy, especially when it comes to saying something on the radio that the interviewer does not want to hear. Media training for scientists has been a welcome recent innovation for some branches of science, but most of my colleagues have never had any help at all in this direction.

The second thing that must be done is to improve the dire state of science education in schools. Over the last two decades the national curriculum for British schools has been dumbed down to the point of absurdity. Pupils that leave school at 18 having taken “Advanced Level” physics do so with no useful knowledge of physics at all, even if they have obtained the highest grade. I do not at all blame the students for this; they can only do what they are asked to do. It’s all the fault of the educationalists, who have done their best for a long time to convince our young people that science is too hard for them. Science can be difficult, of course, and not everyone will be able to make a career out of it. But that doesn’t mean that it should not be taught properly to those that can take it in. If some students find it is not for them, then so be it. We don’t need everyone to be a scientist, but we do need many more people to understand how science really works.

I realise I must sound very gloomy about this, but I do think there are good prospects that the gap between science and society may gradually be healed. The fact that the public distrust scientists leads many of them to question us, which is a very good thing. They should question us and we should be prepared to answer them. If they ask us why, we should be prepared to give reasons. If enough scientists engage in this process then what will emerge is an understanding of the enduring value of science. I don’t just mean through the DVD players and computer games science has given us, but through its cultural impact. It is part of human nature to question our place in the Universe, so science is part of what we are. It gives us purpose. But it also shows us a way of living our lives. Except for a few individuals, the scientific community is tolerant, open, internationally-minded, and imbued with a philosophy of cooperation. It values reason and looks to the future rather than the past. Like anyone else, scientists will always make mistakes, but we can always learn from them. The logic of science may not be infallible, but it’s probably the best logic there is in a world so filled with uncertainty.