Archive for April, 2009

Space Time

Posted in Biographical, The Universe and Stuff on April 30, 2009 by telescoper

I thought anyone reading my rather gloomy recent posts could probably do with a laugh, so I've put this up.

These clips contain a short item I did about nine or ten years ago for the BBC series Space, which was presented by Sam Neill. Originally we were going to demonstrate wormholes using a snooker table, clever editing and reversed video. The producer, Jeremy, decided that wouldn't look spectacular enough so instead we went to St Anton in Austria: I was flown over the Alps in a helicopter and then driven through the Arlberg tunnel in an impressively fast car. Well worth the cost to licence fee payers, I'm sure, even if the three-day trip to Austria by me and a crew of six, as well as the hire of the helicopter, ended up as a mere three minutes of screen time…

The episode I was in, the last of 6 in the series, was called To Boldly Go. I remember suggesting to the producer that the only way to travel faster than light in the manner required was with a split infinitive drive, but they didn’t use that in the final script.

Notice how, in the helicopter sequence, I give the appearance of being completely terrified. A fine piece of acting by me, I thought. *Cough*

Unfortunately my bit is quite a long way into the first clip, so you need to wait until about 9:00, and it runs over the join into the second clip.

The item is daft, I know, and I don’t really believe any of that stuff about wormholes… but it was great fun doing it.


The Doomsday Argument

Posted in Bad Statistics, The Universe and Stuff on April 29, 2009 by telescoper

I don't mind admitting that as I get older I get more and more pessimistic about the prospects for humankind's survival into the distant future.

Unless there are major changes in the way it is governed, our planet may become barren and uninhabitable through war or environmental catastrophe. But I do think the future is in our hands, and disaster is, at least in principle, avoidable. In this respect I have to distance myself from a very strange argument that has been circulating among philosophers and physicists for a number of years. It is called the Doomsday argument, and it even has a sizeable Wikipedia entry, to which I refer you for more details and variations on the basic theme. As far as I am aware, it was first introduced by the mathematical physicist Brandon Carter and subsequently developed and expanded by the philosopher John Leslie (not to be confused with the TV presenter of the same name). It also re-appeared in slightly different guise in a paper in the serious scientific journal Nature by the eminent physicist Richard Gott. Evidently, for some reason, some serious people take it very seriously indeed.

The Doomsday argument uses the language of probability theory, but it is such a strange argument that I think the best way to explain it is to begin with a more straightforward problem of the same type.

Imagine you are a visitor in an unfamiliar, but very populous, city. For the sake of argument let's assume that it is in China. You know that this city is patrolled by traffic wardens, each of whom carries a number on their uniform. These numbers run consecutively from 1 (smallest) to T (largest) but you don't know what T is, i.e. how many wardens there are in total. You step out of your hotel and discover traffic warden number 347 sticking a ticket on your car. What is your best estimate of T, the total number of wardens in the city?

I gave a short lunchtime talk about this when I was working at Queen Mary College, in the University of London. Every Friday, over beer and sandwiches, a member of staff or research student would give an informal presentation about their research, or something related to it. I decided to give a talk about bizarre applications of probability in cosmology, and this problem was intended to be my warm-up. I was amazed at the answers I got to this simple question. The majority of the audience denied that one could make any inference at all about T based on a single observation like this, other than that it must be at least 347.

 Actually, a single observation like this can lead to a useful inference about T, using Bayes’ theorem. Suppose we have really no idea at all about T before making our observation; we can then adopt a uniform prior probability. Of course there must be an upper limit on T. There can’t be more traffic wardens than there are people, for example. Although China has a large population, the prior probability of there being, say, a billion traffic wardens in a single city must surely be zero. But let us take the prior to be effectively constant. Suppose the actual number of the warden we observe is t. Now we have to assume that we have an equal chance of coming across any one of the T traffic wardens outside our hotel. Each value of t (from 1 to T) is therefore equally likely. I think this is the reason that my astronomers’ lunch audience thought there was no information to be gleaned from an observation of any particular value, i.e. t=347.

 Let us simplify this argument further by allowing two alternative “models” for the frequency of Chinese traffic wardens. One has T=1000, and the other (just to be silly) has T=1,000,000. If I find number 347, which of these two alternatives do you think is more likely? Think about the kind of numbers that occupy the range from 1 to T. In the first case, most of the numbers have 3 digits. In the second, most of them have 6. If there were a million traffic wardens in the city, it is quite unlikely you would find a random individual with a number as small as 347. If there were only 1000, then 347 is just a typical number. There are strong grounds for favouring the first model over the second, simply based on the number actually observed. To put it another way, we would be surprised to encounter number 347 if T were actually a million. We would not be surprised if T were 1000.

One can extend this argument to the entire range of possible values of T, and ask a more general question: if I observe traffic warden number t, what is the probability I assign to each value of T? The answer is found using Bayes' theorem. The prior, as I assumed above, is uniform. The likelihood is the probability of the observation given the model. If I assume a value of T, the probability P(t|T) of each value of t (up to and including T) is just 1/T (since each of the wardens is equally likely to be encountered). Bayes' theorem can then be used to construct the posterior probability P(T|t). Without going through all the nuts and bolts, I hope you can see that this probability will tail off for large T. Our observation of a (relatively) small value for t should lead us to suspect that T is itself (relatively) small. Indeed it's a reasonable "best guess" that T=2t. This makes intuitive sense because the observed value of t then lies right in the middle of its range of possibilities.

Before going on, it is worth mentioning one other point about this kind of inference: it is not at all powerful. The likelihood varies only as 1/T, which means that small values are favoured over large ones, but the resulting probability is uniform in logarithmic terms. So although T=1000 is more probable than T=1,000,000, the range between 1000 and 10,000 is roughly as likely as the range between 1,000,000 and 10,000,000, assuming there is no prior information. So although it tells us something, it doesn't actually tell us very much. Just like any probabilistic inference, there's a chance that it is wrong, perhaps very wrong.
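This inference is easy to sketch numerically. The snippet below is my own illustration, not part of the original lunchtime talk: it builds the posterior P(T|t) from an (effectively) uniform prior and the 1/T likelihood, and shows both that small values of T are favoured and how weak that preference is.

```python
import numpy as np

t = 347            # observed warden number
T_max = 10**6      # effective upper limit of the uniform prior

# Posterior P(T | t): uniform prior times likelihood 1/T for T >= t, zero otherwise.
T = np.arange(1, T_max + 1)
posterior = np.where(T >= t, 1.0 / T, 0.0)
posterior /= posterior.sum()

# The posterior tails off as 1/T, so small totals are favoured...
p_1e3_to_1e4 = posterior[(T >= 10**3) & (T < 10**4)].sum()
p_1e5_to_1e6 = posterior[(T >= 10**5) & (T < 10**6)].sum()
# ...but each decade above t carries roughly equal probability mass,
# which is the logarithmic uniformity described in the text.
```

The two decade sums come out almost identical: the observation points towards small T without pinning it down.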

What does all this have to do with Doomsday? Instead of traffic wardens, we want to estimate N, the number of humans that will ever be born. Following the same logic as in the example above, I assume that I am a "randomly" chosen individual drawn from the sequence of all humans to be born, in past, present and future. For the sake of argument, assume I am number n in this sequence. The logic I explained above should lead me to conclude that the total number N is not much larger than my number, n. For the sake of argument, assume that I am the one-billionth human to be born, i.e. n=1,000,000,000. There should not then be many more than a few billion humans ever to be born. At the current rate of population growth, this means that not many more generations of humans remain to be born. Doomsday is nigh.

Richard Gott's version of this argument is logically similar, but is based on timescales rather than numbers. If whatever thing we are considering begins at some time t_begin and ends at a time t_end, and if we observe it at a "random" time between these two limits, then our best estimate for its future duration is of order how long it has lasted up until now. Gott gives the example of Stonehenge[1], which was built about 4,000 years ago: we should expect it to last a few thousand years into the future. Actually, Stonehenge is a highly dubious example. It hasn't really survived 4,000 years: it is a ruin, and nobody knows its original form or function. However, the argument goes that if we come across a building put up about twenty years ago, presumably we should think it will come down again (whether by accident or design) in about twenty years' time. If I happen to walk past a building just as it is being finished, presumably I should hang around and watch its imminent collapse….

But I’m being facetious.

Following this chain of thought, we would argue that, since humanity has been around a few hundred thousand years, it is expected to last a few hundred thousand years more. Doomsday is not quite as imminent as previously, but in any case humankind is not expected to survive sufficiently long to, say, colonize the Galaxy.
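For completeness, Gott's published form of this timescale argument comes with confidence intervals attached: if the present moment is randomly placed within a total lifetime, then with 95% confidence the future duration lies between 1/39 and 39 times the age so far. A minimal sketch (the 200,000-year age for humanity is just a round number of the order quoted above):

```python
def gott_interval(age, confidence=0.95):
    # Gott's "delta t" argument: if the observation moment is uniformly
    # distributed over the total lifetime, then with the given confidence
    # the future duration lies between age*(1-c)/(1+c) and age*(1+c)/(1-c).
    c = confidence
    return age * (1 - c) / (1 + c), age * (1 + c) / (1 - c)

low, high = gott_interval(200_000)  # humanity, taken to be ~200,000 years old
# 95% interval: from about 5,000 years to 7.8 million years of future existence
```

The interval is so wide precisely because, as with the traffic wardens, the inference is weak.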

You may reject this type of argument on the grounds that you do not accept my logic in the case of the traffic wardens. If so, I think you are wrong. I would say that if you accept all the assumptions entering into the Doomsday argument then it is an equally valid example of inductive inference. The real issue is whether it is reasonable to apply this argument at all in this particular case. There are a number of related examples that should lead one to suspect that something fishy is going on. Usually the problem can be traced back to the glib assumption that something is "random" when it is not, or when it is not clearly stated what that is supposed to mean.

There are around sixty million British people on this planet, of whom I am one. In contrast there are about 1.3 billion Chinese. If I follow the same kind of logic as in the examples I gave above, I should be very perplexed by the fact that I am not Chinese. After all, the odds are more than 20:1 against me being British, aren't they?

 Of course, I am not at all surprised by the observation of my non-Chineseness. My upbringing gives me access to a great deal of information about my own ancestry, as well as the geographical and political structure of the planet. This data convinces me that I am not a “random” member of the human race. My self-knowledge is conditioning information and it leads to such a strong prior knowledge about my status that the weak inference I described above is irrelevant. Even if there were a million million Chinese and only a hundred British, I have no grounds to be surprised at my own nationality given what else I know about how I got to be here.

This kind of conditioning information can be applied to history, as well as geography. Each individual is generated by its parents. Its parents were generated by their parents, and so on. The genetic trail of these reproductive events connects us to our primitive ancestors in a continuous chain. A well-informed alien geneticist could look at my DNA and categorize me as an "early human". I simply could not be born later in the story of humankind, even if it does turn out to continue for millennia. Everything about me – my genes, my physiognomy, my outlook, and even the fact that I am bothering to spend time discussing this so-called paradox – is contingent on my specific place in human history. Future generations will know so much more about the universe and the risks to their survival that they won't even discuss this simple argument. Perhaps we just happen to be living at the only epoch in human history in which we know enough about the Universe for the Doomsday argument to make some kind of sense, but too little to resolve it.

To see this in a slightly different light, think again about Gott's timescale argument. The other day I met an old friend from school days. It was a chance encounter, and I hadn't seen the person for over 25 years. In that time he had married, and when I met him he was accompanied by a baby daughter called Mary. If we were to take Gott's argument seriously, this was a random encounter with an entity (Mary) that had existed for less than a year. Should I infer that this entity should probably only endure another year or so? I think not. Again, bare numerological inference is rendered completely irrelevant by the conditioning information I have. I know something about babies. When I see one I realise that it is an individual at the start of its life, and I assume that it has a good chance of surviving into adulthood. Human civilization is a baby civilization. Like any youngster, it has dangers facing it. But it is not doomed by the mere fact that it is young.

John Leslie has developed many different variants of the basic Doomsday argument, and I don't have the time to discuss them all here. There is one particularly bizarre version, however, that I think merits a final word or two because it raises an interesting red herring. It's called the "Shooting Room".

 Consider the following model for human existence. Souls are called into existence in groups representing each generation. The first generation has ten souls. The next has a hundred, the next after that a thousand, and so on. Each generation is led into a room, at the front of which is a pair of dice. The dice are rolled. If the score is double-six then everyone in the room is shot and it’s the end of humanity. If any other score is shown, everyone survives and is led out of the Shooting Room to be replaced by the next generation, which is ten times larger. The dice are rolled again, with the same rules. You find yourself called into existence and are led into the room along with the rest of your generation. What should you think is going to happen?

Leslie's argument is the following. Each generation not only has more members than the previous one, but also contains more souls than have ever existed up to that point. For example, the third generation has 1000 souls, while the previous two had 10 and 100 respectively, i.e. 110 altogether. Roughly 90% of all humanity therefore lives in the latest generation. Whenever the last generation happens, there are bound to be more people in that generation than in all generations up to that point. When you are called into existence you should therefore expect to be in the last generation. You should consequently expect that the dice will show double six and the celestial firing squad will take aim. On the other hand, if you think the dice are fair then each throw is independent of the previous one and a throw of double six should have a probability of just one in thirty-six. On this basis, you should expect to survive. The odds are against the fatal score.
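Both halves of the argument can be seen in a toy simulation. This is my own sketch, using a fixed doom probability of 1/36 per generation and the ten-fold growth described above: across many simulated histories, about 90% of all souls ever created end up in the generation that is shot, and yet the chance that any particular roll of the dice is the fatal one remains 1/36.

```python
import random

def shooting_room(rng, p_doom=1/36):
    """Run one history: generations of 10, 100, 1000, ... enter the room
    until the dice show double six. Returns (souls shot, souls ever created)."""
    size, total = 10, 0
    while True:
        total += size
        if rng.random() < p_doom:   # double six: this generation is shot
            return size, total
        size *= 10                  # next generation is ten times larger

rng = random.Random(42)
results = [shooting_room(rng) for _ in range(10_000)]

shot = sum(s for s, t in results)
ever = sum(t for s, t in results)
frac_in_last = shot / ever          # ~0.9: most souls are in the shot generation

first_roll_fatal = sum(t == 10 for s, t in results) / len(results)
# ~1/36: a given generation's chance of being the last is still small
```

The simulation is consistent with both statements at once, which is the point: "most souls die in the last generation" and "each generation will probably survive" are answers to different questions.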

 This apparent paradox seems to suggest that it matters a great deal whether the future is predetermined (your presence in the last generation requires the double-six to fall) or “random” (in which case there is the usual probability of a double-six). Leslie argues that if everything is pre-determined then we’re doomed. If there’s some indeterminism then we might survive. This isn’t really a paradox at all, simply an illustration of the fact that assuming different models gives rise to different probability assignments.

While I am on the subject of the Shooting Room, it is worth drawing a parallel with another classic puzzle of probability theory, the St Petersburg Paradox. This is an old chestnut to do with a purported winning strategy for roulette. It was first proposed by Nicolas Bernoulli but famously discussed at greatest length by Daniel Bernoulli in the pages of the Transactions of the St Petersburg Academy, hence the name. It works just as well for a simple toss of a coin as for roulette, since in the latter game the strategy involves betting only on red or black rather than on individual numbers.

 Imagine you decide to bet such that you win by throwing heads. Your original stake is £1. If you win, the bank pays you at even money (i.e. you get your stake back plus another £1). If you lose, i.e. get tails, your strategy is to play again but bet double. If you win this time you get £4 back but have bet £2+£1=£3 up to that point. If you lose again you bet £8. If you win this time, you get £16 back but have paid in £8+£4+£2+£1=£15 to that point. Clearly, if you carry on the strategy of doubling your previous stake each time you lose, when you do eventually win you will be ahead by £1. It’s a guaranteed winner. Isn’t it?

 The answer is yes, as long as you can guarantee that the number of losses you will suffer is finite. But in tosses of a fair coin there is no limit to the number of tails you can throw before getting a head. To get the correct probability of winning you have to allow for all possibilities. So what is your expected stake to win this £1? The answer is the root of the paradox. The probability that you win straight off is ½ (you need to throw a head), and your stake is £1 in this case so the contribution to the expectation is £0.50. The probability that you win on the second go is ¼ (you must lose the first time and win the second so it is ½ times ½) and your stake this time is £2 so this contributes the same £0.50 to the expectation. A moment’s thought tells you that each throw contributes the same amount, £0.50, to the expected stake. We have to add this up over all possibilities, and there are an infinite number of them. The result of summing them all up is therefore infinite. If you don’t believe this just think about how quickly your stake grows after only a few losses: £1, £2, £4, £8, £16, £32, £64, £128, £256, £512, £1024, etc. After only ten losses you are staking over a thousand pounds just to get your pound back. Sure, you can win £1 this way, but you need to expect to stake an infinite amount to guarantee doing so. It is not a very good way to get rich.
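The arithmetic of the divergent expectation is easy to check with exact rational arithmetic (a sketch of the calculation just described, not a recommendation to try the strategy):

```python
from fractions import Fraction

def stake_contributions(n_terms):
    """Contribution of each possible winning toss to the expected stake:
    you win on toss n+1 with probability 1/2**(n+1), having staked 2**n
    pounds on that toss."""
    return [Fraction(1, 2**(n + 1)) * 2**n for n in range(n_terms)]

terms = stake_contributions(50)
assert all(t == Fraction(1, 2) for t in terms)  # every term is exactly £0.50
assert sum(terms) == 25                         # 50 terms -> £25, and growing
```

Truncating the sum after n possible tosses gives n/2, which grows without limit: there is no finite expected stake.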

The relationship of all this to the Shooting Room is that it shows it is dangerous to presuppose a finite value for a number which could in principle be infinite. If the number of souls that could be called into existence is allowed to be infinite, then any individual has no chance at all of being called into existence in any generation!

Amusing as they are, the thing that makes me most uncomfortable about these Doomsday arguments is that they attempt to determine the probability of an event without any reference to an underlying mechanism. For me, a valid argument about Doomsday would have to involve a particular physical cause for the extinction of humanity (e.g. asteroid impact, climate change, nuclear war, etc.). Given this physical mechanism one should construct a model within which one can estimate probabilities for the model parameters (such as the rate of occurrence of catastrophic asteroid impacts). Only then can one make a valid inference based on relevant observations and their associated likelihoods. Such calculations may indeed lead to alarming or depressing results. I fear that the greatest risk to our future survival is not asteroid impact or global warming, for which the chances can be estimated with reasonable precision, but self-destructive violence carried out by humans themselves. Science has no way of predicting what atrocities people are capable of, so we can't make any reliable estimate of the probability that we will self-destruct. But the absence of any specific mechanism in the versions of the Doomsday argument I have discussed robs them of any scientific credibility at all.

There are better grounds for worrying about the future than mere numerology.

How Loud was the Big Bang?

Posted in The Universe and Stuff on April 26, 2009 by telescoper

The other day I was giving a talk about cosmology at Cardiff University’s Open Day for prospective students. I was talking, as I usually do on such occasions, about the cosmic microwave background, what we have learnt from it so far and what we hope to find out from it from future experiments, assuming they’re not all cancelled.

Quite a few members of staff listened to the talk too and, afterwards, some of them expressed surprise at what I’d been saying, so I thought it would be fun to try to explain it on here in case anyone else finds it interesting.

As you probably know the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term "Big Bang" was later coined by Fred Hoyle as a derogatory description of an idea he couldn't stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal "acoustic" waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The above image shows the variations in temperature of the cosmic microwave background as charted by the Wilkinson Microwave Anisotropy Probe about five years ago. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I'm never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound, L, logarithmically in terms of the rms pressure level of the sound wave, P_rms, relative to some reference pressure level, P_ref:

L = 20 log10 [P_rms/P_ref]

(the 20 appears because the energy carried goes as the square of the amplitude of the wave; in terms of energy the factor would be 10).

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10⁻¹⁰ times the ambient atmospheric pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, which consequently have L=0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

P_ref ~ 2×10⁻¹⁰ P_amb

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, and the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes so it all gets a bit messy if you want to do it exactly, but it's quite easy to get a rough estimate. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the average temperature variation is of the average CMB temperature, i.e.

P_rms ~ a few × 10⁻⁵ P_amb

If we do this then, since both pressures scale in proportion to the ambient pressure, the ambient pressure cancels out of the ratio P_rms/P_ref, which comes out at a few times 10⁵.

[Figure: audiogram chart showing typical sound levels in decibels, including the "speech banana" and the threshold of pain]

With our definition of the decibel level we find that waves with amplitudes of one part in a hundred thousand of the ambient pressure give roughly L=100 dB, while one part in ten thousand gives about L=120 dB. The sound of the Big Bang therefore peaks at levels just over 110 dB. As you can see in the Figure above, this is close to the threshold of pain, but it's perhaps not as loud as you might have guessed in response to the initial question. Many rock concerts are actually louder than the Big Bang, at least near the speakers!

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10¹⁰ in the logarithm and is pretty much the limit at which sound waves can propagate without distortion. These would have L≈190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.
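These estimates are easy to reproduce. The sketch below just evaluates the decibel formula given earlier with the reference level P_ref = 2×10⁻¹⁰ P_amb; the unrounded answers (about 94, 114 and 194 dB) correspond to the "roughly 100", "about 120" and "about 190 dB" figures quoted above.

```python
from math import log10

P_amb = 1.0                  # work in units of the ambient pressure
P_ref = 2e-10 * P_amb        # reference level, as chosen in the text

def level_db(p_rms):
    # decibel level: L = 20 log10(P_rms / P_ref)
    return 20 * log10(p_rms / P_ref)

print(round(level_db(1e-5 * P_amb)))  # one part in 10^5 of ambient: 94 dB
print(round(level_db(1e-4 * P_amb)))  # one part in 10^4 of ambient: 114 dB
print(round(level_db(P_amb)))         # fluctuations comparable to ambient: 194 dB
```

Since the formula involves only the ratio of the two pressures, the choice of units for P_amb drops out, exactly as described above.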

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer's webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it's more of a "Roar" than a "Bang", because the sound waves don't actually originate at a single well-defined event but are excited incoherently all over the Universe.

Deterministic Chaos

Posted in Biographical on April 25, 2009 by telescoper

Yesterday was the occasion of the Annual Ball of Chaos, the student society of the Cardiff University School of Physics & Astronomy, held in the Cardiff Arms Suite of the Millennium Stadium. I had reservations about going because things like this always make me feel very old, but having been persuaded I was determined to have a good time. It turned out to be very enjoyable, so much so that I ended up moving on with some others to a nightclub to continue the party into the small hours. I think I kept up with the youngsters quite well, although I was well and truly knackered when I got home.

I’m also glad I didn’t disgrace myself too much, or if I did I don’t remember…

There were about a hundred people at the Chaos Ball, the vast majority of them students in the department. Not many staff members went along, although those that did all seemed to have a good time. These social events are quite tricky to pull off for a number of reasons. One is that there's an inevitable "distance" between students and staff, not just in terms of age but also in the sense that the staff have positions of responsibility for the students. Students are not children, of course, so we're not legally in loco parentis, but something of that kind of relationship is definitely there. Although it doesn't stop either side letting their hair down once in a while, I always find there's a little bit of tension, especially if the revels get a bit out of hand.

To help occasions like this I think it’s the responsibility of the staff members present to drink heavily in order to put the students at ease. United by a common bond of inebriation, the staff-student divide crumbles and a good time is had by all.

A couple of other incidents that happened this week serve to illustrate related issues. On Thursday we had to evacuate the building because the fire alarm went off. It turned out that some work being done on the roof had triggered a smoke detector. Although it wasn’t a real emergency, four fire engines arrived and we all stood outside for the best part of an hour while they figured out what had happened and, curiously, how to switch the alarm off.

The fire alarm had gone off, the fire brigade had turned out, but there was no fire to be seen. I joked that the only possible explanation of this state of affairs was that there must be a dark fire…

Standing outside, staff and students chatted casually while waiting to be let back into the building. It was sunny, which added to the conviviality. I realised, though, that I'd never really spoken to many of my students like that before, i.e. outside the lecture or tutorial. I see the same faces in my lectures day in, day out but all I do is talk to them about physics. I don't know them at all. It's strange.

The other thing happened yesterday morning, when I was giving one of my first-year lectures on Astrophysical Concepts, a course which I really enjoy teaching. The topic was supernovae, and it's a lecture which I always end by doing an impersonation of a supernova explosion. If you want to see it, you'll have to sign up for the course.

I was doing my PhD in 1987 when a supernova (SN1987A) went off in the Large Magellanic Cloud. It was a hot topic for a while and I mentioned it in my talk. I started to say "Some of you will remember…" then I suddenly realised to my horror that in 1987 nobody in my class had yet been born…

The Shape of Things to Come…

Posted in Science Politics on April 24, 2009 by telescoper

The implications of this week’s budget for astronomy are gradually becoming clearer although a full picture is yet to emerge.

The following statement appeared on the webpages of the Science and Technology Facilities Council:

STFC’s budget of £491 million for 2009-10 is evidence of the Government’s commitment to investing in science in a period of severe national and global economic uncertainty.

STFC’s Chief Executive Officer, Professor Keith Mason, said: “Our budget represents a major investment in science at a time of increasing pressure on public spending, and will allow us to fund a wide array of world leading science delivering significant impact for the UK.”

“The budget confirms the Government’s commitment to, and acknowledgement of, investment in curiosity driven and application led research as essential elements to support the country’s economic growth in the short, medium and longer term.”

Professor Mason said the near cash* budget of £491 million was more than the Council's allocation in the Comprehensive Spending Review (CSR07), thanks to assistance from the Department for Innovation, Universities and Skills (DIUS) in the form of a loan and compensation for foreign exchange exposure. This outcome follows extensive consultation between DIUS and the Research Councils to ameliorate the effect of the fall of the pound. However, it will unfortunately not allow STFC to fund the full science programme planned under its Programmatic Review.

Professor Mason said STFC would now consult on reprioritising its programme across the remainder of the CSR period. This consultation will cover both the short-term items required for 2009-10 and a longer term process to ensure a stable platform for planning in the medium to longer term. Council will discuss options for 2009-10 at its meeting on the 28th April.

“For its part STFC has already imposed a series of internal savings, including on travel and severe restrictions on external recruitment. We will seek to identify further savings in order to concentrate resources on funding our core research programme,” Professor Mason said.

It appears, then, that there is to be short-term assistance with the effects of currency fluctuations, but this will be in the form of a loan that will eventually have to be paid back from savings found within the programme. I suppose something's better than nothing, but despite the bland language it is quite clear that we are heading for big cuts in the STFC programme, and astronomy will not be immune.

The Times Higher has also covered the budget settlement for science and higher education generally in very downbeat terms. Echoing what I put in my previous post:

Although the Budget maintains an existing commitment to ring-fence the science budget, DIUS had reportedly sought a £1 billion increase in funding for scientific research as part of a stimulus package designed to use science to boost the economy.

Instead of this, research councils will be required to make £106 million in savings, which will then be reinvested elsewhere in their portfolio “to support key areas of economic potential”.

We await details of where these “savings” will be made. My current understanding is that the STFC needs to find about £10 million immediately although whether this is on top of or including its share of the overall “efficiency savings”, I don’t know. In any case it is clear that this money will be taken from pure science programmes and spent instead on areas deemed to have “economic potential”. It looks like we’re all going to have to hone our bullshitting skills over the next few years.

Jorunn Monrad

Posted in Art with tags , on April 23, 2009 by telescoper

Off the Wall is a small contemporary art gallery in Llandaff, about 15 minutes walk from my home in Pontcanna, Cardiff.  I went there this evening to a private view of some works by Norwegian artist Jorunn Monrad, who lives and works in Milan.

The artist herself was there and I got the chance to talk to her over a glass or two of pink champagne after looking at the paintings.

The works on view in her exhibition were all made this year, using a technique developed in the Middle Ages that involves egg and casein tempera. The paintings are brilliantly coloured abstract works in which representations of tiny proto-animals, meticulously painted all over the linen background, combine into larger structures. The dramatic colour palette produces interesting visual effects, at times revealing and at times obscuring patterns present in the paint. The intricate detail and luminous colouring make for a vivid but sometimes perplexing whole.

Here is an example (although the digital image doesn’t really do justice to the original).

[Image: dicembre2008verdevermilion]

To quote her own description:

My works are rooted in the imagery of my childhood: the snakes of the wooden sculptures of Viking and mediaeval Norwegian art, and the forms created by nature, like branches and clouds. Fables and the mysteries of nature have also played a part. I have also done research on the phenomena, one may say biological, that are triggered by this imagery: the visions of forms that repeat themselves while falling asleep and waking up can create this kind of visual effect.

From this I have obtained a kind of module, that is a kind of biomorphic form, rather than one specific animal or other, that is merely the building brick of the structure, but that is multiplied in forms that are vertiginous and sometimes perhaps unsettling. The idea is to create a dreamy, moving atmosphere that is nevertheless very different from the effects of op art; in short, a less clashing, more “natural” effect.

The effects she achieves are, in some sense, a variation on those I blogged about previously but with elements that are entirely original.

If you’re in Cardiff this small exhibition is well worth seeing. Her paintings are for sale too, with a surprisingly modest price tag. I’m seriously thinking of investing in one myself, in fact.

The exhibition continues at Off the Wall, The Old Probate Registry, Llandaff until 30th May 2009.

PS. In response to the specific request below from Tom Shanks, who is never shy of making an exhibition of himself, I’ve added this picture of his famous travelling installation:

[Image: dscf0001]

Economic Impact

Posted in Science Politics with tags , , on April 22, 2009 by telescoper

Like many of my colleagues I’ve been looking nervously through the lengthy documents produced by HM Treasury to fill in the details of the Chancellor’s Budget speech. I was hoping to find some evidence of a boost for science that might filter down as a rescue package for STFC and dispel the rumours of savage cuts in the astronomy programme. Unfortunately, I didn’t find any.

No real details about the science programme are given in the budget report, at least none that I could find this afternoon. There are, however, a couple of worrying pointers that things might be going from bad to worse.

The Chancellor has decided to cut public spending overall by about £15 billion (largely by “efficiency savings”) in order to control the UK’s ballooning public debt. The Department of Innovation, Universities and Skills (DIUS) which sits above the Research Councils in the hierarchy of research management is mentioned twice in the document, in the following passages talking about savings:

£118 million through increasing the effectiveness of research activities funded by the Research Councils by reducing administration costs and refocusing spend on new research priorities;

and

An additional £106 million of savings delivered by the Research Councils within the science and research budget to be re-invested within that budget to support key areas of economic potential.

Both of these look to me like indications that money will be diverted from pure science into technology-driven areas. Far from there being a boost for astronomy, it looks like we face the opposite with money being squeezed from us and re-allocated to areas that can make a stronger case for economic potential.

Another indication of this phase change, which has been in the air for some time, appeared yesterday on the STFC website. The whole item can be found here, but the salient points are included in the following excerpt:

Applicants for STFC rolling and standard grants will now be required to produce an impact plan, identifying the potential economic impacts of their proposal. The change takes effect from 21 April 2009 and will affect grants rounds from autumn 2009 onward.

The change follows a 2006 Research Councils UK project, and subsequent Excellence with Impact report, into the efficiency and value for money of Research Council peer review processes. The report recommended the Research Councils improve guidance to applicants and peer reviewers to ensure a shared understanding about the value of identifying the potential economic impact of research, and that the new requirements be supported in electronic application systems and guidelines.

More details of the spending priorities of DIUS within its overall budget will no doubt emerge in due course and they may yet reveal a tonic of some sort for STFC. What seems more likely, however, is that any such funds will be aimed at space gadgetry rather than at science. I have a feeling that the impact of the economic downturn on UK Astronomy is going to turn out to be dire.