I explained this time last year how I’m not really a big fan of Halloween and don’t tend to celebrate it. However, I decided to make an exception this year and post the following little video which seems to be appropriate for the occasion. It’s made of bits of old horror B-movies, but the music – by Bobby “Boris” Pickett and the Crypt-Kickers – is actually the second single I ever bought, way back in 1973. I wonder if you can guess what the first one was?
Archive for October, 2009
It’s been a couple of weeks since the University of Cambridge announced that the successor to Stephen Hawking as Lucasian Professor of Mathematics would be Michael Green, who is best known for his work on string theory. Heartiest congratulations to him for reaching a position of such eminence.
I was trying to think of a suitable way of marking the occasion of his election to this prestigious post when I suddenly remembered that we were actually on a TV programme together years ago. The show in question was called Unravelling the Universe and was first broadcast in December 1991 as part of a science documentary series called Equinox.
I eventually found my ancient VHS copy of the broadcast master tape of this show and persuaded Ed and Stephen, two of the excellent elves that work in the School of Physics & Astronomy here at Cardiff University, to transfer it to a digital format and put a bit of it on YouTube for all to see. Many thanks to them for their help.
Other people involved in the programme included Rocky Kolb, Chris Isham and Paul Davies but the short (2-and-a-half minute) clip below features just Michael Green (who basically put the show together) and myself (who was just there to make up the numbers), plus wonderful narration by the late great Peter Jones.
Michael Green hasn’t changed a bit in 18 years. In fact, I saw him last year and am sure he was even wearing the same sweater.
I, on the other hand….Oh dear.
I just picked up an item from the BBC Website that refers to news announced in this week’s edition of Nature of the discovery of a gamma-ray burst detected by NASA’s Swift satellite. The burst itself was detected in April this year and I had a sneak preview that something exciting was going to be announced earlier this month at the Royal Astronomical Society meeting on October 9th. However, today’s press releases still managed to catch me on the hop owing to the fact that a rather different story had distracted my attention…
In fact, detections of gamma-ray bursts are not all that rare. Swift observes one every few days on average. Once such a source is found through its gamma-ray emission, a signal is sent to astronomers around the world who then work like crazy to detect an optical counterpart. If and when they find one, they try to measure the spectrum of light emitted in order to determine the source’s redshift. This is very difficult for the distant ones, and is not always successful.
However, what happened in this case – called GRB 090423 – was that not one but two independent teams obtained optical spectra of the object in which the gamma-ray burst must have happened. What each team found was that its spectrum showed a sharp cut-off at wavelengths shorter than a given limiting value.
Hydrogen is very effective at absorbing radiation with wavelengths shorter than 91.2 nm (the so-called Lyman limit, which is in the ultraviolet part of the spectrum), and all galaxies contain large amounts of hydrogen; hence galaxies are virtually dark at wavelengths shorter than 91.2 nm in their rest-frame. The position of the break in an observed frame will be at a different wavelength owing to the effect of the cosmological redshift.
The Lyman break for the host of GRB 090423 appears not in the ultraviolet but in the infrared, indicating a very large redshift. In fact, it’s a truly spectacular 8.2.
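The arithmetic behind this is simple enough to sketch in a few lines of Python (the function name is mine, just for illustration): a redshift z stretches every wavelength by a factor (1 + z), so the Lyman limit lands well outside the ultraviolet.

```python
# A redshift z stretches all wavelengths by a factor (1 + z), so the
# rest-frame Lyman limit at 91.2 nm is observed at 91.2 * (1 + z) nm.
LYMAN_LIMIT_NM = 91.2  # rest-frame Lyman limit (ultraviolet)

def observed_lyman_break_nm(z):
    """Observed wavelength of the Lyman break for a source at redshift z."""
    return LYMAN_LIMIT_NM * (1.0 + z)

print(observed_lyman_break_nm(8.2))  # ~839 nm, well into the infrared
```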
Together with the direct observations of galaxies at high redshifts I blogged about a month or so ago, this discovery helps push back the frontiers of our knowledge of the Universe not just in space but also in time. A quick calculation reveals that in the standard cosmological model, light from a source at redshift 8.2 has taken about 13.1 billion years to reach us. The gamma-ray burst therefore exploded about 600 million years after the Big Bang.
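For the curious, the “quick calculation” can be sketched in Python. The cosmological parameters below are assumed WMAP-era values of mine, not numbers from the post, so the results come out close to, but not exactly, the figures quoted above:

```python
# A sketch of the lookback-time calculation in a flat LCDM model.
# Parameters are assumed WMAP-era values, not taken from the post.
import math
from scipy.integrate import quad

H0 = 71.0                      # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.27, 0.73  # matter and dark-energy densities (assumed)
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (1 Mpc/(km/s) = 977.8 Gyr)

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for a flat LCDM model."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lookback_time_gyr(z):
    """Light travel time from redshift z to us, in Gyr."""
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)
    return HUBBLE_TIME_GYR * integral

def age_at_z_gyr(z):
    """Age of the Universe at redshift z, in Gyr."""
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), z, math.inf)
    return HUBBLE_TIME_GYR * integral

print(lookback_time_gyr(8.2))  # ~13 Gyr of light travel time
print(age_at_z_gyr(8.2))       # ~0.6 Gyr after the Big Bang
```

In practice a library such as astropy’s cosmology module packages this sort of calculation up for you, but the integral itself is no more than the few lines above.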
Another interesting thing about this source is its duration. The optical afterglow of a gamma-ray burst decays with time. Gamma-ray bursts are usually classified as either short or long, depending on the decay time with the dividing line between the two classes being around 2 seconds. The optical afterglow of GRB 090423 lasted about ten seconds. But that doesn’t make it a long burst. We actually see the afterglow stretched out in time by the same redshift factor as an individual photon’s wavelength. So in the rest frame of the source the optical glow was only a bit over a second in duration, i.e. it was a short burst.
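The correction for time dilation is the same one-line calculation as for wavelengths; here it is as a Python sketch (names mine), with the conventional 2-second dividing line applied in the rest frame:

```python
# Cosmological time dilation stretches observed durations by the same
# factor (1 + z) that stretches photon wavelengths.
def rest_frame_duration_s(observed_s, z):
    """Duration at the source, given the observed duration and redshift."""
    return observed_s / (1.0 + z)

dt = rest_frame_duration_s(10.0, 8.2)  # ~1.09 s at the source
print("short" if dt < 2.0 else "long")  # -> short
```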
Long gamma-ray bursts are thought to be associated with core-collapse supernovae, which arise from the self-destruction of very massive stars with very short lifetimes. The fact that such things die young means that they are only found where star formation has happened very recently. One might therefore expect the earliest gamma-ray bursts to be of this type.
I don’t think anyone is really sure what the shorter ones really are, but they seem to happen in regions without active star formation in which the stellar populations are quite old, such as in elliptical galaxies. The fact that the most distant GRB yet discovered happens to be a short burst is very interesting. How can there be an old stellar population at a time when the Universe itself was so young?
If the Big Bang theory is correct, astronomers should eventually be able to reach back so far in time that the Universe was so young that no stars had had time to form. There would be no sources of light to detect so we would have reached the edge of darkness. We’re not there yet, but we’re getting closer.
When I was a research student at Sussex University I lived for a time in Hove, close to the local Greyhound track. I soon discovered that going to the dogs could be both enjoyable and instructive. The card for an evening would usually consist of ten races, each involving six dogs. It didn’t take long for me to realise that it was quite boring to watch the greyhounds unless you had a bet, so I got into the habit of making small investments on each race. In fact, my usual bet would involve trying to predict both first and second place, the kind of combination bet which has longer odds and therefore generally has a better return if you happen to get it right.
The simplest way to bet is through a totalising pool system (called “The Tote”) in which the return on a successful bet is determined by how much money has been placed on that particular outcome; the higher the amount staked, the lower the return for an individual winner. The Tote accepts very small bets, which suited me because I was an impoverished student in those days. The odds at any particular time are shown on the giant Tote Board you can see in the picture above.
However, every now and again I would place bets with one of the independent trackside bookies who set their own odds. Here the usual bet is for one particular dog to win, rather than on 1st/2nd place combinations. Sometimes these odds were much more generous than those that were showing on the Tote Board so I gave them a go. When bookies offer long odds, however, it’s probably because they know something the punters don’t and I didn’t win very often.
I often watched the bookmakers in action, chalking the odds up, sometimes lengthening them to draw in new bets or sometimes shortening them to discourage bets if they feared heavy losses. It struck me that they have to be very sharp when they change odds in this way because it’s quite easy to make a mistake that might result in a combination bet guaranteeing a win for a customer.
With six possible winners it takes a while to work out if there is such a strategy but to explain what I mean consider a race with three competitors. The bookie assigns odds as follows : (1) even money; (2) 3/1 against; and (3) 4/1 against. The quoted odds imply probabilities to win of 50% (1 in 2), 25% (1 in 4) and 20% (1 in 5) respectively.
Now suppose you place three different bets: £100 on (1) to win, £50 on (2) and £40 on (3). Your total stake is then £190. If (1) succeeds you win £100 and also get your stake back; you lose the other stakes, but you have turned £190 into £200 so are up £10 overall. If (2) wins you also come out with £200: your £50 stake plus £150 for the bet. Likewise if (3) wins. You win whatever the outcome of the race. It’s not a question of being lucky, just that the odds have been designed inconsistently.
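The arithmetic here is easy enough to do by hand, but it’s also easy to check with a few lines of Python (the variable names are mine). The tell-tale sign is that the probabilities implied by the quoted odds sum to less than one:

```python
from fractions import Fraction

# Quoted odds "a/b against" imply a win probability of b/(a+b):
# (1) even money, (2) 3/1 against, (3) 4/1 against.
odds = {1: (1, 1), 2: (3, 1), 3: (4, 1)}
implied = {dog: Fraction(b, a + b) for dog, (a, b) in odds.items()}

# These implied probabilities sum to 1/2 + 1/4 + 1/5 = 19/20 < 1,
# which is the inconsistency that opens the door to a guaranteed win.
stakes = {1: 100, 2: 50, 3: 40}
total_staked = sum(stakes.values())  # 190

nets = {}
for dog, (a, b) in odds.items():
    # If this dog wins: stake returned plus winnings of stake * a/b.
    payout = stakes[dog] * (a + b) // b
    nets[dog] = payout - total_staked

print(nets)  # {1: 10, 2: 10, 3: 10} -- up 10 pounds whatever happens
```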
I stress that I never saw a bookie actually do this. If one did, he’d soon go out of business. An inconsistent set of odds like this is called a Dutch Book, and a bet which guarantees the bettor a positive return is often called a lock. It’s also the principle behind many share-trading schemes based on the idea of arbitrage.
It was only much later I realised that there is a nice way of turning the Dutch Book argument around to derive the laws of probability from the principle that the odds be consistent, i.e. so that they do not lead to situations where a Dutch Book arises.
To see this, I’ll just generalise the above discussion a bit. Imagine you are a gambler interested in betting on the outcome of some event. If the game is fair, you would expect to pay a stake px to win an amount x if the probability of the winning outcome is p.
Now imagine that there are several possible outcomes, each with different probabilities, and you are allowed to bet a different amount on each of them. Clearly, the bookmaker has to be careful that there is no combination of bets that guarantees that you (the punter) will win.
Now consider a specific example. Suppose there are three possible outcomes; call them A, B, and C. Your bookie will accept the following bets: a bet on A with a payoff xA, for which the stake is pAxA; a bet on B for which the return is xB and the stake pBxB; and a bet on C with stake pCxC and payoff xC.
Think about what happens in the special case where the events A and B are mutually exclusive (which just means that they can’t both happen) and C is just given by A “OR” B, i.e. the event that either A or B happens. There are then three possible outcomes.
First, if A happens but B does not happen, the net return to the gambler is

$R_1 = x_A(1-p_A) - p_B x_B + x_C(1-p_C).$
The first term represents the difference between the stake and the return for the successful bet on A, the second is the lost stake corresponding to the failed bet on the event B, and the third term arises from the successful bet on C. The bet on C succeeds because if A happens then A “OR” B must happen too.
Alternatively, if B happens but A does not happen, the net return is

$R_2 = -p_A x_A + x_B(1-p_B) + x_C(1-p_C),$
in a similar way to the previous result except that the bet on A loses, while those on B and C succeed.
Finally there is the possibility that neither A nor B succeeds: in this case the gambler does not win at all, and the return (which is bound to be negative) is

$R_3 = -p_A x_A - p_B x_B - p_C x_C.$
Notice that A and B can’t both happen because I have assumed that they are mutually exclusive. For the game to be consistent (in the sense I’ve discussed above) there must be no choice of stakes $x_A$, $x_B$ and $x_C$ that makes all three returns positive. The returns are linear functions of the stakes, so if the matrix of coefficients were invertible the gambler could solve for stakes producing whatever returns he wished, including a guaranteed win. Consistency therefore requires

$\begin{vmatrix} 1-p_A & -p_B & 1-p_C \\ -p_A & 1-p_B & 1-p_C \\ -p_A & -p_B & -p_C \end{vmatrix} = 0.$
Expanding the determinant, this means that

$p_A + p_B - p_C = 0,$
so, since C is the event A “OR” B, this means that the probability of the event A “OR” B, for two mutually exclusive events A and B, is the sum of the separate probabilities of A and B. This is usually taught as one of the axioms from which the calculus of probabilities is derived, but what this discussion shows is that it can itself be derived from the principle of consistency. It is the only way to combine probabilities that is consistent from the point of view of betting behaviour. Similar logic leads to the other rules of probability, including those for events which are not mutually exclusive.
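As a numerical sanity check of this argument, here is a short Python sketch (the function names are mine). The rows of the matrix are the coefficients of the three stakes in the three possible net returns described above; the matrix is singular, blocking any sure-win strategy, exactly when the probability assigned to A “OR” B is the sum of those assigned to A and B:

```python
# Net returns to the gambler are linear in the stakes x_A, x_B, x_C;
# each row holds the coefficient vector for one possible outcome.
def return_matrix(pA, pB, pC):
    return [
        [1 - pA, -pB, 1 - pC],  # A happens (C = "A or B" also pays out)
        [-pA, 1 - pB, 1 - pC],  # B happens
        [-pA, -pB, -pC],        # neither happens
    ]

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Consistent odds (pC = pA + pB): singular matrix, so no choice of
# stakes can force all three returns to be positive.
print(det3(return_matrix(0.2, 0.3, 0.5)))  # ~0

# Inconsistent odds: non-singular, so a Dutch Book can be constructed.
print(det3(return_matrix(0.2, 0.3, 0.4)))  # ~0.1
```

The determinant works out to be pA + pB − pC in general, which is why the consistency condition takes the form it does.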
Notice that this kind of consistency has nothing to do with averages over a long series of repeated bets: if the rules are violated then the game itself is rigged.
A much more elegant and complete derivation of the laws of probability has been set out by Cox, but I find the Dutch Book argument a nice practical way to illustrate the important difference between being unlucky and being irrational.
P.S. For legal reasons I should point out that, although I was a research student at the University of Sussex, I do not have a PhD. My doctorate is a DPhil.
At the last Meeting of the RAS Council on October 9th 2009, Professor Keith Mason, Chief Executive of the Science and Technology Facilities Council (STFC), made a presentation after which he claimed that STFC spends too much on “exploitation”, i.e. on doing science with the facilities it provides. This statement clearly signals an intention to cut grants to research groups still further and funnel a greater proportion of STFC’s budget into technology development rather than pure research.
It seems Keith Mason doesn’t give a fuck
About the future of Astronomy.
“The mess we’re in is down to rotten luck
And our country’s ruin’d economy”;
Or that’s the tale our clueless leader tells
When oft by angry critics he’s assailed,
Undaunted he in Swindon’s office dwells
Refusing to accept it’s him that failed.
And now he tells us we must realise:
We spend “too much on science exploitation”.
Forget the dreams of research in blue skies
The new name of the game is wealth creation.
A truth his recent statement underlines
Is that we’re doomed unless this man resigns.
OK. I admit it. I’m automatonophobic.
I don’t think I have many irrational fears. I don’t like snakes, and am certainly a bit frightened of them, but there’s nothing irrational about that. They’re nasty and likely to be poisonous. I don’t like slugs either, especially when they eat things in my garden. They’re unpleasant but easy to deal with and I’m not at all scared of them. Likewise spiders and insects.
But ventriloquists’ dummies give me nightmares every time.
When I was a little boy my grandfather took me to the Spanish City in Whitley Bay. There was an amusement arcade there and one of the attractions was a thing called The Laughing Sailor. You put a penny in the slot and a hideous automaton – very similar to the dummy a ventriloquist might use, except in mock-nautical attire – began to lurch backwards and forwards, flailing its arms, staring maniacally and emitting a loud mechanical cackle that was supposed to represent a laugh. The minute it started doing its turn I burst into tears and ran screaming out of the building. I’ve hated such things ever since.
The anxiety that these objects induce has now been given a name: automatonophobia, which is defined as “a persistent, abnormal, and unwarranted fear of ventriloquist’s dummies, animatronic creatures or wax statues”. Abnormal? No way. They’re simply horrible.
I’m clearly not the only one who thinks so, because there was an article in The Independent a few years ago by Neil Norman that exactly expressed the fear and loathing I feel about these creepy little dolls. Feature films including Magic and Dead of Night, and episodes of The Twilight Zone and Hammer House of Horror have taken it further by playing with the idea that a ventriloquist’s dummy has been possessed by some sort of malign power which uses it to wreak terror on those around.
We’re not talking about a benign wooden doll like Pinocchio who metamorphoses into a real boy; we’re talking about a ghastly staring-faced mannequin that is brought to life by its operator, the ventriloquist, by inserting his hand up its backside. The dummy never looks human, but can speak and displays some human traits, usually nasty ones. The essence of a ventriloquist act is to generate the illusion that one is watching two personalities sparring with each other when in reality the two voices are coming from the same person. Schizophrenia here we come.
It must be very clever to be able to throw your voice, but I always had the nagging suspicion that ventriloquists use dummies to express the things they find it difficult to say through their own mouth, and so to give life to their darkest thoughts.
Best of all the attempts to realise the sinister potential of this relationship in a movie is the “Ventriloquist’s Dummy” episode, directed by Alberto Cavalcanti, in Dead of Night, the 1945 portmanteau that some regard as Britain’s greatest horror film. Here is the part that tells the tale of Michael Redgrave’s ventriloquist being sweatily possessed by the spirit of his malevolent dummy, Hugo. It’s old and creaky, but I find it absolutely terrifying.
So what is it about these man-child mannequins – they are always male – that makes them so creepy? First, there is their appearance: the mad, swivelling, psychotic eyes beneath arched eyebrows and that crude parody of a mouth (with painted teeth) that opens and shuts with a mechanical sound like a trap. Then there are the badly articulated limbs, like those of a dead thing. When at rest, their eyes remain open, their mouths fixed in a diabolic grimace. Moreover, with their rouged cheeks, lurid red lips and unnatural eyelashes, all ventriloquist’s dummies look like the badly embalmed corpses of small boys. And they always end up sitting on the knee of a horrible pervert. Necrophilia and paedophilia all in one sick package. Yuck.
Worst of all, perhaps, is the voice. The high-pitched squawk that emerges is one of the most unpleasant sounds a human being can make. Even if you find it tolerable when you know that it comes from the ventriloquist, the last thing you want is the dummy to start talking on its own.
I started writing this with the cathartic intention of exorcising the demon that appears whenever I see one of these wretched things. It didn’t work. However, I have now decided to take my mind off this track with a change of thread. Here’s a little quiz. I wonder if anyone can spot the connection between this post and the history of cosmology?
Alternatively, if you’re brave, you could try a bit of catharsis of your own and reveal your worst phobias through the comments box…
I couldn’t resist a quick post about this old record, which was made in Chicago in 1928. The personnel line-up is very similar to that of the classic Hot Sevens, except that Louis Armstrong wasn’t there. Satchmo was, in fact, replaced for this number by two trumpeters, Natty Dominique and George Mitchell. John Thomas played trombone, Bud Scott was on banjo and Warren “Baby” Dodds played the drums.
The star of the show, however, is undoubtedly the great Johnny Dodds (the older brother of the drummer). He was a clarinettist of exceptional power, a fact that enabled him to cut through the limitations of the relatively crude recording technology of the time. Standing shoulder-to-shoulder with Louis Armstrong doesn’t make it easy for a clarinettist to be heard!
This is still a favourite tune for jazz bands all around the world, but I’ve never heard a version as good as this one. There are lots of little things that contribute to its brilliance, such as the thumping 2/4 rhythm (which also gives away its origins in the New Orleans tradition of marching bands). It’s a bit fast to actually march to, though; I suppose that’s what turns a march into a stomp. I like the little breaks too (such as Bud Scott’s banjo fill around 2:10 and, especially, the ensemble break at 2:45). But most of all it’s how they build up the momentum in such a controlled way, using little key changes to shift gear but holding back until the time Johnny Dodds joins in again (around 2:20). At that point the whole thing totally catches fire and the remaining 40 seconds or so are some of the “hottest” in all of jazz history.
Some time ago I heard Robert Parker’s digitally remastered version of this track, which revealed that Baby Dodds was pounding away on the bass drum all the way through it. He’s barely audible on the original but it was clearly him that drove the performance along. Anyway, despite the relatively poor sound quality I do hope you enjoy it. It’s a little bit of musical history, but also an enormous bit of fun.