Archive for Physics

SEPnet Awayday

Posted in Education on April 20, 2015 by telescoper

Here I am in Easthampstead Park Conference Centre after a hard day being away at an awayday. In fact we’ve been so busy that I’ve only just checked into my room (actually it’s a suite) and shall very soon be attempting to find the bar so I can have a drink. I’m parched.

The place is very nice. Here’s a picture from outside:

[Image: Easthampstead Park Conference Centre]

I’m told it is very close to Broadmoor, the famous high-security psychiatric hospital, although I’m sure that wasn’t one of the reasons for choice of venue.

I have to attend quite a few of these things for one reason or another. This one is on the Future and Sustainability of the South East Physics Network, known as SEPnet for short, which is a consortium of physics departments across the South East of England working together to deliver excellence in both teaching and research. I am here deputising for a Pro Vice Chancellor who can’t be here. I’ve enjoyed pretending to be important, but I’m sure nobody has been taken in.

Although it’s been quite tiring, it has been an interesting day. Lots of ideas and discussion, but we do have to distil all  that down into some more specific detail over dinner tonight and during the course of tomorrow morning.  Anyway, better begin the search for the bar so I can refresh the old brain cells.

 

Albert, Bernard and Bell’s Theorem

Posted in The Universe and Stuff on April 15, 2015 by telescoper

You’ve probably all heard of the little logic problem involving the mysterious Cheryl and her friends Albert and Bernard that went viral on the internet recently. I decided not to post about it directly because it’s already been done to death. It did, however, make me think that if people struggle so much with “ordinary” logic problems of this type, it’s no wonder they are so puzzled by the kind of logical issues raised by quantum mechanics. Hence the motivation for updating a post I did quite a while ago. The question we’ll explore does not concern the date of Cheryl’s birthday but the spin of an electron.

To begin with, let me give a bit of physics background. Spin is a concept of fundamental importance in quantum mechanics, not least because it underlies our most basic theoretical understanding of matter. The standard model of particle physics divides elementary particles into two types, fermions and bosons, according to their spin.  One is tempted to think of  these elementary particles as little cricket balls that can be rotating clockwise or anti-clockwise as they approach an elementary batsman. But, as I hope to explain, quantum spin is not really like classical spin.

Take the electron, for example. The amount of spin an electron carries is quantized: any measurement of its spin along a given direction always returns ±1/2 (in units of Planck’s constant; all fermions have half-integer spin). In addition, according to quantum mechanics, the orientation of the spin is indeterminate until it is measured. Any particular measurement can only determine the component of spin in one direction. Let’s take as an example the case where the measuring device is sensitive to the z-component, i.e. spin in the vertical direction. The outcome of an experiment on a single electron will lead to a definite outcome which might either be “up” or “down” relative to this axis.

However, until one makes a measurement the state of the system is not specified and the outcome is consequently not predictable with certainty; there will be a probability of 50% for each possible outcome. We could write the state of the system (expressed by the spin part of its wavefunction ψ) prior to measurement in the form

|ψ> = (|↑> + |↓>)/√2

This gives me an excuse to use the rather beautiful “bra-ket” notation for the state of a quantum system, originally due to Paul Dirac. The two possibilities are “up” (↑) and “down” (↓) and they are contained within a “ket” (written |>), which is really just a shorthand for a wavefunction describing that particular aspect of the system. A “bra” would be of the form <|; for the mathematicians, this represents the Hermitian conjugate of a ket. The √2 is there to ensure that the total probability of the spin being either up or down is 1, remembering that the probability is the squared amplitude of the wavefunction. When we make a measurement we will get one of these two outcomes, with a 50% probability of each.

At the point of measurement the state changes: if we get “up” it becomes purely |↑>  and if the result is  “down” it becomes |↓>. Either way, the quantum state of the system has changed from a “superposition” state described by the equation above to an “eigenstate” which must be either up or down. This means that all subsequent measurements of the spin in this direction will give the same result: the wave-function has “collapsed” into one particular state. Incidentally, the general term for a two-state quantum system like this is a qubit, and it is the basis of the tentative steps that have been taken towards the construction of a quantum computer.
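
If you like to see things numerically, here is a minimal sketch in Python of the measurement rule described above. It is purely illustrative – the two-component vectors and the measure_z function are my own made-up scaffolding, not any particular quantum-computing library – but it shows the Born rule (probability = squared amplitude) and the collapse to an eigenstate in action.

import numpy as np

rng = np.random.default_rng()

# Basis states |up> and |down> represented as simple two-component vectors
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# The superposition state |psi> = (|up> + |down>)/sqrt(2)
psi = (up + down) / np.sqrt(2)

def measure_z(state):
    # Born rule: P(up) is the squared amplitude of the "up" component
    p_up = abs(np.vdot(up, state))**2
    if rng.random() < p_up:
        return "up", up        # state collapses to the |up> eigenstate
    return "down", down        # ... or to |down>

# Fresh superposition states give "up" about half the time...
outcomes = [measure_z(psi)[0] for _ in range(10000)]
print(outcomes.count("up") / len(outcomes))    # ~0.5

# ...but once the state has collapsed, repeated z-measurements always agree
first, collapsed = measure_z(psi)
print(all(measure_z(collapsed)[0] == first for _ in range(100)))   # True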

Notice that what is essential about this is the role of measurement. The collapse of  ψ seems to be an irreversible process, but the wavefunction itself evolves according to the Schrödinger equation, which describes reversible, Hamiltonian changes.  To understand what happens when the state of the wavefunction changes we need an extra level of interpretation beyond what the mathematics of quantum theory itself provides,  because we are generally unable to write down a wave-function that sensibly describes the system plus the measuring apparatus in a single form.

So far this all seems rather similar to the state of a fair coin: it has a 50-50 chance of being heads or tails, but the doubt is resolved when its state is actually observed. Thereafter we know for sure what it is. But this resemblance is only superficial. A coin only has heads or tails, but the spin of an electron doesn’t have to be just up or down. We could rotate our measuring apparatus by 90° and measure the spin to the left (←) or the right (→). In this case we still have to get a result which is a half-integer times Planck’s constant. It will have a 50-50 chance of being left or right that “becomes” one or the other when a measurement is made.

Now comes the real fun. Suppose we do a series of measurements on the same electron. First we start with an electron whose spin we know nothing about. In other words it is in a superposition state like that shown above. We then make a measurement in the vertical direction. Suppose we get the answer “up”. The electron is now in the eigenstate with spin “up”.

We then pass it through another measurement, but this time it measures the spin to the left or the right. The process of selecting the electron to be one with  spin in the “up” direction tells us nothing about whether the horizontal component of its spin is to the left or to the right. Theory thus predicts a 50-50 outcome of this measurement, as is observed experimentally.

Suppose we do such an experiment and establish that the electron’s spin vector is pointing to the left. Now our long-suffering electron passes into a third measurement which this time is again in the vertical direction. You might imagine that since we have already measured this component to be in the up direction, it would be in that direction again this time. In fact, this is not the case. The intervening measurement seems to “reset” the up-down component of the spin; the results of the third measurement are back at square one, with a 50-50 chance of getting up or down.
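
Again purely as an illustration (with the same health warning that this is hand-rolled code rather than a proper quantum simulation package), one can mimic this three-measurement sequence by writing the “left” and “right” states in terms of “up” and “down” in one standard convention, i.e. (|↑> ± |↓>)/√2, and applying the Born rule at each step:

import numpy as np

rng = np.random.default_rng()

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
left = (up + down) / np.sqrt(2)     # one common convention for the horizontal states
right = (up - down) / np.sqrt(2)

def measure(state, basis):
    # Collapse onto one of the two basis states with Born-rule probabilities
    plus, minus = basis
    p_plus = abs(np.vdot(plus, state))**2
    return plus if rng.random() < p_plus else minus

n, same_as_first = 10000, 0
for _ in range(n):
    s = up                              # step 1: z-measurement gave "up"
    s = measure(s, (left, right))       # step 2: intervening left/right measurement
    s = measure(s, (up, down))          # step 3: z-measurement again
    same_as_first += np.allclose(s, up)

print(same_as_first / n)    # ~0.5: the intervening measurement has "reset" the z-component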

This is just one example of the kind of irreducible “randomness” that seems to be inherent in quantum theory. However, if you think this is what people mean when they say quantum mechanics is weird, you’re quite mistaken. It gets much weirder than this! So far I have focussed on what happens to the description of single particles when quantum measurements are made. Although there seem to be subtle things going on, it is not really obvious that anything happening is very different from systems in which we simply lack the microscopic information needed to make a prediction with absolute certainty.

At the simplest level, the difference is that quantum mechanics gives us a theory for the wave-function which somehow lies at a more fundamental level of description than the usual way we think of probabilities. Probabilities can be derived mathematically from the wave-function, but there is more information in ψ than there is in |ψ|²; the wave-function is a complex entity whereas the square of its amplitude is entirely real. If one can construct a system of two particles, for example, the resulting wave-function is obtained by superimposing the wave-functions of the individual particles, and probabilities are then obtained by squaring this joint wave-function. This will not, in general, give the same probability distribution as one would get by adding the one-particle probabilities because, for complex entities A and B,

A² + B² ≠ (A + B)²

in general. To put this another way, one can write any complex number in the form a+ib (real part plus imaginary part) or, generally more usefully in physics, as Re^{iθ}, where R is the amplitude and θ is called the phase. The square of the amplitude gives the probability associated with the wavefunction of a single particle, but in this case the phase information disappears; the truly unique character of quantum physics, and how it affects the probabilities of measurements, only reveals itself when the phase information is retained. This generally requires two or more particles to be involved, as the absolute phase of a single-particle state is essentially impossible to measure.
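
A two-line numerical check makes the point. The amplitudes below are invented purely for illustration; all that matters is that they are complex, so that the cross term carrying the phase information survives:

import numpy as np

A = 0.6 * np.exp(1j * 0.0)          # made-up complex amplitudes
B = 0.8 * np.exp(1j * 2.0)

print(abs(A)**2 + abs(B)**2)        # adding probabilities: 1.0
print(abs(A + B)**2)                # adding amplitudes first, then squaring: ~0.60
print(2 * (A * np.conj(B)).real)    # the difference is exactly this interference (cross) term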

Finding situations where the quantum phase of a wave-function is important is not easy. It seems to be quite easy to disturb quantum systems in such a way that the phase information becomes scrambled, so testing the fundamental aspects of quantum theory requires considerable experimental ingenuity. But it has been done, and the results are astonishing.

Let us think about a very simple example of a two-component system: a pair of electrons. All we care about for the purpose of this experiment is the spin of the electrons, so let us write the state of this system in terms of states such as |↑↓>, which I take to mean that the first particle has spin up and the second one has spin down. Suppose we can create this pair of electrons in a state where we know the total spin is zero. The electrons are indistinguishable from each other so until we make a measurement we don’t know which one is spinning up and which one is spinning down. The state of the two-particle system might be this:

|ψ> = (|↑↓> – |↓↑>)/√2

squaring this up would give a 50% probability of “particle one” being up and “particle two” being down and 50% for the contrary arrangement. This doesn’t look too different from the example I discussed above, but this duplex state exhibits a bizarre phenomenon known as quantum entanglement.

Suppose we start the system out in this state and then separate the two electrons without disturbing their spin states. Before making a measurement we really can’t say what the spins of the individual particles are: they are in a mixed state that is neither up nor down but a combination of the two possibilities. When they’re up, they’re up. When they’re down, they’re down. But when they’re only half-way up they’re in an entangled state.

If one of them passes through a vertical spin-measuring device we will then know that particle is definitely spin-up or definitely spin-down. Since we know the total spin of the pair is zero, then we can immediately deduce that the other one must be spinning in the opposite direction because we’re not allowed to violate the law of conservation of angular momentum: if Particle 1 turns out to be spin-up, Particle 2  must be spin-down, and vice versa. It is known experimentally that passing two electrons through identical spin-measuring gadgets gives  results consistent with this reasoning. So far there’s nothing so very strange in this.
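
To make the perfect anticorrelation concrete, here is a little sketch (again hand-rolled and purely illustrative, not the output of any real experiment) that samples joint outcomes from the two-particle state above when both spins are measured along the same vertical axis:

import numpy as np

rng = np.random.default_rng()

# Joint z-basis states in the order |up,up>, |up,down>, |down,up>, |down,down>
labels = ["up,up", "up,down", "down,up", "down,down"]
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)    # (|up,down> - |down,up>)/sqrt(2)

# Born rule for the joint measurement: each outcome occurs with probability |amplitude|^2
probs = np.abs(singlet)**2
print(rng.choice(labels, size=10, p=probs))
# Only "up,down" and "down,up" ever appear: the two results are always opposite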

The problem with entanglement lies in understanding what happens in reality when a measurement is done. Suppose we have two observers, Albert and Bernard, who are bored with Cheryl’s little games and have decided to do something interesting with their lives by becoming physicists. Each is equipped with a device that can measure the spin of an electron in any direction they choose. Particle 1 emerges from the source and travels towards Albert whereas particle 2 travels in Bernard’s direction. Before any measurement, the system is in an entangled superposition state. Suppose Albert decides to measure the spin of electron 1 in the z-direction and finds it spinning up. Immediately, the wave-function for electron 2 collapses into the down direction. If Albert had instead decided to measure spin in the left-right direction and found it “left”, a similar collapse would have occurred for particle 2, but this time putting it in the “right” direction.

Whatever Albert does, the result of any corresponding measurement made by Bernard has a definite outcome – the opposite of Albert’s result. So Albert’s decision whether to make a measurement up-down or left-right instantaneously transmits itself to Bernard, who will find a consistent answer if he makes the same measurement as Albert.

If, on the other hand, Albert makes an up-down measurement but Bernard measures left-right then Albert’s answer has no effect on Bernard, who has a 50% chance of getting “left” and 50% chance of getting “right”. The point is that whatever Albert decides to do, it has an immediate effect on the wave-function at Bernard’s position; the collapse of the wave-function induced by Albert immediately collapses the state measured by Bernard. How can particle 1 and particle 2 communicate in this way?

This riddle is the core of a thought experiment by Einstein, Podolsky and Rosen in 1935 which has deep implications for the nature of the information that is supplied by quantum mechanics. The essence of the EPR paradox is that each of the two particles – even if they are separated by huge distances – seems to know exactly what the other one is doing. Einstein called this “spooky action at a distance” and went on to point out that this type of thing simply could not happen in the usual calculus of random variables. His argument was later tightened considerably by John Bell in a form now known as Bell’s theorem.

To see how Bell’s theorem works, consider the following roughly analogous situation. Suppose we have two suspects in prison, say Albert and Bernard (presumably Cheryl grassed them up and has been granted immunity from prosecution). The two are taken to separate cells for individual questioning. We can allow them to use notes, electronic organizers, tablets of stone or anything else to help them remember any agreed strategy they have concocted, but they are not allowed to communicate with each other once the interrogation has started. Each question they are asked has only two possible answers – “yes” or “no” – and there are only three possible questions. We can assume the questions are asked independently and in a random order to the two suspects.

When the questioning is over, the interrogators find that whenever they asked the same question, Albert and Bernard always gave the same answer, but when the question was different they only gave the same answer 25% of the time. What can the interrogators conclude?

The answer is that Albert and Bernard must be cheating. Either they have seen the question list ahead of time or they are able to communicate with each other without the interrogators’ knowledge. If they always give the same answer when asked the same question, they must have agreed on answers to all three questions in advance. But because each question has only two possible responses, at least two of the three prepared answers – and possibly all of them – must be the same. This puts a lower limit on the probability of them giving the same answer when asked different questions. I’ll leave it as an exercise for the reader to show that the probability of coincident answers to different questions in this case must be at least 1/3.
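
For the impatient, the exercise can be done by brute force. Any non-cheating strategy amounts to a prearranged list of yes/no answers to the three questions (the same list for both prisoners, since they always agree on identical questions), so one can simply enumerate all eight possible lists. This little script is just that enumeration, nothing more:

from itertools import product, combinations

pairs = list(combinations(range(3), 2))       # the three ways to pick two different questions
worst = 1.0
for answers in product(["yes", "no"], repeat=3):
    # fraction of different-question pairs on which the prearranged answers coincide
    agree = sum(answers[i] == answers[j] for i, j in pairs) / len(pairs)
    worst = min(worst, agree)

print(worst)    # 0.333...: with only two possible answers, at least one pair must match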

This is a simple illustration of what in quantum mechanics is known as a Bell inequality. Albert and Bernard can only keep the number of agreements on different questions down to the measured level of 25% by cheating.

This example is directly analogous to the behaviour of the entangled quantum state described above under repeated interrogations about its spin in three different directions. The result of each measurement can only be either “yes” or “no”. Each individual answer (for each particle) is equally probable in this case; the same question always produces the same answer for both particles, but the probability of agreement for two different questions is indeed ¼, not the larger value (at least 1/3) that would be required if the particles had simply agreed their answers in advance. For example, one could ask particle 1 “are you spinning up?” and particle 2 “are you spinning to the right?”. The probability of both producing the answer “yes” is 25% according to quantum theory, but it would have to be higher if the particles weren’t cheating in some way.
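
I haven’t specified the measurement directions above, but a standard way to realise these numbers (essentially Mermin’s version of the argument, which is an assumption on my part rather than anything unique) is to take the three possible “questions” to be spin measurements along axes 120° apart, with the second observer reporting the flip of their raw result so that identical questions always yield identical answers. For the singlet state the raw outcomes along axes separated by an angle θ are opposite with probability cos²(θ/2), which gives:

import numpy as np

def prob_agree(theta_deg):
    # Singlet state: raw outcomes along axes theta apart are opposite with probability cos^2(theta/2).
    # With one observer reporting the flip of their raw result, this is the probability
    # that the two reported answers agree.
    theta = np.radians(theta_deg)
    return np.cos(theta / 2)**2

print(prob_agree(0))      # 1.0  -> the same question always gets the same answer
print(prob_agree(120))    # 0.25 -> different questions agree only a quarter of the time
# Any prearranged ("non-cheating") strategy would put this second number at 1/3 or above.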

Probably the most famous experiment of this type was done in the 1980s, by Alain Aspect and collaborators, involving entangled pairs of polarized photons (which are bosons), rather than electrons, primarily because these are easier to prepare.

The implications of quantum entanglement greatly troubled Einstein long before the EPR paradox. Indeed the interpretation of single-particle quantum measurement (which has no entanglement) was already troublesome. Just exactly how does the wave-function relate to the particle? What can one really say about the state of the particle before a measurement is made? What really happens when a wave-function collapses? These questions take us into philosophical territory that I have set foot in already; the difficult relationship between epistemological and ontological uses of probability theory.

Thanks largely to the influence of Niels Bohr, in the relatively early stages of quantum theory a standard approach to this question was adopted. In what became known as the  Copenhagen interpretation of quantum mechanics, the collapse of the wave-function as a result of measurement represents a real change in the physical state of the system. Before the measurement, an electron really is neither spinning up nor spinning down but in a kind of quantum purgatory. After a measurement it is released from limbo and becomes definitely something. What collapses the wave-function is something unspecified to do with the interaction of the particle with the measuring apparatus or, in some extreme versions of this doctrine, the intervention of human consciousness.

I find it amazing that such a view could have been held so seriously by so many highly intelligent people. Schrödinger hated this concept so much that he invented a thought-experiment of his own to poke fun at it. This is the famous “Schrödinger’s cat” paradox.

In a closed box there is a cat. Attached to the box is a device which releases poison into the box when triggered by a quantum-mechanical event, such as radiation produced by the decay of a radioactive substance. One can’t tell from the outside whether the poison has been released or not, so one doesn’t know whether the cat is alive or dead. When one opens the box, one learns the truth. Whether the cat has collapsed or not, the wave-function certainly does. At this point one is effectively making a quantum measurement so the wave-function of the cat is either “dead” or “alive” but before opening the box it must be in a superposition state. But do we really think the cat is neither dead nor alive? Isn’t it certainly one or the other, but that our lack of information prevents us from knowing which? And if this is true for a macroscopic object such as a cat, why can’t it be true for a microscopic system, such as that involving just a pair of electrons?

As I learned at a talk a while ago by the Nobel prize-winning physicist Tony Leggett – who has been collecting data on this – most physicists think Schrödinger’s cat is definitely alive or dead before the box is opened. However, most physicists don’t believe that an electron definitely spins either up or down before a measurement is made. But where does one draw the line between the microscopic and macroscopic descriptions of reality? If quantum mechanics works for 1 particle, does it work also for 10, 1000? Or, for that matter, 10²³?

Most modern physicists eschew the Copenhagen interpretation in favour of one or other of two modern interpretations. One involves the concept of quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size. In effect, this hides the quantum nature of macroscopic systems and allows us to use a more classical description for complicated objects. This certainly happens in practice, but this idea seems to me merely to defer the problem of interpretation rather than solve it. The fact that a large and complex system tends to hide its quantum nature from us does not in itself give us the right to have different interpretations of the wave-function for big things and for small things.

Another trendy way to think about quantum theory is the so-called Many-Worlds interpretation. This asserts that our Universe comprises an ensemble – sometimes called a multiverse – and  probabilities are defined over this ensemble. In effect when an electron leaves its source it travels through infinitely many paths in this ensemble of possible worlds, interfering with itself on the way. We live in just one slice of the multiverse so at the end we perceive the electron winding up at just one point on our screen. Part of this is to some extent excusable, because many scientists still believe that one has to have an ensemble in order to have a well-defined probability theory. If one adopts a more sensible interpretation of probability then this is not actually necessary; probability does not have to be interpreted in terms of frequencies. But the many-worlds brigade goes even further than this. They assert that these parallel universes are real. What this means is not completely clear, as one can never visit parallel universes other than our own …

It seems to me that none of these interpretations is at all satisfactory and, in the gap left by the failure to find a sensible way to understand “quantum reality”, there has grown a pathological industry of pseudo-scientific gobbledegook. Claims that entanglement is consistent with telepathy, that parallel universes are scientific truths, and that consciousness is a quantum phenomenon abound in the New Age sections of bookshops but have no rational foundation. Physicists may complain about this, but they have only themselves to blame.

But there is one remaining possibility for an interpretation of quantum mechanics that has been unfairly neglected by quantum theorists despite – or perhaps because of – the fact that it is the closest of all to common sense. This is the view that quantum mechanics is just an incomplete theory, and that the reason it produces only a probabilistic description is that it does not provide sufficient information to make definite predictions. This line of reasoning has a distinguished pedigree, but fell out of favour after the arrival of Bell’s theorem and related issues. Early ideas on this theme revolved around the idea that particles could carry “hidden variables” whose behaviour we could not predict because our fundamental description is inadequate. In other words, two apparently identical electrons are not really identical; something we cannot directly measure marks them apart. If this works then we can simply use probability theory to deal with inferences made on the basis of information that’s not sufficient for absolute certainty.

After Bell’s work, however, it became clear that these hidden variables must possess a very peculiar property if they are to describe our quantum world. The property of entanglement requires the hidden variables to be non-local. In other words, two electrons must be able to communicate their values faster than the speed of light. Putting this conclusion together with relativity leads one to deduce that the chain of cause and effect must break down: hidden variables are therefore acausal. This is such an unpalatable idea that it seems to many physicists to be even worse than the alternatives, but to me it seems entirely plausible that the causal structure of space-time must break down at some level. On the other hand, not all “incomplete” interpretations of quantum theory involve hidden variables.

One can think of this category of interpretation as involving an epistemological view of quantum mechanics. The probabilistic nature of the theory has, in some sense, a subjective origin. It represents deficiencies in our state of knowledge. The alternative Copenhagen and Many-Worlds views I discussed above differ greatly from each other, but each is characterized by the mistaken desire to put quantum mechanics – and, therefore, probability –  in the realm of ontology.

The idea that quantum mechanics might be incomplete  (or even just fundamentally “wrong”) does not seem to me to be all that radical. Although it has been very successful, there are sufficiently many problems of interpretation associated with it that perhaps it will eventually be replaced by something more fundamental, or at least different. Surprisingly, this is a somewhat heretical view among physicists: most, including several Nobel laureates, seem to think that quantum theory is unquestionably the most complete description of nature we will ever obtain. That may be true, of course. But if we never look any deeper we will certainly never know…

With the gradual re-emergence of Bayesian approaches in other branches of physics a number of important steps have been taken towards the construction of a truly inductive interpretation of quantum mechanics. This programme sets out to understand  probability in terms of the “degree of belief” that characterizes Bayesian probabilities. Recently, Christopher Fuchs, amongst others, has shown that, contrary to popular myth, the role of probability in quantum mechanics can indeed be understood in this way and, moreover, that a theory in which quantum states are states of knowledge rather than states of reality is complete and well-defined. I am not claiming that this argument is settled, but this approach seems to me by far the most compelling and it is a pity more people aren’t following it up…


Why the Big Bang wasn’t as loud as you think…

Posted in The Universe and Stuff on March 31, 2015 by telescoper

So how loud was the Big Bang?

I’ve posted on this before but a comment posted today reminded me that perhaps I should recycle it and update it as it relates to the cosmic microwave background, which is what I work on on the rare occasions on which I get to do anything interesting.

As you probably know, the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

[Image: all-sky map of the cosmic microwave background temperature fluctuations from the Planck satellite]

The above image shows the variations in temperature of the cosmic microwave background as charted by the Planck Satellite. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure of the sound wave, Prms, relative to some reference pressure level, Pref:

L=20 log10[Prms/Pref].

(the 20 appears because of the fact that the energy carried goes as the square of the amplitude of the wave; in terms of energy there would be a factor 10).

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10⁻¹⁰ times the ambient atmospheric air pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order and these consequently have L=0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

Pref ~ 2×10⁻¹⁰ Pamb.

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, because the primordial universe consists of a plasma rather than air. Moreover, the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes. In fact here is the spectrum, showing a distinctive signature that looks, at least in this representation, like a fundamental tone and a series of harmonics…

[Image: the angular power spectrum of the CMB temperature fluctuations measured by Planck, showing a fundamental acoustic peak and a series of harmonics]

 

If you take into account all this structure it all gets a bit messy, but it’s quite easy to get a rough but reasonable estimate by ignoring all these complications. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the rms temperature variation is of the average CMB temperature, i.e.

Prms ~ a few × 10⁻⁵ Pamb.

If we do this, expressing both pressures in the equation as fractions of the ambient pressure, the ambient pressure cancels out of the ratio Prms/Pref, which turns out to be of order 10⁵. With our definition of the decibel level we find that waves of this amplitude, i.e. corresponding to pressure variations of a few parts in a hundred thousand of the ambient level, give roughly L=100 dB, while one part in ten thousand gives about L=120 dB. The sound of the Big Bang therefore peaks at levels just a bit less than 120 dB.

[Image: decibel chart comparing everyday sound levels, from the threshold of hearing up to the threshold of pain]

As you can see in the Figure above, this is close to the threshold of pain,  but it’s perhaps not as loud as you might have guessed in response to the initial question. Modern popular beat combos often play their dreadful rock music much louder than the Big Bang….

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10¹⁰ inside the logarithm and is pretty much the limit at which sound waves can propagate without distortion. These would have L≈190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.
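
If you want to check the arithmetic yourself, the whole estimate fits in a few lines of Python. The inputs are just the rough numbers quoted above (2.73 K mean temperature, 0.08 mK rms fluctuation, a reference pressure of 2×10⁻¹⁰ times ambient), so treat the output as order-of-magnitude only:

import numpy as np

T_mean, dT_rms = 2.73, 0.08e-3     # CMB mean temperature and rms fluctuation, in Kelvin
frac = dT_rms / T_mean             # fractional variation, roughly 3e-5

P_ref_over_amb = 2e-10             # reference pressure as a fraction of ambient, by analogy with air

def level_dB(p_rms_over_amb):
    # L = 20 log10(Prms/Pref); both pressures expressed as fractions of the ambient pressure
    return 20 * np.log10(p_rms_over_amb / P_ref_over_amb)

print(level_dB(frac))     # ~103 dB: the peak loudness of the Big Bang
print(level_dB(1e-4))     # ~114 dB: one part in ten thousand
print(level_dB(1.0))      # ~194 dB: fluctuations comparable to the ambient pressure itself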

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of  a “Roar” than a “Bang” because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

Essays in Physics

Posted in Biographical, Education on March 6, 2015 by telescoper

In the course of a rare episode of tidying-up in my office I came across this. You can click on it to make it bigger if it’s difficult to read. It was the first paper of my finals examination at the University of Cambridge way back in 1985. Yes, that really was thirty years ago…

[Image: scan of the essay paper from the 1985 Cambridge finals examination]

As you can probably infer from the little circle around number 4, I decided to write an Essay about topic 4. I’ve always been interested in detective stories so this was an easy choice for me, but I have absolutely no idea what I wrote about for three hours. Nor do I recall actually ever getting a mark for the essay, so I never really knew whether it really counted for anything. I do remember, however, that I had another 3-hour examination in the afternoon of the same day, two three-hour examinations the following day, and would have had two the day after that had I not elected to do a theory project which let me off one paper at the end.

I survived this rigorous diet of examinations (more-or-less) and later that year moved to Sussex to start my DPhil, returning here a couple of years ago as Head of the same School in which I did my graduate studies. To add further proof that the universe is cyclic, this year I’ve taken on the job of being External Examiner for physics at the University of Cambridge, the same place I did my undergraduate studies.

Anyway, to get back to the essay paper, we certainly don’t set essay examinations like that here in the Department of Physics & Astronomy at the University of Sussex and I suspect they no longer do so in the Department of Physics at Cambridge. I don’t really see the point of making students write such things under examination conditions. On the other hand, I do have an essay as part of the coursework in my 2nd Year Theoretical Physics module. That may seem surprising and I’m not sure the students like the idea, but the reason for having it is that theoretical physics students don’t do experimental work in the second year so they don’t get the chance to develop their writing skills through lab reports. The essay titles I set are much more specific than those listed in the paper above and linked very closely to the topics covered in the lectures, but it’s still an opportunity for physics students to practice writing and getting some feedback on their efforts. Incidentally, some of the submissions last year were outstandingly good and I’m actually quite looking forward to reading this year’s crop!

What is the Scientific Method?

Posted in The Universe and Stuff on February 25, 2015 by telescoper

Twitter sent me this video about the scientific method yesterday, so I thought I’d share it via this blog.

The term Scientific Method is one that I find difficult to define satisfactorily, despite having worked in science for over 25 years. The Oxford English Dictionary defines Scientific Method as

..a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.

This is obviously a very general description, and the balance between the different aspects described is very different in different disciplines. For this reason when people try to define what the Scientific Method is for their own field, it doesn’t always work for others even within the same general area. It’s fairly obvious that zoology is very different from nuclear physics, but that doesn’t mean that either has to be unscientific. Moreover, the approach used in laboratory-based experimental physics can be very different from that used in astrophysics, for example. What I like about this video, though, is that it emphasizes the role of uncertainty in how the process works. I think that’s extremely valuable, as the one thing that I think should define the scientific method across all disciplines is a proper consideration of the assumptions made, the possibility of experimental error, and the limitations of what has been done. I wish this aspect of science had more prominence in media reports of scientific breakthroughs. Unfortunately these are almost always presented as certainties, so if they later turn out to be incorrect it looks like science itself has gone wrong. I don’t blame the media entirely about this, as there are regrettably many scientists willing to portray their own findings in this way.

When I give popular talks about my own field, Cosmology,  I often  look for appropriate analogies or metaphors in television programmes about forensic science, such as CSI: Crime Scene Investigation which I used to watch quite regularly (to the disdain of many of my colleagues and friends). Cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe;  forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens. Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish “the truth” about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works.

 

 

Funding for Masters in Science

Posted in Education on February 11, 2015 by telescoper

My recent post about postgraduate scholarships at the University of Sussex has generated quite a lot of interest so I thought I’d spend a few moments today trying to answer some of the questions I’ve been asked recently, by current and prospective students (or parents thereof).

I’ll start by explaining the difference between the various forms of Masters degrees in science that you can get in the United Kingdom, chiefly the distinction between an MSc and one of the variations on the MPhys or MMath we have in the School of Mathematical and Physical Sciences here at the University of Sussex. I have to admit that it’s all very confusing, so here’s my attempt to explain.

The main distinction is that the MSc “Master of Science” is a (taught) postgraduate (PG) degree, usually of one (calendar) year’s duration, whereas the MPhys etc are undergraduate (UG) degrees usually lasting 4 years. This means that students wanting to do an MSc must already have completed a degree programme (and usually have been awarded at least Second Class Honours)  before starting an MSc whereas those doing the MPhys do not.

Undergraduate students wanting to do Physics in the Department of Physics & Astronomy at the University of Sussex, for example, can opt for either the 3-year BSc or the 4-year MPhys programmes. However, choosing the 4-year option does not lead to the award of a BSc degree and then a subsequent Masters qualification;  graduating students get a single qualification usually termed an “integrated Masters”.

It is possible for a student to take a BSc and then do a taught MSc programme afterwards, perhaps at a different university, but there are relatively few MSc programmes for Physics in the UK because the vast majority of students who are interested in postgraduate study will already have registered for 4-year undergraduate programmes. That’s not to say there are none, however. There are notable MSc programmes dotted around, but they tend to be rather specialist; examples related to my own area include Astronomy and Cosmology at Sussex and Astrophysics at Queen Mary. Our own MSc in Frontiers in Quantum Technology is the only such course in the United Kingdom.

To a large extent these courses survive by recruiting students from outside the UK because the market from home students is so small. No department can afford to put on an entire MSc programme for the benefit of just one or two students. Often these stand-alone courses share modules with the final year of the undergraduate Masters, which also helps keep them afloat.

So why does it matter whether one Masters is PG while the other is UG? One difference is that the MSc lasts a calendar year (rather than an academic year). In terms of material covered, this means it contains 180 credits compared to the 120 credits of a single year of an undergraduate programme. Typically the MSc will have 120 credits of taught courses, examined in June as with UG programmes, followed by 60 credits worth of project work over the summer, handed in in September, though at Sussex some of our programmes are split 90 credits of coursework and 90 credits of project work.

The reason why this question comes up so frequently nowadays is that the current generation of applicants to university (and their parents) are facing fees of £9K per annum. The cost of doing a 3-year BSc is then about £27K compared to £36K for an MPhys. When rushing through the legislation to allow universities to charge this amount, the Powers That Be completely forgot about PG programmes, which have accordingly maintained their fees at a relatively low level, despite the fact that these are not controlled by government. For example, the MSc Astronomy at Sussex attracts a fee of about £6K for home students and £17K for overseas students. These levels are roughly consistent with the UG fees paid by  home students on the previous fee regime (approx £3.5K per annum, bearing in mind that you get 1.5 times as much teaching on an MSc compared to a year of an MPhys).

Being intelligent people, prospective physicists look at the extra £9K they have to pay for the 4th year of an MPhys and compare it with the current rate for an entire MSc and come to the conclusion that they should just do a BSc then switch. This seems to be not an unreasonable calculation to make.

However, there are some important things to bear in mind. Firstly, unlike UG programmes, the fee for PG programmes is basically unregulated. Universities can charge whatever they like and can increase them in the future if they decide to. See, for example, the list at Sussex University which shows that MSc fees already vary by more than a factor of four from one school to another. Incidentally, that in itself shows the absurdity of charging the same fee for UG degrees regardless of subject…

Now the point is that if one academic year of UG teaching costs £9K for future students, there is no way any department can justify putting on an entire calendar year of advanced courses (i.e. at least 50% more teaching, at an extremely specialist level) for less than half the income per student. Moreover, undergraduate courses in laboratory-based sciences attract an additional contribution of around £1.4K per year (“the unit of resource”) paid by the government to the university concerned via HEFCE. The logical fee level for MSc programmes is therefore a minimum of about 1.5 times the UG fee, plus the unit of resource scaled up to a full calendar year, which is a whopping £15.6K (similar to the whopping amount already paid by overseas students for these programmes). It’s therefore clear that you cannot take the current MSc fee levels as a guide to what they will be in three years’ time, when you will qualify to enter a taught PG programme. Prices will certainly have risen by then. I doubt if there will be a sudden step-change, but they will rise.
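
For anyone who wants to see where the £15.6K comes from, the arithmetic is just the following (the numbers are the rough ones quoted above, not official figures):

ug_fee = 9.0             # undergraduate fee, in thousands of pounds per academic year
unit_of_resource = 1.4   # HEFCE contribution for lab-based UG science, in thousands per year
calendar_factor = 1.5    # an MSc packs about 1.5 UG-years of teaching into one calendar year

print(calendar_factor * (ug_fee + unit_of_resource))   # 15.6, i.e. about £15.6K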

The picture has changed significantly since the Chancellor of the Exchequer announced in the Autumn Statement last year that loans of up to £10,000 would be made available to students on postgraduate (Masters) courses from 2016/17 onwards.  Welcome though this scheme may be it does not apply to students wanting to start a Masters programme this September (i.e. for Academic Year 2015/16).

I’d say that, contrary to what many people seem to think, if you take into account the full up-front fee and the lack of student loans etc., the cost of a BSc + MSc is already significantly greater than that of an MPhys, and in future the cost of the former route will inevitably increase. I therefore don’t think this is a wise path for most Physics undergraduates to take, assuming that they want their MSc to qualify them for a career in Physics research, either in a university or a commercial organization, perhaps via the PhD degree, and that they’re not so immensely rich that money is no consideration.

The exception to this conclusion is for the student who wishes to switch to another field at Masters level,  to do a specialist MSc in a more applied discipline such as medical physics or engineering. Then it might make sense, as long as you can find a way to deal with the need to pay up-front for such courses.

Now comes the plug for Sussex. Last week the University of Sussex unveiled a huge boost to its flagship Chancellor’s Masters Scholarships, which means that 100 students graduating this summer with a first-class degree from any UK university will be eligible to receive a £10,000 package (non-repayable) to study for a Masters degree at Sussex. There are also specific schemes to support students who are already at Sussex; see here.

I’m drawing this to the attention of readers of this blog primarily to point out that the Department of Physics & Astronomy at the University of Sussex is one of relatively few in the UK to have a significant and well-established programme of Masters (MSc) courses, including courses in Physics, Particle Physics, Cosmology, and Astronomy. In particular, as I mentioned above, we are the only Department in the United Kingdom to have an MSc in Quantum Technology, an area which has just benefitted from a substantial cash investment from the UK government.

Wisely, the University of Sussex has introduced special measures to encourage current integrated Masters students to stay on their degree rather than bailing out into a BSc and taking a Masters. However, this scheme is a great opportunity for high-flying physics graduates from other universities to get a funded place on any of our MSc programmes starting later this year. Indeed, the deal on offer is so good that I would recommend students who are currently in the third year of 4-year MPhys or MSci integrated Masters programmes, perhaps at a dreary university in the Midlands, to consider ditching their current course, switching to a BSc and graduating in June in order to take up this opportunity. The last year of an integrated Masters consists of 120 credits of material for which you will have to pay a further £9K of fees; a standalone Masters at Sussex would involve 180 credits and be essentially free if you get a scholarship.

Think about it, especially if you are interested in specializing in Quantum Technology. Sussex is the only university in the UK where you can take an MSc in this subject! This is a one-off opportunity, since (a) this scheme will be replaced by loans from 2016/17 and (b) the fees will almost certainly have risen by next year for the reasons I outlined above.

In conclusion, though, I have to say that, like many other aspects of Higher Education in the Disunited Kingdom, this system is a mess. I’d prefer to see the unified system of 3 year UG Bachelor degrees, 2-year Masters, and 3-year PhD that pertains throughout most of continental Europe.

P.S. In the interest of full disclosure, I should point out an even worse anomaly. I did a 3-year Honours degree in Natural Science at Cambridge University for which I was awarded not a BSc but a BA (Bachelor of Arts). A year or so later this – miraculously and with no effort on my part – turned into an MA. Work that one out if you can.

Helping Blind Physicists

Posted in Education on February 4, 2015 by telescoper

The Department of Physics & Astronomy at the University of Sussex has been supporting some fantastic research into the accessibility of science education. Daniel Hajas, a blind second year physics undergraduate student has been working with Dr. Kathy Romer, Reader in Astrophysics, on a research project related to innovative assistive technology.

Daniel came up with the idea of an audio-tactile graphics display (TGD) that should allow representation of graphical information in audio and tactile modalities, mostly focusing on figures used in the mathematical sciences such as graphs, geometric shapes etc. The TGD is a device with the approximate dimensions of a tablet that can sit on a table top and can be connected to a PC using either a wired or wireless solution.

During the summer of 2014, Daniel wrote a research proposal, attended a conference on assistive technology and, since the beginning of this academic year, has been searching for partners and funding. Daniel and Kathy recently submitted an application to the Inclusive Technology Prize (ITP).

Since October they have made contact with IT and cognitive science experts from the Sussex IT department and are also in contact with a member of the LHC Sound project team at CERN to assist with sonification. Daniel and Kathy plan to establish collaborations with experts from various fields, and to find research partners and funding. Such interdisciplinary research requires the collaboration of several Sussex departments, if not other universities from across the UK.


Daniel’s 3D Vector Board

Daniel has also been busy inventing the ‘3D vector board’, a small plastic board with two flexible rubber strips, perpendicular to each other, which can be moved around so that they show the axes of a coordinate system. The board has a grid on it with 1×1 cm squares. At the junctions, four little holes are drilled in the corners of the squares, which allows the vectors (metal sticks of different lengths) to be fixed to the board. Since the sticks can be horizontal, diagonal or vertical – i.e. lying in the plane of the board, perpendicular to it, or at an angle to it – 3D vector scenarios can be modelled easily.

Although Daniel intended to use the board solely for his own purposes, feedback suggests this relatively simple tool could be used efficiently in education for demonstration purposes. Both visually impaired and sighted students could benefit from it. Sketches on paper or blackboards only allow 2D representations. The 3D vector board might also work well in illustrating the aims of the TGD project. Although the main goal is to develop a very advanced high-tech assistive device over a period of years, Daniel and Kathy might also come up with a number of low-tech ideas to improve the accessibility of mathematical sciences for visually impaired students.

See Daniel’s project website for further details about his research.
