## A Bit of Green Trivia..

Posted in Film, History, The Universe and Stuff, Uncategorized on March 8, 2014 by telescoper

Following on from yesterday’s post about George Green, I thought I’d add this little bit of Green trivia.

George Green’s sponsor and patron was the mathematician Edward Bromhead, a Baronet and member of the landed gentry of the county of Lincolnshire. Two generations later in the Bromhead family you will find a certain Gonville Bromhead (presumably named after Gonville & Caius College, the Cambridge college that both Edward Bromhead and George Green attended). As a young man, in January 1879 Lt. Gonville Bromhead fought in the Battle of Rorke’s Drift. Almost a century later he was played by Michael Caine in the film Zulu.

Not a lot of people know that.

## Is Inflation Testable?

Posted in The Universe and Stuff on March 4, 2014 by telescoper

It seems the little poll about cosmic inflation I posted last week with humorous intent has ruffled a few feathers, but at least it gives me the excuse to wheel out an updated and edited version of an old piece I wrote on the subject.

Just over thirty years ago a young physicist came up with what seemed at first to be an absurd idea: that, for a brief moment in the very distant past, just after the Big Bang, something weird happened to gravity that made it push rather than pull.  During this time the Universe went through an ultra-short episode of ultra-fast expansion. The physicist in question, Alan Guth, couldn’t prove that this “inflation” had happened nor could he suggest a compelling physical reason why it should, but the idea seemed nevertheless to solve several major problems in cosmology.

Three decades later, Guth is a professor at MIT and inflation is now well established as an essential component of the standard model of cosmology. But should it be? After all, we still don’t know what caused it and there is little direct evidence that it actually took place. Data from probes of the cosmic microwave background seem to be consistent with the idea that inflation happened, but how confident can we be that it is really a part of the Universe’s history?

According to the Big Bang theory, the Universe was born in a dense fireball which has been expanding and cooling for about 14 billion years. The basic elements of this theory have been in place for over eighty years, but it is only in the last decade or so that a detailed model has been constructed which fits most of the available observations with reasonable precision. The problem is that the Big Bang model is seriously incomplete. The fact that we do not understand the nature of the dark matter and dark energy that appear to fill the Universe is a serious shortcoming. Even worse, we have no way at all of describing the very beginning of the Universe, which appears in the equations used by cosmologists as a “singularity” – a point of infinite density that defies any sensible theoretical calculation. We have no way to define a priori the initial conditions that determine the subsequent evolution of the Big Bang, so we have to try to infer from observations, rather than deduce by theory, the parameters that govern it.

The establishment of the new standard model (known in the trade as the “concordance” cosmology) is now allowing astrophysicists to turn back the clock in order to understand the very early stages of the Universe’s history, and hopefully to answer the ultimate question of what happened at the Big Bang itself: how did the Universe begin?

Paradoxically, it is observations on the largest scales accessible to technology that provide the best clues about the earliest stages of cosmic evolution. In effect, the Universe acts like a microscope: primordial structures smaller than atoms are blown up to astronomical scales by the expansion of the Universe. This also allows particle physicists to use cosmological observations to probe structures too small to be resolved in laboratory experiments.

Our ability to reconstruct the history of our Universe, or at least to attempt this feat, depends on the fact that light travels with a finite speed. The further away we see a light source, the further back in time its light was emitted. We can now observe light from stars in distant galaxies emitted when the Universe was less than one-sixth of its current size. In fact we can see even further back than this using microwave radiation rather than optical light. Our Universe is bathed in a faint glow of microwaves produced when it was about one-thousandth of its current size and had a temperature of thousands of degrees, rather than the chilly three degrees above absolute zero that characterizes the present-day Universe. The existence of this cosmic background radiation is one of the key pieces of evidence in favour of the Big Bang model; it was first detected in 1964 by Arno Penzias and Robert Wilson who subsequently won the Nobel Prize for their discovery.
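The statements about “size” above can be made precise with the standard redshift relation (a textbook result, sketched here by way of illustration rather than taken from the original post):

```latex
% Light emitted when the scale factor was a(t_em) is observed today,
% at scale factor a(t_0), with redshift z given by
\[
1 + z \;=\; \frac{a(t_0)}{a(t_{\mathrm{em}})} .
\]
% So "one-sixth of its current size" corresponds to z = 5, and
% "one-thousandth" to z \approx 1000, close to the conventional
% last-scattering redshift of the microwave background, z \approx 1100.
```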

The process by which the standard cosmological model was assembled has been a gradual one, but the latest step was taken by the European Space Agency’s Planck mission. I’ve blogged about the implications of the Planck results for cosmic inflation in more technical detail here. In a nutshell, for several years this satellite mapped the properties of the cosmic microwave background and how it varies across the sky. Small variations in the temperature of the sky result from sound waves excited in the hot plasma of the primordial fireball. These have characteristic properties that allow us to probe the early Universe in much the same way that solar astronomers use observations of the surface of the Sun to understand its inner structure, a technique known as helioseismology. The detection of the primaeval sound waves is one of the triumphs of modern cosmology, not least because their amplitude tells us precisely how loud the Big Bang really was.

The pattern of fluctuations in the cosmic radiation also allows us to probe one of the exciting predictions of Einstein’s general theory of relativity: that space should be curved by the presence of matter or energy. Measurements from Planck and its predecessor WMAP reveal that our Universe is very special: it has very little curvature, and so has a very finely balanced energy budget: the positive energy of the expansion almost exactly cancels the negative energy of gravitational attraction. The Universe is (very nearly) flat.

The observed geometry of the Universe provides a strong piece of evidence that there is a mysterious and overwhelming preponderance of dark stuff in our Universe. We can’t see this dark matter and dark energy directly, but we know it must be there because we know the overall budget is balanced. If only economics were as simple as physics.

Computer Simulation of the Cosmic Web

The concordance cosmology has been constructed not only from observations of the cosmic microwave background, but also using hints supplied by observations of distant supernovae and by the so-called “cosmic web” – the pattern seen in the large-scale distribution of galaxies, which appears to match the properties calculated from computer simulations like the one shown above, courtesy of Volker Springel. The picture that has emerged to account for these disparate clues is of a Universe dominated by a blend of dark energy and dark matter, in which the early stages of cosmic evolution involved an episode of accelerated expansion called inflation.

A quarter of a century ago, our understanding of the state of the Universe was much less precise than today’s concordance cosmology. In those days it was a domain in which theoretical speculation dominated over measurement and observation. Available technology simply wasn’t up to the task of performing large-scale galaxy surveys or detecting slight ripples in the cosmic microwave background. The lack of stringent experimental constraints made cosmology a theorists’ paradise in which many imaginative and esoteric ideas blossomed. Not all of these survived to be included in the concordance model, but inflation proved to be one of the hardiest (and indeed most beautiful) flowers in the cosmological garden.

Although some of the concepts involved had been formulated in the 1970s by Alexei Starobinsky, it was Alan Guth who in 1981 produced the paper in which the inflationary Universe picture first crystallized. At this time cosmologists didn’t know that the Universe was as flat as we now think it to be, but it was still a puzzle to understand why it was even anywhere near flat. There was no particular reason why the Universe should not be extremely curved. After all, the great theoretical breakthrough of Einstein’s general theory of relativity was the realization that space could be curved. Wasn’t it a bit strange that after all the effort needed to establish the connection between energy and curvature, our Universe decided to be flat? Of all the possible initial conditions for the Universe, isn’t this very improbable? As well as being nearly flat, our Universe is also astonishingly smooth. Although it contains galaxies that cluster into immense chains over a hundred million light years long, on scales of billions of light years it is almost featureless. This also seems surprising. Why is the celestial tablecloth so immaculately ironed?

Guth grappled with these questions and realized that they could be resolved rather elegantly if only the force of gravity could be persuaded to change its sign for a very short time just after the Big Bang. If gravity could push rather than pull, then the expansion of the Universe could speed up rather than slow down. Then the Universe could inflate by an enormous factor (10^60 or more) in next to no time and, even if it were initially curved and wrinkled, all memory of this messy starting configuration would be lost. Our present-day Universe would be very flat and very smooth no matter how it had started out.
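To see why rapid expansion irons out curvature, here is the standard back-of-envelope argument (my sketch, not part of the original post). In the Friedmann equation the departure from flatness is controlled by a curvature term that is diluted as the square of the expansion factor:

```latex
% The curvature parameter during expansion:
\[
|\Omega - 1| \;=\; \frac{|k|}{a^2 H^2} ,
\]
% With the Hubble rate H roughly constant during inflation, stretching
% the scale factor a by a factor f suppresses |Omega - 1| by f^2;
% an expansion factor of 10^{60} would dilute any plausible initial
% curvature by a factor of 10^{120}.
```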

But how could this bizarre period of anti-gravity be realized? Guth hit upon a simple physical mechanism by which inflation might just work in practice. It relied on the fact that in the extreme conditions pertaining just after the Big Bang, matter does not behave according to the classical laws describing gases and liquids but instead must be described by quantum field theory. The simplest type of quantum field is called a scalar field; such objects are associated with particles that have no spin. Modern particle theory involves many scalar fields which are not observed in low-energy interactions, but which may well dominate affairs at the extreme energies of the primordial fireball.

Classical fluids can undergo what is called a phase transition if they are heated or cooled. Water, for example, exists in the form of steam at high temperature but it condenses into a liquid as it cools. A similar thing happens with scalar fields: their configuration is expected to change as the Universe expands and cools. Phase transitions do not happen instantaneously, however, and sometimes the substance involved gets trapped in an uncomfortable state in between where it was and where it wants to be. Guth realized that if a scalar field got stuck in such a “false” state, energy – in a form known as vacuum energy – could become available to drive the Universe into accelerated expansion. We don’t know which scalar field of the many that may exist theoretically is responsible for generating inflation, but whatever it is, it is now dubbed the inflaton.

This mechanism is an echo of a much earlier idea introduced to the world of cosmology by Albert Einstein in 1917. He didn’t use the term vacuum energy; he called it a cosmological constant. He also didn’t imagine that it arose from quantum fields but considered it to be a modification of the law of gravity. Nevertheless, Einstein’s cosmological constant idea was incorporated by Willem de Sitter into a theoretical model of an accelerating Universe. This is essentially the same mathematics that is used in modern inflationary cosmology. The connection between scalar fields and the cosmological constant may also eventually explain why our Universe seems to be accelerating now, but that would require a scalar field with a much lower effective energy scale than that required to drive inflation. Perhaps dark energy is some kind of shadow of the inflaton.

Guth wasn’t the sole creator of inflation. Andy Albrecht and Paul Steinhardt, Andrei Linde, Alexei Starobinsky, and many others produced different and, in some cases, more compelling variations on the basic theme. It was almost as if it was an idea whose time had come. Suddenly inflation was an indispensable part of cosmological theory. Literally hundreds of versions of it appeared in the leading scientific journals: old inflation, new inflation, chaotic inflation, extended inflation, and so on. Out of this activity came the realization that a phase transition as such wasn’t really necessary: all that mattered was that the field should find itself in a configuration where the vacuum energy dominated. It was also realized that other theories not involving scalar fields could behave as if they did. Modified gravity theories or theories with extra space-time dimensions provide ways of mimicking scalar fields with rather different physics. And if inflation could work with one scalar field, why not have inflation with two or more? The only problem was that there wasn’t a shred of evidence that inflation had actually happened.

This episode provides a fascinating glimpse into the historical and sociological development of cosmology in the eighties and nineties. Inflation is undoubtedly a beautiful idea. But the problems it solves were theoretical problems, not observational ones. For example, the apparent fine-tuning of the flatness of the Universe can be traced back to the absence of a theory of initial conditions for the Universe. Inflation turns an initially curved universe into a flat one, but the fact that the Universe appears to be flat doesn’t prove that inflation happened. There are initial conditions that lead to present-day flatness even without the intervention of an inflationary epoch. One might argue that these are special and therefore “improbable”, and consequently that it is more probable that inflation happened than that it didn’t. But on the other hand, without a proper theory of the initial conditions, how can we say which are more probable? Based on this kind of argument alone, we would probably never really know whether we live in an inflationary Universe or not.

But there is another thread in the story of inflation that makes it much more compelling as a scientific theory because it makes direct contact with observations. Although it was not the original motivation for the idea, Guth and others realized very early on that if a scalar field were responsible for inflation then it should be governed by the usual rules governing quantum fields. One of the things that quantum physics tells us is that nothing evolves entirely smoothly. Heisenberg’s famous Uncertainty Principle imposes a degree of unpredictability on the behaviour of the inflaton. The most important ramification of this is that although inflation smooths away any primordial wrinkles in the fabric of space-time, in the process it lays down others of its own. The inflationary wrinkles are really ripples, and are caused by wave-like fluctuations in the density of matter travelling through the Universe like sound waves travelling through air. Without these fluctuations the cosmos would be smooth and featureless, containing no variations in density or pressure and therefore no sound waves. Even if it began in a fireball, such a Universe would be silent. Inflation puts the Bang in Big Bang.

The acoustic oscillations generated by inflation have a broad spectrum (they comprise oscillations with a wide range of wavelengths), they are of small amplitude (about one hundred-thousandth of the background); they are spatially random and have Gaussian statistics (like waves on the surface of the sea; this is the most disordered state); they are adiabatic (matter and radiation fluctuate together) and they are formed coherently. This last point is perhaps the most important. Because inflation happens so rapidly all of the acoustic “modes” are excited at the same time. Hitting a metal pipe with a hammer generates a wide range of sound frequencies, but all the different modes of the pipe start their oscillations at the same time. The result is not just random noise but something moderately tuneful. The Big Bang wasn’t exactly melodic, but there is a discernible relic of the coherent nature of the sound waves in the pattern of cosmic microwave temperature fluctuations seen in the Cosmic Microwave Background. The acoustic peaks seen in the Planck angular spectrum provide compelling evidence that whatever generated the pattern did so coherently.
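The coherence argument can be illustrated with a toy calculation (entirely my own sketch, not the real CMB computation): if every mode of wavenumber k starts oscillating at the same moment, the squared amplitude at some fixed later time traces out cos²(ks) and shows sharp peaks; if each mode instead has an independent random phase, averaging washes the peaks out completely.

```python
import numpy as np

# Toy illustration: squared amplitude of acoustic modes at a fixed time,
# if every mode started oscillating in phase at t = 0.
k = np.linspace(0.1, 10.0, 500)   # wavenumbers, arbitrary units
s = 1.0                           # "sound horizon" (hypothetical value)
coherent = np.cos(k * s) ** 2     # shows distinct acoustic peaks and troughs

# If instead each mode starts with an independent random phase, averaging
# over many realizations flattens the spectrum to a featureless ~0.5.
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, size=(2000, k.size))
incoherent = np.mean(np.cos(k * s + phases) ** 2, axis=0)
```

The coherent case retains the full peak-to-trough structure; the incoherent average carries essentially no structure at all, which is why the observed acoustic peaks argue for coherently excited modes.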

There are very few alternative theories on the table that are capable of reproducing these results, but does this mean that inflation really happened? Do they “prove” inflation is correct? More generally, is the idea of inflation even testable?

It is difficult to talk sensibly about scientific proof of phenomena that are so far removed from everyday experience. At what level can we prove anything in astronomy, even on the relatively small scale of the Solar System? We all accept that the Earth goes around the Sun, but do we really even know for sure that the Universe is expanding? I would say that the latter hypothesis has survived so many tests and is consistent with so many other aspects of cosmology that it has become, for pragmatic reasons, an indispensable part of our world view. I would hesitate, though, to say that it was proven beyond all reasonable doubt. The same goes for inflation. It is a beautiful idea that fits snugly within the standard cosmological model and binds many parts of it together. But that doesn’t necessarily make it true. Many theories are beautiful, but that is not sufficient to prove them right.

When generating theoretical ideas scientists should be fearlessly radical, but when it comes to interpreting evidence we should all be unflinchingly conservative. The Planck measurements have also provided a tantalizing glimpse into the future of cosmology, and yet more stringent tests of the standard framework that currently underpins it. Primordial fluctuations produce not only a pattern of temperature variations over the sky, but also a corresponding pattern of polarization. This is fiendishly difficult to measure, partly because it is such a weak signal (only a few percent of the temperature signal) and partly because the primordial microwaves are heavily polluted by polarized radiation from our own Galaxy. Polarization data from Planck are yet to be released; the fiendish data analysis challenge involved is the reason for the delay. But there is a crucial target that justifies these endeavours. Inflation does not just produce acoustic waves; it also generates different modes of fluctuation, called gravitational waves, that involve twisting deformations of space-time. Inflationary models connect the properties of acoustic and gravitational fluctuations, so if the latter can be detected the implications for the theory are profound. Gravitational waves produce a very particular form of polarization pattern (called the B-mode) which can’t be generated by acoustic waves, so this seems a promising way to test inflation. Unfortunately the B-mode signal is expected to be very weak and the experience of WMAP suggests it might be swamped by foregrounds. But it is definitely worth a go, because it would add considerably to the evidence in favour of inflation as an element of physical reality.

But would even detection of primordial gravitational waves really test inflation? Not really. The problem with inflation is that it is a name given to a very general idea, and there are many (perhaps infinitely many) different ways of implementing the details, so one can devise versions of the inflationary scenario that produce a wide range of outcomes. It is therefore unlikely that there will be a magic bullet that will kill inflation dead. What is more likely is a gradual process of reducing the theoretical slack as much as possible with observational data, such as is happening in particle physics. For example, we have not yet identified the inflaton field (nor indeed any reasonable candidate for it) but we are gradually improving constraints on the allowed parameter space. Progress in this mode of science is evolutionary not revolutionary.

Many critics of inflation argue that it is not a scientific theory because it is not falsifiable. I don’t think falsifiability is a useful concept in this context; see my many posts relating to Karl Popper. Testability is a more appropriate criterion. What matters is that we have a systematic way of deciding which of a set of competing models is the best when it comes to confrontation with data. In the case of inflation we simply don’t have a compelling model to test it against. For the time being therefore, like it or not, cosmic inflation is clearly the best model we have. Maybe someday a worthy challenger will enter the arena, but this has not happened yet.

Most working cosmologists are as aware of the difficulty of testing inflation as they are of its elegance. There are also those who talk as if inflation were an absolute truth, and those who assert that it is not a proper scientific theory (because it isn’t falsifiable). I can’t agree with either of these factions. The truth is that we don’t know how the Universe really began; we just work on the best ideas available and try to reduce our level of ignorance in any way we can. We can hardly expect the secrets of the Universe to be so easily accessible to our little monkey brains.

## Lincoln – Green Shoots for Maths and Physics?

Posted in Education on March 3, 2014 by telescoper

I noticed over the weekend that there’s a job being advertised at the University of Lincoln, designated Founding Head of the School of Mathematics and Physics. It seems the powers that be at Lincoln (which is in the Midlands) have decided to set up an entirely new activity in Mathematics and Physics. I’m pointing this out not because of any personal connection with the position, but because it’s refreshing to see a new(ish) Higher Education Institution apparently willing to take the plunge and invest in a new venture, particularly one that includes Physics. It wasn’t so long ago that UK Physics departments were being closed down – the University of Reading being a prominent example, in 2006. I think Reading is, in fact, thinking of starting up Physics again. Perhaps these are the green shoots that presage a new spring for Physics in this country? I do hope so.

It won’t be an easy task to start up a new department from scratch in Lincoln: grant funding is tight and the competition for students among established institutions is already so intense that it will be very difficult for a brand new outfit to break through. Nevertheless, I think it’s a praiseworthy initiative and I wish it well.

## Sussex University – the Place for Undergraduate Physics Research!

Posted in Education, The Universe and Stuff on February 27, 2014 by telescoper

One of the courses we offer in the School of Physics & Astronomy here at the University of Sussex is the integrated Masters in Physics with a Research Placement. Aimed at high-flying students with ambitions to become research physicists, this programme includes a paid research placement as a Junior Research Associate each summer vacation for the duration of the course; that means between Years 1 & 2, Years 2 & 3 and Years 3 & 4. This course has proved extremely attractive to a large number of very talented students and it exemplifies the way the Department of Physics & Astronomy integrates world-class research with its teaching in a uniquely successful and imaginative way.

Here’s a little video made by the University that features Sophie Williamson, who is currently in her second year (and who is also in the class to which I’m currently teaching a module on Theoretical Physics):

This week we had some very good news about another of our undergraduate researchers, Talitha Bromwich, who is now in the final year of her MPhys degree, and is pictured below with her supervisor Dr Simon Peeters:

Talitha spent last summer working on the DEAP3600 dark-matter detector after being selected for the University’s Junior Research Associate scheme. Her project won first prize at the University’s JRA poster exhibition last October, and she was then chosen to present her findings – alongside undergraduate researchers from 22 other universities – in Westminster yesterday as part of the annual Posters in Parliament exhibition, organized under the auspices of the British Conference of Undergraduate Research (BCUR).

A judging panel – consisting of Ben Wallace, Conservative MP for Wyre and Preston North; Sean Coughlan, Education Correspondent for the BBC; Professor Julio Rivera, President of the US Council on Undergraduate Research; and Katherine Harrington of the Higher Education Academy – decided to award Talitha’s project First Prize in this extremely prestigious competition.

Congratulations to Talitha for her prizewinning project! I’m sure her outstanding success will inspire future generations of Sussex undergraduates too!

## Galaxies, Glow-worms and Chicken Eyes

Posted in Bad Statistics, The Universe and Stuff on February 26, 2014 by telescoper

I just came across a news item based on a research article in Physical Review E by Jiao et al. with the abstract:

Optimal spatial sampling of light rigorously requires that identical photoreceptors be arranged in perfectly regular arrays in two dimensions. Examples of such perfect arrays in nature include the compound eyes of insects and the nearly crystalline photoreceptor patterns of some fish and reptiles. Birds are highly visual animals with five different cone photoreceptor subtypes, yet their photoreceptor patterns are not perfectly regular. By analyzing the chicken cone photoreceptor system consisting of five different cell types using a variety of sensitive microstructural descriptors, we find that the disordered photoreceptor patterns are “hyperuniform” (exhibiting vanishing infinite-wavelength density fluctuations), a property that had heretofore been identified in a unique subset of physical systems, but had never been observed in any living organism. Remarkably, the patterns of both the total population and the individual cell types are simultaneously hyperuniform. We term such patterns “multihyperuniform” because multiple distinct subsets of the overall point pattern are themselves hyperuniform. We have devised a unique multiscale cell packing model in two dimensions that suggests that photoreceptor types interact with both short- and long-ranged repulsive forces and that the resultant competition between the types gives rise to the aforementioned singular spatial features characterizing the system, including multihyperuniformity. These findings suggest that a disordered hyperuniform pattern may represent the most uniform sampling arrangement attainable in the avian system, given intrinsic packing constraints within the photoreceptor epithelium. In addition, they show how fundamental physical constraints can change the course of a biological optimization process. 
Our results suggest that multihyperuniform disordered structures have implications for the design of materials with novel physical properties and therefore may represent a fruitful area for future research.

The point made in the paper is that the photoreceptors found in the eyes of chickens possess a property called disordered hyperuniformity, which means that they appear disordered on small scales but exhibit order over large distances. Here’s an illustration:

It’s an interesting paper, but I’d like to quibble about something it says in the accompanying news story. The caption with the above diagram states

Left: visual cell distribution in chickens; right: a computer-simulation model showing pretty much the exact same thing. The colored dots represent the centers of the chicken’s eye cells.

Well, as someone who has spent much of his research career trying to discern and quantify patterns in collections of points – in my case they tend to be galaxies rather than photoreceptors – I find it difficult to defend the use of the phrase “pretty much the exact same thing”. It’s notoriously difficult to look at realizations of stochastic point processes and decide whether they are statistically similar or not. For that you generally need quite sophisticated mathematical analysis. In fact, to my eye, the two images above don’t look at all like “pretty much the exact same thing”. I’m not at all sure that the model works as well as it is claimed, as the statistical analysis presented in the paper is relatively simple: I’d need to see some more quantitative measures of pattern morphology and clustering, especially higher-order correlation functions, before I’m convinced.
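One simple quantitative measure, in the spirit of what the paper itself uses, is the variance of counts-in-cells. The following is a minimal sketch (my own, not the authors’ code) comparing a purely random pattern with a jittered lattice, a standard textbook example of a hyperuniform pattern: for the random pattern the count variance in a window is comparable to the mean count, while for the hyperuniform one it is dramatically smaller, growing with the window’s perimeter rather than its area.

```python
import numpy as np

rng = np.random.default_rng(42)
n_side = 50
n = n_side ** 2  # 2500 points in the unit square for each pattern

# Pattern 1: "completely random" (Poisson-like) points.
poisson = rng.random((n, 2))

# Pattern 2: a jittered lattice -- one point per grid cell, displaced at
# random within its cell.  Such patterns are hyperuniform.
cells = np.indices((n_side, n_side)).reshape(2, -1).T
jittered = (cells + rng.random((n, 2))) / n_side

def count_variance(points, half_width, trials=3000):
    """Variance of the number of points falling in random square windows."""
    centres = half_width + rng.random((trials, 2)) * (1 - 2 * half_width)
    counts = [np.sum(np.all(np.abs(points - c) < half_width, axis=1))
              for c in centres]
    return np.var(counts)

hw = 0.1  # windows of side 0.2; expected count = 2500 * 0.2**2 = 100
v_poisson = count_variance(poisson, hw)
v_jittered = count_variance(jittered, hw)
# Random pattern: variance comparable to the mean count (~100).
# Hyperuniform pattern: variance far smaller, set by the window edges.
```

This is of course only a second-order statistic; as the post says, distinguishing patterns convincingly would also need higher-order measures.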

Anyway, all this reminded me of a very old post of mine about the difficulty of discerning patterns in distributions of points. Take the two (not very well scanned) images here as examples:

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process (which is, in a well-defined sense, completely “random”) and the other contains spatial correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I sometimes show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the one on the right is the one that is random and the left one is the one with structure to it. It is not hard to see why. The right-hand pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the left one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the left picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The right process is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms (a kind of beetle) which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern. In fact, the tendency displayed in this image of the points to spread themselves out more smoothly than a random distribution is in some ways reminiscent of the chicken eye problem.

The moral of all this is that people are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this. The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose. By the same token, people are also pretty hopeless at figuring out whether two distributions of points resemble each other in some kind of statistical sense, because that can only be made precise if one defines some specific quantitative measure of clustering pattern, which is not easy to do.
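For readers who want to play with this themselves, here is a minimal sketch (my own reconstruction, not the code used to make the original figures) of the two kinds of process: a purely random pattern, and a hard-core “glow-worm” rule in which no proposed point is accepted within some minimum distance of an existing one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# "Completely random" pattern: a Poisson-like process in the unit square.
poisson = rng.random((n, 2))

# Inhibited pattern: propose random points, rejecting any proposal that
# falls within r_min of an already-accepted point (a simple stand-in for
# the glow-worms' zone of avoidance).
r_min = 0.03
accepted = []
while len(accepted) < n:
    p = rng.random(2)
    if all(np.linalg.norm(p - q) >= r_min for q in accepted):
        accepted.append(p)
inhibited = np.array(accepted)

def nearest_neighbour_distances(pts):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each point's distance to itself
    return d.min(axis=1)

# The inhibited pattern is "smoother": no nearest-neighbour distance falls
# below r_min, whereas the random pattern clumps freely.
```

The nearest-neighbour distances are one crude way to make the visual difference quantitative: the random pattern happily produces close pairs, while the inhibited one, by construction, never does.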

## From Real Time to Imaginary Time

Posted in Brighton, Education, The Universe and Stuff with tags , , , , , , , , , , , on February 24, 2014 by telescoper

Yesterday, after yet another Sunday afternoon in my office on the University of Sussex campus, I once again encountered the baffling nature of the "real-time boards" at the bus-stop at Falmer Station (just over the road from the University). These boards are meant to show the expected arrival times of buses; an example can be seen on the left of the picture below, taken at Churchill Square (in the City Centre).

The real-time board system works pretty well in central Brighton, but it’s a very different story at Falmer, especially for the Number 23 which is my preferred bus home. Yesterday provided a typical illustration of the problem: the time of the first bus on the list, a No. 23, was shown as “1 min” when I arrived at the stop. It then quickly moved to “due” (a word which I’ll comment about later). It then moved back to “2 mins” for about 5 minutes and then back to “due” again. It stayed like that for over 10 minutes at which point the bus that was second on the list (a No. 28 from Lewes) appeared. Rather than risk waiting any longer for the 23 I got on the 28 and had a slightly longer walk home from the stop at the other end. Just as well I did because the 23 vanished entirely from the screen as soon as I boarded the other bus. This apparent time-travel isn’t unusual at Falmer, although I’ve never really understood why.

By sheer coincidence, when I got to the bus stop to catch a bus to campus this morning there was a chap from Brighton and Hove buses there. He was explaining what sometimes goes wrong with the real-time boards to a lady, so I joined in the conversation and asked him if he knew why Falmer is so unreliable. He was happy to oblige. It turns out that the real-time boards depend on each bus having a GPS system that communicates with a central computer via a radio link. If the radio link drops out for some reason – as it apparently does quite often up at Falmer (mobile phone connectivity is poor there too) – the system falls back on the expected time of the bus after the one it has lost contact with. Thus it is that a bus can be "due" and then apparently go back in time. Also, if a bus has to divert from the route programmed into the GPS tracker, it is removed from the real-time boards.

However, there is another system in operation alongside the GPS tracker. When a bus actually stops at a stop and opens its doors the onboard computer communicates this to the central system at the same time as the location signs inside the bus are updated. At this point the real-time boards are reset.
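To see why the fallback produces apparent time-travel, here is a toy model of the display logic described above. This is purely my own reconstruction for illustration, not Brighton & Hove's actual software; the function and parameter names are invented.

```python
def board_eta(now, tracked_etas, timetable):
    """Toy display logic. tracked_etas: ETAs (in minutes from now) for buses
    with a live GPS link; timetable: scheduled arrival times (minutes) for
    the route, used only when no live signal is available."""
    candidates = list(tracked_etas)
    if not candidates:  # radio link lost: fall back to the next scheduled bus
        candidates = [t - now for t in timetable if t >= now]
    eta = min(candidates)
    return "due" if eta <= 0.5 else f"{round(eta)} mins"

# A bus tracked as about to arrive shows "due"; if its link then drops,
# the board silently substitutes the *next* scheduled bus, so the display
# jumps backwards from "due" to several minutes.
print(board_eta(0, [0.2], [7, 22]))  # live signal: "due"
print(board_eta(0, [], [7, 22]))     # link lost: "7 mins"
```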

The unreliability I’ve observed at Falmer is in fact caused by two problems: (i) the patchy radio coverage as the bus wanders around the hilly environs of Falmer campus; and (ii) the No. 23 is on a new route around the back of campus, which means it vanishes from the system entirely when it leaves the old route programmed into the tracker, just as it would if it had broken down.

Mystery solved then, in a sense, but it means there’s a systematic problem that isn’t going to be fixed in the short term. Would it be better to switch off the boards than have them show inaccurate information? Perhaps, but only if the information were always wrong. In fact the boards seem to work OK for the more frequent bus, the No. 25. My strategy is therefore never to rely on the information provided concerning the No. 23 and just get the first bus that comes. It’s not a problem anyway during the week because there’s a bus every few minutes, but on a Sunday evening it is quite irksome to see apparently random times on the screens.

All this talk about real-time boards reminds me of a question I was asked in a lecture last week. I was starting a new section of my Theoretical Physics module for 2nd Year students on Complex Analysis: the Cauchy-Riemann equations, Conformal Transformations, Contour Integrals and all that Jazz. To start the section I went on a bit of a ramble about the ubiquity of complex numbers in physics and whether this means that imaginary numbers are, in some sense, real. You can find an enjoyable polemic on this subject, which gives the answer "no", here.

Anyway, I got the class to suggest examples of the use of complex numbers in physics. The things you’d expect came up, such as circuit theory, wave propagation etc. Then somebody mentioned that somewhere they had heard of imaginary time. The context had probably been provided by Stephen Hawking, who mentioned this in his book A Brief History of Time. In fact the trick of introducing imaginary time is called a Wick Rotation and the basic idea is simple. In special relativity we deal with four-dimensional space-time intervals of the form

$ds^2 = -c^2dt^2 + dx^2 + dy^2 +dz^2$,

i.e. the metric describing Minkowski space. The minus sign in front of the time bit is essential to the causal structure of space-time, but it causes quite a few mathematical difficulties. However, if we make the substitution

$\tau = i c t$ (so that $d\tau^2 = -c^2 dt^2$)

then the metric becomes

$ds^2 = d\tau^2 + dx^2 + dy^2 +dz^2$,

which corresponds to a four-dimensional Euclidean space, which is in many situations much easier to handle mathematically.
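The substitution is easy to sanity-check numerically, since $d\tau^2 = (ic\,dt)^2 = -c^2 dt^2$. Here is a two-line check using Python's complex arithmetic (my own illustration, with arbitrary values and $c$ set to 1):

```python
# Wick rotation check: the Euclidean interval built from tau = i*c*t
# equals the Minkowski interval built from t.
c = 1.0
dt, dx, dy, dz = 0.3, 0.1, 0.2, 0.4

ds2_minkowski = -(c * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

dtau = 1j * c * dt  # the substitution tau = i c t
ds2_euclidean = dtau ** 2 + dx ** 2 + dy ** 2 + dz ** 2  # complex type, real value

assert abs(ds2_euclidean - ds2_minkowski) < 1e-12
```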

Complex variables and complex functions provide the theoretical physicist with a host of extremely elegant techniques for solving tricky problems. But does that mean they are somehow “built in” to nature? I don’t think so. I don’t think the Brighton & Hove Bus company uses imaginary time on its display boards either, although it does sometimes seem that way.

POSTSCRIPT. I forgot to include my planned rant about the use of the word "due". The boards displaying train times at railway stations usually give the destination and planned departure time of the train, e.g. "Brighton 11.15". If things are running to schedule, this information is supplemented by the phrase "On Time". If not, which is sadly the more likely contingency in the UK, this changes to "due 11.37" or some such. This really annoys me: the train is due at 11.15. If it doesn't come until after then, it's overdue or, in other words, late.

## The most beautiful equation?

Posted in The Universe and Stuff with tags , , , , on February 13, 2014 by telescoper

There’s an interesting article on the BBC website today that discusses the way mathematicians’ brains appear to perceive “beauty”. A (slightly) more technical version of the story can be found here. According to functional magnetic resonance imaging studies, it seems that beautiful equations excite the same sort of brain activity as beautiful music or art.

The question of why we think equations are beautiful is one that has come up a number of times on this blog. I suspect the answer is a slightly different one for theoretical physicists compared with pure mathematicians. Anyway, I thought it might be fun to invite people to offer suggestions through the comments box as to the most beautiful equation, along with a brief description of why.

I should set the ball rolling myself, and I will do so with this, the Dirac Equation (here in covariant form, in natural units):

$(i\gamma^\mu \partial_\mu - m)\psi = 0$

This equation is certainly the most beautiful thing I’ve ever come across in theoretical physics, though I don’t find it easy to articulate precisely why. I think it’s partly because it is such a wonderfully compact fusion of special relativity with quantum mechanics but also partly because of the great leaps of the imagination that were needed along the journey to derive it and consequent admiration for the intellectual struggle involved. I feel it is therefore as much an emotional response to the achievement of another human being – such as one feels when hearing great music or looking at great art – as it is a rational response to the mathematical structure involved.

Anyway, feel free to suggest formulae or equations through the comments box, preferably with a brief explanation of why you think they’re so beautiful.

## Big Trouble with Big G

Posted in The Universe and Stuff with tags , , on February 4, 2014 by telescoper

An anonymous email correspondent this morning drew my attention to an interesting article in the latest Physics World about the trials and tribulations of groups of physicists trying to measure Newton's Gravitational Constant, G. This is probably the first physical constant that most of us encounter when we're learning the subject, so it might seem strange that it's the one known to the lowest accuracy. That's not for want of trying to make the measurements more precise; it's just that gravity is such a weak force that it's very difficult to eliminate systematic effects down to the necessary level.

Just how difficult it is to measure Big G is demonstrated by the following graphic which shows the latest measurements:

Here’s the caption, so you can identify the various groups responsible for the various measurements:

Disagreeing over “big G” This chart shows wildly differing values of the gravitational constant, G, as measured by various high-profile research groups (blue). The values do not agree even within their error bars. Also shown are two values of G adopted by the Committee on Data for Science and Technology (CODATA) as international standards (red). The groups are based at the National Institute of Standards and Technology (NIST), the University of Washington (UWASH), the International Bureau of Weights and Measures (BIPM), the Measurement Standards Laboratory of New Zealand (MSL), the University of Zurich (UZURICH), the Huazhong University of Science and Technology (HUST) and the Joint Institute for Astrophysics (JILA).

Clearly there’s quite a lot of disagreement between recent results, with some a long way outside each other’s error bars. They can’t all be right, but who’s most likely to be wrong? Answers on a postcard.

I’m by no means an expert on experimental gravity so I won’t attempt to suggest who is right and who is wrong. What I will say is that although this kind of research is clearly extremely important, it is also fiendishly difficult. I’m not really surprised that the pieces of the puzzle haven’t fallen into place yet. The dedicated teams who have been tackling this problem for many decades deserve the deep admiration as well as the continued support of the physics community. Theoretical physics is generally perceived to be more glamorous and exciting than its experimental counterpart, but the subject as a whole is nothing without its empirical foundations. That said, I’m glad it’s not my job to measure Big G. I have neither the practical skill nor the patience to cope with so many frustrations!

## Methods of Images

Posted in Biographical, Cute Problems, Education with tags , , , , on January 29, 2014 by telescoper

I’ve had a very busy day today including giving a lecture on Electrostatics and the Method of Images and, in an unrelated lunch-hour activity, filing my tax return (and paying the requisite bill). The latter was the most emotionally draining.

With no time for a proper post, I thought I’d give some examples of the images produced by yesterday’s graduands, including some who used a particular approach called the Method of Selfies. Unfortunately some of these are spoiled by having a strange bearded person in the background.

But first you might like to try the following example using the actual Method of Images:

Given two parallel, grounded, infinite conducting planes a distance a apart, we place a charge +q between the plates, a distance x from one of them. What is the force on the charge?

This is, in fact, from Griffiths, David J. (2007) Introduction to Electrodynamics, 3rd Edition; Prentice Hall – Problem 3.35.
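If you want to check your answer numerically, here is a sketch (my own code, not from Griffiths) of the standard image-charge construction for this geometry: the two grounded planes generate an infinite ladder of images, with +q at $z = 2na + x$ and -q at $z = 2na - x$ for every integer $n$ (the $n = 0$ member of the +q family being the real charge itself). Truncating the sum gives the force.

```python
from math import pi

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def force_between_planes(q, a, x, nmax=2000):
    """Net z-force on charge q at distance x from the grounded plane at z = 0
    (the other grounded plane is at z = a). Positive means toward z = a."""
    k = q * q / (4 * pi * EPS0)
    F = 0.0
    for n in range(-nmax, nmax + 1):
        if n != 0:  # +q images at z = 2na + x (n = 0 is the real charge)
            d = x - (2 * n * a + x)  # = -2na
            F += k * (1 if d > 0 else -1) / d ** 2
        # -q images at z = 2na - x (attractive)
        d = x - (2 * n * a - x)
        F -= k * (1 if d > 0 else -1) / d ** 2
    return F

# The charge is pulled toward the nearer plate, and by symmetry the net
# force vanishes midway between the plates.
```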

And now here are some of the official pictures from yesterday

## How to Address Gender Inequality in Physics

Posted in Education, The Universe and Stuff with tags , , , , on January 26, 2014 by telescoper

Last night I was drinking a glass or several of wine while listening to the radio and thinking about a brainwave I’d had on Friday. Naturally I decided to wait until I reconsidered it in the cold light and sobriety of day before posting it, which I have now done, so here it is.

The idea that came to me simply joins two threads of discussion that have appeared on this blog before. The first is that, despite strenuous efforts by many parties, the fraction of female students taking A-level Physics has flat-lined at 20% for over a decade. This is the reason why the proportion of female physics students at university is the same, i.e. 20%. In short, the problem lies within our school system.

The second line of argument is that A-level Physics is not a useful preparation for a Physics degree because it does not develop the sort of problem-solving skills or the ability to express physical concepts in mathematical language on which university physics depends. Most physics admissions tutors that I know care much more about the performance of students at A-level Mathematics than Physics.

Hitherto, most of the effort that has been expended on the first problem has been directed at persuading more girls to do Physics A-level. Since all universities require a Physics A-level for entry into a degree programme, this makes sense but it has not been successful.

I now believe that the only practical way to improve the gender balance on university physics courses is to drop entirely the requirement that applicants have A-level Physics and insist only on Mathematics (which has a much more even gender mix). I do not believe that this would require many changes to course content, but I do believe it would circumvent the barriers that our current school system places in the way of aspiring female physicists.

Not all UK universities seem very interested in widening participation, but those that are should seriously consider this approach.