Archive for Cosmology

Dark Energy – Lectures by Varun Sahni

Posted in The Universe and Stuff on June 9, 2019 by telescoper

I thought I’d share this lecture course about Dark Energy here. It was delivered by Varun Sahni at an international school on cosmology earlier this year. The material is quite technical in places, but I’m sure these lectures will prove a very helpful introduction for new PhD students in this area, for example. Varun has been a very good friend and colleague of mine for many years, and he is an excellent lecturer!

Here are the three lectures:


The 2019 Gruber Prize for Cosmology: Nick Kaiser and Joe Silk

Posted in The Universe and Stuff on May 9, 2019 by telescoper

I’ve just heard that the Gruber Foundation has announced the winners of this year’s Gruber Prize for cosmology, namely Nick Kaiser and Joe Silk. Worthy winners the both of them! Congratulations!

Here’s some text taken from the press release:

The recipients of the 2019 prize are Nicholas Kaiser and Joseph Silk, both of whom have made seminal contributions to the theory of cosmological structure formation and to the creation of new probes of dark matter. Though they have worked mostly independently of each other, the two theorists’ results are complementary in these major areas, and have transformed modern cosmology — not once but twice.

The two recipients will share the $500,000 award, and each will be presented with a gold medal at a ceremony that will take place on 28 June at the CosmoGold conference at the Institut d’Astrophysique de Paris in France.

The physicists’ independent contributions to the theory of cosmological structure formation have been instrumental in building a more complete picture of how the early Universe evolved into the Universe as astronomers observe it today. In 1967 and 1968, Silk predicted that density fluctuations below a critical size in the Cosmic Microwave Background, the remnant radiation “echoing” the Big Bang, would have dissipated. This phenomenon, later verified by increasingly high precision measurements of the CMB, is now called “Silk Damping”.

In the meantime, ongoing observations of the large-scale structure of the Universe, which evolved from the larger CMB fluctuations, were subject to conflicting interpretations. In a series of papers beginning in 1984, Kaiser helped to resolve these debates by providing statistical tools that would allow astronomers to separate “noise” from data, reducing ambiguity in the observations.

Kaiser’s statistical methodology was also influential in dark matter research; the DEFW collaboration (Marc Davis, George Efstathiou, Carlos Frenk, and Simon D. M. White) utilised it to determine the distribution and velocity of dark matter in the Universe, and discovered its non-relativistic nature (moving at a velocity not approaching the speed of light). Furthermore, Kaiser devised an additional statistical methodology to detect dark matter distribution through weak lensing — an effect by which foreground matter distorts the light of background galaxies, providing a measure of the mass of both. Today weak lensing is among cosmology’s most prevalent tools.

Silk has also been impactful in dark matter research, having proposed in 1984 a method of investigating dark matter particles by exploring the possibilities of their self-annihilations into particles that we can identify (photons, positrons and antiprotons). This strategy continues to drive research worldwide.

Both Kaiser and Silk are currently affiliated with institutions in Paris, Kaiser as a professor at the École Normale Supérieure, and Silk as an emeritus professor and a research scientist at the Institut d’Astrophysique de Paris (in addition to a one-quarter appointment at The Johns Hopkins University). Among their numerous significant contributions to their field, their work on the CMB and dark matter has truly revolutionised our understanding of the Universe.

I haven’t worked directly with either Nick Kaiser or Joe Silk but both had an enormous influence on me, especially early on in my career. When I was doing my PhD, Nick was in Cambridge and Joe was in Berkeley. In fact I think Nick was the first person ever to ask me a question during a conference talk – which terrified the hell out of me because I didn’t know him except by scientific reputation and didn’t realize what a nice guy he is! Anyway his 1984 paper on cluster correlations was the direct motivation for my very first publication (in 1986).

I don’t suppose either will be reading this but heartiest congratulations to both, and if they follow my advice they won’t spend all the money in the same shop!

P.S. Both Nick and Joe are so distinguished that each has appeared in my Astronomy Lookalikes gallery (here and here).

Redshift and Distance in Cosmology

Posted in The Universe and Stuff on April 29, 2019 by telescoper

I was looking for a copy of this picture this morning, and when I found it I thought I’d share it here. It was made by Andy Hamilton and appears in this paper. I used it (with permission) in the textbook I wrote with Francesco Lucchin, which was published in 2003.

I think this is a nice simple illustration of the effect of the density parameter Ω and the cosmological constant Λ on the relationship between redshift and (comoving) distance in the standard cosmological models based on the Friedmann Equations.

On the left there is the old standard model (from when I was a lad) in which space is Euclidean and there is a critical density of matter; this is called the Einstein-de Sitter model, in which Λ=0. On the right you can see something much closer to the current standard model of cosmology, with a lower density of matter but with the addition of a cosmological constant. Notice that in the latter case the distance to an object at a given redshift is far larger than in the former. This is, for example, why supernovae at high redshift look much fainter in the latter model than in the former, and why these measurements are so sensitive to the presence of a cosmological constant.

In the middle there is a model with no cosmological constant but a low density of matter; this is an open Universe. Because it decelerates much more slowly than the Einstein-de Sitter model, the distance out to a given redshift is larger (though not quite as large as in the case on the right, which is an accelerating model), but the main property of interest in the open model is that space is not Euclidean, but curved. The effect of this is that an object of fixed physical size at a given redshift subtends a much smaller angle than in the cases on either side. That shows why observations of the pattern of variations in the temperature of the cosmic microwave background across the sky yield so much information about the spatial geometry.
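If you want to play with this yourself, here is a minimal Python sketch (using numpy and scipy, and assuming H0 = 70 km/s/Mpc with Ωm = 0.3 and ΩΛ = 0.7 for the low-density models; these are my illustrative choices, not necessarily the exact parameters used in the figure) that integrates the Friedmann equation to get the comoving distance as a function of redshift in the three cases:

# Sketch: comoving distance versus redshift for three Friedmann models.
# H0 and the density parameters below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

C = 299792.458   # speed of light in km/s
H0 = 70.0        # Hubble constant in km/s/Mpc (assumed)

def E(z, om, ol):
    """Dimensionless Hubble rate H(z)/H0; curvature is ok = 1 - om - ol."""
    ok = 1.0 - om - ol
    return np.sqrt(om * (1 + z)**3 + ok * (1 + z)**2 + ol)

def comoving_distance(z, om, ol):
    """Line-of-sight comoving distance in Mpc: (c/H0) * integral of dz'/E(z')."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, om, ol), 0.0, z)
    return (C / H0) * integral

models = {
    "Einstein-de Sitter (Om=1.0, OL=0.0)": (1.0, 0.0),
    "Open               (Om=0.3, OL=0.0)": (0.3, 0.0),
    "Lambda             (Om=0.3, OL=0.7)": (0.3, 0.7),
}

for name, (om, ol) in models.items():
    print(f"{name}: D(z=1) = {comoving_distance(1.0, om, ol):.0f} Mpc")

At z = 1 this gives roughly 2500 Mpc for the Einstein-de Sitter model, 2800 Mpc for the open model and 3300 Mpc for the Λ model, reproducing the ordering described above.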

It’s a very instructive picture, I think!

Poisson (d’Avril) Point Processes

Posted in Uncategorized on April 2, 2019 by telescoper

I was very unimpressed by yesterday’s batch of April Fool jokes. Some of them were just too obvious:

I’m glad I didn’t try to do one.

Anyway, I noticed that an old post of mine was getting some traffic and when I investigated I found that some of the links to pictures were dead. So I’ve decided to refresh it and post again.

–0–

I’ve got a thing about randomness. For a start I don’t like the word, because it covers such a multitude of sins. People talk about there being randomness in nature when what they really mean is that they don’t know how to predict outcomes perfectly. That’s not quite the same thing as things being inherently unpredictable; statements about the nature of reality are ontological, whereas I think randomness is only a useful concept in an epistemological sense. It describes our lack of knowledge: just because we don’t know how to predict doesn’t mean that it can’t be predicted.

Nevertheless there are useful mathematical definitions of randomness and it is also (sometimes) useful to make mathematical models that display random behaviour in a well-defined sense, especially in situations where one has to take into account the effects of noise.

I thought it would be fun to illustrate one such model. In a point process, the random element is a “dot” that occurs at some location in time or space. Such processes occur in a wide range of contexts: arrivals of buses at a bus stop, photons in a detector, darts on a dartboard, and so on.

Let us suppose that we think of such a process happening in time, although what follows can straightforwardly be generalised to things happening over an area (such as a dartboard) or within some higher-dimensional region. It is also possible to invest the points with some other attributes; processes like this are sometimes called marked point processes, but I won’t discuss them here.

The “most” random way of constructing a simple point process is to assume that each event happens independently of every other event, and that there is a constant probability per unit time of an event happening. This type of process is called a Poisson process, after the French mathematician Siméon-Denis Poisson, who was born in 1781. He was one of the most creative and original physicists of all time: besides fundamental work on electrostatics and the theory of magnetism for which he is famous, he also built greatly upon Laplace’s work in probability theory. His principal result was to derive a formula giving the probability of a given number of random events occurring when the probability of each individual event is very low. The Poisson distribution, as it is now known and which I will come to shortly, is related to this original calculation; it was subsequently shown that this distribution amounts to a limiting form of the binomial distribution. Just to add to the connections between probability theory and astronomy, it is worth mentioning that in 1833 Poisson wrote an important paper on the motion of the Moon.

In a finite interval of duration T the mean (or expected) number of events for a Poisson process will obviously just be the rate per unit time multiplied by T itself; call this product λ.

The full distribution is then of the form:

P(x) = λ^x e^(-λ) / x!

This gives the probability that a finite interval contains exactly x events. It can be neatly derived from the binomial distribution by dividing the interval into a very large number of very tiny pieces, each one of which becomes a Bernoulli trial. The probability of success (i.e. of an event occurring) in each trial is extremely small, but the number of trials becomes extremely large in such a way that the mean number of successes is λ. In this limit the binomial distribution takes the form of the above expression. The variance of this distribution is interesting: it is also λ. This means that the typical fluctuations within the interval are of order the square root of λ on a mean level of λ, so the fractional variation is of the famous “one over root n” form that is a useful estimate of the expected variation in point processes. Indeed, it’s a useful rule-of-thumb for estimating likely fluctuation levels in a host of statistical situations.
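If you want to see this limit emerge for yourself, here is a minimal Python sketch (using scipy.stats; the choice λ = 4 and the grid of trial numbers are just illustrative) that chops an interval with fixed mean λ into more and more Bernoulli trials and compares the binomial distribution with the Poisson form:

# Sketch: the binomial distribution tends to the Poisson form as the
# interval is divided into more and more trials with the mean held fixed.
from scipy.stats import binom, poisson

lam = 4.0                            # mean number of events (illustrative)

for n in (10, 100, 10000):           # number of Bernoulli trials
    p = lam / n                      # success probability per trial
    diff = max(abs(binom.pmf(x, n, p) - poisson.pmf(x, lam))
               for x in range(21))   # compare the two pmfs over x = 0..20
    print(f"n = {n:6d}: max |binomial - Poisson| = {diff:.2e}")

# The variance equals the mean, so fluctuations are of order sqrt(lam):
print("mean =", poisson.mean(lam), " std dev =", poisson.std(lam))

The discrepancy shrinks steadily as n grows, and the final line confirms that the standard deviation is √λ = 2 for λ = 4.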

If football were a Poisson process with a mean number of goals per game of, say, 2 then one would expect most games to have 2 plus or minus 1.4 (the square root of 2) goals, i.e. between about 0.6 and 3.4. That is actually not far from what is observed: the distribution of goals per game in real football matches is quite close to a Poisson distribution.
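A quick simulation (numpy; the mean of 2 goals per game is the assumption from the paragraph above) bears this out:

# Sketch: goals per game as Poisson events with a mean of 2.
import numpy as np

rng = np.random.default_rng(42)
goals = rng.poisson(lam=2.0, size=100_000)    # 100,000 simulated matches

print("mean goals per game:", goals.mean())   # close to 2
print("standard deviation: ", goals.std())    # close to sqrt(2) = 1.41
print("fraction with 1-3 goals:",
      np.mean((goals >= 1) & (goals <= 3)))   # roughly 0.72

So “most games” here means roughly 72 per cent of matches finishing with one, two or three goals.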

This idea can be straightforwardly extended to higher-dimensional processes. If points are scattered over an area with a constant probability per unit area then the mean number in a finite area will also be some number λ and the same formula applies.

As a matter of fact I first learned about the Poisson distribution when I was at school, doing A-level mathematics (which in those days actually included some mathematics). The example used by the teacher to illustrate this particular bit of probability theory was a two-dimensional one from biology. The skin of a fish was divided into little squares of equal area, and the number of parasites found in each square was counted. A histogram of these numbers accurately followed the Poisson form. For years I laboured under the delusion that the distribution was given this name because it was something to do with fish, but then I never was very quick on the uptake.
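The fish-skin experiment is easy to mimic numerically. Here is a rough sketch (the numbers of points and cells are arbitrary choices of mine) that scatters points uniformly over a unit square, counts them in a grid of equal cells, and compares the histogram of counts with the Poisson prediction:

# Sketch: counts-in-cells for a uniform random scatter of points,
# mimicking the parasites-on-a-fish-skin example. With a fixed total
# number of points the counts are strictly multinomial, but with many
# cells this is an excellent approximation to the Poisson form.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
npts, ngrid = 800, 20                     # 800 points, 20 x 20 grid of cells
x, y = rng.uniform(size=(2, npts))        # uniform positions in the unit square

counts, _, _ = np.histogram2d(x, y, bins=ngrid, range=[[0, 1], [0, 1]])
counts = counts.ravel()
lam = npts / ngrid**2                     # mean number per cell = 2

for k in range(6):
    print(f"P({k} in a cell): observed {np.mean(counts == k):.3f}, "
          f"Poisson {poisson.pmf(k, lam):.3f}")

The observed fractions track the Poisson probabilities closely, just as they did for the parasites.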

This is all very well, but point processes are not always of this Poisson form. Points can be clustered, so that having one point at a given position increases the conditional probability of having others nearby. For example, galaxies like those shown in the nice picture are distributed throughout space in a clustered pattern that is very far from the Poisson form. But it’s very difficult to tell from just looking at the picture. What is needed is a rigorous statistical analysis.

 

The statistical description of clustered point patterns is a fascinating subject, because it makes contact with the way in which our eyes and brain perceive pattern. I’ve spent a large part of my research career trying to figure out efficient ways of quantifying pattern in an objective way and I can tell you it’s not easy, especially when the data are prone to systematic errors and glitches. I can only touch on the subject here, but to see what I am talking about look at the two patterns below:

[Figure: two point patterns, one plotted above the other]

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process and the other contains correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the top one is random and the bottom one is the one with structure to it. It is not hard to see why. The top pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the bottom one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the bottom picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The top pattern was also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process would be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms, which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern.
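For anyone who wants to generate patterns like these two, here is a rough Python sketch. I should stress that the inhibition rule below is a simple sequential “zone of avoidance” of my own choosing for illustration, not the actual glow-worm algorithm used to make the original figure:

# Sketch: a Poisson pattern versus a simple inhibition pattern in 2D.
# The inhibition rule (reject any candidate within radius r of an
# accepted point) is illustrative, not the original glow-worm algorithm.
import numpy as np

rng = np.random.default_rng(7)

# Poisson pattern: positions independent and uniform over the unit square.
poisson_pts = rng.uniform(size=(400, 2))

def inhibition_pattern(n, r, max_tries=100_000):
    """Sequentially accept uniform candidates lying further than r
    from every previously accepted point (an anticorrelated pattern)."""
    pts = []
    for _ in range(max_tries):
        if len(pts) >= n:
            break
        candidate = rng.uniform(size=2)
        if all(np.hypot(*(candidate - p)) > r for p in pts):
            pts.append(candidate)
    return np.array(pts)

inhibited_pts = inhibition_pattern(400, r=0.03)

Scatter-plot the two point sets side by side (with matplotlib, say) and the inhibited one will look deceptively smooth while the Poisson one looks deceptively clumpy, exactly the illusion described above.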

Incidentally, I got both pictures from Stephen Jay Gould’s collection of essays Bully for Brontosaurus and used them, with appropriate credit and copyright permission, in my own book From Cosmos to Chaos. I forgot to say this in earlier versions of this post.

The tendency to find things that are not there is quite well known to astronomers. The constellations which we all recognize so easily are not physical associations of stars, but are just chance alignments on the sky of things at vastly different distances in space. That is not to say that the stars are positioned randomly, but the pattern they form is not caused by direct correlations between them. Galaxies, by contrast, form real three-dimensional physical associations through their direct gravitational effect on one another.

People are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this.  The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose.
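The coin-tossing version of this is easy to demonstrate. In the sketch below (200 tosses and 10,000 repetitions are arbitrary choices of mine) genuinely random sequences almost always contain runs far longer than most people would dare to write down when asked to fake one:

# Sketch: the longest run of identical faces in 200 fair coin tosses.
import numpy as np

rng = np.random.default_rng(0)

def longest_run(tosses):
    """Length of the longest run of consecutive equal outcomes."""
    best = run = 1
    for a, b in zip(tosses, tosses[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

runs = np.array([longest_run(rng.integers(0, 2, size=200))
                 for _ in range(10_000)])
print("median longest run:", np.median(runs))
print("fraction of sequences with a run of 7 or more:", np.mean(runs >= 7))

Most 200-toss sequences contain a run of seven or more heads or tails in a row, which is exactly the sort of “coincidence” that intuition says should be rare.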

I suppose there is an evolutionary reason why our brains like to impose order on things in a general way. More specifically scientists often use perceived patterns in order to construct hypotheses. However these hypotheses must be tested objectively and often the initial impressions turn out to be figments of the imagination, like the canals on Mars.

Now, I think I’ll complain to WordPress about the widget that links pages to a “random blog post”. I’m sure it’s not really random….

 

 

Machine Learning in the Physical Sciences

Posted in The Universe and Stuff on March 29, 2019 by telescoper

If, like me, you feel a bit left behind by goings-on in the field of Machine Learning and how it impacts on physics, then there’s now a very comprehensive review by Carleo et al. on the arXiv.

Here is a picture from the paper, which I have included so that this post has a picture in it:

The abstract reads:

Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. We review in a selective way the recent research on the interface between machine learning and physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move to describe applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. We also highlight research and development into novel computing architectures aimed at accelerating ML. In each of the sections we describe recent successes as well as domain-specific methodology and challenges.

The next step after Machine Learning will of course be Machine Teaching…

BICEP2: Is the Signal Cosmological?

Posted in Astrohype, The Universe and Stuff on March 28, 2019 by telescoper

An article in Physics Today has just reminded me that I missed the fifth anniversary of the BICEP2 announcement of ‘the detection of primordial gravitational waves’. I know I’m a week late, but I thought I’d reblog the post I wrote on March 19th 2014. You will see that I was sceptical…

..and it subsequently turned out that I was right to be so.

In the Dark

I have a short gap in my schedule today so I thought I would use it to post a short note about the BICEP2 results announced to great excitement on Monday.

There has been a great deal of coverage in the popular media about a “Spectacular Cosmic Discovery” and this is mirrored by excitement at a more technical level about the theoretical implications of the BICEP2 results. Having taken a bit of time out last night to go through the discovery paper, I should say that I think all this excitement is very premature. In that respect I agree with the result of my straw poll.

First of all let me make it clear that the BICEP2 experiment is absolutely superb. It was designed and built by top-class scientists and has clearly functioned brilliantly to improve its sensitivity so much that it has gone so…

View original post 1,015 more words

Fine-tuning in Cosmology

Posted in The Universe and Stuff on March 25, 2019 by telescoper

I forgot to post a link to this paper by Fred Adams, which appeared on the arXiv last month on the topic of the fine-tuning of the Universe and which I had bookmarked for a blog post a while ago.

My heart always sinks when the arXiv informs me that the abstract of a paper is ‘abridged’, so here’s the full version from the PDF, which you can download for yourself here. Please be aware, though, that it’s a lengthy paper running to over two hundred pages:

My own view on this topic is that it is indeed remarkable that the Universe is finely-tuned to exactly the extent required to allow authors to write such long papers about the fine-tuning of the Universe…