Yesterday I went to a nice Colloquium by Rob Crain of Liverpool John Moores University (which is in the Midlands). Here’s the abstract of his talk which was entitled
Cosmological hydrodynamical simulations of the galaxy population:
I will briefly recap the motivation for, and progress towards, numerical modelling of the formation and evolution of the galaxy population – from cosmological initial conditions at early epochs through to the present day. I will introduce the EAGLE project, a flagship program of such simulations conducted by the Virgo Consortium. These simulations represent a major development in the discipline, since they are the first to broadly reproduce the key properties of the evolving galaxy population, and do so using energetically-feasible feedback mechanisms. I shall present a broad range of results from analyses of the EAGLE simulation, concerning the evolution of galaxy masses, their luminosities and colours, and their atomic and molecular gas content, to convey some of the strengths and limitations of the current generation of numerical models.
I added the link to the EAGLE project so you can find more information. As one of the oldies in the audience I can’t help remembering the old days of the galaxy formation simulation game. When I started my PhD back in 1985 the state of the art was a gravity-only simulation of 32³ particles in a box. Nowadays one can manage about 2000³ particles at the same time as having a good go at dealing not only with gravity but also the complex hydrodynamical processes involved in assembling a galaxy of stars, gas, dust and dark matter from a set of primordial fluctuations present in the early Universe. In these modern simulations one does not just track the mass distribution but also various thermodynamic properties such as temperature, pressure, internal energy and entropy, which means that they require large supercomputers. This certainly isn’t a solved problem – different groups get results that differ by an order of magnitude in some key predictions – but the game has certainly moved on dramatically in the past thirty years or so.
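For readers curious what a gravity-only simulation of that 1985 vintage actually involved, here is a toy sketch of a direct-summation N-body step with leapfrog integration. This is purely my own illustration (not EAGLE code, nor any real production code); the choice of units with G = 1 and the softening length `eps` are assumptions for the sake of a minimal example:

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, eps=1e-2):
    """One leapfrog (kick-drift-kick) step of a gravity-only,
    direct-summation N-body system, in units with G = 1.
    eps is a softening length that tames close encounters."""
    def accel(p):
        # Pairwise separation vectors: dr[i, j] = p[j] - p[i]
        dr = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        # Softened |r|^3 for the inverse-square law
        dist3 = (np.sum(dr**2, axis=-1) + eps**2) ** 1.5
        # a_i = sum_j m_j (r_j - r_i) / |r_j - r_i|^3
        return np.sum(mass[np.newaxis, :, np.newaxis] * dr
                      / dist3[:, :, np.newaxis], axis=1)
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel

# 1985: 32**3 particles; here just a handful, to keep it cheap
rng = np.random.default_rng(0)
n = 64
pos = rng.standard_normal((n, 3))
vel = np.zeros((n, 3))
mass = np.ones(n) / n
pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

The direct sum costs O(N²) per step, which is why such codes topped out at tens of thousands of particles; modern codes like those behind EAGLE use tree or mesh methods, plus all the hydrodynamics, to reach billions.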
Another thing that has certainly improved a lot is data visualization: here is a video of one of the EAGLE simulations, showing a region of the Universe about 25 MegaParsecs across. The gas is colour-coded for temperature. As the simulation evolves you can see the gas first condense into the filaments of the Cosmic Web, thereafter forming denser knots in which stars form and become galaxies, experiencing in some cases explosive events which expel the gas. It’s quite a messy business, which is why one has to do these things numerically rather than analytically, but it’s certainly fun to watch!
I recently posted a piece of music by the great blues and boogie-woogie pianist Jimmy Yancey. According to the blog stats page that post is proving quite popular so I thought I’d add another piece by the same musician. This is Jimmy Yancey’s characteristically bluesy take on The Rocks, based on one of the more conventional left-hand patterns used in boogie-woogie that you will probably recognize from other contexts.
This afternoon I went to yet another meeting about assessment and feedback in University teaching involving members of staff and students from the School of Physics & Astronomy here at Cardiff University as well as some people from other schools and departments. Positive though this afternoon’s discussion was, it didn’t do anything to dissuade me from a long-held view that the entire education system holds back the students’ ability to learn by assessing them far too much. This is a topic that I’ve blogged about a few times before over the years (see, e.g., here) but given that the problem hasn’t gone away (and indeed is probably going to get worse as a result of the Teaching Excellence Framework which the Westminster government is trying to impose on universities), I make no apologies for repeating the main points here.
One important point we need to resolve is to pin down exactly what is meant by “Research-led Teaching”, which is what we’re supposed to be doing at universities. In my view too much teaching is not really led by research at all, but mainly driven by assessment. The combination of the introduction of modular programmes and the increase of continuously assessed coursework has led to a cycle of partial digestion and regurgitation that involves little in the way of real learning and certainly nothing like the way research is done. I don’t know how we got into this situation but it can’t be allowed to continue.
I’m not going to argue for turning the clock back entirely but, for the record, my undergraduate degree involved no continuous assessment at all (apart from a theory project I opted for in my final year). Having my entire degree result based on the results of six three-hour unseen examinations in the space of three days is not an arrangement I can defend, but note that despite the lack of continuous assessment I still spent less time in the examination hall than present-day students.
That’s not to say I didn’t have coursework. I did, but it was formative rather than summative; in other words it was for the student to learn about the subject, rather than for the staff to learn about the student. I handed in my stuff every week, it was marked and annotated by a supervisor, then returned and discussed at a supervision.
People often tell me that if a piece of coursework “doesn’t count” then the students won’t do it. There is an element of truth in that, of course. But I had it drummed into me that the only way really to learn my subject (Physics) was by doing it. I did all the coursework I was given because I wanted to learn and I knew that was the only way to do it. I think we need to establish that as a basic principle of education in physics (and similar subjects).
The very fact that coursework didn’t count for assessment made the feedback written on it all the more useful when it came back because if I’d done badly I could learn from my mistakes without losing marks. This also encouraged me to experiment a little, such as using a method different from that suggested in the question. That’s a dangerous strategy nowadays, as many seem to want to encourage students to behave like robots, but surely we should be encouraging students to exercise their creativity rather than simply follow the instructions? The other side of this is that more challenging assignments can be set, without worrying about what the average mark will be or what specific learning outcome they address.
I suppose what I’m saying is that the idea of Learning for Learning’s Sake, which is what in my view defines what a university should strive for, is getting lost in a wilderness of modules, metrics, percentages and degree classifications. We’re focussing too much on those few aspects of the educational experience that can be measured, ignoring the immeasurable benefit (and pleasure) that exists for all of us humans in exploring new ways to think about the world around us.
This week the UK Supreme Court is hearing an appeal by HM Government against the judgment recently delivered by the High Court, which was that the UK Government must seek the approval of Parliament before it can invoke Article 50 of the Lisbon Treaty and thus begin the process of leaving the European Union. You can watch the proceedings live here. I had a brief look myself this morning but as I’m not a legal expert I found it rather hard to follow as it’s rather technical stuff. That wasn’t helped by the rather dull delivery of James Eadie QC, who was presenting the government’s case. Nevertheless, it is a very good thing that we can see how the law works in practice. I was surprised at the lack of gowns and wigs!
Although Eadie seemed (to me) to be on a very sticky wicket for some of the time, it’s impossible for me to come to any informed conclusion about who’s likely to win. Out of interest, to see what other people think, I therefore had a quick look at the betting markets. Traditional bookmakers (such as William Hill) are offering 1-3 (i.e. 3-1 ON) for the original decision being upheld, so they’re clearly expecting the appeal to fail.
These days, however, I’ve started to get interested in other kinds of betting markets, especially the BetFair Exchange. This allows customers to act as bookmakers as well as punters by offering the option to “lay” and/or “back” various possible bets. “Laying” a bet means effectively acting as a bookie, proposing odds on a particular outcome, i.e. selling a bet. “Backing” a bet means buying a bet. The exchange then advertises this to prospective bettors who sign up if they are prepared to stake money on that particular outcome at those particular odds. It’s very similar in concept to other trading services, e.g. share dealing. Matches aren’t always made of course, so not every bet that’s offered gets accepted. If that happens you can try again with more generous odds.
The advantage of this type of betting is that it represents an “efficient market”. Such a market occurs when all the money going into the market equals all the money being paid out in the market – there is no leakage or profits being taken. Efficient betting markets rarely exist outside of betting exchanges – bookmakers need to reap a profit in order to run a business. For example, though William Hill is offering 1-3 on the Supreme Court ruling being upheld, the odds they offer against this outcome are 12-5. These are not “true odds” in the sense that they can’t represent a consistent pair of probabilities for the two outcomes: the implied probabilities add up to more than one, and the excess is the bookmaker’s margin. In the case of an exchange market a bet laid at 1-3 is automatically backed at 3-1. These can then be regarded as “true odds”.
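The arithmetic behind that margin is easy to check. Here is a small sketch (the helper name `implied_prob` is my own, purely for illustration) converting the quoted fractional odds into implied probabilities:

```python
from fractions import Fraction

def implied_prob(num, den):
    """Implied probability for fractional odds num-den *against* an
    outcome: staking den wins num, so p = den / (num + den)."""
    return Fraction(den, num + den)

# William Hill's quotes from the text:
p_uphold = implied_prob(1, 3)     # 1-3 (i.e. 3-1 on) upheld -> 3/4
p_overrule = implied_prob(12, 5)  # 12-5 against upheld      -> 5/17

# For "true odds" these would sum to exactly 1; the surplus is
# the bookmaker's built-in margin (the "overround").
overround = p_uphold + p_overrule - 1
print(float(p_uphold), float(p_overrule), float(overround))
```

With these numbers the implied probabilities sum to about 1.044, so roughly 4% of the money staked is the bookmaker’s cut; on an exchange the two sides of a matched bet sum to exactly one.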
This is what the BetFair Exchange on the Supreme Court hearing looks like at the moment (you might want to click on the image to make it clearer):
The odds are given in a slightly funny way, as the gross return for a unit stake (including the stake itself). In more familiar language “4.3” would be roughly 100-30, i.e. a £1 bet gets you about £3.30 plus your £1 back. A bet on “overrule” at “4” (3-1) corresponds to a bet against “uphold” at 1.33 (1-3), reflecting what I was saying about “true odds”.
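The conversion between the exchange’s decimal quotes and the more familiar fractional odds can be sketched as follows (function names are my own; the only assumption is the convention above, that a decimal quote is the gross return per unit stake):

```python
from fractions import Fraction

def fractional(decimal_odds):
    """Fractional odds against, from a decimal quote: profit per
    unit stake is the quote minus the returned stake, d - 1."""
    return Fraction(decimal_odds).limit_denominator(1000) - 1

def other_side(decimal_odds):
    """'True odds' on the complementary outcome: if p = 1/d, the
    other side has probability 1 - 1/d, i.e. decimal odds d/(d-1)."""
    d = Fraction(decimal_odds).limit_denominator(1000)
    return d / (d - 1)

print(fractional(4.0))         # 3, i.e. 3-1 against
print(float(other_side(4.0)))  # 1.333..., i.e. 1-3 on
```

So backing “overrule” at a decimal quote of 4 and laying “uphold” at 1.33 are the same position, which is exactly the lay/back symmetry of an exchange.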
The first thing that struck me is the figure at the top right: £38,427. This is the value of all bets matched in this market. By BetFair standards this is very low. A typical Premiership football match will involve bets at least ten times as big as this. As in the court case itself there just isn’t very much action!
Apart from that you can see that the odds here are broadly similar to those at William Hill etc, with implied odds of around 3-1 to 4-1 against overruling.
Before you ask, I’m not going to bet on this myself. My betting strategy usually involves betting on the outcome I don’t want to happen. Although I think Parliament should be involved in Article 50 I am just happy that this matter should be left to our independent judiciary to decide.
Many people seem to think that astronomers spend all their time looking at pretty pictures of stars and galaxies. Actually a large part of observational astronomy isn’t about making images of things but doing spectroscopy. In fact the rise of astronomical spectroscopy is what turned astronomy into astrophysics. But that’s not to say that spectra can’t be pretty either. Here is an example (from here) which shows the light from the quasar HE0940-1050 taken by the UVES instrument mounted on ESO’s Very Large Telescope in Chile.
This quasar is an interesting object, at a redshift of z = 3.0932 (which converts to a look-back time of about 11.6 billion years). The dark bands and lines you can see in the spectrum are caused by absorption of the light from the quasar by clouds of hydrogen gas between the quasar and the observer; the strength of the absorption indicates how much gas the light from the quasar has travelled through. The absorption occurs at a particular wavelength corresponding to the Lyman-α transition but, because the clouds are all at different redshifts, each produces a line at a different observed wavelength in the quasar spectrum. There are many lines, which is why the collection of clouds responsible for them is often called the Lyman-α Forest. In effect the quasar spectrum is very much like a core sample, as if we were able to drill back in time to the quasar through the material that lies along the line of sight.
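The mechanism behind the forest is just the redshift formula λ_obs = λ_rest (1 + z): every cloud stamps its Lyman-α line at a wavelength set by its own redshift. A minimal sketch, assuming the standard rest-frame Lyman-α wavelength of 121.567 nm:

```python
# Rest-frame wavelength of the Lyman-alpha transition of hydrogen
LYA_REST_NM = 121.567

def observed_wavelength(z, rest_nm=LYA_REST_NM):
    """Observed wavelength of a line emitted or absorbed at
    redshift z: lambda_obs = lambda_rest * (1 + z)."""
    return rest_nm * (1.0 + z)

# Absorption at the quasar's own redshift lands in the visible:
print(round(observed_wavelength(3.0932), 1))  # ~497.6 nm
# A cloud at lower redshift along the line of sight absorbs bluer:
print(round(observed_wavelength(2.5), 1))     # ~425.5 nm
```

Every intervening cloud with a redshift between 0 and that of the quasar contributes its own line somewhere blueward of 497.6 nm, which is why the forest crowds the blue side of the quasar’s own Lyman-α emission.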
This spectrum is particularly remarkable because of the number of faint lines that can be seen: it’s like a detailed DNA Fingerprint of cosmic structure. It’s also very pretty.
Interesting “inside” story by a student of the discovery of gravitational waves, from the Classical and Quantum Gravity Website.
Professor Stephen Fairhurst (mentioned in the post) is a member of the Gravitational Physics group at Cardiff University, and Director of the Data Innovation Research Institute.
Written by Samantha Usman, who is currently pursuing an MPhil at Cardiff University, UK under the supervision of Prof. Stephen Fairhurst. She graduated in May 2016 with a BS in Mathematics and Physics at Syracuse University. While at Syracuse, Usman worked with Prof. Duncan Brown on improving LIGO’s sensitivity to gravitational waves from binary star systems. In her spare time, Usman trains in Brazilian jiu jitsu and Muay Thai kickboxing and enjoys walks with her Australian Shepherd, Marble.
The discovery of gravitational waves from an undergraduate’s perspective
The first time I learned LIGO might have detected a gravitational wave, I was listening in on a conference call on September 16, 2015. Two days earlier, ripples in the fabric of space from massive black holes crashing into each other at half the speed of light had passed through the…