I just remembered that last night I happened across an interesting episode of The Essay on Radio 3. It was about the first ever photograph of an astronomical nebula, which happened to be of the Orion Nebula (M42). The programme features Omar Nasim, a lecturer in History at Kent University, and is available on iPlayer or as a download here. It’s only 15 minutes long, but absolutely fascinating.
Here is the photograph concerned, taken by Henry Draper in 1880:
The stars of the constellation Orion are clearly over-exposed in order to reveal the much fainter light from the nebula, and the resolution is poor compared to, e.g., this glorious Hubble Space Telescope image:
The Orion Nebula seen by Hubble. Credit: ESA/NASA/Hubble Space Telescope
Nevertheless the Draper photograph is of great historical importance, as it changed the way astronomers made images of such objects (by photography rather than by drawing) and ushered in a new era of scientific research.
It was an interesting coincidence that, last night, on the eve of my last day working at the University of Sussex before moving to Cardiff University, there was a game of cricket between Sussex and Glamorgan at the County Ground in Hove. Naturally I decided to go along and was fortunate to have Dorothy Lamb along for company. To be precise this wasn’t “proper cricket”, but a Natwest T20 “Blast”. Unfortunately the weather dampened the squib considerably. Yesterday’s weather forecast predicted rain in the afternoon clearing by the time the game started (at 18.30), but when we got to the ground it was still drizzling:
After a lot of faffing about, play did actually get under way at about 19.50, with the match reduced to 14 overs a side because of the late start.
You can see the full scorecard here. Glamorgan batted first, struggling right from the start despite some wayward bowling from Sussex. Having been 62 for 8 at one point they were probably relieved to get into three figures, though they only just managed this: they were all out for 101 in the last over. Sussex got off to a much better start, but then the rain came back so they went off. They came back on again, but only one ball was bowled before the rain (which was really just drizzle) returned, so off they went again. And so on. In the end only four overs and one ball were possible before the rain came back for good and the match was abandoned with no result. The upshot of this was that Glamorgan qualified for the Quarter Finals and Sussex didn’t. Glamorgan were lucky: Sussex were 30-1 when play was halted, but a minimum of five overs have to be bowled for a result to be declared. A few minutes more play and Sussex would almost certainly have won. Such is life.
The results of the Stern Review of the process for assessing university research and allocating public funding have been published today. This is intended to inform the way the next Research Excellence Framework (REF) will be run, probably in 2020, so it’s important for all researchers in UK universities.
Here are the main recommendations, together with brief comments from me (in italics):
All research active staff should be returned in the REF. Good in principle, but what is to stop institutions moving large numbers of staff onto teaching-only contracts (which is what happened in New Zealand when such a move was made)?
Outputs should be submitted at Unit of Assessment level with a set average number per FTE but with flexibility for some faculty members to submit more and others less than the average. Outputs are countable and therefore “fewer” rather than “less”. Other than that, having some flexibility seems fair to me as long as it’s not easy to game the system. Looking in more detail at the report, it suggests that some could submit up to six and others potentially none, with an average of perhaps two across the UoA. I’m not sure precise numbers make sense, but the idea seems reasonable.
Outputs should not be portable. Presumably this doesn’t mean that only huge books can be submitted, but that outputs do not transfer when staff transfer. I don’t think this is workable, but that what should happen is that credit for research should be shared between institutions when a researcher moves from one to another.
Panels should continue to assess on the basis of peer review. However, metrics should be provided to support panel members in their assessment, and panels should be transparent about their use. Good. Metrics only tell part of the story.
Institutions should be given more flexibility to showcase their interdisciplinary and collaborative impacts by submitting ‘institutional’ level impact case studies, part of a new institutional level assessment. It’s a good idea to promote interdisciplinarity, but it’s not easy to make it happen…
Impact should be based on research of demonstrable quality. However, case studies could be linked to a research activity and a body of work as well as to a broad range of research outputs. This would be a good move. The existing rules for Impact seem unnecessarily muddled.
Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socio-economic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching. Also good.
A new, institutional level Environment assessment should include an account of the institution’s future research environment strategy, a statement of how it supports high quality research and research-related activities, including its support for interdisciplinary and cross-institutional initiatives and impact. It should form part of the institutional assessment and should be assessed by a specialist, cross-disciplinary panel. Seems like a reasonable idea, but a “specialist, cross-disciplinary” panel might be hard to assemble…
That individual Unit of Assessment environment statements are condensed, made complementary to the institutional level environment statement and include those key metrics on research intensity specific to the Unit of Assessment. Seems like a reasonable idea.
Where possible, REF data and metrics should be open, standardised and combinable with other research funders’ data collection processes in order to streamline data collection requirements and reduce the cost of compiling and submitting information. Reasonable, but a bit vague.
That Government, and UKRI, could make more strategic and imaginative use of REF, to better understand the health of the UK research base, our research resources and areas of high potential for future development, and to build the case for strong investment in research in the UK. This sounds like it means more political interference in the allocation of research funding…
Government should ensure that there is no increased administrative burden to Higher Education Institutions from interactions between the TEF and REF, and that they together strengthen the vital relationship between teaching and research in HEIs. I’ll believe that when I see it.
Any further responses (stern or otherwise) are welcome through the comments box!
It seems that Physics & Astronomy research at the University of Sussex has been ranked as 13th in western Europe and 7th in the UK by the leading academic publisher Nature Research, and has been profiled as one of its top-25 “rising stars” worldwide.
I was tempted to describe this rise as ‘meteoric’ but in my experience meteors generally fall down rather than rise up.
Anyway, as regular readers of this blog will know, I’m generally very sceptical of the value of league tables and there’s no reason to treat this one as qualitatively any different. Here is an explanation of the (rather curious) methodology from the University of Sussex news item:
The Nature Index 2016 Rising Stars supplement identifies the countries and institutions showing the most significant growth in high-quality research publications, using the Nature Index, which tracks the research of more than 8,000 global institutions – described as “players to watch”.
The top 100 most improved institutions in the index between 2012 and 2015 are ranked by the increase in their contribution to 68 high-quality journals. From this top 100, the supplement profiles 25 rising stars – one of which is Sussex – that are already making their mark, and have the potential to shine in coming decades.
The institutions and countries examined have increased their contribution to a selection of top natural science journals — a metric known as weighted fractional count (WFC) — from 2012 to 2015.
Mainly thanks to a quadrupling of its physical sciences score, Sussex reached 351 in the Global 500 in 2015. That represents an 83.9% rise in its contribution to index papers since 2012 — the biggest jump of any UK research organisation in the top 100 most improved institutions.
It’s certainly a strange choice of metric, as it only involves publications in “high quality” journals, presumably selected by Journal Impact Factor or some other arbitrary statistical abomination, then takes the difference in this measure between 2012 and 2015 and expresses the change as a percentage. I noticed one institution in the list has improved by over 4600%, which makes Sussex’s change of 83.9% seem rather insignificant…
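To see why the percentage-change metric behaves so oddly, here is a toy calculation. The WFC values below are entirely made up for illustration (only the 83.9% and 4600% figures come from the report itself); the point is that a tiny starting score makes enormous percentage gains cheap:

```python
# Percentage change in weighted fractional count (WFC) between two years.
# All WFC numbers here are invented for illustration; WFC units are arbitrary.
def pct_change(wfc_start, wfc_end):
    return 100.0 * (wfc_end - wfc_start) / wfc_start

# A well-established department growing from a large base:
print(pct_change(100.0, 183.9))  # an 83.9% rise, like Sussex's quoted figure

# A tiny starting score makes a huge percentage easy:
print(pct_change(1.0, 47.0))     # 4600%, from a very small base
```

So the same metric rewards a modest absolute gain from a near-zero baseline far more than a large absolute gain from an established one, which is why the 4600% figure isn’t as impressive as it sounds.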
But at least this table provides some sort of evidence that the investment made in Physics & Astronomy over the last few years has made a significant (and positive) difference. The number of research faculty in Physics & Astronomy has increased by more than 60% since 2012 so one would have been surprised not to have seen an increase in publication output over the same period. On the other hand, it seems likely that many of the high-impact papers published since 2012 were written by researchers who arrived well before then because Physics research is often a slow burner. The full impact of the most recent investments has probably not yet been felt. I’m therefore confident that Physics at Sussex has a very exciting future in store as its rising stars look set to rise still further! It’s nice to be going out on a high note!
One of the topics that came up in the discussion sessions at the meeting I was at over the weekend was the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter. Coincidentally, I found this old preprint while tidying up my office yesterday:
Things have changed quite a bit since 1979! Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:
The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates the headline determination is H0 = (67.8 +/- 0.9) km/s/Mpc.
Note however that a recent “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data quotes a headline value of (73.24 +/- 1.74) km/s/Mpc. Had these two values been obtained in 1979 we wouldn’t have worried because the errors would have been much larger, but nowadays the measurements are much more precise and there does seem to be a hint of a discrepancy somewhere around the 3 sigma level, depending on precisely which determination you use. On the other hand, the history of Hubble constant determinations is one of results being quoted with very small “internal” errors that turned out to be dwarfed by systematic uncertainties.
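As a back-of-the-envelope check, treating the two quoted errors as Gaussian and independent (a simplification, of course), the size of the discrepancy works out like this:

```python
import math

# Headline values quoted above, in km/s/Mpc
h0_planck, err_planck = 67.8, 0.9    # Planck (indirect, derived parameter)
h0_riess, err_riess = 73.24, 1.74    # Riess et al. (distance ladder)

# Significance of the difference: the gap divided by the combined
# uncertainty, with the two errors added in quadrature.
diff = h0_riess - h0_planck
sigma = diff / math.sqrt(err_planck**2 + err_riess**2)
print(f"difference = {diff:.2f} km/s/Mpc, tension ~ {sigma:.1f} sigma")
```

This simple estimate lands just below 3 sigma, consistent with the “around the 3 sigma level” figure quoted above; more careful treatments differ in the details but give similar answers.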
I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters. If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. Gaia may shortly tell us whether or not there are problems with the local stellar distance scale, which may reveal the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away; I think it will be very interesting if it turns out to be real. I just don’t find the evidence at the moment convincing that there’s something beyond the standard cosmological model. I may well turn out to be wrong.
It’s quite interesting to think how much we scientists tend to carry on despite the signs that things might be wrong. Take, for example, Newton’s Gravitational Constant, G. Measurements of this parameter are extremely difficult to do, but different experiments do seem to be in disagreement with each other. If Newtonian gravity turned out to be wrong that would indeed be extremely exciting, but I think it’s a wiser bet that there are uncontrolled experimental systematics. On the other hand there is a danger that we might ignore evidence that there’s something fundamentally wrong with our theory. It’s sometimes a difficult judgment how seriously to take experimental results.
Anyway, I don’t know what cosmologists think in general about this so there’s an excuse for a poll:
Yesterday we had a request for a shareable summary of the #LeaveTheDark manifesto. That’s quite difficult to do in brief. There’s a lot to it. A more complete manifesto will follow within the next few days.
But in the meantime we thought these little images might be helpful for people who’d like to spread the word a bit. There’s no issue with copyright or anything like that. We want and need people to share our stuff. That’s the only way we’ll reach enough people to turn last week’s idea into next year’s reality! And we will make this a reality.
So feel free to download and share these images across your networks. The more the merrier…
Thank you! Together we really can make a difference and turn back the growing swell of hatred that threatens to engulf our society.
About a year ago I wrote a blog post about a mysterious “line” in the X-ray spectra of galaxy clusters corresponding to an energy of around 3.5 keV. The primary reference for the claim is a paper by Bulbul et al which is, of course, freely available on the arXiv.
The key graph from that paper is this:
The claimed feature – it stretches the imagination considerably to call it a “line” – is shown in red. No, I’m not particularly impressed either, but this is what passes for high-quality data in X-ray astronomy!
High-resolution X-ray spectroscopy with Hitomi was expected to resolve the origin of the faint unidentified E=3.5 keV emission line reported in several low-resolution studies of various massive systems, such as galaxies and clusters, including the Perseus cluster. We have analyzed the Hitomi first-light observation of the Perseus cluster. The emission line expected for Perseus based on the XMM-Newton signal from the large cluster sample under the dark matter decay scenario is too faint to be detectable in the Hitomi data. However, the previously reported 3.5 keV flux from Perseus was anomalously high compared to the sample-based prediction. We find no unidentified line at the reported flux level. The high flux derived with XMM MOS for the Perseus region covered by Hitomi is excluded at >3-sigma within the energy confidence interval of the most constraining previous study. If XMM measurement uncertainties for this region are included, the inconsistency with Hitomi is at a 99% significance for a broad dark-matter line and at 99.7% for a narrow line from the gas. We do find a hint of a broad excess near the energies of high-n transitions of Sxvi (E=3.44 keV rest-frame) – a possible signature of charge exchange in the molecular nebula and one of the proposed explanations for the 3.5 keV line. While its energy is consistent with XMM pn detections, it is unlikely to explain the MOS signal. A confirmation of this interesting feature has to wait for a more sensitive observation with a future calorimeter experiment.
And here is the killer plot:
The spectrum looks amazingly detailed, which makes the demise of Hitomi all the more tragic, but the 3.5 keV line is conspicuous by its absence. So there you are: yet another supposedly significant feature that excited a huge amount of interest turns out to be nothing of the sort. To be fair, as the abstract states, the anomalous line was only seen by stacking spectra of different clusters and might still be there but too faint to be seen in an individual cluster spectrum. Nevertheless I’d say the probability of there being any feature at 3.5 keV has decreased significantly after this observation.
The views presented here are personal and not necessarily those of my employer (or anyone else for that matter).
Feel free to comment on any of the posts on this blog but comments may be moderated; anonymous comments and any considered by me to be abusive will not be accepted. I do not necessarily endorse, support, sanction, encourage, verify or agree with the opinions or statements of any information or other content in the comments on this site and do not in any way guarantee their accuracy or reliability.