Archive for the Bad Statistics Category

US Election Night and Day

Posted in Bad Statistics, Biographical, Politics on November 4, 2020 by telescoper

Before you ask, no I didn’t stay up all night for the US presidential election results. I went to bed at 11pm and woke up as usual at 7am when my radio came on. I had a good night’s sleep. It’s not that I was confident of the outcome – I didn’t share the optimism of many of my friends that a Democrat landslide was imminent – it’s just that I’ve learnt not to get stressed by things that are out of my control.

On the other hand, my mood on waking to discover that the election was favouring the incumbent Orange Buffoon is accurately summed up by this image:

Regardless of who wins, I find it shocking that so many are prepared to vote for Trump a second time. There might have been an excuse first time around that they didn’t know quite how bad he was. Now they do, and there are still 65 million people (and counting) willing to vote for him. That’s frightening.

As I write (at 4pm on November 4th) it still isn’t clear who will be the next President, but the odds on Joe Biden have shortened dramatically (currently around 1/5), having earlier been short on Donald Trump when the early results came in; Trump’s odds have now drifted out to between 3/1 and 4/1. Biden is now clearly the favourite, but favourites don’t always win.

What has changed dramatically during the course of the day has been the enormous impact of mail-in and early voting results in key states. In Wisconsin these votes, which were >70% in Biden’s favour, turned a losing count into an almost certain victory. A similar thing looks likely to happen in Michigan too. Assuming he wins Wisconsin, Joe Biden needs just two of Michigan, Nevada, Pennsylvania and Georgia to reach the minimum of 270 electoral college votes needed to win the election. He is ahead in two – Michigan and Nevada.

This is by no means certain – the vote in each of these states is very close and they could even all go to Trump. What does seem likely is that Biden will win the popular vote quite comfortably and may even get over 50%. That raises the issue again of why America doesn’t just count the votes and decide on the basis of a simple majority, rather than on the silly electoral college system, but that’s been an open question for years. Trump won on a minority vote last time, against Hillary Clinton, as did Bush in 2000.

It’s also notable that this election has once again seen the pollsters confounded. Most predicted a comfortable Biden victory. Part of the problem is that national polls lack sufficient numbers in the swing states to be useful, but even the overall voting tally seems set to be much closer than the ~8% margin in many polls.

Obviously there is a systematic problem of some sort. Perhaps it’s to do with sample selection. Perhaps it’s because Trump supporters are less likely to answer opinion poll questions honestly. Perhaps it’s due to systematic suppression of the vote in pro-Democrat areas. There are potentially many more explanations, but the main point is that when polls have a systematic bias like this, you can’t treat the polling error statistically as a quantity that varies from positive to negative independently from one state to another, as some of the pundits do, because the error is replicated across all States.
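To illustrate that last point, here is a minimal simulation sketch (in Python, with entirely invented leads and error sizes, so the numbers mean nothing in themselves): if the polling error is a single systematic shift shared by every state, rather than an independent wobble in each one, the chance of all the swing states missing in the same direction is dramatically larger.

import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
poll_leads = np.array([8.0, 6.5, 5.0, 2.0, 1.5])  # hypothetical Biden leads (%) in five swing states
error_sd = 3.0                                     # assumed polling error standard deviation (%)

# Case 1: errors independent from state to state
indep_errors = rng.normal(0.0, error_sd, size=(n_sims, poll_leads.size))
indep_sweep = np.mean(np.all(poll_leads - indep_errors < 0.0, axis=1))

# Case 2: a single systematic error shared by every state
shared_errors = rng.normal(0.0, error_sd, size=(n_sims, 1))
shared_sweep = np.mean(np.all(poll_leads - shared_errors < 0.0, axis=1))

print(f"P(all five states flip) with independent errors: {indep_sweep:.5f}")
print(f"P(all five states flip) with a shared error:     {shared_sweep:.5f}")

With a shared error the whole map can move together, which is exactly what you miss if you multiply together supposedly independent state-by-state probabilities.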

As I mentioned in a post last week, I placed a comfort bet on Trump of €50 at 9/5. He might still win but if he doesn’t this is one occasion on which I’d be happy to lose money.

P.S. The US elections often make me think about how many of the States I have actually visited. The answer is (mostly not for very long): Kansas, South Dakota, Colorado, Iowa, Missouri, Arkansas, Louisiana, California, Arizona, New York, New Jersey, Maryland, Massachusetts, New Hampshire, Maine, and Pennsylvania. That’s way less than a majority. I’ve also been to Washington DC but that’s not a State.

How Reliable Are University Rankings?

Posted in Bad Statistics, Education with tags , on April 21, 2020 by telescoper

I think most of you probably know the answer to this question already, but now there’s a detailed study on the topic. Here is the abstract of the paper, which is on the arXiv:

University or college rankings have almost become an industry of their own, published by US News & World Report (USNWR) and similar organizations. Most of the rankings use a similar scheme: Rank universities in decreasing score order, where each score is computed using a set of attributes and their weights; the attributes can be objective or subjective while the weights are always subjective. This scheme is general enough to be applied to ranking objects other than universities. As shown in the related work, these rankings have important implications and also many issues. In this paper, we take a fresh look at this ranking scheme using the public College dataset; we both formally and experimentally show in multiple ways that this ranking scheme is not reliable and cannot be trusted as authoritative because it is too sensitive to weight changes and can easily be gamed. For example, we show how to derive reasonable weights programmatically to move multiple universities in our dataset to the top rank; moreover, this task takes a few seconds for over 600 universities on a personal laptop. Our mathematical formulation, methods, and results are applicable to ranking objects other than universities too. We conclude by making the case that all the data and methods used for rankings should be made open for validation and repeatability.

The italics are mine.

I have written many times about the worthlessness of University league tables (e.g. here).

Among the serious objections I have raised is that the way they are presented is fundamentally unscientific because they do not separate changes in data (assuming these are measurements of something interesting) from changes in methodology (e.g. weightings). There is an obvious and easy way to test for the size of the weighting effect, which is to construct a parallel set of league tables each year, with the current year’s input data but the previous year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. No scientifically literate person would accept the result of this kind of study unless the systematic effects can be shown to be under control.

Yet purveyors of league table twaddle all refuse to perform this simple exercise. I myself asked the Times Higher to do this a few years ago and they categorically refused, thus proving that they are not at all interested in the reliability of the product they’re peddling.
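For what it’s worth, here is a toy sketch of what that exercise amounts to (in Python, with invented institutions, scores and weights): rank exactly the same data under two different weighting schemes and see how much the order changes.

import numpy as np

# Hypothetical normalised attribute scores for five universities
# (columns: research, teaching, graduate outcomes)
names = ["A", "B", "C", "D", "E"]
scores = np.array([
    [0.90, 0.60, 0.70],
    [0.70, 0.85, 0.65],
    [0.60, 0.70, 0.95],
    [0.80, 0.75, 0.60],
    [0.65, 0.90, 0.80],
])

def rank_table(weights):
    """Weighted-sum ranking, as used (in spirit) by most league tables."""
    totals = scores @ (weights / np.sum(weights))
    order = np.argsort(-totals)
    return [(names[i], round(float(totals[i]), 3)) for i in order]

print("Weights (0.5, 0.3, 0.2):", rank_table(np.array([0.5, 0.3, 0.2])))
print("Weights (0.3, 0.3, 0.4):", rank_table(np.array([0.3, 0.3, 0.4])))

If the order can be reshuffled this easily with the data held fixed, year-on-year movements in published tables tell you very little unless the weighting effect has been isolated first.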

Snake oil, anyone?

Bad Statistics and COVID-19

Posted in Bad Statistics, Covid-19 with tags , , , on March 27, 2020 by telescoper

It’s been a while since I posted anything in the Bad Statistics folder. That’s not because the present Covid-19 outbreak hasn’t provided plenty of examples; it’s just that I’ve had my mind on other things. I couldn’t resist, however, sharing this cracker that I found on Twitter:

The paper concerned can be found here; the key figure from it is this:

This plots the basic reproductive rate R against temperature for Coronavirus infections from 100 Chinese cities. The argument is that the trend means that higher temperatures correspond to weakened transmission of the virus (as happens with influenza). I don’t know if this paper has been peer-reviewed. I sincerely hope not!

I showed this plot to a colleague of mine the other day who remarked “well, at least all the points lie on a plane”. It looks to me that if you removed just one point – the one with R>4.5 – then the trend would vanish completely.

The alleged correlation is deeply unimpressive on its own, quite apart from the assumption that any correlation present represents a causative effect due to temperature – there could be many confounding factors.
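To see how fragile such a trend can be, here is a toy sketch (with simulated numbers, not the data from the paper in question) of how a single influential point can conjure up an apparently meaningful correlation:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# A scatter of points with no underlying trend, plus one extreme outlier
temperature = rng.uniform(0.0, 20.0, size=30)
R = rng.normal(2.2, 0.3, size=30)
temperature_with_outlier = np.append(temperature, 1.0)   # a cold city...
R_with_outlier = np.append(R, 4.6)                       # ...with R > 4.5

r_all, p_all = pearsonr(temperature_with_outlier, R_with_outlier)
r_rest, p_rest = pearsonr(temperature, R)

print(f"With the outlier:    r = {r_all:+.2f}, p = {p_all:.3f}")
print(f"Without the outlier: r = {r_rest:+.2f}, p = {p_rest:.3f}")

Quoting a correlation without checking the influence of individual points – let alone without worrying about confounders – is asking for trouble.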

 

P.S. Among the many hilarious responses on Twitter was this:

 

Taxing Figures

Posted in Bad Statistics, Politics with tags , , , on January 29, 2020 by telescoper

Following the campaign for the forthcoming General Election in Ireland has confirmed (not entirely unexpectedly) that politicians over here are not averse to peddling demonstrable untruths.

One particular example came up in a recent televised debate during which Fine Gael leader Leo Varadkar talked about his party’s plans for tax cuts, achieved by raising the salary at which workers start paying the higher rate of income tax. Here’s a summary of the proposal from the Irish Times:

Fine Gael wants to increase the threshold at which people hit the higher rate of income tax from €35,300 to €50,000, which it says will be worth €3,000 to the average earner if the policy is fully implemented.

Three thousand (per year) to the average earner! Sounds great!

But let’s look at the figures. There are two tax rates in Ireland. The first part of your income up to a certain amount is taxed at 20% – this is known as the Standard Rate. The remainder of your income is taxed at 40% which is known as the Higher Rate. The cut-off point for the standard rate depends on circumstances, but for a single person it is currently €35,300.

According to official statistics the average salary is €38,893 per year, as has been widely reported. Let’s call that €38,900 for round figures. Note that figure includes overtime and other earnings, not just basic wages.

It’s worth pointing out that in Ireland (as practically everywhere else) the distribution of earnings is very skewed. Here is an example, showing weekly earnings in Ireland a few years ago, to demonstrate the point.

 

This means that there are more people earning less than the average salary (also known as the mean) than earning more than it. In Ireland over 60% of people earn less than the average. Using the mean in examples like this* is rather misleading – the median would be less influenced by a few very high salaries – but let’s continue with it for the sake of argument.

So how much will a person earning €38,900 actually benefit from raising the higher rate tax threshold to €50,000? For clarity I’ll consider this question in isolation from other proposed changes.

Currently such a person pays tax at 40% on the portion of their salary exceeding the threshold which is €38,900 – €35,300 = €3600. Forty per cent of that figure is €1440. If the higher rate threshold is raised above their earnings level this €3600 would instead be taxed at the Standard rate of 20%, which means that €720 would be paid instead of €1440. The net saving is therefore €720 per annum. This is a saving, but it’s nowhere near €3000. Fine Gael’s claim is therefore demonstrably false.

If you look at the way the tax bands work it is clear that a person earning over €50,000 would save an amount which is equivalent to 20% of the difference between €35,300 and €50,000 which is a sum close to €3000, but that only applies to people earning well over the average salary. For anyone earning less than €50,000 the saving is much less.
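Here is a short sketch of the arithmetic (considering only the standard-rate band in isolation, and ignoring tax credits, USC and PRSI, which don’t affect the point):

def higher_rate_saving(salary, old_threshold=35_300, new_threshold=50_000,
                       standard_rate=0.20, higher_rate=0.40):
    """Annual saving from raising the higher-rate threshold, other things being equal."""
    # Income that moves from the higher rate to the standard rate
    shifted = max(0, min(salary, new_threshold) - old_threshold)
    return shifted * (higher_rate - standard_rate)

for salary in (30_000, 38_900, 50_000, 80_000):
    print(f"Salary EUR {salary:,}: saving EUR {higher_rate_saving(salary):,.0f} per year")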

The untruth lies therefore in the misleading use of the term `average salary’.

Notice furthermore that anyone earning less than the higher rate tax threshold will not benefit in any way from the proposed change, so it favours the better off. That’s not unexpected for Fine Gael. A fairer change (in my view) would involve increasing the higher rate threshold and also the higher rate itself.

All this presupposes of course that you think cutting tax is a good idea at this time. Personally I don’t. Ireland is crying out for greater investment in public services and infrastructure so I think it’s inadvisable to make less money available for these purposes, which is what cutting tax would do.

 

*Another example is provided by the citation numbers for papers in the Open Journal of Astrophysics. The average number of citations for the 12 papers published in 2019 was around 34 but eleven of the twelve had fewer citations than this: the average is dragged up by one paper with >300 citations.

 

Phase Correlations and the LIGO Data Analysis Paper

Posted in Bad Statistics, The Universe and Stuff with tags , , , on September 1, 2019 by telescoper

I have to admit I haven’t really kept up with developments in the world of gravitational waves this summer, though there have been a number of candidate events reported in the third observing run (O3) of Advanced LIGO, which began in April 2019; I refer you to those reports if you’re interested.

I did notice, however, that late last week a new paper from the LIGO Scientific Collaboration and Virgo Collaboration appeared on the arXiv. This is entitled A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals and has the following abstract:

The LIGO Scientific Collaboration and the Virgo Collaboration have cataloged eleven confidently detected gravitational-wave events during the first two observing runs of the advanced detector era. All eleven events were consistent with being from well-modeled mergers between compact stellar-mass objects: black holes or neutron stars. The data around the time of each of these events have been made publicly available through the Gravitational-Wave Open Science Center. The entirety of the gravitational-wave strain data from the first and second observing runs have also now been made publicly available. There is considerable interest among the broad scientific community in understanding the data and methods used in the analyses. In this paper, we provide an overview of the detector noise properties and the data analysis techniques used to detect gravitational-wave signals and infer the source properties. We describe some of the checks that are performed to validate the analyses and results from the observations of gravitational-wave events. We also address concerns that have been raised about various properties of LIGO-Virgo detector noise and the correctness of our analyses as applied to the resulting data.

It’s an interesting paper that gives quite a lot of detail, especially about signal extraction and parameter-fitting, so it’s very well worth reading.

Two particular things caught my eye about this. One is that there’s no list of authors anywhere in the paper, which seems a little strange. This policy may not be new, of course. I did say I haven’t really been keeping up.

The other point I’ll mention relates to this Figure, the caption of which refers to paper [41], the famous `Danish paper‘:

The Fourier phase is plotted vertically (between 0 and 2π) and the frequency horizontally. A random-phase distribution should have the phases uniformly distributed at each frequency. I think we can agree, without further statistical analysis,  that the blue points don’t have that property!  Of course nobody denies that the strongly correlated phases  in the un-windowed data are at least partly an artifact of the application of a Fourier transform to a non-stationary time series.

I suppose the demonstration that applying a window function to apodize the data removes the phase correlations is meant to represent some form of rebuttal of the claims made in the Danish paper. If so, it’s not very convincing.

For a start, the caption just says that after windowing the resulting `phases appear randomly distributed‘. Could they not provide some more meaningful statistical statement than a simple eyeball impression? The text says little more:

In addition to causing spectral leakage, improper windowing of the data can result in spurious phase correlations in the Fourier transform. Figure 4 shows a scatter plot of the Fourier phase as a function of frequency … both with and without the application of a window function. The un-windowed data shows a strong phase correlation, while the windowed data does not.

(I added the link to the explanation of `spectral leakage’.)

As I have mentioned before on this blog, the human eye is very poor at distinguishing pattern from randomness. There are some subtleties involved in testing for correlated phases (e.g. because they are periodic) but there are various techniques available: I’ve worked on this myself (see, e.g., here and here). The phases shown may well be consistent with a uniform random distribution, but I’m surprised the LIGO authors didn’t present a proper statistical analysis of the windowed phases to prove beyond doubt the point they seem to be trying to make.
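For what it’s worth, quantitative checks of this kind are not difficult. Here, purely as an illustration (not applied to the actual LIGO phases), is a sketch of a Rayleigh-style test for uniformity of phases on the circle; testing for phase correlations across frequency needs more care, but even something this simple would be more meaningful than an eyeball impression:

import numpy as np

def rayleigh_test(phases):
    """Rayleigh test for uniformity of angles on [0, 2*pi).

    Returns the mean resultant length R and an approximate p-value;
    a small p suggests the phases are not uniformly distributed."""
    n = len(phases)
    R = np.abs(np.mean(np.exp(1j * phases)))
    z = n * R**2
    # Standard large-n approximation to the p-value
    p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))
    return R, max(min(p, 1.0), 0.0)

rng = np.random.default_rng(7)
uniform_phases = rng.uniform(0, 2 * np.pi, 500)
clustered_phases = rng.vonmises(mu=np.pi, kappa=2.0, size=500) % (2 * np.pi)

print("Uniform phases:   R = %.3f, p = %.3g" % rayleigh_test(uniform_phases))
print("Clustered phases: R = %.3f, p = %.3g" % rayleigh_test(clustered_phases))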

Then again, later on in the caption, there is a statement that `the phases show some clustering around the 60 Hz power line’. So, on the one hand the phases `appear random’, but on the other hand they’re not. There are other plausible clusters elsewhere too. What about them?

I’m afraid the absence of quantitative detail means I don’t find this a very edifying discussion!

 

Hubble Tension: an “Alternative” View?

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , on July 25, 2019 by telescoper

There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.

Here is the abstract of the Vagnozzi paper:

I posted this picture last week which is relevant to the discussion:

The point is that if you allow the equation of state parameter w to vary from the value of w=-1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter then you have to assign a prior probability over the space of values that parameter could hold. That prior penalty is carried through to the posterior probability. Unless the new model fits observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.

The Vagnozzi paper represents a statement of this in the context of the Hubble tension. If a new floating parameter w is introduced the data prefer a value less than -1 (as demonstrated in the figure) but on posterior probability grounds the resulting model is less probable than the standard cosmology for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is introduced then the resulting model is not penalized by having to spread the prior probability out over a range of values but puts all its prior eggs in one basket labelled w = -1.3.

This is of course true. The problem is that the value of w = -1.3 does not derive from any ab initio principle of physics but is instead chosen a posteriori on the basis of the inference described above. It’s no surprise that you can get a better answer if you know what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score.

Indeed, many cosmologists think values of w < -1 should be ruled out ab initio because they don’t make physical sense anyway.
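To put some toy numbers on the Occam penalty and the a posteriori point (the best-fit value, width and prior range below are all made up, and have nothing to do with the real CMB likelihood): summarise the data as a Gaussian likelihood for w and compare the evidences.

import numpy as np
from scipy.integrate import quad

# Hypothetical likelihood for w, summarising the data as a Gaussian
w_hat, sigma = -1.2, 0.15          # invented best-fit value and uncertainty

def likelihood(w):
    return np.exp(-0.5 * ((w - w_hat) / sigma) ** 2)

# Model 1: LCDM, with w fixed at -1
Z_lcdm = likelihood(-1.0)

# Model 2: w free, with a uniform prior on [-2.0, -0.5]
prior_lo, prior_hi = -2.0, -0.5
Z_free, _ = quad(lambda w: likelihood(w) / (prior_hi - prior_lo), prior_lo, prior_hi)

# Model 3: w fixed a posteriori at the best-fit value
Z_fixed = likelihood(w_hat)

print(f"Evidence ratio (w free)        vs LCDM: {Z_free / Z_lcdm:.2f}")
print(f"Evidence ratio (w = {w_hat} fixed) vs LCDM: {Z_fixed / Z_lcdm:.2f}")

With these made-up numbers the free-w model fits better but is still disfavoured once the prior probability is spread over a range of values, while fixing w at the best-fit value ‘wins’ only because the data were used to choose it.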

 

 

 

Statistical Analysis of the 1919 Eclipse Measurements

Posted in Bad Statistics, The Universe and Stuff with tags , , , , on May 27, 2019 by telescoper

So the centenary of the famous 1919 Eclipse measurements is only a couple of days away and to mark it I have a piece on RTÉ Brainstorm published today in advance of my public lecture on Wednesday.

I thought I’d complement the more popular piece by posting a very short summary of how the measurements were analyzed for those who want a bit more technical detail.

The idea is simple. Take a photograph during a solar eclipse, when some stars are visible in the sky close enough to the Sun for their light to be deflected by its gravity. Take a similar photograph of the same stars at night at some other time, when the Sun is elsewhere. Compare the positions of the stars on the two photographs: the star positions should have shifted slightly on the eclipse plates compared to the comparison plate. This gravitational shift should be radially outwards from the centre of the Sun.

One can measure the coordinates of the stars in two directions, Right Ascension (x) and Declination (y), and the corresponding (small) differences between the positions in each direction are Dx and Dy, which appear in the equations written out below.
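Reconstructed from the description of the terms that follows (so treat the notation as indicative rather than definitive), the astrometric model for each star is linear:

Dx = a x + b y + c + α Ex(x, y)
Dy = d x + e y + f + α Ey(x, y)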

In the absence of any other effects these deflections should be equal to the deflection in each component calculated using either Einstein’s theory or the Newtonian value. This is represented by the two terms Ex(x,y) and Ey(x,y), which give the calculated components of the deflection in the x and y directions, scaled by a parameter α which is the object of interest – α should be precisely a factor of two larger in Einstein’s theory than in the `Newtonian’ calculation.

The problem is that there are several other things that can cause differences between positions of stars on the photographic plate, especially if you remember that the eclipse photographs have to be taken out in the field rather than at an observatory.  First of all there might be an offset in the coordinates measured on the two plates: this is represented by the terms c and f in the equations above. Second there might be a slightly different magnification on the two photographs caused by different optical performance when the two plates were exposed. These would result in a uniform scaling in x and y which is distinguishable from the gravitational deflection because it is not radially outwards from the centre of the Sun. This scale factor is represented by the terms a and e. Third, and finally, the plates might be oriented slightly differently, mixing up x and y as represented by the cross-terms b and d.

Before one can determine a value for α from a set of measured deflections one must estimate and remove the other terms represented by the parameters a-f. There are seven unknowns (including α) so one needs at least seven measurements to get the necessary astrometric solution.

The approach Eddington wanted to use to solve this problem involved setting up simultaneous equations for these parameters and eliminating variables to yield values for α for each plate. Repeating this over many plates allows one to beat down the measurement errors by averaging and return a final overall value for α. The 1919 eclipse was particularly suitable for this experiment because (a) there were many bright stars positioned close to the Sun on the sky during totality and (b) the duration of totality was rather long – around 7 minutes – allowing many exposures to be taken.

This was indeed the approach he did use to analyze the data from the Sobral plates, but for the plates taken at Principe during poor weather he didn’t have enough star positions to do this: he therefore used estimates of the scale parameters (a and e) taken entirely from the comparison plates. This is by no means ideal, though he didn’t really have any choice.

If you ask me, a conceptually better approach would be the Bayesian one: set up priors on the seven parameters, then marginalize over a-f to leave a posterior distribution on α. This task is left as an exercise to the reader.
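For the impatient, here is a minimal sketch of a simplified version of the exercise, using simulated (not historical) star positions and a toy 1/r deflection law; since the model is linear in all seven parameters, a least-squares fit with known Gaussian noise gives the same answer as marginalizing a Gaussian posterior with broad flat priors. All names and numbers below are purely illustrative.

import numpy as np

rng = np.random.default_rng(1919)

# Simulated star positions around the Sun (arbitrary units)
n_stars = 8
radius = rng.uniform(0.5, 1.5, n_stars)
angle = rng.uniform(0, 2 * np.pi, n_stars)
x, y = radius * np.cos(angle), radius * np.sin(angle)
Ex, Ey = x / radius**2, y / radius**2     # toy deflection: falls off as 1/r, radially outward

# "True" plate parameters and deflection scale used to generate fake measurements
a, b, c, d, e, f, alpha_true = 0.01, -0.005, 0.02, 0.004, 0.012, -0.01, 1.75
noise = 0.01
Dx = a * x + b * y + c + alpha_true * Ex + rng.normal(0, noise, n_stars)
Dy = d * x + e * y + f + alpha_true * Ey + rng.normal(0, noise, n_stars)

# Stack the x- and y-equations into one linear system: 2*n_stars equations, 7 unknowns
zeros, ones = np.zeros((n_stars, 1)), np.ones((n_stars, 1))
A = np.block([
    [x[:, None], y[:, None], ones, zeros, zeros, zeros, Ex[:, None]],
    [zeros, zeros, zeros, x[:, None], y[:, None], ones, Ey[:, None]],
])
data = np.concatenate([Dx, Dy])

theta, *_ = np.linalg.lstsq(A, data, rcond=None)
cov = noise**2 * np.linalg.inv(A.T @ A)   # covariance of the estimates (posterior covariance for broad flat priors)
print(f"alpha = {theta[-1]:.2f} +/- {np.sqrt(cov[-1, -1]):.2f}  (true value {alpha_true})")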

 

 

Dos and Don’ts of reduced chi-squared

Posted in Bad Statistics, The Universe and Stuff with tags , , on April 26, 2019 by telescoper

Yesterday I saw a tweet about an arXiv paper and thought I’d share it here. The paper, I mean. It’s not new but I’ve never seen it before and I think it’s well worth reading. The abstract of the paper is:

Reduced chi-squared is a very popular method for model assessment, model comparison, convergence diagnostic, and error estimation in astronomy. In this manuscript, we discuss the pitfalls involved in using reduced chi-squared. There are two independent problems: (a) The number of degrees of freedom can only be estimated for linear models. Concerning nonlinear models, the number of degrees of freedom is unknown, i.e., it is not possible to compute the value of reduced chi-squared. (b) Due to random noise in the data, also the value of reduced chi-squared itself is subject to noise, i.e., the value is uncertain. This uncertainty impairs the usefulness of reduced chi-squared for differentiating between models or assessing convergence of a minimisation procedure. The impact of noise on the value of reduced chi-squared is surprisingly large, in particular for small data sets, which are very common in astrophysical problems. We conclude that reduced chi-squared can only be used with due caution for linear models, whereas it must not be used for nonlinear models at all. Finally, we recommend more sophisticated and reliable methods, which are also applicable to nonlinear models.

I added the link at the beginning; you can download a PDF of the paper here.

I’ve never really understood why this statistic (together with related frequentist-inspired ideas) is treated with such reverence by astronomers, so this paper offers a valuable critique to those tempted to rely on it blindly.
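To illustrate point (b) of the abstract, here is a quick sketch (for a simple straight-line model, so that the number of degrees of freedom is at least well defined; all the numbers are simulated) of how noisy reduced chi-squared itself is when the data set is small:

import numpy as np

rng = np.random.default_rng(0)

def reduced_chisq_samples(n_points, n_trials=5_000):
    """Fit a straight line to data generated exactly from a straight line
    with unit Gaussian noise, and return the reduced chi-squared values."""
    x = np.linspace(0.0, 1.0, n_points)
    A = np.vstack([np.ones_like(x), x]).T          # design matrix for y = a + b*x
    values = np.empty(n_trials)
    for i in range(n_trials):
        y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, n_points)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coeffs
        values[i] = np.sum(resid**2) / (n_points - 2)   # nu = N - 2 for a linear fit
    return values

for n in (5, 10, 1000):
    v = reduced_chisq_samples(n)
    print(f"N = {n:4d}: reduced chi-squared = {v.mean():.2f} +/- {v.std():.2f}")

Even when the fitted model is exactly the one that generated the data, the reduced chi-squared from a handful of points scatters so widely that a value well away from 1 tells you very little.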

 

 

Bad Statistics and the Gender Gap

Posted in Bad Statistics with tags , , , on April 3, 2019 by telescoper

So there’s an article in Scientific American called How to Close the Gender Gap in the Labo(u)r Force (I’ve added a `u’ to `Labour’ so that it can be understood in the UK).

I was just thinking the other day that it’s been a while since I added any posts to the `Bad Statistics’ folder, but this Scientific American article offers a corker:

That parabola is a  `Regression line’? Seriously? Someone needs a lesson in how not to over-fit data! It’s plausible that the orange curve might be the best-fitting parabola to the blue points, but that doesn’t mean that it provides a sensible description of the data…

I can see a man walking a dog in the pattern of points to the top right: can I get this observation published in Scientific American?
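A toy sketch of the point (with simulated points, nothing to do with the Scientific American figure): a best-fitting parabola can always be found, even for an essentially structureless scatter, but it explains almost none of the variance.

import numpy as np

rng = np.random.default_rng(3)

# Structureless scatter: x and y essentially unrelated
x = rng.uniform(0.0, 1.0, 60)
y = rng.normal(0.5, 0.2, 60)

# The best-fitting parabola exists whether or not it means anything
coeffs = np.polyfit(x, y, deg=2)
y_fit = np.polyval(coeffs, x)

# Fraction of the variance the parabola actually explains
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"Best-fit parabola coefficients: {coeffs}")
print(f"R-squared = {1 - ss_res / ss_tot:.3f}")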

 

 

Grave Wave Doubts?

Posted in Bad Statistics, The Universe and Stuff with tags , , , , on November 1, 2018 by telescoper


I noticed this morning that this week’s New Scientist cover feature (by Michael Brooks) is entitled Exclusive: Grave doubts over LIGO’s discovery of gravitational waves. The article is behind a paywall – and I’ve so far been unable to locate a hard copy in Maynooth – so I haven’t read it yet, but it is about the so-called `Danish paper’ that pointed out various unexplained features in the LIGO data associated with the first detection of gravitational waves from a binary black hole merger.

I did know this piece was coming, however, as I spoke to the author on the phone some time ago to clarify some points I made in previous blog posts on this issue (e.g. this one and that one). I even ended up being quoted in the article:

Not everyone agrees the Danish choices were wrong. “I think their paper is a good one and it’s a shame that some of the LIGO team have been so churlish in response,” says Peter Coles, a cosmologist at Maynooth University in Ireland.

I stand by that comment, as I think certain members – though by no means all – of the LIGO team have been uncivil in their reaction to the Danish team, implying that they consider it somehow unreasonable that the LIGO results should be subject to independent scrutiny. I am not convinced that the unexplained features in the data released by LIGO really do cast doubt on the detection, but unexplained features there undoubtedly are. Surely it is the job of science to explain the unexplained?

An important aspect of the way science works is that when a given individual or group publishes a result, it should be possible for others to reproduce it (or not, as the case may be). In normal-sized laboratory physics it suffices to explain the experimental set-up in the published paper in sufficient detail for another individual or group to build an equivalent replica experiment if they want to check the results. In `Big Science’, e.g. with LIGO or the Large Hadron Collider, it is not practically possible for other groups to build their own copy, so the best that can be done is to release the data coming from the experiment. A basic problem with reproducibility obviously arises when this does not happen.

In astrophysics and cosmology, results in scientific papers are often based on very complicated analyses of large data sets. This is also the case for gravitational wave experiments. Fortunately, in astrophysics these days, researchers are generally pretty good at sharing their data, but there are a few exceptions in that field.

Even allowing open access to data doesn’t always solve the reproducibility problem. Often extensive numerical codes are needed to process the measurements and extract meaningful output. Without access to these pipeline codes it is impossible for a third party to check the path from input to output without writing their own version, assuming that there is sufficient information to do that in the first place. That researchers should publish their software as well as their results is quite a controversial suggestion, but I think it’s the best practice for science. In any case there are often intermediate stages between `raw’ data and scientific results, as well as ancillary data products of various kinds. I think these should all be made public. Doing that could well entail a great deal of effort, but I think in the long run that it is worth it.

I’m not saying that scientific collaborations should not have a proprietary period, just that this period should end when a result is announced, and that any such announcement should be accompanied by a release of the data products and software needed to subject the analysis to independent verification.

Given that the detection of gravitational waves is one of the most important breakthroughs ever made in physics, I think this is a matter of considerable regret. I also find it difficult to understand the reasoning that led the LIGO consortium to think it was a good plan only to go part of the way towards open science, by releasing only part of the information needed to reproduce the processing of the LIGO signals and their subsequent statistical analysis. There may be good reasons that I know nothing about, but at the moment it seems to me to represent a wasted opportunity.

CLARIFICATION: The LIGO Consortium released data from the first observing run (O1) – you can find it here – early in 2018, but this data set was not available publicly at the time of publication of the first detection, nor when the team from Denmark did their analysis.

I know I’m an extremist when it comes to open science, and there are probably many who disagree with me, so here’s a poll I’ve been running for a year or so on this issue:

Any other comments welcome through the box below!

UPDATE: There is a (brief) response from LIGO (& VIRGO) here.