Archive for statistics

A Keno Game Problem

Posted in Cute Problems on July 25, 2014 by telescoper

It’s been a while since I posted anything in the Cute Problems category so, given that I’ve got an unexpected gap of half an hour today, I thought I’d return to one of my side interests, the mathematics of games and gambling.

There is a variety of gambling games called Keno games in which a player selects (or is given) a set of numbers, some or all of which the player hopes to match with numbers drawn without replacement from a larger set of numbers. A common example of this type of game is Bingo. These games mostly originate in the 19th Century when travelling carnivals and funfairs often involved booths in which customers could gamble in various ways; similar things happen today, though perhaps with more sophisticated games.

In modern Casino Keno (sometimes called Race Horse Keno) a player receives a card with the numbers from 1 to 80 marked on it. He or she then marks a selection of between 1 and 15 numbers and indicates the amount of a proposed bet; if n numbers are marked then the game is called “n-spot Keno”. Obviously, in 1-spot Keno, only one number is marked. Twenty numbers are then drawn without replacement from a set comprising the integers 1 to 80, using some form of randomizing device. If an appropriate proportion of the marked numbers are in fact drawn, the player gets a payoff calculated by the House. Below you can see the usual payoffs for 10-spot Keno:

[Table: payoffs for 10-spot Keno]
If fewer than five of your numbers are drawn, you lose your £1 stake. The expected gain on a £1 bet can be calculated by working out the probability of each of the outcomes listed above multiplied by the corresponding payoff, adding these together and then subtracting the probability of losing your stake (which corresponds to a gain of -£1). If this overall expected gain is negative (which it will be for any competently run casino) then the expected loss is called the house edge. In other words, if you can expect to lose £X on a £1 bet then X is the house edge.
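If you want to check your answer numerically, the calculation is easy to code up. Here’s a minimal Python sketch assuming the setup described above; note that the payoff values in the dictionary are placeholders I’ve made up for illustration, so you should substitute the actual figures from the table before trusting the result.

from math import comb

def p_match(k, marked=10, drawn=20, total=80):
    # Probability that exactly k of the marked numbers are among the
    # twenty drawn without replacement from eighty (hypergeometric).
    return comb(marked, k) * comb(total - marked, drawn - k) / comb(total, drawn)

# Hypothetical payoffs (in £) for matching five or more numbers in
# 10-spot Keno; replace these with the values from the actual table.
payoffs = {5: 2, 6: 18, 7: 180, 8: 1300, 9: 2600, 10: 10000}

expected_gain = sum(payoffs[k] * p_match(k) for k in payoffs)
expected_gain -= sum(p_match(k) for k in range(5))   # fewer than five matches: lose the £1 stake

print("house edge per £1 bet:", -expected_gain)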

What is the house edge for 10-spot Keno?

Answers through the comments box please!

Time for a Factorial Moment…

Posted in Bad Statistics on July 22, 2014 by telescoper

Another very busy and very hot day so no time for a proper blog post. I suggest we all take a short break and enjoy a Factorial Moment:

[Image: Factorial Moment]

I remember many moons ago spending ages calculating the factorial moments of the Poisson-Lognormal distribution, only to find that they were well known. If only I’d had Google then…
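In case it’s useful, here’s a quick reminder of the definition. The r-th factorial moment of a discrete random variable N is just the expectation value of the falling factorial:

\mu_{(r)} = \mathrm{E}\left[N(N-1)(N-2)\cdots(N-r+1)\right]

For a Poisson distribution with mean \lambda this is simply \lambda^{r}, and for a mixed Poisson distribution (such as the Poisson-Lognormal mentioned above) the factorial moments of the counts reduce to the ordinary moments of the mixing distribution, which is what makes them such a convenient tool.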

Uncertain Attitudes

Posted in Bad Statistics, Politics on May 28, 2014 by telescoper

It’s been a while since I posted anything in the bad statistics file, but an article in today’s Grauniad has now given me an opportunity to rectify that omission.

The piece concerned, entitled Racism on the rise in Britain, is based on some new data from the British Social Attitudes survey; the full report can be found here (PDF). The main result is shown in this graph:

[Graph: fraction of respondents describing themselves as racially prejudiced, from the British Social Attitudes survey]

The version of this plot shown in the Guardian piece has the smoothed long-term trend (the blue curve, based on a five-year moving average of the data and clearly generally downward since 1986) removed.

In any case the report, as is sadly almost always the case in surveys of this kind, neglects any mention of the statistical uncertainty in the survey. In fact the last point is based on a sample of 2149 respondents. Suppose the fraction of the population describing themselves as having some prejudice is p. For a sample of size n with x respondents indicating that they describe themselves as “very prejudiced or a little prejudiced”, one can straightforwardly estimate p \simeq x/n. So far so good, as long as there is no bias induced either by the form of the question asked or by the selection of the sample…

However, a little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:

\sigma = \sqrt{\frac{p(1-p)}{n}}

For the sample size given, and a value p \simeq 0.35 this amounts to a standard error of about 1%. About 95% of samples drawn from a population in which the true fraction is p will yield an estimate within p \pm 2\sigma, i.e. within about 2% of the true figure. This is consistent with the “noise” on the unsmoothed curve and it shows that the year-on-year variation shown in the unsmoothed graph is largely attributable to sampling uncertainty; note that the sample sizes vary from year to year too. The results for 2012 and 2013 are 26% and 30% exactly, which differ by 4% and are therefore explicable solely in terms of sampling fluctuations.
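Here’s a minimal Python sketch of that calculation, using the sample size and the approximate fraction quoted above:

from math import sqrt

n = 2149     # sample size for the latest survey point
p = 0.35     # approximate fraction describing themselves as prejudiced

sigma = sqrt(p * (1 - p) / n)                    # binomial sampling error
print("standard error:", round(sigma, 4))        # roughly 0.01, i.e. about 1%
print("95% interval: +/-", round(2 * sigma, 4))  # roughly 0.02, i.e. about 2%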

I don’t know whether racial prejudice is on the rise in the UK or not, nor even how accurately such attitudes are measured by such surveys in the first place, but there’s no evidence in these data of any significant change over the past year. Given the behaviour of the smoothed data, however, there is evidence that in the very long term the fraction of the population identifying themselves as prejudiced is actually falling.

Newspapers however rarely let proper statistics get in the way of a good story, even to the extent of removing evidence that contradicts their own prejudice.

Galaxies, Glow-worms and Chicken Eyes

Posted in Bad Statistics, The Universe and Stuff on February 26, 2014 by telescoper

I just came across a news item based on a research article in Physical Review E by Jiao et al. with the abstract:

Optimal spatial sampling of light rigorously requires that identical photoreceptors be arranged in perfectly regular arrays in two dimensions. Examples of such perfect arrays in nature include the compound eyes of insects and the nearly crystalline photoreceptor patterns of some fish and reptiles. Birds are highly visual animals with five different cone photoreceptor subtypes, yet their photoreceptor patterns are not perfectly regular. By analyzing the chicken cone photoreceptor system consisting of five different cell types using a variety of sensitive microstructural descriptors, we find that the disordered photoreceptor patterns are “hyperuniform” (exhibiting vanishing infinite-wavelength density fluctuations), a property that had heretofore been identified in a unique subset of physical systems, but had never been observed in any living organism. Remarkably, the patterns of both the total population and the individual cell types are simultaneously hyperuniform. We term such patterns “multihyperuniform” because multiple distinct subsets of the overall point pattern are themselves hyperuniform. We have devised a unique multiscale cell packing model in two dimensions that suggests that photoreceptor types interact with both short- and long-ranged repulsive forces and that the resultant competition between the types gives rise to the aforementioned singular spatial features characterizing the system, including multihyperuniformity. These findings suggest that a disordered hyperuniform pattern may represent the most uniform sampling arrangement attainable in the avian system, given intrinsic packing constraints within the photoreceptor epithelium. In addition, they show how fundamental physical constraints can change the course of a biological optimization process. Our results suggest that multihyperuniform disordered structures have implications for the design of materials with novel physical properties and therefore may represent a fruitful area for future research.

The point made in the paper is that the photoreceptors found in the eyes of chickens possess a property called disordered hyperuniformity, which means that they appear disordered on small scales but exhibit order over large distances. Here’s an illustration:

[Image: chicken cone photoreceptor distribution (left) and computer-simulation model (right)]

It’s an interesting paper, but I’d like to quibble about something it says in the accompanying news story. The caption with the above diagram states

Left: visual cell distribution in chickens; right: a computer-simulation model showing pretty much the exact same thing. The colored dots represent the centers of the chicken’s eye cells.

Well, as someone who has spent much of his research career trying to discern and quantify patterns in collections of points – in my case they tend to be galaxies rather than photoreceptors – I find it difficult to defend the use of the phrase “pretty much the exact same thing”. It’s notoriously difficult to look at realizations of stochastic point processes and decide whether they are statistically similar or not. For that you generally need quite sophisticated mathematical analysis. In fact, to my eye, the two images above don’t look at all like “pretty much the exact same thing”. I’m not at all sure that the model works as well as it is claimed, as the statistical analysis presented in the paper is relatively simple: I’d need to see some more quantitative measures of pattern morphology and clustering, especially higher-order correlation functions, before I’m convinced.
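To give a flavour of what a quantitative comparison involves, here’s a minimal Python sketch of one of the simplest clustering statistics, a naive estimate of Ripley’s K function (with no edge correction); the paper itself uses more sophisticated descriptors, so this is purely illustrative rather than a reproduction of their analysis.

import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area=1.0):
    # Naive estimate of Ripley's K(r): the mean number of neighbours within
    # distance r of a point, divided by the mean point density. No edge
    # correction is applied, so the estimate is biased low near the boundary.
    n = len(points)
    d = pdist(points)                                   # all pairwise separations
    counts = np.array([(d <= r).sum() for r in radii])  # unordered pairs within r
    return 2.0 * area * counts / (n * (n - 1))

# For a completely random (Poisson) pattern, K(r) should be close to pi * r^2.
rng = np.random.default_rng(42)
points = rng.random((500, 2))            # Poisson process in the unit square
radii = np.array([0.05, 0.10, 0.15])
print(ripley_k(points, radii))
print(np.pi * radii**2)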

Anyway, all this reminded me of a very old post of mine about the difficulty of discerning patterns in distributions of points. Take the two (not very well scanned) images here as examples:

[Image: two scanned point patterns]

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process (which is, in a well-defined sense, completely “random”) and the other contains spatial correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I sometimes show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the one on the right is the one that is random and the left one is the one with structure to it. It is not hard to see why. The right-hand pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the left one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the left picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The right-hand pattern is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms (a kind of beetle) which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern. In fact, the tendency displayed in this image of the points to spread themselves out more smoothly than a random distribution is in some ways reminiscent of the chicken eye problem.
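For anyone who wants to play with this sort of thing, here’s a minimal Python sketch along the lines described above: one pattern is a Poisson process, the other a simple hard-core process in which each point carries a zone of avoidance. I should stress that this is just an illustration of the general idea, not the actual algorithm used to make the images above.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 500

# Completely "random": a Poisson process in the unit square.
poisson = rng.random((n, 2))

# Anticorrelated: accept each candidate point only if it lies further
# than r_min from every point accepted so far (a zone of avoidance).
r_min = 0.02
accepted = []
while len(accepted) < n:
    candidate = rng.random(2)
    if all(np.hypot(*(candidate - q)) > r_min for q in accepted):
        accepted.append(candidate)
hardcore = np.array(accepted)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.scatter(*poisson.T, s=3)
ax1.set_title("Poisson (random)")
ax2.scatter(*hardcore.T, s=3)
ax2.set_title("Zone of avoidance (anticorrelated)")
plt.show()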

The moral of all this is that people are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this. The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose. By the same token, people are also pretty hopeless at figuring out whether two distributions of points resemble each other in some kind of statistical sense, because that can only be made precise if one defines some specific quantitative measure of clustering pattern, which is not easy to do.

Double Indemnity – Statistics Noir

Posted in Film on February 20, 2014 by telescoper

The other day I decided to treat myself by watching a DVD of the film Double Indemnity. It’s a great movie for many reasons, not least because when it was released in 1944 it immediately established much of the language and iconography of the genre that has come to be known as film noir, which I’ve written about on a number of occasions on this blog; see here for example. Like many noir movies the plot revolves around the destructive relationship between a femme fatale and a male anti-hero and, as usual for the genre, the narrative strategy involves the use of flashbacks and a first-person voice-over. The photography is done in such a way as to surround the protagonists with dark, threatening shadows. In fact almost every interior in the film (including the one shown in the clip below) has Venetian blinds for this purpose. These chiaroscuro lighting effects charge even the most mundane encounters with psychological tension or erotic suspense.

[Still from Double Indemnity]

To the left is an example still from Double Indemnity which shows a number of trademark features. The shadows cast by Venetian blinds on the wall, the cigarette being smoked by Barbara Stanwyck and the curious construction of the mise en scène are all very characteristic of the style. What is even more wonderful about this particular shot, however, is the way the shadow of Fred MacMurray’s character enters the scene before he does. The Barbara Stanwyck character is just about to shoot him with a pearl-handled revolver; this image suggests that he is already on his way to the underworld as he enters the room.

I won’t repeat any more of the things I’ve already said about this great movie, but I will say a couple of things that struck me watching it again at the weekend. The first is that even after having seen it dozens of times over the years I still found it intense and gripping. The other is that I think one of the contributing factors to its greatness which is not often discussed is a wonderful cameo by Edward G. Robinson, who steals every scene he appears in as the insurance investigator Barton Keyes. Here’s an example, which I’ve chosen because it provides an interesting illustration of the scientific use of statistical information, another theme I’ve visited frequently on this blog:

Statistical Challenges in 21st Century Cosmology

Posted in The Universe and Stuff on December 2, 2013 by telescoper

I received the following email about a forthcoming conference which is probably of interest to a (statistically) significant number of readers of this blog so I thought I’d share it here with an encouragement to attend:

–o–

IAUS306 – Statistical Challenges in 21st Century Cosmology

We are pleased to announce the IAU Symposium 306 on Statistical Challenges in 21st Century Cosmology, which will take place in Lisbon, Portugal from 26-29 May 2014, with a tutorial day on 25 May.  Apologies if you receive this more than once.

Full exploitation of the very large surveys of the Cosmic Microwave Background, Large-Scale Structure, weak gravitational lensing and future 21cm surveys will require use of the best statistical techniques to answer the major cosmological questions of the 21st century, such as the nature of Dark Energy and gravity.

Thus it is timely to emphasise the importance of inference in cosmology, and to promote dialogue between astronomers and statisticians. This has been recognized by the creation of the IAU Working Group in Astrostatistics and Astroinformatics in 2012.

IAU Symposium 306 will be devoted to problems of inference in cosmology, from data processing to methods and model selection, and will have an important element of cross-disciplinary involvement from the statistics communities.

Keynote speakers

• Cosmic Microwave Background :: Graca Rocha (USA / Portugal)

• Weak Gravitational Lensing :: Masahiro Takada (Japan)

• Combining probes :: Anais Rassat (Switzerland)

• Statistics of Fields :: Sabino Matarrese (Italy)

• Large-scale structure :: Licia Verde (Spain)

• Bayesian methods :: David van Dyk (UK)

• 21cm cosmology :: Mario Santos (South Africa / Portugal)

• Massive parameter estimation :: Ben Wandelt (France)

• Overwhelmingly large datasets :: Alex Szalay (USA)

• Errors and nonparametric estimation :: Aurore Delaigle (Australia)

You are invited to submit an abstract for a contributed talk or poster for the meeting, via the meeting website. The deadline for abstract submission is 21st March 2014. Full information on the scientific rationale, programme, proceedings, critical dates, and local arrangements will be on the symposium web site here.

Deadlines

13 January 2014 – Grant requests

21 March 2014 – Abstract submission

4 April 2014 – Notification of abstract acceptance

11 April 2014 – Close of registration

30 June 2014 – Manuscript submission

Australia: Cyclones go up to Eleven!

Posted in Bad Statistics on October 14, 2013 by telescoper

I saw a story on the web this morning which points out that Australians can expect 11 cyclones this season.

It’s not a very good headline, because it’s a bit misleading about what the word “expected” means. In fact the number eleven is the average number of cyclones, which is not necessarily the number expected, despite the fact that “expected value” or “expectation value” is the standard statistical term for the average. If you don’t understand this criticism, ask yourself how many legs you’d expect a randomly-chosen person to have. You’d probably settle on the answer “two”, but that is the most probable number, i.e. the mode, which in this case exceeds the average. If one person in a thousand has only one leg then a group of a thousand has 1999 legs between them, so the average (or arithmetic mean) is 1.999. Most people therefore have more than the average number of legs…

I’ve always found it quite annoying that physicists use the term “expectation value” to mean “average” because it implies that the average is the value you would expect. In the example given above you wouldn’t expect a person to have the average number of legs – if you assume that the actual number is an integer, it’s actually impossible to find a person with 1.999! In other words, the probability of finding someone in that group with the average number of legs in the group is exactly zero.

The same confusion happens when newspapers talk about the “average wage” which is considerably higher than the wage most people receive.

In any case the point is that there is undoubtedly a considerable uncertainty in the prediction of eleven cyclones per season, and one would like to have some idea how large an error bar is associated with that value.
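To put a rough number on that: if, purely for the sake of illustration, the seasonal cyclone count were Poisson-distributed with a mean of eleven (an assumption on my part, not something stated in the article), the year-to-year spread would be considerable, as this little Python sketch shows:

from math import exp, factorial

mean = 11.0   # assumed Poisson mean, i.e. the figure quoted in the article

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

print("P(exactly 11 cyclones):", round(poisson_pmf(11, mean), 3))  # about 0.12
print("standard deviation:", round(mean ** 0.5, 1))                # about 3.3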

Anyway, statistical pedantry notwithstanding, it is indeed impressive that the number of cyclones in a season goes all the way up to eleven…
