I guess there is nothing wrong with describing the assumption, in Bayesian language, as a spiked prior. But it is equally correct to say that the simulated t tests describe repeated t tests on samples in which the means (a) are identical (H0), or (b) differ by a specified amount. These are the standard assumptions for any comparison of two means, and I see no necessity to use Bayesian language at all. It’s true that you need to specify the prevalence of cases in which H0 is true/untrue. That’s an ordinary objective probability, though in practice it will rarely be possible to estimate its value. What changed my mind about the problem is the realisation that you can get similar values for the false discovery rate without having to make assumptions about this prior.

(a) If you consider only P values that are close to 0.05 (rather than P < 0.05) the false discovery rate is at least 30% regardless of what you assume about the prevalence (prior), and

(b) the Sellke et al. approach gives similar results that are also independent of the prior.

In the light of these approaches, I don't think it's possible to deny that we have been badly misled by null-hypothesis testing as it is almost universally practised, at least in biomedical sciences.

Here’s another high-profile but very dodgy statistical analysis I blogged about a few years ago:

https://telescoper.wordpress.com/2008/11/12/cerebral-asymmetry-is-it-all-in-the-mind/

Sadly that isn’t true at all.

For example, very recently, *Science* trumpeted on Twitter that “Non-invasive stimulation of the brain can improve memories . . .”. The paper was behind a paywall, so most tweeters would not have read it. In fact most of the paper was about fMRI; the bit about memory was a subsection of one figure, and it had P = 0.043.

The evidence was pathetically poor and this sort of innumerate behaviour by glamour journals does great harm to science.

They all have spiked priors, are based on non-exhaustive hypotheses, and misuse power. I’ve been through this before and can’t review it just now. But check those references from my blog.

I don’t believe that’s true. All you have to do is simulate a lot of t tests and look only at the results that produce a P value close to 0.05. The false discovery rate is disastrous. There is an R script that will do this for you at http://arxiv.org/abs/1407.5296
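A minimal sketch of that simulation (in Python, not the paper’s actual R script): simulate many tests, keep only those with p close to 0.05, and count how often H0 was in fact true. The sample size, effect size and prevalence below are illustrative assumptions, and a z test stands in for the t test for simplicity.

```python
import math
import random

random.seed(1)

n = 16            # observations per group (illustrative)
effect = 1.0      # true mean difference, in SD units, when H0 is false
prevalence = 0.5  # assumed fraction of experiments with a real effect
n_sim = 200_000

se = math.sqrt(2.0 / n)   # standard error of the difference in means (SD = 1)
in_window = 0             # tests with p in (0.045, 0.05)
false_pos = 0             # ... of which H0 was actually true

for _ in range(n_sim):
    h0_true = random.random() > prevalence
    true_diff = 0.0 if h0_true else effect
    observed = random.gauss(true_diff, se)    # observed mean difference
    z = observed / se
    p = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided p value
    if 0.045 < p < 0.05:                      # "close to 0.05", not "< 0.05"
        in_window += 1
        false_pos += h0_true

fdr = false_pos / in_window
print(f"{in_window} tests with p in (0.045, 0.05); false discovery rate ~ {fdr:.2f}")
```

With these illustrative numbers (power around 0.8, prevalence 0.5) the false discovery rate among results near p = 0.05 comes out at roughly 30%; the exact figure shifts with the assumed effect size and prevalence, but it stays alarmingly high.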

And you can reach a very similar conclusion on the basis of the work of Sellke, T., Bayarri, M. J., and Berger, J. O. (2001), “Calibration of p Values for Testing Precise Null Hypotheses”, The American Statistician, 55, 62–71.
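The Sellke et al. calibration can be written down in a few lines. Note one hedge: the bound −e·p·ln(p) on the Bayes factor holds for any prior distribution under the alternative, but converting it into a false positive risk still needs prior odds on H0 itself; 1:1 odds are used below purely for illustration.

```python
import math

def sellke_bound(p):
    """Sellke-Bayarri-Berger calibration: for p < 1/e, -e * p * ln(p) is a
    lower bound on the Bayes factor in favour of H0, whatever prior is
    assumed under the alternative."""
    assert 0.0 < p < 1.0 / math.e
    min_bf = -math.e * p * math.log(p)
    # Converting to a minimum posterior probability of H0 (the false
    # positive risk) requires prior odds; 1:1 is assumed here.
    return min_bf / (1.0 + min_bf)

print(f"p = 0.05 -> false positive risk >= {sellke_bound(0.05):.2f}")
```

For p = 0.05 this gives about 0.29, in line with the “at least 30%” figure from the simulations.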

Quite the opposite, actually. You can’t do a proper Bayesian study without specifying a prior, so you need to put all your assumptions on the table. This is not the case with frequentist studies, which often claim to be “model free” but aren’t.

Of course it’s true that Bayesian inference can be done incorrectly, but that’s true of any type of analysis and at least Bayesians don’t set out to be wrong…

Ha ha! Right you are, Sesh, in my experience. I don’t know anything about cosmology, other than residual recall of quantum physics classes at Swarthmore College, so I should not even be commenting here. Rest assured, though, that your observation 😉 is applicable to many fields of study:

“quite often with Bayesian statistics the problematic choice of the prior is glossed over or ignored.”