Technically, that is a response 😉

That does not merit a response.

Sigh, too. Some people just don’t want to think.

Sigh. My point about priors is *precisely* that garbage in is garbage out…

I did answer your question.

What if I care about the means? What can you tell me about them?

Most statisticians, Bayesian or Frequentist or whatever, recognise that the problem is incorrectly formulated.

Another question – how many parameters does the problem have?

My definition of magic is the ability to take garbage in and give a reliable answer out. Probability theory can’t do that so it isn’t magic. It is useful within a well-defined domain.

Your pseudo-rational pose doesn’t fix that.

My previous answer was accidentally truncated. I meant to say I don’t find the argument – that you can choose a prior that gives a different, possibly nonsensical, answer – convincing at all.

I am surprised that you seem to think the laws of probability are “magic”, but reading your other comments perhaps I shouldn’t be.

I note your refusal to answer my question. My answer to yours is that I don’t care about the distribution of the means, which is why I marginalised over them.

I’m afraid I don’t find the argument

Your loss I guess (I imagine you’d say the same to me). Last question – what is your estimate for one of the means as the sample size goes to infinity? What is the posterior variance for one of the means?
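[Editor’s aside: the question above has a closed-form answer under the standard assumption – nowhere written down in this thread – of the many-means model: two observations per mean, common known variance sigma^2, flat prior on each mu_i. Then

    mu_i | x_i1, x_i2  ~  N( xbar_i , sigma^2 / 2 )

so as the number of pairs goes to infinity the posterior variance for any single mean stays at sigma^2 / 2: a bigger sample adds more means, not more data per mean.]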

My first priority would always be to set up a well-posed problem so that Bayes or Likelihood or whatever had a chance of being reliable.

I have no problem using Bayes if applicable, but it is not unproblematically applicable in this case. You are expecting magic where there is none.

Why don’t you respecify the model in terms of the sums and differences of the observations and see what happens? If you want to include prior information after this re-specification then have at it. Unfortunately it won’t demonstrate your original point any more.
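[Editor’s aside: the suggested re-specification can be sketched in a few lines. This assumes the classic many-means model – one unknown mean per pair of observations, one common variance, never stated explicitly in the thread – in which the differences carry all the information about the variance and none about the means:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model (not stated explicitly in the thread):
# x_ij ~ N(mu_i, sigma^2), j = 1, 2 -- one unknown mean per pair,
# one common variance shared by all pairs.
n, sigma = 10_000, 2.0
mu = rng.normal(0.0, 5.0, size=n)                    # nuisance means
x = mu[:, None] + rng.normal(0.0, sigma, size=(n, 2))

# Joint MLE of sigma^2 (maximising over all the means too) is
# inconsistent here: it converges to sigma^2 / 2, not sigma^2.
xbar = x.mean(axis=1, keepdims=True)
mle = ((x - xbar) ** 2).sum() / (2 * n)

# Re-specified in differences: d_i = x_i1 - x_i2 ~ N(0, 2 * sigma^2),
# free of every mean, so the obvious estimator is consistent.
d = x[:, 0] - x[:, 1]
from_diffs = (d ** 2).mean() / 2

print(round(mle, 2), round(from_diffs, 2))
```

[With sigma = 2 the first number sits near 2 (half the true sigma^2 of 4) and the second near 4; the sums x_i1 + x_i2, meanwhile, are the only part of the data that says anything about the means.]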
