The LHS is

ln(a) + S_{Cauchy}(a=1)

and RHS is

ln(sigma) + S_{Gaussian}(sigma=1)

so the equation simply means that ‘a’ and \sigma are in a fixed ratio (whose value involves pi, e, Euler’s constant, etc.).

Anton
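The scaling used above — that a scale parameter shifts the differential entropy by an additive ln term, S(a) = ln(a) + S(a=1) — is easy to check numerically. A minimal sketch (the function names are mine), assuming the Cauchy density a/(pi*(a^2 + z^2)) and crude quadrature on a truncated range:

```python
import numpy as np

def entropy_numeric(pdf, lo=-1e4, hi=1e4, n=2_000_001):
    # Crude quadrature for the differential entropy -int p ln p dz,
    # truncated to [lo, hi]; the Cauchy tails beyond |z| = 1e4
    # contribute only ~1e-3 here.
    z = np.linspace(lo, hi, n)
    p = pdf(z)
    return -np.sum(p * np.log(p)) * (z[1] - z[0])

def cauchy(a):
    # Cauchy density with scale parameter a (i.e. z replaced by z/a).
    return lambda z: a / (np.pi * (a**2 + z**2))

S1 = entropy_numeric(cauchy(1.0))
for a in (2.0, 5.0):
    # The two printed columns agree: S(a) = ln(a) + S(1).
    print(a, entropy_numeric(cauchy(a)), np.log(a) + S1)
```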

---

S_{Cauchy}(a) = S_{Gaussian}(\sigma)

which is indeed an equation relating ‘a’ of the Cauchy distribution to \sigma of the Gaussian distribution. Are you saying that this leads somewhere…?

Anton

---

I didn’t make my question clear. Sure, the variance of the Cauchy is infinite, but it can be constructed in such a way that it has a parameter describing its width (i.e. replace z by z/a, where a is a scale parameter). Now the variance will always be infinite, but the shape of the distribution, and consequently the entropy, will depend on a.

For a given a, the Cauchy distribution has a definite entropy, and one can find a Gaussian distribution whose variance is chosen to give that same entropy. This requirement gives an equation between a and \sigma for the two distributions to have the same entropy.

Do you get my point?

Peter
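This matching can be made explicit if one accepts the textbook closed forms S_Cauchy(a) = ln(4*pi*a) and S_Gaussian(sigma) = (1/2)*ln(2*pi*e*sigma^2). A sketch under those assumed forms (the function names are mine):

```python
import math

def cauchy_entropy(a):
    # Differential entropy of p(z) = a / (pi * (a^2 + z^2)).
    return math.log(4 * math.pi * a)

def gaussian_entropy(sigma):
    # Differential entropy of a zero-mean Gaussian with std dev sigma.
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def isentropic_sigma(a):
    # Equate the two entropies and solve for sigma:
    #   ln(4*pi*a) = (1/2) ln(2*pi*e*sigma^2)  =>  sigma = a*sqrt(8*pi/e)
    return a * math.sqrt(8 * math.pi / math.e)

sigma = isentropic_sigma(1.0)
print(sigma)                                          # ~3.04
print(cauchy_entropy(1.0), gaussian_entropy(sigma))   # equal
```

Under these assumed forms the fixed ratio works out to \sigma/a = sqrt(8*pi/e) ≈ 3.04, i.e. the isentropic Gaussian is about three times “wider” than the Cauchy scale parameter.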

---

And this is exactly what you find, because the variance of the Cauchy distribution is… infinity!

Anton

---

Assuming zero mean, there would therefore seem to be a Gaussian distribution with variance adjusted so that it has the same entropy as a given Cauchy distribution (allowing forms like a^2 + z^2 instead of 1 + z^2 in the denominator). What’s the relationship between the parameters of the Cauchy and the Gaussian that is “isentropic” to it, and what does this mean?

---

[sqrt(pi) – 1][ln 2 + g/2] + ln(pi)

where g stands for Euler’s constant. That’s a nice mix of transcendentals.

---

I think the entropy may be divergent too, but I’ve never really looked at it. Interesting.

---

Its entropy, which might be a more meaningful measure of the width of a distribution than the standard deviation, is finite, although -\int dz p(z) log p(z) (over all z) is a meaty integral to calculate for the Cauchy form.

Anton
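For what it’s worth, the meaty integral can be tamed with the substitution z = tan(theta), which maps the whole line to a finite range. A numerical sketch for the standard Cauchy p(z) = 1/(pi*(1+z^2)), compared against the standard closed form ln(4*pi):

```python
import numpy as np

# With z = tan(theta), the entropy H = -int p(z) ln p(z) dz of the
# standard Cauchy collapses to a finite-range integral:
#   H = ln(pi) - (2/pi) * int_{-pi/2}^{pi/2} ln(cos(theta)) d(theta)
# The integrand has only an integrable log singularity at the ends,
# so a midpoint rule (which never evaluates the endpoints) works.
n = 1_000_000
h = np.pi / n
theta = -np.pi / 2 + (np.arange(n) + 0.5) * h
H = np.log(np.pi) - (2 / np.pi) * np.sum(np.log(np.cos(theta))) * h

print(H)                  # finite, despite the infinite variance
print(np.log(4 * np.pi))  # closed form ln(4*pi) for comparison
```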