## A Question of Entropy

We haven’t had a poll for a while so here’s one for your entertainment.

An article has appeared on the BBC website entitled *Web’s random numbers are too weak, warn researchers*. The piece is about the techniques used to encrypt data on the internet. It’s a confusing piece, largely because of the use of the word “random”, which is tricky to define; see a number of previous posts on this topic. I’ll steer clear of going over that issue again. However, there is a paragraph in the article that talks about entropy:

> An unshuffled pack of cards has a low entropy, said Mr Potter, because there is little surprising or uncertain about the order the cards would be dealt. The more a pack was shuffled, he said, the more entropy it had because it got harder to be sure about which card would be turned over next.

I won’t prejudice your vote by saying what I think about this statement, but here’s a poll so I can try to see what you think.

Of course I also welcome comments via the box below…

August 10, 2015 at 10:33 am

Need to first define which type of entropy you mean — Shannon (i.e. information content) or thermodynamic entropy. If the latter then there’s no difference in the entropy — the microstates aren’t thermodynamically accessible. If the former then the article’s technically correct, I’d have thought — lower Shannon entropy for an ordered deck than for a randomised arrangement of cards.
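The Shannon-entropy reading can be made concrete with a quick sketch (my own illustration, not from the article or the comment): a known ordering carries zero entropy, while a uniform distribution over all 52! orderings carries log2(52!) bits.

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy H = sum of -p*log2(p), in bits (p = 0 terms contribute nothing)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Unshuffled deck with a known order: one ordering has probability 1.
print(shannon_entropy_bits([1.0]))                  # 0.0

# Perfectly shuffled deck: uniform over 52! orderings, so H = log2(52!).
print(round(math.log2(math.factorial(52)), 2))      # 225.58
```

So on the Shannon reading the gap is enormous: certainty gives 0 bits, full shuffling about 225 bits.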

August 10, 2015 at 12:16 pm

I think the entropy of an unshuffled pack of cards is probably lower than that of one which has been set on fire…

August 10, 2015 at 1:15 pm

Yep, that one never gets old! https://twitter.com/colinrosenthal/status/630643570731577344

August 10, 2015 at 10:40 am

Interesting question; here is my opinion, based on 18 months of study of data processing using information-entropy concepts and Bayesian statistics: without specifying what one means by “entropy”, the statement is meaningless.

The concept of informational entropy can be derived from considering Bayesian statistics for normally distributed variables about which we have no prior information. In that case, one has to choose a prior term which avoids bias – this looks like an expression for thermodynamic entropy, hence the name.

The key point is that one must know _nothing_ about the system – as soon as we have some prior information then our assessment of the Bayesian statistics also changes. One can choose to continue using the entropic prior, or not, but it no longer has an entirely rigorous meaning. If one did apply an entropic prior to a pack of cards about which we know “the pack has not been shuffled”, the entropic term would be exactly zero for any sensible measurement relating to card ordering.

Another way to look at the question is to make a more hand-wavy analogy to thermodynamic entropy. In thermodynamics, we really care about macroscopic observables – pressure, temperature, etc – and entropy is a number defined by the number of different unique arrangements of the microscopic “stuff” which defines those variables. Any particular arrangement is a “microstate”, and physical entropy is related to the number of microstates which give the same macrostate result. The typical example of this is the arrangement (and velocity distribution) of atoms in a gas.

In this picture, cards in a pack are like atoms in a gas. To make a statement about the entropy of any particular arrangement of cards is meaningless because any single arrangement of cards is one microstate of the pack. Under that interpretation, one really needs to find a macroscopic observable which corresponds to measuring properties about the pack. If that observable is to do with “how shuffled the pack is”, then the result seems to be fairly straightforward. If we observe the pack has not been shuffled, then there is exactly one microstate which can correspond to that observation and the entropy is zero.
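The “how shuffled is it” observable described above can be played with directly on a toy deck (a hypothetical miniature of my own devising, not the commenter’s): take a 4-card pack, use the number of adjacent ascending pairs as the macroscopic observable, and take log2 of each macrostate’s microstate count as its entropy.

```python
import math
from itertools import permutations
from collections import Counter

def ascending_pairs(perm):
    """Macroscopic observable: how many adjacent pairs are in order."""
    return sum(perm[i] < perm[i + 1] for i in range(len(perm) - 1))

# Group the 4! = 24 microstates (permutations) by the observable's value.
counts = Counter(ascending_pairs(p) for p in permutations(range(4)))
for macro in sorted(counts):
    micro = counts[macro]
    print(macro, micro, round(math.log2(micro), 2))
```

Only the fully ordered pack has three ascending pairs (and only the fully reversed pack has none), so those macrostates contain a single microstate and get entropy zero, exactly as argued above; the intermediate macrostates each contain eleven microstates, log2 11 ≈ 3.46 bits.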

So the answer to your question is “yes” and “no”, but for somewhat complicated reasons. I voted for “… has an entropy which is impossible to define unambiguously” because that seemed in the spirit of my answer here, even though one can actually define the answer when the question is framed appropriately.

Thanks for the mental exercise!

August 10, 2015 at 10:48 am

Are we allowed to mention the typo in your copying of the article’s title? At least you are proving you don’t just cut’n’paste the titles…

August 10, 2015 at 10:55 am

Was the entropy of my version greater than the original?

(I’ve corrected it now…)

August 10, 2015 at 11:08 am

I’d say I’d like more information about the situation. (Had the pack of cards been shuffled before, and how much? What definition of entropy is being used here? What do the statements high and low entropy mean here precisely? Do the allowed states include removing cards from the pack? Do the allowed states include throwing the cards through a window?)

So I voted for “has an entropy which is impossible to define unambiguously”.

However, the writer of the article is using a reasonable example to explain a concept. That is entirely legitimate and it would be churlish to criticise the writer in any way for helping readers to understand by using this example.

August 10, 2015 at 12:07 pm

You need to include in the system a card player who knows the standard ordering. Just like Maxwell’s Demon.

It’s actually a very good example to use in the context of information encryption, where you really want your coding algorithms to be aware of all the tricks that might be available to those trying to spy on you.

August 10, 2015 at 6:45 pm

I suppose an interesting side question might be whether the entropy of the unshuffled pack is the same as that of an old pack which has been shuffled many times and has randomly reverted to the original order. Thinking more… is the entropy inherent in the pack, or does it depend on _knowing_ the order? If the latter, it must change when the order is examined, and this seems wrong. I think this confirms the “it all depends what you mean by entropy” answer…

August 10, 2015 at 8:38 pm

The longer the list of information necessary to describe the pack of cards fully, the higher the entropy. If you have never seen a pack of cards, shuffled or unshuffled, what’s the difference?

And then, there is the physical aspect down to the elementary particles…

Entropy is a simple theoretical concept, but it is a concept of seemingly infinite complexity in practice.

August 11, 2015 at 4:53 am

There is only one arrangement of cards in an unshuffled deck, out of 52! possible arrangements, so the shuffled deck corresponds to the remaining 52! − 1 of them. We are dividing the arrangements into two partitions, shuffled and unshuffled, and the shuffled partition has more entropy because there are more ways to arrange a shuffled deck. What’s the point of passwords if supercomputers can crack them in seconds, if a quantum computer using Shor’s algorithm cracks them in nanoseconds, or if some mad genius has a proof that P = NP?

I need someone smart to argue the steady-state theory with. We can start by assuming that the CMB is in a state of high entropy and that neutron/anti-neutron pairs can be created and decay into long-lasting protons. Anyone here good with Monte Carlo simulations? I suspect that there will be domains of matter and anti-matter travelling at high velocities relative to each other.

August 11, 2015 at 10:15 am

Any particular permutation of cards corresponds to one configuration. Surely a shuffled configuration is just as unique as an unshuffled one, unless you describe it in some way other than as an ordered list of cards?

August 11, 2015 at 3:50 pm

Any particular permutation — i.e. any particular microstate, if we use the thermodynamic analogy — is indeed equally likely. But we can define different ‘macrostates’, including an unshuffled deck, which are associated with a much smaller number of microstates (and are thus lower in Shannon entropy) than a completely random arrangement of cards.

August 11, 2015 at 3:54 pm

Each permutation is only one microstate, whether you label it “random” or not.

August 11, 2015 at 5:43 pm

@telescoper (August 11, 2015 at 3:54 pm)

“Each permutation is only one microstate, whether you label it “random” or not.”

Agree entirely. And all microstates are equally likely. But consider this in terms of hands in a poker game. We can assign a variety of different ‘macrostates’: a flush, a royal flush, a pair, a full house, four of a kind, a junk hand, etc. Why are we so pleased to get a royal flush if all microstates are equally likely (as they are)? It’s because there’s a huge difference in the entropy (i.e. the number of microstates) associated with a royal flush as compared to that of a “junk hand”.

As !n says above, take this to the limit of a completely ordered (i.e. unshuffled) deck vs a randomised (shuffled) deck. There’s just one way to get that ordered macrostate (i.e. one microstate) vs the 52! − 1 microstates associated with any other configuration. The Shannon entropy of the former is consequently much lower.
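The poker-hand version of this argument is easy to check with textbook combinatorics (the numbers below are standard counts, not taken from the thread):

```python
import math
from math import comb

total = comb(52, 5)                                # 2,598,960 possible 5-card hands
royal_flush = 4                                    # exactly one per suit
# "One pair": pick the pair's rank, its two suits, three other ranks, a suit each.
one_pair = 13 * comb(4, 2) * comb(12, 3) * 4**3    # 1,098,240

print(math.log2(royal_flush))                      # 2.0 bits
print(round(math.log2(one_pair), 2))               # 20.07 bits
```

So a royal flush macrostate carries about 18 bits less entropy than a mere pair, which is (on this reading) precisely why it is the more surprising hand.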

August 11, 2015 at 5:59 pm

It depends at what level you describe “any other configuration”. If you describe it as a list of cards it has the same Shannon entropy as any other, as it is just one microstate: there is only one way to get it. If you describe it in terms of a smaller number of variables than the original list (i.e. a macrostate), the entropy can come out differently. In order to make a statement about entropy you need to say exactly how you specify the state…

August 11, 2015 at 6:50 pm

“In order to make a statement about entropy you need to say exactly how you specify the state…”

Of course, and computer scientists would do this on the basis of Kolmogorov complexity (or algorithmic complexity). A computer scientist I collaborated with a few years back, Nat Krasnogor (formerly at Nottingham, now at Newcastle), introduced me to the Universal Similarity Metric – a wonderful concept in terms of information “compressibility”, not least because the moniker makes it sound like it came straight from the pages of The Hitchhiker’s Guide To The Galaxy.

“If you describe it as a list of cards it has the same Shannon entropy as any other, as it is just one microstate: there is only one way to get it.”

It doesn’t make a lot of sense to me to speak of the entropy of a specific microstate. As you say at the end of your comment, it’s a question of how we define the macrostates. Strategies based on Kolmogorov complexity, however, provide a robust mechanism for defining the difference between a randomised/shuffled deck of cards and an ordered state (which can be defined algorithmically). And so I remain of the opinion that the article was correct to claim that the Shannon entropy of the ordered, unshuffled deck of cards is lower than that of the shuffled deck.

(One thing I keep meaning to write a blog post about is the question of the Shannon entropy and Kolmogorov complexity of something like the Mandelbrot set. From one perspective it’s effectively infinitely complex, in that it doesn’t matter what “zoom” level we choose of a region of the set on the complex plane — we never “reach the end”. Yet the algorithm required to produce the Mandelbrot set is exceptionally compact; a couple of lines of code will do it. Specification of the state/complexity is again key.)
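The compactness claim in that parenthesis is easy to demonstrate (a throwaway sketch of my own): the entire escape-time algorithm behind Mandelbrot images is just a few lines, however intricate the picture it produces.

```python
def mandelbrot(c, max_iter=30):
    """Escape-time test: iterate z -> z*z + c from z = 0."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i         # escaped: c is outside the set
    return max_iter          # never escaped: treat c as inside the set

# Crude ASCII rendering of the region [-2, 1] x [-1.2, 1.2].
for im in range(11):
    y = 1.2 - 0.24 * im
    print("".join("#" if mandelbrot(complex(-2 + 0.05 * re, y)) == 30 else "."
                  for re in range(60)))
```

Roughly ten lines of algorithm versus an object you can zoom into forever: a neat illustration of low Kolmogorov complexity coexisting with apparently unbounded visual detail.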

August 11, 2015 at 7:50 pm

To change the subject slightly, I would also suggest that if the deck is shuffled mechanically then friction will produce some heating of the cards, so I think the thermodynamic entropy will probably be higher…

August 11, 2015 at 8:07 pm

@telescoper (August 11, 2015 at 7:50 pm)

Yes, but that’s a very different matter to the thermodynamic entropy of a particular permutation of cards being different from that of any other. It’s not, because they’re not thermally accessible states.

In addition to the friction between the cards there is also the question of the work you’re doing and the energy you’re expending when you’re shuffling the cards…

August 11, 2015 at 8:08 pm

Not really. I’d get a PhD student to do the work..

August 11, 2015 at 8:09 pm

Oh, well played. Game, set and match to you, Peter!

August 14, 2015 at 1:36 pm

Reblogged this on oogenhand.

August 15, 2015 at 12:11 am

My first reaction was to wonder what a “macrostate” was for the cards, in line with what was said above.

However, at the risk of igniting another well-worn debate, it could be said that which microstates we regard as “possible” reflects our information about the system. I have no meaningful information about the order of a shuffled pack of cards, but I know that a freshly opened box will be in order, and hence has far fewer possible microstates. (I can’t, however, remember the order in which the suits go, so I’d have to give it an entropy of log(4!)…)
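That log(4!) figure checks out (computed here in bits, base 2 being my choice rather than the commenter’s), and it is tiny next to the entropy of a fully shuffled pack:

```python
import math

suit_orderings = math.factorial(4)           # 24 ways to order the four suits
full_orderings = math.factorial(52)          # 52! ways to order the whole pack

print(round(math.log2(suit_orderings), 2))   # 4.58 bits
print(round(math.log2(full_orderings), 1))   # 225.6 bits
```

So forgetting the suit order costs under five bits of uncertainty, against roughly 225 bits for knowing nothing at all.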

August 16, 2015 at 9:51 am

The truth is, the observer is a part of the system. But then, the observer is not an isolated system by itself (unbounded in space and time)… So, in the end, you have to know the configuration of the entire universe to answer the question. Being a part of the universe, you can’t answer unambiguously without artificially limiting the system.

August 17, 2015 at 11:14 pm

Does Trane have more entropy than Bird? The entropy has to be defined first; then the working parameter is the “change” in entropy, say after evolution between the initial and final states of the “same” system. The statement is not precisely made.