Archive for thermodynamics

The Big Bang Exploded?

Posted in Biographical, The Universe and Stuff with tags , , , on October 15, 2018 by telescoper

I suspect that I’m not the only physicist who receives unsolicited correspondence from people with wacky views on Life, the Universe and Everything. Being a cosmologist, I probably get more of this stuff than those working in less speculative branches of physics. Because I’ve written a few things that appeared in the public domain, I probably even get more than most cosmologists (except the really famous ones of course).

Many “alternative” cosmologists have now discovered email, and indeed the comments box on this blog, but there are still a lot who send their ideas through regular post. Whenever I get an envelope with an address on it that has been typed by an old-fashioned typewriter, it’s a dead giveaway that it’s going to be one of those. Sometimes they are just letters (typed or handwritten), but sometimes they are complete manuscripts, often with wonderfully batty illustrations. I remember one called Dark Matter, The Great Pyramid and the Theory of Crystal Healing. I used to have an entire filing cabinet filled with things like this, but I took the opportunity of moving from Cardiff some time ago to throw most of them out.

One particular correspondent started writing to me after the publication of my little book, Cosmology: A Very Short Introduction. This chap sent a terse letter to me pointing out that the Big Bang theory was obviously completely wrong. The reason was obvious to anyone who understood thermodynamics. He had spent a lifetime designing high-quality refrigeration equipment and therefore knew what he was talking about (or so he said). He even sent me this booklet about his ideas, which for some reason I have neglected to send for recycling:

His point was that, according to the Big Bang theory, the Universe cools as it expands. Its current temperature is about 3 Kelvin (-270 Celsius or thereabouts) but it is now expanding and cooling. Turning the clock back gives a Universe that was hotter when it was younger. He thought this was all wrong.

The argument is false, my correspondent asserted, because the Universe – by definition – hasn’t got any surroundings and therefore isn’t expanding into anything. Since it isn’t pushing against anything it can’t do any work. The internal energy of the gas must therefore remain constant and since the internal energy of an ideal gas is only a function of its temperature, the expansion of the Universe must therefore be at a constant temperature (i.e. isothermal, rather than adiabatic). He backed up his argument with bona fide experimental results on the free expansion of gases.
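For anyone who wants the nub of his reasoning in symbols, here is a minimal sketch (standard textbook thermodynamics in modern notation; my paraphrase, not anything taken from his booklet). For a free (Joule) expansion into a vacuum no work is done and no heat is exchanged, so the First Law gives a constant internal energy, and for an ideal gas that implies a constant temperature:

```latex
dU = \delta Q - \delta W , \qquad \delta Q = \delta W = 0 \;\Rightarrow\; dU = 0
% free expansion into a vacuum: no work done, no heat exchanged
U = U(T) \quad \bigl(\text{ideal gas, e.g. } U = \tfrac{3}{2} n R T \bigr) \;\Rightarrow\; dT = 0
% constant internal energy then implies constant temperature
```

That is the logic he was invoking; the challenge below is to explain why it does not apply to the expanding Universe.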

I didn’t reply and filed the letter away. Another came, and I did likewise. Increasingly overcome by some form of apoplexy, his letters got ruder and ruder, eventually blaming me for the decline of the British education system and demanding that I be fired from my job. Finally, he wrote to the President of the Royal Society demanding that I be “struck off” and forbidden (on grounds of incompetence) ever to teach thermodynamics in a University. The copies of the letters he sent me are still with the pamphlet.

I don’t agree with him that the Big Bang is wrong, but I’ve never had the energy to reply to his rather belligerent letters. However, I think it might be fun to turn this into a little competition, so here’s a challenge for you: provide the clearest and most succinct explanation of why the temperature of the expanding Universe does fall with time, despite what my correspondent thought.

Answers via the comment box please!

The Bayesian Second Law of Thermodynamics

Posted in The Universe and Stuff with tags , , , on April 3, 2017 by telescoper

I post occasionally about Bayesian probability, particularly with respect to Bayesian inference, and related applications to physics and other things, such as thermodynamics, so in that light here’s a paper I stumbled across yesterday. It’s not a brand new paper – it came out on the ArXiv in 2015 – but it’s of sufficiently long-term interest to warrant sharing on here. Here’s the abstract:

You can download the full paper here. There’s also an accessible commentary by one of the authors here.

The interface between thermodynamics, statistical mechanics, information theory  and probability is a fascinating one, but too often important conceptual questions remain unanswered, or indeed unasked, while the field absorbs itself in detailed calculations. Refreshingly, this paper takes the opposite approach.

 

 

 

Yet another cute physics problem

Posted in Cute Problems, The Universe and Stuff with tags , , on November 22, 2011 by telescoper

I’ve spent all day either teaching or writing draft grant applications and am consequently a bit knackered, so in lieu of one of my usual rambling dissertations here is another example from the file marked Cute Physics Problems, this time from thermodynamics. It’s quite straightforward. Or is it? Most people I’ve asked this question in private have got it wrong, so let’s see if the blogosphere is smarter:

Three identical bodies of constant  heat capacity are at temperatures of 300, 300 and 100 K. If no work is done on the system and no heat transferred to it from outside, what is the highest temperature to which any one of the bodies can be raised by the operation of heat engine(s)?
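If you want to test a candidate answer before posting it, here is a small numerical checker I’ve added (a sketch, not a solution, assuming equal and constant heat capacities, which I set to one): it only verifies the two constraints that any final state must satisfy, so it won’t spoil the puzzle.

```python
import numpy as np

# Take the common heat capacity C = 1, so a body at temperature T has
# energy T and entropy log T (plus a constant). Any final state reached
# without external work or heat must
#   (i) not increase the total energy of the three bodies, and
#   (ii) not decrease their total entropy (Second Law).

T_initial = np.array([300.0, 300.0, 100.0])

def satisfies_constraints(T_final, T_initial=T_initial, tol=1e-9):
    """True if the proposed final temperatures respect both constraints."""
    energy_ok = np.sum(T_final) <= np.sum(T_initial) + tol
    entropy_ok = np.sum(np.log(T_final)) >= np.sum(np.log(T_initial)) - tol
    return bool(energy_ok and entropy_ok)

# Example: letting the three bodies equilibrate at the mean temperature
# is allowed, but it is certainly not the configuration you are after.
print(satisfies_constraints(np.array([700.0 / 3, 700.0 / 3, 700.0 / 3])))  # True
```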

The Thermodynamics of Beards

Posted in Beards, The Universe and Stuff with tags , , , , , , , on July 14, 2009 by telescoper

When I was an undergraduate studying physics, my physics supervisor (who happens to be a regular contributor to the comments on this blog) introduced me to thermodynamics by explaining that Ludwig Boltzmann committed suicide in 1906, as did Paul Ehrenfest in 1933. Now it was my turn to study what had driven them both to take their own lives.

I didn’t think this was the kind of introduction likely to inspire a joyful curiosity in the subject, but it probably wasn’t the reason why I found the subject as difficult as I did. I thought it was a hard subject because it seemed to me to possess arbitrary rules that didn’t emerge from a simpler underlying principle, but simply had to be memorized. Lurking somewhere under it was obviously something statistical, but what it was or how it worked was never made clear. I was frequently told that the best thing to do was just memorize all the different examples given and not try to understand where it all came from. I tried doing this but, partly because I have a very poor memory, I didn’t do very well in the final examination on this topic. I was prejudiced against it for years afterwards.

Actually, now I have grown to like thermodynamics as a subject and have read quite a bit about its historical development. The field of thermodynamics is usually presented to students as a neat and tidy system of axioms and definitions. The resulting laws are written in the language of idealised gases, perfect mechanical devices and reversible equilibrium paths but, despite this, have many applications in realistic practical situations. What is particularly interesting about these laws is that it took a very long time indeed to establish them even at this macroscopic level. The deeper understanding of their origin in the microphysics of atoms and molecules took even longer and was an even more difficult journey.   I thought it might be  fun to celebrate  the tangled history of this fascinating subject, at least for a little while.  Unlike quantum physics and relativity, thermodynamics is not regarded as a very “glamorous” part of science by the general public, but it did occupy the minds of the greatest physicists of the nineteenth century, and I think the story deserves to be better appreciated. I don’t have space to give a complete account, so I apologize in advance for the omissions.

I thought it would also be fun to show pictures of the principal characters. As you’ll see, after  a very clean-shaven start, the history of thermodynamics is dominated by a succession of rather splendid beards…

I’ll start the story with Nicolas Léonard Sadi Carnot (left), who was born in 1796. His family background was, to say the least, unusual. His father Lazare was known as the “Organizer of Victory” for the Revolutionary Army in 1794 and subsequently became Napoleon’s minister of war. Against all expectations he quit politics in 1807 and became a mathematician. Sadi had a brother, by the splendid name of Hippolyte, who was also a politician and whose son became president of France. Sadi himself was educated partly by his father and partly at the École Polytechnique. He served in the army as an engineer and was eventually promoted to Captain. He left the army in 1828, only to die of cholera in 1832 during an epidemic in Paris.

Carnot’s work on the theory of “heat engines” was astonishingly original and eventually had enormous impact, essentially creating the new science of thermodynamics, but he only published one paper before his untimely death and it attracted little attention during his lifetime. Reflections on the Motive Power of Fire appeared in 1824, but its importance was not really recognized until 1849, when it was read by William Thomson (later Lord Kelvin) who, together with Rudolf Clausius, made it more widely known.

In the late 18th century, Britain was in the grip of an industrial revolution largely generated by the use of steam power. These engines had been invented by the pragmatic British, but the theory by which they worked was pretty much non-existent. Carnot realised that steam-driven devices in use at the time were horrendously inefficient. As a nationalist, he hoped that by thinking about the underlying principles of heat and energy he might be able to give his native France a competitive edge over perfidious Albion. He thought about the problem of heat engines in the most general terms possible, even questioning whether there might be an alternative to steam as the best possible “working substance”. Despite the fact that he employed many outdated concepts, including the so-called caloric theory of heat, Carnot’s paper was full of brilliant insights. In particular he considered the behaviour of an idealized friction-free engine in which the working substance moves from a heat source to a heat sink in a series of small equilibrium steps so that the entire process is reversible. The changes of pressure and volume involved in such a process are now known as a Carnot cycle.

By remarkably clear reasoning, Carnot was able to prove a famous theorem that the efficiency of such a cycle depends only on the temperature Tin of the heat source and the temperature Tout of the heat sink. He showed that the maximum fraction of the heat available to be used to do mechanical work is independent of the working substance and is equal to (Tin−Tout)/Tin; this is called Carnot’s theorem. Carnot’s results were probably considered too abstract to be of any use to engineers, but they contain ideas that are linked with the First Law of Thermodynamics, and they eventually led Clausius and Thomson independently to the statement of the Second Law discussed below.
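In modern notation, with both temperatures measured on the absolute scale (a convention that, as described below, only came later with Kelvin), the maximum efficiency is usually written:

```latex
\eta_{\max} = \frac{W_{\max}}{Q_{\rm in}} = \frac{T_{\rm in} - T_{\rm out}}{T_{\rm in}} = 1 - \frac{T_{\rm out}}{T_{\rm in}}
```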

James Prescott Joule (right) was born in 1818 and grew up in a wealthy brewing family. He was educated at home by none other than John Dalton. He became interested in science and soon started doing experiments in a laboratory near the family brewery. He was a skilful practical physicist and was able to measure the heat and temperature changes involved in various situations. Between 1837 and 1847 he established the basic principle that heat and other forms of energy (such as mechanical work) were equivalent and that, when all forms are included, energy is conserved. Joule measured the amount of mechanical work required to produce a given amount of heat in 1843, by studying the heat released in water by the rotation of paddles powered by falling weights. The SI unit of energy is named in his honour.

William Thomson, 1st Baron Kelvin of Largs, was born in 1824 and came to dominate British physics throughout the second half of the 19th  century. He was extremely prolific, writing over 600 research papers and several books. No-one since has managed to range so widely and so successfully across the realm of natural sciences. He was also unusually generous with his ideas (perhaps because he had so many), and in giving credit to other scientists, such as Carnot.  He wasn’t entirely enlightened, however: he was a vigorous opponent of the admission of women to the  University.

Kelvin worked on many theoretical aspects of physics, but was also extremely practical. He directed the first successful transatlantic cable telegraph project, and his house in Glasgow was one of the first to be lit by electricity. Unusually among physicists he became wealthy through his scientific work. One can dream.

One of the keys to Kelvin’s impact on science in Britain was that immediately after graduating from Cambridge in 1845 he went to work in Paris for a year. This opened his eyes to the much more sophisticated mathematical approaches being used by physicists on the continent. British physics, especially at Cambridge, had been held back by an excessive reverence for the work of Newton and the rather cumbersome form of calculus (called “fluxions”) it had inherited from him. Much of Kelvin’s work on theoretical topics used the modern calculus which had been developed in mainland Europe. More specifically, it was during this trip to Paris that he heard of the paper by Carnot, although it took him another three years to get his hands on a copy. When he returned from Paris in 1846, the young William Thomson became Professor of Natural Philosophy at Glasgow University, a post he held for an astonishing 53 years.

Initially inspired by Carnot’s work, Kelvin became one of the most important figures in the development of the theory of heat. In 1848 he proposed an absolute scale of temperature, now known as the Kelvin or thermodynamic scale, which corresponds to the Celsius scale shifted by a constant offset: the triple point of water (0.01 degrees Celsius) sits at 273.16 Kelvin, and absolute zero at −273.15 degrees Celsius. He also worked with Joule on experiments concerning heat flow.

At around the same time as Kelvin, another prominent character in the story of thermodynamics was playing his part. Rudolf Clausius (right) was born in 1822. His father was a Prussian pastor and owner of a small school that the young Rudolf attended. He later went to university in Berlin to study history, but switched to science. He was constantly short of money, which meant that it took him quite a long time to graduate, but he eventually ended up as a professor of physics, first in Zürich and then later in Würzburg and Bonn. During the Franco-Prussian war he and his students set up a volunteer ambulance service, and in the course of its operations Clausius was badly wounded.

By the 1850s, thanks largely to the efforts of Kelvin, Carnot’s work was widely recognized throughout Europe. Carnot had correctly realised that in a steam engine, heat “moves” as the steam descends from a higher temperature to a lower one. He, however, envisaged that this heat moved through the engine intact. On the other hand, the work of Joule had established the First Law of Thermodynamics, which states that heat is actually lost in this process, or more precisely that heat is converted into mechanical work. Clausius was troubled by the apparent conflict between the views of Carnot and Joule, but eventually realised that they could be reconciled if one assumed that heat does not pass spontaneously from a colder to a hotter body. This, published in 1850, was the original statement of what has become known as the Second Law of Thermodynamics. The following year, Kelvin came up with a different expression of essentially the same law. Clausius further developed the idea that heat must tend to dissipate and in 1865 he introduced the term “entropy” as a measure of the amount of heat gained or lost by a body divided by its absolute temperature. An equivalent statement of the Second Law is that the entropy of an isolated system can never decrease: it can only either increase or remain constant. This principle was intensely controversial at the time, but Kelvin and Maxwell fought vigorously in its defence, and it was eventually accepted into the canon of Natural Law.
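In the notation that later became standard (a modern paraphrase, not Clausius’s own symbols), his definition of entropy and the entropy form of the Second Law read:

```latex
dS = \frac{\delta Q_{\rm rev}}{T} , \qquad \Delta S_{\rm isolated} \ge 0
% dS: change in entropy when heat \delta Q_{rev} is transferred reversibly
% at absolute temperature T; the total entropy of an isolated system never decreases
```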

So far in this brief historical diversion, I have focussed on thermodynamics at a macroscopic level, in the form that eventually emerged as the laws of thermodynamics presented in the previous section. During roughly the same period, however, a parallel story was unfolding that revolved around explaining the macroscopic behaviour of matter in terms of the behaviour of its microscopic components. The goal of this programme was to understand quantitative measures such as temperature and pressure in terms of related quantities describing individual atoms or molecules. I’ll end this bit of history with a brief description of three of the most important contributors to this strand.

[Image: James Clerk Maxwell]

James Clerk Maxwell (above) was probably the greatest physicist of the nineteenth century, and although he is most celebrated for his phenomenal work on the unified theory of electricity and magnetism, he was also a great pioneer in the kinetic theory of gases. He was born in 1831 and went to school at the Edinburgh Academy, which was a difficult experience for him because he had a country accent and invariably wore home-made clothes that made him stand out among the privileged town-dwellers who formed the bulk of the school population. Aged 15, he invented a method of drawing curves using string and drawing pins as a kind of generalization of the well-known technique of drawing an ellipse. This work was published in the Proceedings of the Royal Society of Edinburgh in 1846, a year before Maxwell went to university. After a spell at Edinburgh he went to Cambridge in 1850; while there he won the prestigious Smith’s Prize in 1854. He subsequently obtained a post in Aberdeen at Marischal College, where he married the principal’s daughter, but was then made redundant. In 1860 he moved to London, but in 1865 he resigned his post at King’s College and became a gentleman farmer, doing scientific research in his spare time. In 1874 he was persuaded to move to Cambridge as the first Cavendish Professor of Experimental Physics, charged with the responsibility of setting up the now-famous Cavendish Laboratory. He contracted cancer five years later and died, aged 48, in 1879.

Maxwell’s contributions to the kinetic theory of gases began by building on the idea, originally due to Daniel Bernoulli, that a gas consists of molecules in constant motion colliding with each other and with the walls of whatever container is holding it. Rudolf Clausius had already realised that although the gas molecules travel very fast, gases diffuse into each other only very slowly. He deduced, correctly, that molecules must only travel a very short distance between collisions. From about 1860, Maxwell started to work on the application of statistical methods to this general picture. He worked out the probability distribution of molecular velocities in a gas in equilibrium at a given temperature; Boltzmann (see below) independently derived the same result. Maxwell showed how the distribution depends on temperature and also proved that heat must be stored in a gas in the form of kinetic energy of the molecules, thus establishing a microscopic version of the first law of thermodynamics. He went on to explain a host of experimental properties such as viscosity, diffusion and thermal conductivity using this theory.
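For reference, the equilibrium distribution Maxwell derived is, in its standard modern form (my gloss, not part of the original post):

```latex
f(v)\,dv = 4\pi n \left(\frac{m}{2\pi k T}\right)^{3/2} v^{2}\, e^{-m v^{2}/2kT}\, dv ,
\qquad \tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k T
% f(v) dv: number density of molecules of mass m with speeds between v and v + dv
% in a gas at absolute temperature T; k is Boltzmann's constant
```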

Maxwell was lucky that he was able to make profound intellectual discoveries without apparently suffering from significant mental strain. Unfortunately, the same could not be said of Ludwig Eduard Boltzmann, who was born in 1844 and grew up in the Austrian towns of Linz and Wels, where his father was employed as a tax officer. He received his doctorate from the University of Vienna in 1866 and subsequently held a series of professorial appointments at Graz, Vienna, Munich and Leipzig. Throughout his life he suffered from bouts of depression which worsened when he was subjected to sustained attack from the Vienna school of positivist philosophers, who derided the idea that physical phenomena could be explained in terms of atoms. Despite this antagonism, he taught many students who went on to become very distinguished and he also had a very wide circle of friends. In the end, though, the lack of acceptance of his work got him so depressed that he committed suicide in 1906. Max Planck arranged for his gravestone to be marked with “S=klogW”, which is now known as Boltzmann’s law; the constant k is called Boltzmann’s constant.
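Spelled out in modern terms (again my gloss, not the original post’s), the formula on the gravestone reads:

```latex
S = k \log W
% W: the number of microstates consistent with the macrostate of the system
% k \approx 1.38 \times 10^{-23}\,\mathrm{J\,K^{-1}} (Boltzmann's constant)
```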

The final member of the cast of characters in this story is Josiah Willard Gibbs (left). He was born in 1839 and received his doctorate from Yale University in 1863, gaining one of the very first PhDs ever to be awarded in the USA. After touring Europe for a while he returned to Yale in 1871 to become a professor, but he received no salary for the first nine years of this appointment. The university rules at that time only allowed salaries to be paid to staff in need of money; having independent means, Gibbs was apparently not entitled to a salary. Gibbs was a famously terrible teacher and few students could make any sense of his lectures (not a rare occurrence amongst those trying to learn thermodynamics). His research papers are written in a very obscure style which makes it easy to believe he found it difficult to express himself in the lecture theatre. Gibbs actually founded the field of chemical thermodynamics, but few chemists understood his work while he was still alive. His great contribution to statistical mechanics was likewise poorly understood. It was only in the 1890s, when his works were translated into German, that his achievements became more widely recognised. Both Planck and Einstein held him in very high regard, but even they found his work difficult to understand. He died in 1903.

So there you are. The only one who didn’t have a beard was French and called Sadi. ’nuff said.

Why the Big Bang is Wrong…

Posted in Biographical, The Universe and Stuff with tags , , on July 7, 2009 by telescoper

I suspect that I’m not the only physicist who has a filing cabinet filled with unsolicited correspondence from people with wacky views on everything from UFOs to Dark Matter. Being a cosmologist, I probably get more of this stuff than those working in less speculative branches of physics. Because I’ve written a few things that appeared in the public domain (and even appeared on TV and radio a few times), I probably even get more than most cosmologists (except the really  famous ones of course).

I would estimate that I get two or three items of correspondence of this kind per week. Many “alternative” cosmologists have now discovered email, but there are still a lot who send their ideas through regular post. In fact, whenever I get an envelope with an address on it that has been typed by an old-fashioned typewriter, it’s usually a dead giveaway that it’s going to be one of those. Sometimes they are just letters (typed or handwritten), but sometimes they are complete manuscripts, often with wonderfully batty illustrations. I have one in front of me now called Dark Matter, The Great Pyramid and the Theory of Crystal Healing. I might even go so far as to call that one bogus. I have an entire filing cabinet in my office at work filled with things like it. I could make a fortune if I set up a journal for these people. Alarmingly, electrical engineers figure prominently in my files. They seem particularly keen to explain why Einstein was wrong…

I never reply, of course. I don’t have time, for one thing.  I’m also doubtful whether there’s anything useful to be gained by trying to engage in a scientific argument with people whose grip on the basic concepts is so tenuous (as perhaps it is on reality). Even if they have some scientific training, their knowledge and understanding of physics is usually pretty poor.

I should explain that, whenever I can, if someone writes or emails with a genuine question about physics or astronomy – which often happens – I always reply. I think that’s a responsibility for anyone who gets taxpayers’ money. However, I don’t reply to letters that are confrontational or aggressive or which imply that modern science is some sort of conspiracy to conceal the real truth.

One particular correspondent started writing to me after the publication of my little book, Cosmology: A Very Short Introduction. I won’t give his name, but he was an individual who had some scientific training (not an electrical engineer, I hasten to add). This chap sent a terse letter to me pointing out that the Big Bang theory was obviously completely wrong. The reason was obvious to anyone who understood thermodynamics. He had spent a lifetime designing high-quality refrigeration equipment and therefore knew what he was talking about (or so he said).

His point was that, according to the Big Bang theory, the Universe cools as it expands. Its current temperature is about 3 Kelvin (-270 Celsius or thereabouts) but it is now expanding. Turning the clock back gives a Universe that was hotter when it was younger. He thought this was all wrong.

The argument is false, my correspondent asserted, because the Universe – by definition –  hasn’t got any surroundings and therefore isn’t expanding into anything. Since it isn’t pushing against anything it can’t do any work. The internal energy of the gas must therefore remain constant and since the internal energy of an ideal gas is only a function of its temperature, the expansion of the Universe must therefore be at a constant temperature (i.e. isothermal, rather than adiabatic, as in the Big Bang theory). He backed up his argument with bona fide experimental results on the free expansion of gases.

I didn’t reply and filed the letter away. Another came, and I did likewise. Increasingly overcome by some form of apoplexy, his letters got ruder and ruder, eventually blaming me for the decline of the British education system and demanding that I be fired from my job. Finally, he wrote to the President of the Royal Society demanding that I be “struck off” – not that I’ve ever been “struck on” – and forbidden (on grounds of incompetence) ever to teach thermodynamics in a University.

Actually, I’ve never taught thermodynamics in any University anyway, but I’ve kept the letter (which was cc-ed to me) in case I am ever asked. It’s much better than a sick note….

This is a good example of a little knowledge being a dangerous thing. My correspondent clearly knew something about thermodynamics. But, obviously, I don’t agree with him that the Big Bang is wrong.

Although I never actually answered this question myself, I thought it might be fun to turn this into a little competition, so here’s a challenge for you: provide the clearest and most succinct explanation of why the temperature of the expanding Universe does fall with time, despite what my correspondent thought.

Answers via the comment box please, in language suitable for a nutter non-physicist.

Arrows and Demons

Posted in The Universe and Stuff with tags , , , , , on April 12, 2009 by telescoper

My recent post about randomness and non-randomness spawned a lot of comments over on cosmic variance about the nature of entropy. I thought I’d add a bit about that topic here, mainly because I don’t really agree with most of what is written in textbooks on this subject.

The connection between thermodynamics (which deals with macroscopic quantities) and statistical mechanics (which explains these in terms of microscopic behaviour) is a fascinating but troublesome area. Although James Clerk Maxwell (right) did much to establish the microscopic meaning of the first law of thermodynamics, he never tried to develop the second law from the same standpoint. Those that did were faced with a conundrum.

 

The behaviour of a system of interacting particles, such as the particles of a gas, can be expressed in terms of a Hamiltonian H which is constructed from the positions and momenta of its constituent particles. The resulting equations of motion are quite complicated because every particle, in principle, interacts with all the others. They do, however, possess a simple yet important property. Everything is reversible, in the sense that the equations of motion remain the same if one changes the direction of time and changes the direction of motion for all the particles. Consequently, one cannot tell whether a movie of atomic motions is being played forwards or backwards.
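To spell out the quantities involved (standard definitions, added here in my own notation): Hamilton’s equations for the coordinates and momenta are

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i} , \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}
% unchanged under t \to -t together with p_i \to -p_i (time reversal)
```

and the Gibbs entropy of a probability distribution \rho over phase space is

```latex
S_G = -k \int \rho \, \ln \rho \; d\Gamma
% Liouville's theorem: \rho is carried along incompressibly by the Hamiltonian flow
```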

This means that the Gibbs entropy is actually a constant of the motion: it neither increases nor decreases during Hamiltonian evolution.

But what about the second law of thermodynamics? This tells us that the entropy of a system tends to increase. Our everyday experience tells us this too: we know that physical systems tend to evolve towards states of increased disorder. Heat never passes from a cold body to a hot one. Pour milk into coffee and everything rapidly mixes. How can this directionality in thermodynamics be reconciled with the completely reversible character of microscopic physics?

The answer to this puzzle is surprisingly simple, as long as you use a sensible interpretation of entropy that arises from the idea that its probabilistic nature represents not randomness (whatever that means) but incompleteness of information. I’m talking, of course, about the Bayesian view of probability.

 First you need to recognize that experimental measurements do not involve describing every individual atomic property (the “microstates” of the system), but large-scale average things like pressure and temperature (these are the “macrostates”). Appropriate macroscopic quantities are chosen by us as useful things to use because they allow us to describe the results of experiments and measurements in a  robust and repeatable way. By definition, however, they involve a substantial coarse-graining of our description of the system.

Suppose we perform an idealized experiment that starts from some initial macrostate. In general this will be consistent with a number – probably a very large number – of initial microstates. As the experiment continues the system evolves along a Hamiltonian path so that the initial microstate will evolve into a definite final microstate. This is perfectly symmetrical and reversible. But the point is that we can never have enough information to predict exactly where in the final phase space the system will end up, because we haven’t specified all the details of which initial microstate we were in. Determinism does not in itself allow predictability; you need information too.

If we choose macro-variables so that our experiments are reproducible it is inevitable that the set of microstates consistent with the final macrostate will usually be larger than the set of microstates consistent with the initial macrostate, at least  in any realistic system. Our lack of knowledge means that the probability distribution of the final state is smeared out over a larger phase space volume at the end than at the start. The entropy thus increases, not because of anything happening at the microscopic level but because our definition of macrovariables requires it.

[Figure: Hamiltonian evolution of microstates from the initial macrostate to the final macrostate]

This is illustrated in the Figure. Each individual microstate in the initial collection evolves into one state in the final collection: the narrow arrows represent Hamiltonian evolution.

 

However, given only a finite amount of information about the initial state, these trajectories can’t be defined as precisely as this. The set of final microstates therefore has to acquire a sort of “buffer zone” around the strictly Hamiltonian core; this is the only way to ensure that measurements on such systems will be reproducible.

The “theoretical” Gibbs entropy remains exactly constant during this kind of evolution, and it is precisely this property that requires the experimental entropy to increase. There is no microscopic explanation of the second law. It arises from our attempt to shoe-horn microscopic behaviour into a framework furnished by macroscopic experiments.
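To make the coarse-graining argument concrete, here is a small toy simulation I’ve added (my own illustration, not from the original post): perfectly reversible dynamics for non-interacting particles in a box, with an entropy computed from coarse-grained cell occupations. The microscopic evolution could be run backwards without any trouble, yet the coarse-grained entropy rises as the initially compact cloud of particles spreads out.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: N non-interacting particles bouncing around a unit box.
# The microscopic dynamics (free flight plus specular reflection off the
# walls) are exactly reversible, but the entropy of the coarse-grained
# occupation of a grid of cells increases as the cloud spreads out.

N, ncells, dt, nsteps = 5000, 10, 0.01, 500

pos = rng.uniform(0.0, 0.2, size=(N, 2))   # start bunched in one corner (low coarse-grained entropy)
vel = rng.normal(0.0, 1.0, size=(N, 2))    # random velocities

def coarse_entropy(pos, ncells):
    """Shannon entropy (in nats) of the cell-occupation frequencies."""
    ix = np.clip((pos[:, 0] * ncells).astype(int), 0, ncells - 1)
    iy = np.clip((pos[:, 1] * ncells).astype(int), 0, ncells - 1)
    counts = np.bincount(ix * ncells + iy, minlength=ncells * ncells)
    p = counts[counts > 0] / len(pos)
    return -np.sum(p * np.log(p))

for step in range(nsteps + 1):
    if step % 100 == 0:
        print(f"t = {step * dt:4.1f}   coarse-grained entropy = {coarse_entropy(pos, ncells):.3f}")
    pos += vel * dt
    # specular reflection off the walls keeps the dynamics reversible
    for axis in (0, 1):
        low = pos[:, axis] < 0.0
        high = pos[:, axis] > 1.0
        pos[low, axis] = -pos[low, axis]
        pos[high, axis] = 2.0 - pos[high, axis]
        vel[low, axis] *= -1.0
        vel[high, axis] *= -1.0
```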

Another, perhaps even more compelling demonstration of the so-called subjective nature of probability (and hence entropy) is furnished by Maxwell’s demon. This little imp first made its appearance in 1867 or thereabouts and subsequently led a very colourful and influential life. The idea is extremely simple: imagine we have a box divided into two partitions, A and B. The wall dividing the two sections contains a tiny door which can be opened and closed by a “demon” – a microscopic being “whose faculties are so sharpened that he can follow every molecule in its course”. The demon wishes to play havoc with the second law of thermodynamics so he looks out for particularly fast moving molecules in partition A and opens the door to allow them (and only them) to pass into partition B. He does the opposite thing with partition B, looking out for particularly sluggish molecules and opening the door to let them into partition A when they approach.

The net result of the demon’s work is that the fast-moving particles from A are preferentially moved into B and the slower particles from B are gradually moved into A. The effect is that the average kinetic energy of the A molecules steadily decreases while that of the B molecules increases. In effect, heat is transferred from a cold body to a hot body, something that is forbidden by the second law.

All this talk of demons probably makes this sound rather frivolous, but it is a serious paradox that puzzled many great minds until it was resolved in 1929 by Leo Szilard. He showed that the second law of thermodynamics would not actually be violated if the entropy of the entire system (i.e. box + demon) increased by a compensating amount every time the demon measured the speed of a molecule in order to decide whether to let it through from one side of the box to the other. This amount of entropy is precisely enough to balance the apparent decrease in entropy caused by the gradual migration of fast molecules from A into B. This illustrates very clearly that there is a real connection between the demon’s state of knowledge and the physical entropy of the system.
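In modern notation (a standard paraphrase of Szilard’s result rather than his own formulation), acquiring one bit of information about the molecule is accompanied by an entropy increase of at least

```latex
\Delta S_{\rm measurement} \ge k \ln 2
```

which is exactly what is needed to pay for the k ln 2 of entropy the demon can remove from the gas, or, equivalently, for the maximum work kT ln 2 that a one-molecule Szilard engine can extract per cycle.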

By now it should be clear why there is some sense of the word subjective that does apply to entropy. It is not subjective in the sense that anyone can choose entropy to mean whatever they like, but it is subjective in the sense that it is something to do with the way we manage our knowledge about nature rather than about nature itself. I know from experience, however, that many physicists feel very uncomfortable about the idea that entropy might be subjective even in this sense.

On the other hand, I feel completely comfortable about the notion. I even think it’s obvious. To see why, consider the example I gave above about pouring milk into coffee. We are all used to the idea that the nice swirly pattern you get when you first pour the milk in is a state of relatively low entropy. The parts of the phase space of the coffee + milk system that contain such nice separations of black and white are few and far between. It’s much more likely that the system will end up as a “mixed” state. But then how well mixed the coffee is depends on your ability to resolve the size of the milk droplets. An observer with good eyesight would see less mixing than one with poor eyesight. And an observer who couldn’t perceive the difference between milk and coffee would see perfect mixing. In this case entropy, like beauty, is definitely in the eye of the beholder.

The refusal of many physicists to accept the subjective nature of entropy arises, as do so many misconceptions in physics, from the wrong view of probability.