## Spin, Entanglement and Quantum Weirdness

After writing a post about spinning cricket balls a while ago I thought it might be fun to post something about the role of spin in quantum mechanics.

Spin is a concept of fundamental importance in quantum mechanics, not least because it underlies our most basic theoretical understanding of matter. The standard model of particle physics divides elementary particles into two types, fermions and bosons, according to their spin.  One is tempted to think of  these elementary particles as little cricket balls that can be rotating clockwise or anti-clockwise as they approach an elementary batsman. But, as I hope to explain, quantum spin is not really like classical spin: batting would be even more difficult if quantum bowlers were allowed!

Take the electron, for example. The spin of an electron is quantized: a measurement of its component along any axis always yields ±1/2 (in units of Planck’s constant; all fermions have half-integer spin). In addition, according to quantum mechanics, the orientation of the spin is indeterminate until it is measured. Any particular measurement can only determine the component of spin in one direction. Let’s take as an example the case where the measuring device is sensitive to the z-component, i.e. spin in the vertical direction. An experiment on a single electron will lead to a definite outcome, which might be either “up” or “down” relative to this axis.

However, until one makes a measurement the state of the system is not specified and the outcome is consequently not predictable with certainty; there will be a probability of 50% for each possible outcome. We could write the state of the system (expressed by the spin part of its wavefunction ψ) prior to measurement in the form

|ψ> = (|↑> + |↓>)/√2

This gives me an excuse to use the rather beautiful “bra-ket” notation for the state of a quantum system, originally due to Paul Dirac. The two possibilities are “up” (↑) and “down” (↓) and they are contained within a “ket” (written |>), which is really just a shorthand for a wavefunction describing that particular aspect of the system. A “bra” would be of the form <|; for the mathematicians, this represents the Hermitian conjugate of a ket. The √2 is there to ensure that the total probability of the spin being either up or down is 1, remembering that the probability is the squared amplitude of the wavefunction. When we make a measurement we will get one of these two outcomes, with a 50% probability of each.

At the point of measurement the state changes: if we get “up” it becomes purely |↑>  and if the result is  “down” it becomes |↓>. Either way, the quantum state of the system has changed from a “superposition” state described by the equation above to an “eigenstate” which must be either up or down. This means that all subsequent measurements of the spin in this direction will give the same result: the wave-function has “collapsed” into one particular state. Incidentally, the general term for a two-state quantum system like this is a qubit, and it is the basis of the tentative steps that have been taken towards the construction of a quantum computer.
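As an aside, the Born rule and the collapse just described are easy to caricature in a few lines of Python (a toy sketch of my own, not anything rigorous): the state is just a pair of amplitudes, and a measurement returns an outcome and replaces the state with the corresponding eigenstate.

```python
import random

def measure(state):
    """Born rule: outcome probabilities are the squared amplitudes,
    and the state collapses to the matching eigenstate."""
    a_up, a_down = state
    if random.random() < abs(a_up) ** 2:
        return "up", (1.0, 0.0)     # collapsed to |up>
    return "down", (0.0, 1.0)       # collapsed to |down>

random.seed(1)
psi = (1 / 2 ** 0.5, 1 / 2 ** 0.5)  # (|up> + |down>)/sqrt(2)
outcome, psi = measure(psi)         # 50-50 the first time
repeats = [measure(psi)[0] for _ in range(5)]
print(outcome, repeats)             # every repeat matches the first outcome
```

After the first measurement has collapsed ψ into an eigenstate, every subsequent measurement along the same axis returns the same answer, just as described above.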

Notice that what is essential about this is the role of measurement. The collapse of  ψ seems to be an irreversible process, but the wavefunction itself evolves according to the Schrödinger equation, which describes reversible, Hamiltonian changes.  To understand what happens when the state of the wavefunction changes we need an extra level of interpretation beyond what the mathematics of quantum theory itself provides,  because we are generally unable to write down a wave-function that sensibly describes the system plus the measuring apparatus in a single form.

So far this all seems rather similar to the state of a fair coin: it has a 50-50 chance of being heads or tails, but the doubt is resolved when its state is actually observed. Thereafter we know for sure what it is. But this resemblance is only superficial. A coin only has heads or tails, but the spin of an electron doesn’t have to be just up or down. We could rotate our measuring apparatus by 90° and measure the spin to the left (←) or the right (→). In this case we still have to get a result which is a half-integer times Planck’s constant. It will have a 50-50 chance of being left or right that “becomes” one or the other when a measurement is made.

Now comes the real fun. Suppose we do a series of measurements on the same electron. First we start with an electron whose spin we know nothing about. In other words it is in a superposition state like that shown above. We then make a measurement in the vertical direction. Suppose we get the answer “up”. The electron is now in the eigenstate with spin “up”.

We then pass it through another measurement, but this time it measures the spin to the left or the right. The process of selecting the electron to be one with  spin in the “up” direction tells us nothing about whether the horizontal component of its spin is to the left or to the right. Theory thus predicts a 50-50 outcome of this measurement, as is observed experimentally.

Suppose we do such an experiment and establish that the electron’s spin vector is pointing to the left. Now our long-suffering electron passes into a third measurement which this time is again in the vertical direction. You might imagine that since we have already measured this component to be in the up direction, it would be in that direction again this time. In fact, this is not the case. The intervening measurement seems to “reset” the up-down component of the spin; the results of the third measurement are back at square one, with a 50-50 chance of getting up or down.
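This reset is easy to verify numerically (another toy Python sketch of my own, with a state written as a pair of real amplitudes in the z basis): the “left”/“right” eigenstates can be taken as (|↑> ± |↓>)/√2, so whichever horizontal outcome occurs, the new state overlaps “up” and “down” equally.

```python
import random

def measure(state, axis):
    """Project a qubit (a_up, a_down), written in the z basis, onto an
    eigenstate of the chosen axis (amplitudes are real here, so the
    inner product needs no conjugation)."""
    a_up, a_down = state
    s = 1 / 2 ** 0.5
    eigenstates = {"z": [("up", (1.0, 0.0)), ("down", (0.0, 1.0))],
                   "x": [("left", (s, s)), ("right", (s, -s))]}[axis]
    first_label, first_vec = eigenstates[0]
    p_first = abs(first_vec[0] * a_up + first_vec[1] * a_down) ** 2
    if random.random() < p_first:
        return first_label, first_vec
    return eigenstates[1]

random.seed(0)
outcomes = []
for _ in range(10000):
    psi = (1.0, 0.0)              # freshly measured "up" along z
    _, psi = measure(psi, "x")    # intervening left/right measurement
    label, _ = measure(psi, "z")  # z again: no memory of the first "up"
    outcomes.append(label)
frac_up = outcomes.count("up") / len(outcomes)
print(frac_up)                    # close to 0.5, not 1.0
```

Although every run begins in the “up” eigenstate, the intervening horizontal measurement wipes that information out: the final z measurement comes up “up” only about half the time.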

This is just one example of the kind of irreducible “randomness” that seems to be inherent in quantum theory. However, if you think this is what people mean when they say quantum mechanics is weird, you’re quite mistaken. It gets much weirder than this! So far I have focussed on what happens to the description of single particles when quantum measurements are made. Although there seem to be subtle things going on, it is not really obvious that anything happening is very different from systems in which we simply lack the microscopic information needed to make a prediction with absolute certainty.

At the simplest level, the difference is that quantum mechanics gives us a theory for the wave-function which somehow lies at a more fundamental level of description than the usual way we think of probabilities. Probabilities can be derived mathematically from the wave-function, but there is more information in ψ than there is in |ψ|²; the wave-function is a complex entity whereas the square of its amplitude is entirely real. If one can construct a system of two particles, for example, the resulting wave-function is obtained by superimposing the wave-functions of the individual particles, and probabilities are then obtained by squaring this joint wave-function. This will not, in general, give the same probability distribution as one would get by adding the one-particle probabilities because, for complex entities A and B,

|A|² + |B|² ≠ |A + B|²

in general. To put this another way, one can write any complex number in the form a+ib (real part plus imaginary part) or, generally more usefully in physics, as Re^(iθ), where R is the amplitude and θ is called the phase. The square of the amplitude gives the probability associated with the wavefunction of a single particle, but in this case the phase information disappears; the truly unique character of quantum physics and how it impacts on probabilities of measurements only reveals itself when the phase information is retained. This generally requires two or more particles to be involved, as the absolute phase of a single-particle state is essentially impossible to measure.
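A two-line numerical check of the inequality above, with A and B chosen (arbitrarily, for illustration) to have equal amplitudes but opposite phases:

```python
import cmath

A = cmath.rect(1.0, 0.0)              # amplitude 1, phase 0
B = cmath.rect(1.0, cmath.pi)         # amplitude 1, phase pi

separate = abs(A) ** 2 + abs(B) ** 2  # phases drop out of each term: 2.0
combined = abs(A + B) ** 2            # phases interfere destructively: ~0.0
print(separate, combined)
```

Adding the probabilities keeps no memory of the phases; adding the amplitudes first lets them cancel completely. That cancellation is exactly the phase information that is lost if one squares too early.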

Finding situations where the quantum phase of a wave-function is important is not easy. It seems to be quite easy to disturb quantum systems in such a way that the phase information becomes scrambled, so testing the fundamental aspects of quantum theory requires considerable experimental ingenuity. But it has been done, and the results are astonishing.

Let us think about a very simple example of a two-component system: a pair of electrons. All we care about for the purpose of this experiment is the spin of the electrons, so let us write the state of this system in terms of states such as |↑↓>, which I take to mean that the first particle has spin up and the second one has spin down. Suppose we can create this pair of electrons in a state where we know the total spin is zero. The electrons are indistinguishable from each other so until we make a measurement we don’t know which one is spinning up and which one is spinning down. The state of the two-particle system might be this:

|ψ> = (|↑↓> – |↓↑>)/√2

Squaring this up would give a 50% probability of “particle one” being up and “particle two” being down, and 50% for the contrary arrangement. This doesn’t look too different from the example I discussed above, but this duplex state exhibits a bizarre phenomenon known as quantum entanglement.

Suppose we start the system out in this state and then separate the two electrons without disturbing their spin states. Before making a measurement we really can’t say what the spins of the individual particles are: they are neither up nor down but in a superposition of the two possibilities. When they’re up, they’re up. When they’re down, they’re down. But when they’re only half-way up they’re in an entangled state.

If one of them passes through a vertical spin-measuring device we will then know that particle is definitely spin-up or definitely spin-down. Since we know the total spin of the pair is zero, then we can immediately deduce that the other one must be spinning in the opposite direction because we’re not allowed to violate the law of conservation of angular momentum: if Particle 1 turns out to be spin-up, Particle 2  must be spin-down, and vice versa. It is known experimentally that passing two electrons through identical spin-measuring gadgets gives  results consistent with this reasoning. So far there’s nothing so very strange in this.

The problem with entanglement lies in understanding what happens in reality when a measurement is done. Suppose we have two observers, Dick and Harry, each equipped with a device that can measure the spin of an electron in any direction they choose. Particle 1 emerges from the source and travels towards Dick whereas particle 2 travels in Harry’s direction. Before any measurement, the system is in an entangled superposition state. Suppose Dick decides to measure the spin of electron 1 in the z-direction and finds it spinning up. Immediately, the wave-function for electron 2 collapses into the down direction. If Dick had instead decided to measure spin in the left-right direction and found it “left” similar collapse would have occurred for particle 2, but this time putting it in the “right” direction.

Whatever Dick does, the result of any corresponding measurement made by Harry has a definite outcome – the opposite to Dick’s result. So Dick’s decision whether to make a measurement up-down or left-right instantaneously transmits itself to Harry who will find a consistent answer, if he makes the same measurement as Dick.

If, on the other hand, Dick makes an up-down measurement but Harry measures left-right then Dick’s answer has no effect on Harry, who has a 50% chance of getting “left” and 50% chance of getting right. The point is that whatever Dick decides to do, it has an immediate effect on the wave-function at Harry’s position; the collapse of the wave-function induced by Dick immediately collapses the state measured by Harry. How can particle 1 and particle 2 communicate in this way?
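For what it’s worth, the correlations behind this discussion can be summarized by a standard formula for the singlet state (quoted here without derivation): if the two analysers are set at angles α and β in a plane, quantum mechanics predicts that the two results agree with probability sin²((α−β)/2). A quick check in Python:

```python
import math

def p_same(alpha, beta):
    """Probability that the two singlet measurements give the SAME result
    for analyser angles alpha and beta (radians); standard QM prediction."""
    return math.sin((alpha - beta) / 2) ** 2

same_axis = p_same(0.0, 0.0)              # 0.0: results always opposite
perpendicular = p_same(0.0, math.pi / 2)  # 0.5: completely uncorrelated
print(same_axis, perpendicular)
```

Parallel analysers never agree, reproducing the conservation-of-angular-momentum argument; perpendicular ones are uncorrelated, reproducing Harry’s 50-50 left-right result.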

This riddle is the core of a thought experiment by Einstein, Podolsky and Rosen in 1935 which has deep implications for the nature of the information that is supplied by quantum mechanics. The essence of the EPR paradox is that each of the two particles – even if they are separated by huge distances – seems to know exactly what the other one is doing. Einstein called this “spooky action at a distance” and went on to point out that this type of thing simply could not happen in the usual calculus of random variables. His argument was later tightened considerably by John Bell in a form now known as Bell’s theorem.

To see how Bell’s theorem works, consider the following roughly analogous situation. Suppose we have two suspects in prison, say Dick and Harry (Tom grassed them up and has been granted immunity from prosecution). The two are taken apart to separate cells for individual questioning. We can allow them to use notes, electronic organizers, tablets of stone or anything to help them remember any agreed strategy they have concocted, but they are not allowed to communicate with each other once the interrogation has started. Each question they are asked has only two possible answers – “yes” or “no” – and there are only three possible questions. We can assume the questions are asked independently and in a random order to the two suspects.

When the questioning is over, the interrogators find that whenever they asked the same question, Dick and Harry always gave the same answer, but when the question was different they only gave the same answer 25% of the time. What can the interrogators conclude?

The answer is that Dick and Harry must be cheating. Either they have seen the question list ahead of time or are able to communicate with each other without the interrogator’s knowledge. If they always give the same answer when asked the same question, they must have agreed on answers to all three questions in advance. But when they are asked different questions then, because each question has only two possible responses, by following this strategy it must turn out that at least two of the three prepared answers – and possibly all of them – must be the same for both Dick and Harry. This puts a lower limit on the probability of them giving the same answer to different questions. I’ll leave it as an exercise to the reader to show that the probability of coincident answers to different questions in this case must be at least 1/3.
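The exercise is also easy to do by brute force. Any strategy that always agrees on identical questions amounts to a pre-agreed list of three yes/no answers, so one can simply enumerate all eight such lists and compute the worst-case rate of agreement on different questions:

```python
from itertools import product

worst = 1.0
for answers in product(("yes", "no"), repeat=3):
    agree = pairs = 0
    for q1 in range(3):
        for q2 in range(3):
            if q1 != q2:            # only count pairs of different questions
                pairs += 1
                agree += answers[q1] == answers[q2]
    worst = min(worst, agree / pairs)
print(worst)   # 1/3: no pre-agreed strategy can do better
```

Whatever the three prepared answers are, at least two of them coincide, so agreement on different questions can never be pushed below 1/3.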

This is a simple illustration of what in quantum mechanics is known as a Bell inequality. Dick and Harry can only keep the number of such false agreements down to the measured level of 25% by cheating.

This example is directly analogous to the behaviour of the entangled quantum state described above under repeated interrogations about its spin in three different directions. The result of each measurement can only be either “yes” or “no”. Each individual answer (for each particle) is equally probable in this case; the same question always produces the same answer for both particles, but the probability of agreement for two different questions is indeed ¼ and not larger as would be expected if the answers were random. For example one could ask particle 1 “are you spinning up” and particle 2 “are you spinning to the right”? The probability of both producing an answer “yes” is 25% according to quantum theory but would be higher if the particles weren’t cheating in some way.
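The quantum side of the comparison can be computed directly from the standard singlet correlations, using three analyser settings 120° apart (the version of the argument popularized by Mermin; here particle 2’s answer is flipped so that identical questions always agree):

```python
import math

angles = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)  # three settings, 120 deg apart

def p_agree(a, b):
    """With particle 2's answer flipped, the singlet's agreement probability
    is cos^2 of half the angle between the two analyser settings."""
    return math.cos((a - b) / 2) ** 2

same_q = p_agree(angles[0], angles[0])   # 1.0: same question, same answer
diff_q = p_agree(angles[0], angles[1])   # 0.25: below the classical 1/3 bound
print(same_q, diff_q)
```

The 25% agreement on different questions is exactly the level that no classical pre-agreed strategy can reach, which is why the quantum particles appear to be “cheating”.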

Probably the most famous experiment of this type was done in the 1980s by Alain Aspect and collaborators, involving entangled pairs of polarized photons (which are bosons), rather than electrons, primarily because these are easier to prepare.

The implications of quantum entanglement greatly troubled Einstein long before the EPR paradox. Indeed the interpretation of single-particle quantum measurement (which has no entanglement) was already troublesome. Just exactly how does the wave-function relate to the particle? What can one really say about the state of the particle before a measurement is made? What really happens when a wave-function collapses? These questions take us into philosophical territory that I have set foot in already: the difficult relationship between epistemological and ontological uses of probability theory.

Thanks largely to the influence of Niels Bohr, in the relatively early stages of quantum theory a standard approach to this question was adopted. In what became known as the  Copenhagen interpretation of quantum mechanics, the collapse of the wave-function as a result of measurement represents a real change in the physical state of the system. Before the measurement, an electron really is neither spinning up nor spinning down but in a kind of quantum purgatory. After a measurement it is released from limbo and becomes definitely something. What collapses the wave-function is something unspecified to do with the interaction of the particle with the measuring apparatus or, in some extreme versions of this doctrine, the intervention of human consciousness.

I find it amazing that such a view could have been held so seriously by so many highly intelligent people. Schrödinger hated this concept so much that he invented a thought-experiment of his own to poke fun at it. This is the famous “Schrödinger’s cat” paradox. I’ve sent Columbo out of the room while I describe this.

In a closed box there is a cat. Attached to the box is a device which releases poison into the box when triggered by a quantum-mechanical event, such as radiation produced by the decay of a radioactive substance. One can’t tell from the outside whether the poison has been released or not, so one doesn’t know whether the cat is alive or dead. When one opens the box, one learns the truth. Whether the cat has collapsed or not, the wave-function certainly does. At this point one is effectively making a quantum measurement so the wave-function of the cat is either “dead” or “alive” but before opening the box it must be in a superposition state. But do we really think the cat is neither dead nor alive? Isn’t it certainly one or the other, but that our lack of information prevents us from knowing which? And if this is true for a macroscopic object such as a cat, why can’t it be true for a microscopic system, such as that involving just a pair of electrons?

As I learned at a talk by the Nobel prize-winning physicist Tony Leggett – who has been collecting data on this recently – most physicists think Schrödinger’s cat is definitely alive or dead before the box is opened. However, most physicists don’t believe that an electron definitely spins either up or down before a measurement is made. But where does one draw the line between the microscopic and macroscopic descriptions of reality? If quantum mechanics works for 1 particle, does it work also for 10, 1000? Or, for that matter, 10²³?

Most modern physicists eschew the Copenhagen interpretation in favour of one or other of two modern interpretations. One involves the concept of quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size. In effect, this hides the quantum nature of macroscopic systems and allows us to use a more classical description for complicated objects. This certainly happens in practice, but this idea seems to me merely to defer the problem of interpretation rather than solve it. The fact that a large and complex system tends to hide its quantum nature from us does not in itself give us the right to have a different interpretation of the wave-function for big things and for small things.

Another trendy way to think about quantum theory is the so-called Many-Worlds interpretation. This asserts that our Universe comprises an ensemble – sometimes called a multiverse – and  probabilities are defined over this ensemble. In effect when an electron leaves its source it travels through infinitely many paths in this ensemble of possible worlds, interfering with itself on the way. We live in just one slice of the multiverse so at the end we perceive the electron winding up at just one point on our screen. Part of this is to some extent excusable, because many scientists still believe that one has to have an ensemble in order to have a well-defined probability theory. If one adopts a more sensible interpretation of probability then this is not actually necessary; probability does not have to be interpreted in terms of frequencies. But the many-worlds brigade goes even further than this. They assert that these parallel universes are real. What this means is not completely clear, as one can never visit parallel universes other than our own …

It seems to me that none of these interpretations is at all satisfactory and, in the gap left by the failure to find a sensible way to understand “quantum reality”, there has grown a pathological industry of pseudo-scientific gobbledegook. Claims that entanglement is consistent with telepathy, that parallel universes are scientific truths, that consciousness is a quantum phenomenon abound in the New Age sections of bookshops but have no rational foundation. Physicists may complain about this, but they have only themselves to blame.

But there is one remaining possibility for an interpretation that has been unfairly neglected by quantum theorists despite – or perhaps because of – the fact that it is the closest of all to commonsense. This is the view that quantum mechanics is just an incomplete theory, and the reason it produces only a probabilistic description is that it does not provide sufficient information to make definite predictions. This line of reasoning has a distinguished pedigree, but fell out of favour after the arrival of Bell’s theorem and related issues. Early ideas on this theme revolved around the idea that particles could carry “hidden variables” whose behaviour we could not predict because our fundamental description is inadequate. In other words two apparently identical electrons are not really identical; something we cannot directly measure marks them apart. If this works then we can simply use probability theory to deal with inferences made on the basis of information that’s not sufficient for absolute certainty.

After Bell’s work, however, it became clear that these hidden variables must possess a very peculiar property if they are to describe our quantum world. The property of entanglement requires the hidden variables to be non-local. In other words, two electrons must be able to communicate their values faster than the speed of light. Putting this conclusion together with relativity leads one to deduce that the chain of cause and effect must break down: hidden variables are therefore acausal. This is such an unpalatable idea that it seems to many physicists to be even worse than the alternatives, but to me it seems entirely plausible that the causal structure of space-time must break down at some level. On the other hand, not all “incomplete” interpretations of quantum theory involve hidden variables.

One can think of this category of interpretation as involving an epistemological view of quantum mechanics. The probabilistic nature of the theory has, in some sense, a subjective origin. It represents deficiencies in our state of knowledge. The alternative Copenhagen and Many-Worlds views I discussed above differ greatly from each other, but each is characterized by the mistaken desire to put quantum mechanics – and, therefore, probability –  in the realm of ontology.

The idea that quantum mechanics might be incomplete  (or even just fundamentally “wrong”) does not seem to me to be all that radical. Although it has been very successful, there are sufficiently many problems of interpretation associated with it that perhaps it will eventually be replaced by something more fundamental, or at least different. Surprisingly, this is a somewhat heretical view among physicists: most, including several Nobel laureates, seem to think that quantum theory is unquestionably the most complete description of nature we will ever obtain. That may be true, of course. But if we never look any deeper we will certainly never know…

With the gradual re-emergence of Bayesian approaches in other branches of physics a number of important steps have been taken towards the construction of a truly inductive interpretation of quantum mechanics. This programme sets out to understand  probability in terms of the “degree of belief” that characterizes Bayesian probabilities. Recently, Christopher Fuchs, amongst others, has shown that, contrary to popular myth, the role of probability in quantum mechanics can indeed be understood in this way and, moreover, that a theory in which quantum states are states of knowledge rather than states of reality is complete and well-defined. I am not claiming that this argument is settled, but this approach seems to me by far the most compelling and it is a pity more people aren’t following it up…

### 34 Responses to “Spin, Entanglement and Quantum Weirdness”

1. Todd Laurence Says:

Thoughts, words, and….Number!

Professor W. Pauli, Nobel laureate physicist (exclusion principle), 1945….

The Quantum Ancient:

“In quantum physics, natural numbers are considered to be the ultimate structural element of being.”

“A well-known joke about Pauli in the physics community goes as follows: After his death, Pauli was granted an audience with God. Pauli asked God why the fine structure constant has the value 1/(137.036…). God nodded, went to a blackboard, and began scribbling equations at a furious pace. Pauli watched Him at first with great satisfaction, but soon began shaking his head violently: “Das ist ganz falsch!” (This is all wrong!)….

2. Peter,

are you aware of the work of Chris Fuchs and the other “Quantum Bayesians”?

A

3. Hmmm, my comment seemed to delete the URL: http://uk.arxiv.org/abs/1003.5209

4. telescoper Says:

Andrew,

Yes, I was planning to include a paragraph on that but forgot. I’ve now added it at the end. The post grew so long I became tired and forgetful at the end…

Peter

5. […] This post was mentioned on Twitter by Garth Godsman and Chattertrap, Francisco F. Francisco F said: RT @telescoper: Spin, Entanglement and Quantum Weirdness: http://wp.me/pko9D-1Vl […]

6. Anton Garrett Says:

I too am all for hidden variables, but I would go further and say that it is a misleading name. When two systems prepared with identical wavefunctions behave differently upon measurement, it is a clear manifestation of the so-called hidden variables.

Why should we believe in them? Because it is the sacred duty, or more prosaically the job description, of physicists to seek to improve testable prediction. Reject hidden variables, whether in favour of Copenhagen’s “Shut up and calculate” approach (copyright ND Mermin) or crazy rationalisations thereof such as many-worlds, and you are at the dead end of a one-way street without any way to improve testable prediction. It is the business of physicists to answer the question “why does one atom go one way in my Stern-Gerlach apparatus and another go the other way even though the relevant valence electrons have identical spin wavefunction?” To be told that I may not even ask the question is inimical to my calling as a scientist. Why may I not ask it? It is a perfectly good question, after all. In the 19th century the physicists who noticed dust specks jiggling under microscopes (Brownian motion) didn’t moan “maybe they just jiggle for no reason”. They supposed that there was a reason, and that supposition led to modern atomic theory – the specks were being bombarded by atoms too small for the microscope to see. What glories might we not find if we hold our nerve in the present impasse? As Peter says, any such hidden variables have weird properties such as acausality, but we knew that anyway from things like Wheeler’s delayed-choice experiment. (In which you decide whether to measure wave properties or particle properties of electrons approaching a 2-slit experiment AFTER the electrons have traversed the slit, yet you can still get interference patterns or the result that the electron passed through a particular slit!) Granted, acausality might put a bound on prediction, but at least it would be because we don’t know the future rather than because we don’t know the theory – and once you start exploring, you never really know what you will find.

Why the 19th century physicists held their nerve and the 20th century ones ducked out and denied hidden variables is one of the few issues where the prevailing philosophical zeitgeist really does matter. After nearly 100 years, the dust still has not settled from the fact that the formalism of quantum mechanics is not in 1:1 correspondence with things that physicists suppose are ‘out there’, ie you can never measure wavefunctions directly but you need them to calculate what will happen. Wavefunctions are a mix of ontology and epistemology that we need hidden variable theory to disentangle.

The ‘hidden’ variables must also be nonlocal. That was once held to be a big deal in the post-Bell world, but nonlocality has been familiar in physics since Newtonian gravity, as action-at-a-distance; what is new is that (sub-)quantum nonlocality does not weaken with distance. Given that the ‘hidden’ variables are nonlocal and acausal (ie, nonlocal in space and time respectively), it is remarkable that we can predict anything at all, and this is a powerful constraint on hidden-variable theories.

The quantum computing community is regularly uncovering further startling and counter-intuitive predictions of quantum mechanics, which post-Aspect nobody seriously doubts are correct. I see these as telling us further things about hidden variables.

Anton

7. It seems to me that until you propose a theory that makes a different prediction than QM you aren’t doing science. You are just doing philosophy. Alternate universes and no way to get there. Hidden variables that cannot in principle be measured and oh by the way they are connected faster than light. Invisible blue faeries that fly faster than the speed of light to enforce the QM rules. What is the difference?

“Shut up and calculate” isn’t an attempt to explain anything. It is simply an acknowledgment that we have no clue and we may never have any clue. If physics has a bottom level then there cannot be an explanation of “why” that bottom level is as it is. There is no reason not to look for an explanation but you should be careful about demanding one.

As for

“…quantum states are states of knowledge rather than states of reality.”

I agree except that states of knowledge ARE states of reality. With this I think decoherence will give us all we need.

And

“…quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size.”

Is really a bad oversimplification isn’t it? The quantum wave function never really gets destroyed it only gets mixed up with a massive number of other particles. If you could trace down all those particles you could in principle see the quantum wave again. The entire field of quantum computing is dedicated to finding ways to mix the wave function in arbitrarily complex ways keeping track of how it is mixed so that the wave function can still be seen. That’s why a quantum computation must be reversible. If the quantum wave escapes the computer and becomes entangled with the outside world you can no longer keep up with all the state transforms and your calculation collapses.

Anyway decoherence may or may not be the bottom explanation. If it is you will never know it. If it is not I predict that what replaces it will cause even deeper philosophical angst.

8. “I’ve sent Columbo out of the room while I describe this.” Brilliant!

You say that hidden variables fell out of favour after Bell’s work but the reverse is true. Hidden variables were opposed by people like Pauli on ideological grounds (Bohm was a marxist, therefore his theory was dismissed as realist propaganda). Bell himself was a strong supporter of Bohm’s hidden variable theory and most of the (limited) work developing that theory took place after 1965.

It also does not follow that hidden variable theories must be acausal, as Bell pointed out in his “How to teach special relativity”. All you have to assume is that there is in fact a preferred frame in which causality applies (Bohm’s basic theory is non-relativistic anyway, so this is implicit in it). The problem with swallowing this is that we have been taught that physics revolves around symmetry. The idea that Lorentzian symmetry is a phenomenological artefact masking a preferred frame then appears extremely ugly.

As it happens, you get a similar problem explaining spin. In your post, you describe an electron in a superposition of up & down states as being in “quantum purgatory”, but actually it is in a perfectly well defined state with its spin axis pointing in some direction other than +z or -z (this only works for spin-1/2). You might as well say that a spin-up electron is in quantum purgatory between being spin NNW and spin SSE. Nevertheless, if you want to explain quantum mechanics with “hidden” variables you have to break this symmetry by selecting some specific direction to be the “real” pair of distinct quantum states.
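This point can be checked in a couple of lines: with the Pauli matrices as spin operators (in units of ħ/2), the equal superposition of up and down from the post is exactly an eigenstate of spin-x — a definite spin direction, not a limbo state. A small NumPy sketch of my own:

```python
import numpy as np

# Pauli matrices: spin operators along z and x (in units of hbar/2).
sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])

up = np.array([1, 0])
down = np.array([0, 1])

# The "purgatory" state (|up> + |down>)/sqrt(2) from the post ...
psi = (up + down) / np.sqrt(2)

# ... is a definite spin state along +x: an eigenvector of sx with
# eigenvalue +1, even though its z-component is indeterminate (50/50).
assert np.allclose(sx @ psi, psi)       # definite spin along +x
assert np.isclose(psi @ sz @ psi, 0.0)  # <sz> = 0: up/down equally likely
```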

I haven’t read Christopher Fuchs’s article…whatever physics is in it seems to be spread pretty thinly over acres of propaganda…but the problem for me with epistemological approaches to QM is that they can explain “measurement” but they are not so hot on the dynamics. Why should my subjective probabilities evolve according to the Schrödinger equation? Surely that implies that the equation is describing something objective? If so, what (if not quantum states)?

10. Anton Garrett Says:

@ppnl: “It seems to me that until you propose a theory that makes a different prediction than QM you aren’t doing science. You are just doing philosophy.”

That’s a bit harsh. If you are actively hunting for a theory that makes a testable prediction then you are doing theoretical physics. Philosophy is arguing that you should or should not look for such a theory.

“Hidden variables that cannot in principle be measured and oh by the way they are connected faster than light.”

My objection is that nobody has given a decisive reason *why* they cannot be measured. And they are not superluminal if you suppose that they are acausal – which you anyway have to assume for other reasons (eg, delayed choice expts). I accept that Paddy is right in saying you can get round it by renouncing Lorentz covariance, but I have no taste for that road.

Penrose? His combinatorial networks as an attempt to assemble spacetime at the quantum level seem to me to be another depressing confusion of epistemology and ontology. As for twistors, they are part of a higher-order Clifford algebra and if you view them from that perspective they do not seem to me to comprise the Holy Grail. I admire Penrose rather for his GR singularity theorems, his relativity insights (Penrose diagrams) and his pure-mathematical work on tilings.

Anton

11. Anton Garrett: “That’s a bit harsh. If you are actively hunting for a theory that makes a testable prediction then you are doing theoretical physics. Philosophy is arguing that you should or should not look for such a theory.”

I agree and that’s why I said: “There is no reason not to look for an explanation but you should be careful about demanding one.”

Contrast this with this – Anton Garrett: “Why should we believe in them? Because it is the sacred duty, or more prosaically the job description, of physicists to seek to improve testable prediction.”

Sacred duty? Sometimes you have to accept limits. Ever since Godel, math has had limits on what is provable in any system. Thermodynamics puts limits on the efficiency of a heat engine. Information theory puts limits on how compressible a message is. Maybe QM really is running up against a fundamental limit on what can be predicted. Well, maybe not, but in either case elevating the endeavor to a “sacred duty” risks sidetracking physics into the equivalent of an endless search for a perpetual motion device.

Anton Garrett:”My objection is that nobody has given a decisive reason *why* they cannot be measured. And they are not superluminal if you suppose that they are acausal – which you anyway have to assume for other reasons (eg, delayed choice expts).”

If you can measure them then you gain information about distant particles in a way that probably allows you to send information faster than light. It is the inability to measure them that makes them acausal rather than superluminal. Anyway what is the sense in an acausal variable? We need it to explain what happened but it did not cause what happened? It looks like a place holder for “that’s just how stuff works.”
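The tension described here can be made concrete with what standard QM actually predicts for a singlet pair: the joint outcome probabilities are P(s₁, s₂) = (1 − s₁s₂ cos(θₐ − θᵦ))/4 (the textbook singlet result), yet one side’s marginal never depends on the other side’s analyser setting — correlations without signalling. A quick NumPy check (my toy, not from the thread):

```python
import numpy as np

def joint_probs(theta_a, theta_b):
    """Joint outcome probabilities for spin measurements along angles
    theta_a and theta_b on the two particles of a singlet pair.
    Standard result: P(s1, s2) = (1 - s1*s2*cos(theta_a - theta_b)) / 4."""
    c = np.cos(theta_a - theta_b)
    return {(s1, s2): (1 - s1 * s2 * c) / 4
            for s1 in (+1, -1) for s2 in (+1, -1)}

# However the first analyser is oriented, the second side's marginal
# probability of "up" stays exactly 50% -- no usable signal:
for theta_a in (0.0, 0.7, np.pi / 2, 2.0):
    p = joint_probs(theta_a, theta_b=0.3)
    p_harry_up = p[(+1, +1)] + p[(-1, +1)]
    assert np.isclose(p_harry_up, 0.5)
```

Hidden variables that could actually be read out would let you bias these marginals — which is exactly the faster-than-light signalling worry raised above.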

Anton Garrett: “I accept that Paddy is right in saying you can get round it by renouncing Lorentz covariance, but I have no taste for that road.”

Yeah, that’s ugly. God would have to shoot himself. But the easiest way to avoid this kind of ugly is to hide your hidden variables.

12. Anton Garrett Says:

OK ppnl, let’s come back to earth. I want to know whether the UP detector or the DOWN detector rings when the next electron (in a silver atom) passes through a Stern-Gerlach apparatus. I am not happy to be told that the question is out of bounds. Is it not the business of physicists to seek answers to such questions?

I can even see the effect of the hidden variables – they aint so hidden – in the fact that some electrons go UP and some go DOWN. What I can’t do today is *influence* them beyond a limited extent. But if I never consider this question then I stand no chance of succeeding…

Anton

13. Anton Garrett: “OK ppnl, let’s come back to earth. I want to know whether the UP detector or the DOWN detector rings when the next electron (in a silver atom) passes through a Stern-Gerlach apparatus. I am not happy to be told that the question is out of bounds. Is it not the business of physicists to seek answers to such questions?”

This is exactly how mathematicians reacted to Godel. They were very unhappy about having limits placed on what they could do. Most of them got over it.

This is also exactly how some science fiction fans react to being told that they can’t exceed the speed of light.

Anton Garrett: “But if I never consider this question then I stand no chance of succeeding…”

I never suggested you never consider the question. I simply suggested that elevating it to a “sacred duty” is maybe counterproductive. You can ask any question you want. You just can’t dictate the form the answer will take.

QM allows acausal correlations between distant measurements while at the same time preventing any information from traveling faster than light. Try recreating this feature in even a toy hidden variable theory that allows you to predict a local measurement.

14. Anton Garrett Says:

@ppnl: I don’t regard Goedel’s work as an accurate analogy for the present situation. What has been shown, most strikingly by Bell but also by others, is not a “no-hidden-variable theorem” but that any viable hidden variable theory must have some highly counter-intuitive properties. At this point you can either allow your intuition to be educated by nature or you can say “too weird for me – I’d rather say that HVs don’t exist and there is NO REASON why one electron goes up and another with an identical wavefunction goes down”.

I prefer the first alternative. I accept the freedom of others to take the latter path, but suppose somebody holding my views eventually learns how to tweak the HVs – that person will be able to do things that the others can’t, which is what scientific advance is all about.

I can describe the situation using religious terminology (the sacred duty of scientists) or commercial terminology (the business of scientists, the job description of scientists) but the idea is the same.

I expect those people who are hostile to science to be anti-scientific. But scientists themselves? Something in the zeitgeist has infected the scientific community.

Anton

15. I find Godel to be a very good analogy. The psychological reaction to a fundamental change in the epistemology of the field is the same.

Anton Garrett: “What has been shown, most strikingly by Bell but also by others, is no[t] a “no-hidden-variable theorem” but that any viable hidden variable theory must have some highly counter-intuitive properties.”

What is most striking is how high the bar has been set against a hidden variable theory. Once you accept acausality hidden variables seem superfluous anyway. In fact I suspect that a large percentage of people arguing for hidden variables are doing so to avoid acausality.

Anton Garrett: “At this point you can either allow your intuition to be educated by nature or you can say “too weird for me – I’d rather say that HVs don’t exist and there is NO REASON why one electron goes up and another with an identical wavefunction goes down”.

I’m not saying there are no HVs. I’m saying there are profound difficulties with the whole idea and currently no empirical evidence or theoretical need for them. The need for them arises from a misapplication of our classical intuition in a totally nonclassical domain. Our monkey brains require it.

Anton Garrett: “I prefer the first alternative. I accept the freedom of others to take the latter path, but suppose somebody holding my views eventually learns how to tweak the HVs – that person will be able to do things that the others can’t, which is what scientific advance is all about.”

Like communicate faster than light? There is very little room for tweaking hidden variables without doing strange things like that. Well I hope you can communicate faster than light but that’s not where to bet your money.

Anton Garrett: “I can describe the situation using religious terminology…”

That’s clear. The question is can you not.

Anton Garrett: “I expect those people who are hostile to science to be anti-scientific. But scientists themselves? Something in the zeitgeist has infected the scientific community.”

Apparently not. Any disagreement with the need for HVs is scientific apostasy. This need to describe the controversy in absolute religious terms is a clear indication of the psychological reaction I mentioned above. I take a more pragmatic view. Show me how to make hidden variables work and I will be impressed. Until then I’m not impressed with intuitive arguments that they must exist. I’m not saying HVs don’t exist. I’m saying show me. Ultimately that is the position that defines science.

16. Anton Garrett Says:

@ppnl:

“I suspect that a large percentage of people arguing for hidden variables are doing so to avoid acausality.”

If so then they are misinformed, as acausality is unavoidable in view of things like the delayed-choice experiment.

“…there are profound difficulties with the whole idea [of HVs] and currently no empirical evidence… for them.”

There is very clear evidence, in the differing behaviour of systems having identical wavefunctions. By your reasoning there would be no reason for Brownian motion and physics would still be stuck in the pre-atomist era.

“The need for them arises from a misapplication of our classical intuition in a totally nonclassical domain. Our monkey brains require it.”

The definition of an explanation is something that makes sense to our speaking-monkey brains.

“I’m not saying HVs don’t exist. I’m saying show me. Ultimately that is the position that defines science.”

You’re actually saying a bit more than that, ie Don’t bother looking for them. In that case nobody would find them even if they existed, and physics would be stuck indefinitely. Thankfully physicists of the stature of Prof Coles are starting to out themselves as believers in HVs, and the subsequent generation can resume the task that Copenhagen hobbled for many decades.

Anton

17. Anton Garrett: “The definition of an explanation is something that makes sense to our speaking-monkey brains.”

There is no guarantee that such an explanation exists. Our brain evolved to help a stone tool using primate survive on a savanna. There likely is more between heaven and earth than is dreamed of in our monkey brains.

Anton Garrett: “There is very clear evidence, in the differing behaviour of systems having identical wavefunctions. By your reasoning there would be no reason for Brownian motion and physics would still be stuck in the pre-atomist era.”

Except that any attempt to make a theory using hidden variables requires you to either hide them even in principle or accept that FTL communication is possible. That’s a very, very different situation from Brownian motion.

Hey, I’m rooting for FTL communication. I’m just not betting on it.

Anton Garrett: “You’re actually saying a bit more than that, ie Don’t bother looking for them. In that case nobody would find them even if they existed, and physics would be stuck indefinitely”

I am saying no such thing and don’t appreciate having words placed in my mouth. I simply see the vast problem with hidden variables and I see no particular need for them. Everyone has the right to look for them and I reserve the right to look for them myself.

What I’m saying is that you need to understand the difficulty, recognize that the universe isn’t bound by your (or my) intuition and accept that people who do not look for hidden variables are not hostile to science.

In one sense this controversy has happened before. When Newton first proposed his theory of gravity it was rejected because nobody could accept that strange “action at a distance” thing. However it was such a successful theory that people adopted the “shut up and calculate” attitude. Gravity was accepted as a fundamental fact of nature.

Well now we have strong reason to believe that there is an underlying mechanism to “explain” gravity. Gravity turns out not to be a fundamental law. But note a few things:

1) It took three hundred years to get here. Continuing to obsess over action at a distance would not have been productive.

2) Action at a distance has been replaced with a world of virtual particles, probabilities and twin paradoxes. Newton era people would have far more problems with this than with action at a distance.

3) We really have not solved the action at a distance thing anyway since no material real particle travels between the objects.

The point is that QM could be fundamental in the way Newtonian gravity was not. Even if QM is not fundamental the thing that replaces it will likely be far less acceptable to us than randomness and may take another 300 years to develop. And even then it may not solve the randomness problem the way you want. The universe is funny that way.

“Shut up and calculate” was a good pragmatic position for gravity 300 years ago and it is a good pragmatic position for QM now. It is probably as pointless to obsess over QM randomness now as it was to obsess over action at a distance then.

Again I’m not saying there are no HVs and I’m not saying you should not look for them. I’m saying that you should maintain some perspective and avoid turning it into a holy war. Three hundred years from now all sides of the argument will likely look silly. That’s what science does.

18. Anton Garrett Says:

@ppnl:

Superluminality is only required to explain the observations if you try to stick with causality. But I’ve already explained that causality has had it. This is not a conclusion I find congenial, but that’s the way it is.

“Don’t bother looking for them” were not your words – apologies if you thought I was making up a quote – but I do believe that it represents a reasonable summary of your position on HVs. I don’t believe that a position which says “Show me” on the one hand and “Don’t bother to look” on the other is very consistent. But I am glad that we respect each other’s freedom to disagree.

A final question: What is your explanation for why two systems having identical wavefunctions behave differently, please?

Anton

19. Anton Garrett: “Superluminality is only required to explain the observations if you try to stick with causality. But I’ve already explained that causality has had it. This is not a conclusion I find congenial, but that’s the way it is.”

That’s true of QM as it stands now. But if you attempt to alter QM in such a way as to give you greater predictive power that falls apart.

Anton Garrett: ““Don’t bother looking for them” were not your words – apologies if you thought I was making up a quote – but I do believe that it represents a reasonable summary of your position on HVs.”

It absolutely is not and I am getting tired of saying so. Everything in science is provisional and you don’t ever, ever, ever need “permission” to check even the most well established theories.

Anton Garrett: “A final question: What is your explanation for why two systems having identical wavefunctions behave differently, please?”

I don’t have one. I don’t insist that there is one. I also don’t insist that there isn’t one but I do note there are vast theoretical difficulties with constructing any such explanation.

The difference between us is that you insist that there must be an explanation. I’m saying “well maybe and maybe not”. You are insisting the universe follow your intuition. I’m simply pointing out that the universe is well known for violating our intuitions. Thus I’m not impressed with arguments to intuition.

In any case QM is so successful that it is hard to see how you will shoehorn in a deeper theory at low energy. The deeper theory would probably be an element of a deeper understanding of string theory or LQG or even something totally new.

20. Anton Garrett Says:

@ppnl:

“The difference between us is that you insist that there must be an explanation.”

This time I wish to protest at your summary of my position. What I’m saying is that if there is an explanation then only those who are prepared to look stand any chance of finding it, whereas those who say there isn’t an explanation rule themselves out of the running. I also maintain that it is the business/task/job/vocation/calling of scientists to look – not least because of the etymology of the word ‘science’.

Anton

21. Anton Garrett: “This time I wish to protest at your summary of my position. What I’m saying is that if there is an explanation then only those who are prepared to look stand any chance of finding it, whereas those who say there isn’t an explanation rule themselves out of the running.”

But again I never said that there was no explanation. I said there does not have to be an explanation. And I never said I was not prepared to look. In fact it is exactly by looking that I understand how difficult a task it is.

Anton Garrett: “I also maintain that it is the business/task/job/vocation/calling of scientists to look – not least because of the etymology of the word ‘science’.”

I agree except you do not need to believe in order to look. Take time travel for example. I think few physicists think time travel is possible. Yet you wouldn’t know it from the amount of work done on time travel. Look at a list of names of people who have published and you will see Godel, Feynman, Guth, Carroll, Hawking and many others. I recently read about the effect time travel would have on the computational complexity class of quantum algorithms. I seriously doubt the author believes time travel is possible but I don’t know. I don’t even care.

You can ask any question you want but you cannot demand the answer be of a particular form. It is belief that blinds and limits us not doubt.

22. Anton Garrett Says:

@ppnl:

“It is belief that blinds and limits us not doubt.”

This might be our ultimate difference of opinion. It is fashionable today to say that science is built on doubt – ie, don’t take anybody’s word for it, go test it for yourself. But in fact every scientist has spent countless hours reading and listening to the opinions of other scientists; no one person can do all of the experiments on which contemporary scientific knowledge is based. And even beneath that is the supposition that science can be done at all, that we can make some kind of sense of things, that one electron will behave like another, etc.

You give reasons why it will be hard to find HVs while remaining consistent with what we know to date. I agree. Cutting-edge research IS hard. Einstein himself said as much. But I doubt that the search for HVs is any more daunting to the human mind than the possibility of powered heavier-than-air flight seemed 400 years ago. The history of Sci/Tech is full of “They said it couldn’t be done”s. Re HVs, I am not saying “Seek, and you shall find,” but I *am* saying “Don’t seek, and you assuredly won’t find.”

Nonlocality in both space and time (ie, acausality) suggests to me that space and time somehow play a different role at the next level. It might be worth dusting off Feynman’s theory that there is actually only one electron in the universe, which we somehow see in many places. I believe also that light will play a special role in any deeper theory. But beyond that, I know not. If I did then you would have heard my name announced last Tuesday…

Anton

23. Anton Garrett: “This might be our ultimate difference of opinion. It is fashionable today to say that science is built on doubt – ie, don’t take anybody’s word for it, go test it for yourself.”

No, I’d say it was another clean miss. Besides that it is the exact opposite of what you were accusing me of before. First I’m unwilling to look into some ideas and then I’m a radical skeptic unwilling to trust anything unless I do the work myself.

I didn’t say you had to doubt everything. I said you were free to doubt anything. The kind of radical doubt you are attributing to me would be the death of science.

Anton Garrett: “Re HVs, I am not saying “Seek, and you shall find,” but I *am* saying “Don’t seek, and you assuredly won’t find.””

And again I’m not saying don’t seek. I’m saying you really need to be careful about letting your intuition limit what you seek. The one bet you can make is that your intuition will be confounded.

You keep suggesting that anyone who does not assume HVs is somehow violating the “sacred principles” of science. I find that attitude to be very limiting. One possibility is that quantum indeterminacy is fundamental. If you don’t explore this possibility and work out the implications you have limited yourself.

For example what are the implications for quantum computing? The power of a quantum computer depends on being able to place the computer in a superposition of an essentially infinite number of states. But in a hidden variable theory the computer is always in some particular state and it is limited to a finite number of states. Can you recreate the power of a quantum computer in a hidden variable theory? Are quantum computers even possible?

Anton Garrett: “Nonlocality in both space and time (ie, acausality) suggests to me that space and time somehow play a different role at the next level.”

Exactly so. This is called the background problem. Instead of space and time being assumed at the beginning, the goal of both string theory and LQG is to be background independent. That way you see how space and time arise from the theory rather than simply embedding the theory in a pre-existing space and time.

In order to solve this problem they play with all kinds of strange ideas from time travel to HVs. You don’t have to believe in any of these in order to work out the consequences of a given theory.

24. Anton Garrett Says:

Well ppnl, in that case I admit that I don’t know what you believe. I leave it to readers to decide whether I have mis-summarised you or whether your position is not wholly consistent. If I have mis-summarised you then I apologise.

Anything that quantum theory can do, HVs can do, but not vice-versa. So there is no difficulty about quantum computers.

“One possibility is that quantum indeterminacy is fundamental. If you don’t explore this possibility and work out the implications you have limited yourself.”

There are no implications. If indeterminacy is fundamental then you have reached the end of the line. But how do you know it is fundamental if you decline to try to do better by seeking HVs? That means you have reached the end of your line without knowing whether it is the end of THE line.

String theory and LQG are both quantum theories, both turn observables into non-commuting operators, both renounce determinism. I am well aware of the difference between passive and active transformations of coordinates, which is what I think you are talking about. I mean something deeper. But I can’t be more specific – as Einstein said, if we knew what we were talking about it wouldn’t be called research…

Anton

25. Anton Garrett: “Well ppnl, in that case I admit that I don’t know what you believe.”

Good. The point is that if you let it matter overmuch what you believe then you are doing it wrong.

Anton Garrett: “Anything that quantum theory can do, HVs can do, but not vice-versa. So there is no difficulty about quantum computers.”

Don’t you actually have to have a theory in order to say what it can do? Who knows maybe there is a hidden variable theory and it will be proved when the impossibility of quantum computers is discovered. I think one famous physicist, can’t remember his name, did develop such a theory and has predicted that quantum computers will fail if pushed beyond a certain size.

And anyway you never addressed my point. A quantum computer can reach an exponentially larger number of states than a classical computer. In order to describe the state of a 300-bit quantum computer you would need more bits than there are atoms in the observable universe. If you do this with HVs you better bring a lot of them.
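The scaling claim is easy to check. Taking the commonly quoted rough figure of 10^80 atoms in the observable universe (an assumption on my part; only the order of magnitude matters), the crossover where the raw amplitude count alone exceeds the atom count sits at a few hundred qubits:

```python
# Commonly quoted rough order of magnitude for the number of atoms
# in the observable universe.
ATOMS = 10 ** 80

# An n-qubit state is specified by 2**n complex amplitudes.
# Find the smallest n whose amplitude count alone exceeds the atom count.
n = 0
while 2 ** n <= ATOMS:
    n += 1
print(n)  # 266
```

So by roughly 266 qubits there are already more amplitudes than atoms, before even counting the bits needed to store each amplitude to any precision.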

Anton Garrett: “There are no implications. If indeterminacy is fundamental then you have reached the end of the line.”

Not at all. QM is a theory of mechanics, not of things. It cannot predict what things are out there; it can only tell us how to find them.

26. Joy Christian, PhD, from the Perimeter Institute for Theoretical Physics in Canada, offers an alternative explanation of “entanglement” as simply an illusion caused by an erroneous choice of framework within which the phenomenon is explained. J. Christian claims to have disproved the relevance of Bell’s inequality to explaining the effect. He provides a mathematical framework that explains the phenomenon without violating the locality principle.

I understand that many people will simply stay away from arguing in favor of a not-so-popular-in-the-masses explanation for various reasons. At the very least, people’s careers will be at stake in case the new explanation turns out to be false.

Moreover, the theory in its current state allows for successful commercial application. So for a significant portion of the public, profit is enough.

Heated new age debate about “how everything is connected” provides fertile ground for many pseudo-scientific journalists and writers.

However, it seems to me reasonable to err on the side of simplicity and choose his theory (recalling Occam’s razor).

If you read his papers (available at arXiv, e.g. http://arxiv.org/abs/quant-ph/0703179 ), what is your opinion?

• Anton Garrett Says:

I would guess that Joy Christian is “she” rather than “he,” although I don’t know. I remember seeing these preprints and not agreeing, although I’ve forgotten where the point of departure was. The reason I don’t agree is this. Bell’s theorem is actually an argument involving inference. A measurement of the spin of one particle tells you something (the great thing is that you don’t have to be more specific) about the internal state of that particle at the instant of measurement. Assumptions of locality and causality are involved at this point. From partial knowledge of the internal state of one particle, we can infer something about the internal state of a second particle that is correlated with it (specifically, that the two are in a singlet state). From there we can infer something about the results of spin measurement on the second particle. Observations made on an ensemble of particle pairs violate these predictions, however. (QM is nowhere involved in any of this, although it correctly predicts the observations and clearly inspired Bell.) There simply isn’t anything to give way other than locality and/or causality. But, as I’ve said, nonlocality has been bust since Newtonian gravity (under the name “action at a distance”); likewise acausality since JA Wheeler’s “delayed choice” experiment described above. I find the latter conclusion shocking, but the job of physicists is to work with what they find.
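The argument can be made concrete with the CHSH form of Bell’s inequality. A short NumPy sketch (my illustration; the angles are the standard optimal choice, and E(a, b) = −cos(a − b) is the textbook singlet correlation): any local hidden-variable strategy is bounded by 2, while the quantum singlet reaches 2√2.

```python
import numpy as np
from itertools import product

# CHSH combination: S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# 1) Local hidden variables: each particle carries pre-assigned +/-1
#    answers (A0, A1) and (B0, B1) for the two settings on its side.
#    Enumerating all 16 deterministic strategies gives the Bell bound:
best_local = max(
    abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
    for A0, A1, B0, B1 in product((+1, -1), repeat=4)
)
assert best_local == 2  # no local HV model can exceed 2

# 2) Quantum singlet pair: E(a, b) = -cos(a - b).  With suitable
#    analyser angles the quantum value exceeds the local bound:
def E(a, b):
    return -np.cos(a - b)

a, a2, b, b2 = 0.0, np.pi / 2, 5 * np.pi / 4, 3 * np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
assert np.isclose(S, 2 * np.sqrt(2))  # ~2.83 > 2
```

No quantum mechanics enters part 1: the bound of 2 follows from locality and causality alone, which is exactly why something in those assumptions has to give.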

Anton

27. I understood that you have a different point.

For some reason, his (Joy’s) personal page is only available in google cache at the moment. here it is with a photo (converted to pdf): http://www.megaupload.com/?d=XUVA2NUH. So he is indeed a he.

Well, I re-watched some lectures of Leonard Susskind and understood that what I was talking about was actually yesterday’s news. What JC (Joy) does is simply reconcile Bell’s inequality with quantum mechanics without mystic spooky 10000x-the-speed-of-light action. But enough about JC.

The first time I watched Mr. Susskind, I probably missed that Bell’s inequality has nothing to do with quantum mechanics. So my previous post missed that point.

Moreover, there are no violations of either locality or causality in quantum entanglement, which is btw also called EPR correlation. We know the component of the spin for the second electron simply because the sum of the same components of the spin on the two particles is zero (the spins are opposite). Many people seem to fall into the mysticism pitfall (psychologists say that we are “wired” this way).
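For the singlet state the spin components are opposite, so the sum of the two z-components is zero and the outcomes are perfectly anticorrelated. A small NumPy check of my own:

```python
import numpy as np

# Singlet state of two spin-1/2 particles: (|01> - |10>)/sqrt(2).
up, down = np.array([1, 0]), np.array([0, 1])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

sz = np.array([[1, 0], [0, -1]])  # spin-z in units of hbar/2
I = np.eye(2)

# The total z-component annihilates the singlet: the two spins
# always sum to zero along any common axis.
total_sz = np.kron(sz, I) + np.kron(I, sz)
assert np.allclose(total_sz @ singlet, 0)

# And the z-outcomes are perfectly anticorrelated: <sz x sz> = -1.
assert np.isclose(singlet @ np.kron(sz, sz) @ singlet, -1.0)
```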

Start from 0:19:48 and watch till at least 0:26:05 or further.
(This is lecture 5, part 1, provided by Stanford University.)

Still, I must thank you for the post, Anton. It did help me to pull many facts together.

28. PS
I must add that my ONLY goal was to clarify the popular belief that there exists some sort of “connection” between the two particles in the singlet state.

I purposefully omitted the word “entangled” as it creates a bias in favor of the popularized interpretation. That bias is precisely the implication of “our observation creating a reality” in a philosophical sense. If we approach a phenomenon with a different set of assumptions, we may build a totally different theory and a totally different “reality.”

If both interpretations agree with test results, everyone will be correct in his/her own right. So the only difference of opinion becomes whether to use Occam’s razor, which is left to one’s personal liking.

Strictly speaking, we actually cannot prove a theory to be true. We can only fail to prove a theory wrong. So there exists a possibility for each theory to be true.

I think we have a chance to come to an agreement here.

• Anton Garrett Says:

From what you say I can’t work out any proposition to which I assent and you disagree… I knew that at some point Christian and I would diverge, for the reasons I gave above, but it is some time since I read his paper and I can’t remember exactly where in it I said “Hold on”.

29. It was more or less a rhetorical statement. I basically meant to withdraw my initial argument related to JC as my understanding of the subject was less clear than it is at present.

The amount of press the topic gets creates a lot of confusion. The “action-at-a-distance” interpretation is even taught at (at least) one reputable American university. But, according to Mr. Susskind (in my own words), there’s nothing mystical about it.

At present, I have sufficient understanding of both views to stop arguing either in favor of or against any of them. My personal preference is that the singlet state of two particles is simply the lowest-energy state with opposite spins (hence, all components of their spins are opposite) and that when the two particles are separated by large distances, nothing really happens between them.

At the same time, I am quite comfortable with some physicists trying to prove the existence of a connection between the two. Every physicist has their own version of reality; that’s their job.

As for Wheeler’s delayed choice experiment, I will delay drawing any conclusions at the moment as very few experiments have been performed so far and I know very little about the measurement procedures and calculations. I was shocked by the quantum correlation effect because of all the publicity; so there is a chance that ‘Wheeler’s delayed choice’ will have its own logical explanation as soon as all the excitement subsides.

Again, my previous post was a rhetorical statement. Perhaps, I should have been more clear.

30. Don’t confuse hard science with spirituality …
http://emergent-hive.com/2010/11/wheeler/

31. Hello, thanks for the interesting blog.

I am somewhat confused by this piece of text out of your blog;

If, on the other hand, Dick makes an up-down measurement but Harry measures left-right then Dick’s answer has no effect on Harry, who has a 50% chance of getting “left” and 50% chance of getting right.

and a video from a lecture by Roger Penrose, explaining the findings made by Lucien Hardy concerning the rules of entanglement for spin-1/2;

You say that in the case when there is no corresponding measurement the outcome from Dick’s measurement has no effect on Harry’s outcome, but in the video there is an effect, for instance |Left> and |Down> never occurs or |Down> and |Left> never occurs. Harry’s electron will never be spin left if Dick’s electron measured spin down, it seems to me that there is an influence.
There is a high probability that I misunderstood it all; I don’t have training in quantum mechanics. I would be grateful if you could give an explanation.

Sincerely,
Peter