The theory of quantum mechanics describes the laws of physics at the smallest scale, yet these laws must be taken into account to explain many of the phenomena observed by astronomers.
The foundations of quantum mechanics date from the start of the 20th century, when Max Planck proposed that the shape of the black-body radiation curve – seen in the spectrum of light produced by a star, for example – could be explained by assuming that the energy levels of the radiation are “quantised”; in other words, the energy of the radiation can only take specific discrete quantities.
Black-Body Radiation and Planck’s Constant
Black-body radiation is the electromagnetic radiation produced by an idealised, perfectly opaque and non-reflective object – referred to as a “black body” because it absorbs all electromagnetic radiation incident upon it. A black body is also a perfect idealised emitter of electromagnetic radiation; for example, the spectrum of the light from a glowing piece of iron, heated by a blacksmith’s furnace, approximates the idealised spectrum of a black body, as does the light produced by a star. A graph of the spectrum of an ideal black body, plotted as intensity against wavelength, has a characteristic shape and peak wavelength that depends only on the temperature of the body. Since the wavelength of light determines its colour, the colour of a star is related to its temperature – cooler stars are red, while hotter stars appear blue.
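The relationship between a black body’s temperature and its peak wavelength is given quantitatively by Wien’s displacement law. A quick illustrative calculation in Python (the temperatures below are rough, illustrative values):

```python
# Wien's displacement law: the peak wavelength of black-body emission
# depends only on temperature. Constants and temperatures are approximate.

WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength(temperature_k):
    """Peak wavelength (metres) of black-body emission at temperature T."""
    return WIEN_B / temperature_k

# The Sun's surface is roughly 5800 K: the peak lies in visible light.
sun_peak = peak_wavelength(5800)       # ~5.0e-7 m, i.e. ~500 nm
# A cool red star at ~3000 K peaks in the near infrared.
red_star_peak = peak_wavelength(3000)  # ~9.7e-7 m

print(f"Sun peak: {sun_peak * 1e9:.0f} nm")
print(f"Cool star peak: {red_star_peak * 1e9:.0f} nm")
```

The cooler star peaks at a longer (redder) wavelength, matching the colour–temperature relation described above.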
In 1900, Max Planck showed that the shape of the black-body radiation spectrum could be described mathematically by assuming that the energy of the radiation emitted can only take discrete “quantised” values, rather than any value within a continuous range of energies. Previous calculations of intensity, based on continuous, non-quantised energy levels, predicted an intensity that grew without limit at short wavelengths, diverging from empirical observation at around the wavelength of ultraviolet light. Hence this problem was known at the time as the “ultraviolet catastrophe”.
Planck’s calculations were based on the idea that electromagnetic radiation is emitted by hypothetical resonant oscillating bodies – later identified as the electrons in atoms on the surface of the black body – and that these electrons can oscillate only at specific, quantised, resonant frequency levels. Planck also showed that the frequency of the oscillation was related to the energy of the electromagnetic radiation produced, via the formula:
Energy = h x frequency
where h is known as Planck’s constant, a fundamental constant of nature with a value of approximately 6.626 × 10⁻³⁴ m² kg / s.
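The Planck relation can be sketched as a one-line calculation; the example frequency below is an illustrative value for green visible light:

```python
H = 6.626e-34  # Planck's constant, J s

def photon_energy(frequency_hz):
    """Energy (joules) of a quantum of radiation: E = h * f."""
    return H * frequency_hz

# Green visible light has a frequency of roughly 6e14 Hz:
print(photon_energy(6e14))  # ~4.0e-19 J per quantum
```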
The Photoelectric Effect
The photoelectric effect is a process by which electrons are emitted from materials such as metals when exposed to electromagnetic radiation of high enough energy. Heinrich Hertz discovered the photoelectric effect in 1887. (N.B. the photoelectric effect should not be confused with the photovoltaic effect, which is a related but slightly different process.) In 1900, Philipp Lenard discovered that certain gases also emit electrons, via the photoelectric effect, when illuminated by ultraviolet light. Lenard noted that the energies of the individual electrons emitted were dependent on the frequency of the light and not its intensity, as would have been expected based on James Clerk Maxwell’s wave theory of light.
In 1905, Albert Einstein suggested that this could be explained if the electrons only absorb light in discrete packets of energy, or “quanta”. The term photon later came to be used to refer to a single “quantum” of light. Einstein proposed that the Planck relation (Energy = h x frequency) also applied to photons, relating the energy possessed by an individual photon – and hence its ability to eject an electron from the surface of a material, via the photoelectric effect – to the frequency of the light. If the light used is below a certain threshold frequency, no electrons are emitted, as the energy of each individual photon is not enough to free an electron, irrespective of the intensity of the light, which is simply a measure of the number of photons.
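Einstein’s threshold argument can be illustrated numerically. The sketch below assumes a work function (the minimum energy needed to free an electron) of roughly 2.3 eV, a standard textbook value for sodium; the frequencies are illustrative:

```python
H = 6.626e-34   # Planck's constant, J s
EV = 1.602e-19  # joules per electronvolt

def max_ejected_energy(frequency_hz, work_function_ev):
    """Einstein's photoelectric relation: surplus = h*f - work function.
    Returns the maximum kinetic energy (J) of an ejected electron,
    or None if each photon is below the threshold energy."""
    surplus = H * frequency_hz - work_function_ev * EV
    return surplus if surplus > 0 else None

# Red light (~4.3e14 Hz) carries only ~1.8 eV per photon: no electrons
# are emitted, however intense the beam.
print(max_ejected_energy(4.3e14, 2.3))   # None
# Ultraviolet light (~1.0e15 Hz, ~4.1 eV per photon) does eject electrons.
print(max_ejected_energy(1.0e15, 2.3))   # ~2.9e-19 J
```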
Einstein received the 1921 Nobel Prize in Physics for his explanation of the photoelectric effect, although his theory was not accepted by many when it was first proposed, since it seemed to contradict Maxwell’s wave theory of light.
The Compton Effect
In 1923, Arthur Compton showed that photons can be scattered by free, or weakly bound electrons in a way that could only be explained if a photon is considered to possess particle-like properties.
This, combined with Einstein’s description of the photoelectric effect, finally led physicists to abandon attempts to describe quantum behaviour by simply imposing quantised limitations on classical theories, and completed the transition from these “old” quantum theories to the new physics of quantum mechanics.
Young’s Double Slit Experiment and Wave-Particle Duality
If light is allowed to fall on a screen after passing through two identical slits, each of a narrower width than the wavelength of the light used, a diffraction pattern of alternating light and dark bands is observed. This pattern is caused by the light waves emanating from the two separate slits interfering with each other. Where a wave peak in the light from one slit falls on the screen at a point where a wave trough also falls upon the screen from the second slit, a dark band occurs. This is due to destructive interference, where the phases of the two waves cancel each other out. Where two peaks or two troughs coincide on the screen, or the two waves are in phase with each other, the waves reinforce each other, and this constructive interference causes a bright band of light to appear at that position on the screen.
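The positions of the bright bands can be estimated with the standard small-angle approximation, in which the m-th bright fringe falls at y ≈ m λL/d for slit separation d and screen distance L. The numbers below are illustrative:

```python
def fringe_positions(wavelength, slit_separation, screen_distance, orders=3):
    """Approximate positions (m) of bright interference fringes on the
    screen. Bright bands occur where the path difference from the two
    slits is a whole number of wavelengths; for small angles the m-th
    bright fringe sits at y = m * wavelength * L / d."""
    return [m * wavelength * screen_distance / slit_separation
            for m in range(-orders, orders + 1)]

# Green light (500 nm), slits 0.1 mm apart, screen 1 m away:
for y in fringe_positions(500e-9, 1e-4, 1.0):
    print(f"{y * 1e3:+.1f} mm")  # bright bands spaced 5 mm apart
```

Dark bands fall midway between these positions, where the two waves arrive exactly out of phase.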
This experiment was first performed by Thomas Young in 1801 and provided the first direct evidence that light is a wavelike phenomenon, seemingly ruling out Sir Isaac Newton’s “corpuscular” theory that light is made up of separate particles. This experiment is, therefore, often known as Young’s double slit experiment.
In 1909, four years after Einstein first proposed the existence of photons – individual, particle-like quantum packets of light energy – a version of this experiment was conducted in which the intensity of the light was reduced to such a level that single photons passed through the slits one at a time. An individual, very faint point of light was observed on the photographic plate, used as the detector screen, for each photon that passed through the slits. As the positions of more and more individual photons were recorded on the detector screen, the same pattern of light and dark bands observed in Young’s original experiment with high-intensity light gradually built up.
This meant that each single photon of light was still behaving as a wave and was still able to interfere with itself as it passed through both slits at once, even though each photon was observed as a single point of light on the detector screen.
In the 1960s, Richard Feynman proposed a thought experiment – not possible at the time due to technical limitations, but since performed – in which the position of each photon is measured just before it passes through the slits, so that you can determine which of the two slits each photon will pass through. If you do this, the interference pattern on the detector screen disappears. Each photon now travels through just one of the slits as if it were a particle, rather than a wave, and no longer interferes with itself. The pattern of light that builds up on the detector screen is now just two simple bands, corresponding to the positions of the two slits, where photons fired at the correct angle pass directly through one or other of the slits, as if they were particles travelling in straight lines.
This means that light can behave either as a wave or a particle, depending whether or not you have made an “observation” of it. This strange double identity of light is known as “wave-particle duality”.
De Broglie Waves
In 1924, a PhD student named Louis De Broglie predicted that all matter should share this wave-particle duality.
His reasoning was based on the quantised energy levels of electrons in Niels Bohr’s model of the atom, which Bohr had introduced in 1913. This described the hydrogen atom as a nucleus consisting of one proton, at the centre, and one electron orbiting around it. The electron’s orbit could only take on specific energy levels at set distances from the nucleus. The electron could jump between these orbital levels – a “quantum leap” – by emission or absorption of a photon with an energy equivalent to the difference in energy between the quantised orbits.
De Broglie’s intuition told him that the reason atomic electrons could only take on specific quantised energies was that they, like light, had a wave-like nature with an associated wavelength. For each atomic electron orbit to be stable, the distance around the perimeter of the orbit would have to be equal to a complete number of wavelengths, so that a resonant electron standing wave is set up. Since the electron could not take on orbits between these values, the electron could not spiral in towards the nucleus, as would be predicted if the electron is considered to behave only as a particle without wavelike properties.
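De Broglie’s standing-wave picture can be checked numerically: using his relation λ = h / (m × v), the wavelength of the ground-state electron should fit exactly once around the circumference of the smallest Bohr orbit. The Bohr radius and ground-state speed below are standard textbook values, used here purely for illustration:

```python
import math

H = 6.626e-34    # Planck's constant, J s
M_E = 9.109e-31  # electron mass, kg
A0 = 5.29e-11    # Bohr radius (radius of the ground-state orbit), m
V1 = 2.19e6      # electron speed in the ground-state orbit, m/s

# De Broglie wavelength of the ground-state electron:
wavelength = H / (M_E * V1)
# Circumference of the ground-state orbit:
circumference = 2 * math.pi * A0

# The standing-wave condition says the circumference holds a whole
# number of wavelengths; for the ground state (n = 1) it holds one.
print(circumference / wavelength)  # ~1.0
```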
De Broglie hypothesised that not only electrons, but all particles of matter possess these wavelike properties, and that the wavelength of a particle is given by the equation:
Wavelength = h / momentum
and the Energy is related to the Frequency by:
Energy = h x frequency
as Einstein had shown for photons, where h is Planck’s constant.
Since momentum is given by the mass of the particle multiplied by its velocity, this means that the greater the mass of the particle, the smaller its De Broglie wavelength, for a given velocity.
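The mass dependence is dramatic. A short sketch comparing an electron with an everyday object (the speeds chosen are illustrative):

```python
H = 6.626e-34  # Planck's constant, J s

def de_broglie_wavelength(mass_kg, velocity):
    """De Broglie wavelength: lambda = h / (m * v)."""
    return H / (mass_kg * velocity)

# An electron at 1e6 m/s has a wavelength comparable to atomic spacings:
print(de_broglie_wavelength(9.109e-31, 1e6))  # ~7.3e-10 m
# A 0.15 kg ball at 40 m/s has an immeasurably small wavelength:
print(de_broglie_wavelength(0.15, 40))        # ~1.1e-34 m
```

This is why wavelike behaviour is observed for electrons diffracting through a crystal, but never for macroscopic objects.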
De Broglie’s theory was first confirmed experimentally for electrons in 1927, using the atoms of a crystal lattice of nickel to act as a diffraction grating. The spacing of the atomic planes of the crystal is of the width required to cause diffraction of electrons, in the same way that the material had previously been shown to diffract x-ray photons of a similar wavelength.
The Young’s double slit experiment has also been conducted using a source of electrons, instead of light, which are fired through the slits towards a detector. The result is that the electrons are observed to behave in exactly the same way as photons, producing either a wavelike interference pattern or two simple bands, depending upon whether or not you observe the position of each electron before it passes through the slits. Similar results are also observed for other larger particles, such as protons and neutrons.
These results show that entities traditionally recognised as particles also display this strange wave-particle duality that is observed for light.
To date, the largest particles for which the double slit experiment has been carried out are molecules consisting of 810 atoms. However, De Broglie’s theory applies to all matter. The larger the mass, and hence the momentum, the smaller the De Broglie wavelength, so that, at anything except the smallest scale, the wavelength becomes vanishingly small and is not observed.
The Quantum Wave Function and Schrödinger’s Equation
De Broglie’s theory required a mathematical equation, or “wavefunction”, to describe how a matter wave changes with time. In 1926, Austrian physicist Erwin Schrödinger published an equation which did just that. Schrödinger applied his equation to the orbits of electrons in the Bohr model of the hydrogen atom and found that the results exactly predicted the observed quantised energy levels.
Schrödinger’s quantum wavefunction does not allow you to determine the position of a quantum particle at a given time, as would be expected of a classical theory, such as Newton’s laws of motion. It should instead be thought of as providing the probability that a particle will be observed at any specific location at a specific time. The wavefunction assigns a probability to all possible positions of a particle; when an observation is made, the particle will be found in one of these positions. It is the chance of finding the particle at any particular position that can be calculated from the wavefunction.
So, in Young’s double slit experiment, for example, the Schrödinger wavefunction equation would assign a probability for each single quantum of light being observed at each specific position along the detector screen. The bright bands of light observed where the intensity of the light source is high, and hence many photons strike the screen at once, correspond to regions of high probability for the position of each individual photon, and the dark bands correspond to regions of low probability.
When a single photon travels through the apparatus, the wavefunction treats the particle as if it occupies all possible positions at once, and (in the standard interpretation of quantum mechanics), the wavefunction can be considered to “collapse” to one specific location when the observation of the photon is made, as it hits the screen. So the equation gives the probability that the quantum wavefunction will collapse to any specific point on the detector screen.
In the standard interpretation of quantum mechanics, the particle can be considered to exist in a superposition of all possible states until an observation is made.
The famous Schrödinger’s cat thought experiment considers a cat sealed inside a box with a flask of poisonous gas, a low-level radioactive source and a detector. If a radioactive emission from the source is detected, an automated mechanism will be triggered to break the flask, releasing the poisonous gas that will kill the cat. After the box is sealed, it will no longer be possible to determine whether the cat will still be alive when the box is re-opened, some time later.
The entire system of the cat in the box could be described by a quantum wave function, which suggests that the cat exists in a superposition of states – simultaneously both alive and dead – until the box is opened and an observation is made.
Quantum Tunneling
Quantum tunneling is a surprising consequence of the existence of matter waves, which is not explicable using classical theories describing matter purely in terms of particles.
Imagine, for example, an electron fired towards a barrier consisting of an electric potential that repels the electron. Classical theories of the electron’s motion will say that, if the electron’s momentum is not sufficient for it to pass through the potential barrier, the electron will rebound like a ball thrown at a wall.
Schrödinger’s equation for the wave function of an electron describes the probability of finding the electron at all points in space at a specific time. Notably, however, the wave function does not stop at the edge of the potential barrier, but instead falls off exponentially as it penetrates into the barrier. The probability of finding the electron at some point inside the potential barrier therefore approaches, but never reaches, zero as the wave function penetrates deeper into the barrier. If the barrier is of finite width, there will always be a non-zero probability of finding the electron at the opposite side of the barrier when you measure its position. This probability will be smaller the wider and higher the barrier, but the wavefunction always allows some possibility that the electron will spontaneously appear at the other side of the potential barrier.
This ability for a particle to instantaneously jump through a potential barrier is known as quantum tunneling.
The greater the width of the barrier, the higher the barrier, and the larger the mass of the particle, the lower the chances that tunneling will occur. However, a tunneling particle will appear on the opposite side of the barrier without any loss of energy.
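These dependencies can be made concrete with the standard rough estimate for a rectangular barrier, T ≈ exp(−2κL), valid when the barrier is wide and high compared to the particle’s energy. The barrier heights and widths below are illustrative:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electronvolt

def tunnel_probability(barrier_height_ev, particle_energy_ev, width_m,
                       mass_kg=M_E):
    """Rough tunneling probability T ~ exp(-2*kappa*L) for a rectangular
    barrier, where kappa = sqrt(2*m*(V - E)) / hbar."""
    deficit = (barrier_height_ev - particle_energy_ev) * EV
    kappa = math.sqrt(2 * mass_kg * deficit) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 1 eV of energy facing a 5 eV barrier:
print(tunnel_probability(5, 1, 1e-10))  # ~0.13 for a 0.1 nm barrier
print(tunnel_probability(5, 1, 1e-9))   # ~1e-9 for a 1 nm barrier
```

Note how a tenfold increase in barrier width suppresses the probability by roughly eight orders of magnitude, which is exactly the sensitivity the scanning tunneling microscope exploits.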
The quantum tunneling effect is used in the design of certain electronic components, such as tunnel diodes. It also imposes theoretical limits on the miniaturisation of electronics, as electrons will spontaneously tunnel across electrically insulating materials.
Notably, quantum tunneling is used by the scanning tunneling microscope to measure the distance from the tip of the microscope’s probe to the surface of the material being studied. This allows the microscope to “feel” the surface of a material at incredible resolutions, allowing individual atoms to be resolved.
Quantum tunneling can also be used to explain radioactivity, as the emitted particle randomly tunnels through the potential barrier holding it within the atomic nucleus.
The Heisenberg Uncertainty Principle
In 1927, Werner Heisenberg devised his famous (or perhaps infamous) uncertainty principle, which states that, on a quantum scale, certain “complementary” or “conjugate” observable quantities cannot be simultaneously measured to an arbitrary degree of accuracy.
For example, the observable quantities of a particle’s position and momentum are complementary. Momentum is given by the particle’s mass multiplied by its velocity, hence the particle’s speed and direction of travel cannot be measured simultaneously with its position in such a way that the particle’s future position and momentum can be precisely predicted from the observation.
If you try to measure a particle’s position, it is possible to do this to an arbitrary level of precision. However, in doing so, you will cause the particle’s momentum to shift by a random unknown amount. If you, subsequently, try to measure the particle’s momentum, you will then be unsure of its position again.
The uncertainty in the measurement of the particle’s position multiplied by the uncertainty in the measurement of the particle’s momentum will always be greater than or, at the very least, equal to Planck’s constant divided by 4π.
This can be expressed by the mathematical inequality:
Δx Δp ≥ h / 4π
Where Δx and Δp are the uncertainties in the particle’s position and momentum, respectively, and h is Planck’s constant.
So, the more accurately you measure the particle’s position, the less sure you can be of its momentum, i.e. its speed and direction.
Note that, since Planck’s constant is such a small quantity (6.626 × 10⁻³⁴ m² kg / s), this principle generally only affects measurement on the smallest scales.
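A quick calculation shows both points at once, using the modern (Kennard) form of the inequality, Δx Δp ≥ h / 4π; the confinement distances and masses below are illustrative:

```python
import math

H = 6.626e-34  # Planck's constant, J s

def min_momentum_uncertainty(position_uncertainty_m):
    """Smallest momentum uncertainty (kg m/s) permitted by the
    uncertainty principle, delta_x * delta_p >= h / (4*pi)."""
    return H / (4 * math.pi * position_uncertainty_m)

# Confine an electron to an atom-sized region (~1e-10 m):
dp = min_momentum_uncertainty(1e-10)
print(dp)             # ~5.3e-25 kg m/s
# For an electron (9.1e-31 kg) that is a velocity spread of ~6e5 m/s:
print(dp / 9.109e-31)
# For a 1 gram dust grain, the same position uncertainty implies an
# utterly negligible velocity spread:
print(dp / 1e-3)
```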
Heisenberg originally described the reason for this uncertainty as being due to the unavoidable disturbance of the particle when taking a measurement. For example, you can measure a particle’s position by illuminating it with light and, because of the wavelike properties of light, the limit for how precisely you can resolve the position of the electron depends on the wavelength of the light used. The shorter the wavelength of the light, the higher the resolution achievable. However, the Compton effect must be taken into account (see above), whereby the electron will recoil due to the particle-like nature of the photon of light that strikes it. The shorter the wavelength of the photon, the higher its energy (see the photoelectric effect, above) and the greater the random recoil of the electron. So, the higher the energy of the light used, the more accurately you can resolve the position of the electron, but the less sure you can be of its velocity and direction of travel (i.e. its momentum) afterwards, due to its random recoil as it is struck by the photon.
However, it must be noted that this uncertainty is not simply a consequence of experimental error and cannot be circumvented by devising a more accurate way to measure the electron’s position without disturbing its momentum. Any way in which it is possible to determine the electron’s position will lead to an uncertainty in the electron’s momentum greater than (or at the very minimum, equal to) that given by the uncertainty principle.
Another way of thinking about this is to consider the dual wave-particle nature of matter. If the wave packet of a particle is behaving as a wave, it will have a well defined wavelength, and hence a low degree of uncertainty in its momentum, since momentum is inversely proportional to a particle’s wavelength, as given by De Broglie’s theory of matter waves, above. However, for the matter wave to have a well defined, easily measurable wavelength, it must be spread out in space over a distance of many wavelengths, and not localised at any particular point, as would be expected in the classical visualisation of a particle. The more spread out the wave packet is in space, the more accurately its wavelength, and hence its momentum, can be determined – but the less well defined the position of the particle becomes. Conversely, if the wave packet is bunched tightly together in space, it will behave more like a traditional particle with a more precise position, but it will then have a less well defined wavelength and therefore a greater “uncertainty” in its momentum.
It should also be noted that the mass of the particle features in the momentum term of the Heisenberg uncertainty inequality. Since momentum is the product of mass and velocity, the greater the particle’s mass, the greater the uncertainty in its momentum for a given uncertainty in its velocity. The position of a more massive particle can therefore be known more precisely than that of a less massive particle with the same uncertainty in its velocity, while still satisfying the inequality of the uncertainty principle.
When Heisenberg’s uncertainty principle was first devised, there was much debate about how this principle should be interpreted. Einstein, in particular, was of the opinion that, even though the position and momentum of a particle could not be known simultaneously, the particle still possessed an intrinsic value for both, hidden from us by fundamental limitations of experimentation. Einstein believed that the theory of quantum mechanics was incomplete, famously declaring that “God does not play dice”. This sort of interpretation of the uncertainty principle is known as a “hidden variable” theory. However, Heisenberg went much further, claiming that not only were the precise values of these quantities unknowable within the bounds of the uncertainty principle, but that precise values did not even exist.
Hidden variable theories involving “pilot waves” which guide a particle’s motion (similar to a surfer riding a wave), have been proposed, which could potentially provide a visualisable interpretation of quantum mechanics (see De Broglie Bohm theory, below).
The Time-Energy Uncertainty Principle
Another important pair of complementary observable quantities are energy and time. The time-energy version of the uncertainty principle can be expressed mathematically as:
ΔE Δt ≥ h / 4π
Where ΔE is the uncertainty in a particle’s energy and Δt the uncertainty in the lifetime of the energy state.
This time-energy uncertainty principle can be observed in spectroscopy, for example, where “excited” energy states of an atom have a finite lifetime. The more quickly the excited energy state decays back to the lower energy state, the larger the uncertainty in the energy of the higher state. The width of the observed spectral emission line corresponds to this uncertainty in the energy of the excited state: the shorter the lifetime of the energy state, the broader the emission line observed.
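The natural width of a spectral line can be estimated directly from the state’s lifetime using ΔE Δt ≥ h / 4π. The lifetime below (~10 nanoseconds) is a typical illustrative value for an atomic excited state:

```python
import math

H = 6.626e-34  # Planck's constant, J s

def natural_linewidth(lifetime_s):
    """Minimum energy spread (J) of an excited state with the given
    lifetime, from delta_E * delta_t >= h / (4*pi)."""
    return H / (4 * math.pi * lifetime_s)

# A typical atomic excited state lives ~1e-8 s:
de = natural_linewidth(1e-8)
print(de)      # ~5.3e-27 J
# Expressed as a frequency spread (delta_E = h * delta_f), the emission
# line is several MHz wide:
print(de / H)  # ~8e6 Hz
```

A shorter-lived state gives a proportionally broader line, as described above.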
A more interesting consequence of the time-energy uncertainty principle, however, is that it allows energy to be “borrowed”, effectively out of nowhere. The shorter the time you borrow the energy for, the more energy a particle can borrow, provided it is paid back within the time limit allowed by the uncertainty principle.
The consequence of this is that the vacuum of space can no longer be considered to be empty. Instead, it can be thought of as teeming with “virtual” particles, which spontaneously pop into and out of existence, by borrowing energy from the vacuum of space itself. For example, a virtual electron and its antimatter equivalent (see antimatter, below), known as a positron, can be spontaneously produced, as long as they annihilate by recombining again within the time limit allowed by the time-energy uncertainty principle. The borrowed energy must be at least equal to the rest mass energy of the two particles, so the higher the particles’ mass, the shorter the time available under the constraints of the uncertainty principle for the annihilation to occur. So, less massive virtual particles exist for longer than more massive virtual particles.
“Hawking radiation”, (named after Stephen Hawking, who proposed its existence), around the event horizon of a black hole is conjectured to occur when one of these pairs of virtual particles falls into the black hole, while the other particle escapes the black hole’s gravitational field. In order for the escaping particle to be elevated to the status of a “real”, rather than a “virtual” particle, it must pay back the energy borrowed for its creation by carrying away some of the energy of the black hole, thereby reducing the black hole’s mass. (See also black holes.)
In quantum field theory, the exchange of these virtual particles is proposed as a mechanism for the fundamental forces of electromagnetism, the strong and weak nuclear forces and possibly gravity. (See fundamental forces for more details.) This theory applied to the electromagnetic interaction is known as quantum electrodynamics and describes the electromagnetic force as a consequence of the exchange of virtual photons between electrically charged particles, such as the electron and the proton. For the strong nuclear force, the theory of quantum chromodynamics applies, which describes how the exchange of force-carrying particles called gluons are responsible for binding together quarks, the constituent particles of the proton and the neutron, inside the nucleus of atoms. (see also, particle physics)
Quantum Spin
In classical physics, a rotating electrically charged body produces an associated magnetic field, which acts like a bar magnet, with a north and a south pole.
The electron is an electrically charged particle, which also possesses an intrinsic magnetic field, known as the electron’s magnetic moment. However, early attempts to attribute the existence of the electron’s magnetic field to a rotation of the electron, as if it were spinning on its axis, led to problems, since, to produce the value for the electron’s magnetic moment observed in experiments, the classical equations of electromagnetism required the electron to be spinning faster than the speed of light.
In 1928, Paul Dirac derived a version of Schrödinger’s wave function equation that was compatible with Einstein’s special theory of relativity. This was the first time that a result from the theory of relativity had been successfully combined with the theory of quantum mechanics. This relativistic wave equation described the behaviour of particles, such as the electron, at high energies and velocities and allowed Dirac to derive a value for the magnitude of the electron’s magnetic moment that agreed well with experimentally observed values.
The Stern-Gerlach Experiment
In 1922, physicists Otto Stern and Walther Gerlach had shown that a moving atom of silver could be deflected by placing two long magnets, of opposite polarity, above and below the path of the atom, parallel to its direction of motion. Silver atoms are electrically neutral and possess 47 orbiting electrons. The first 46 of these electrons are paired up so that their magnetic fields are facing in opposite directions and, hence, cancel out. It is the remaining unpaired electron that is responsible for the magnetic field of the silver atom, which causes the observed deflection as it travels between the two magnets.
It would have been impossible to perform this experiment on a free electron because of its electric charge. However, the experiment was later performed using hydrogen atoms, which possess only one orbiting electron, with similar results obtained. (Note that the proton also has a quantum spin and magnetic moment, but this is much smaller than that of the electron.)
If an electron is considered to act like a tiny bar magnet, due to its intrinsic spin angular momentum (see Quantum Spin, above), it should be deflected by a magnetic field. The deflection is such that a beam of silver atoms is split into two separate beams, one travelling upwards toward the top magnet and one travelling downwards towards the bottom magnet.
Before the particle enters the magnetic field, the standard interpretation of quantum mechanics considers its spin direction to be undefined, existing in a superposition of possible spin states, much like Schrödinger’s cat is considered to be simultaneously both dead and alive until an observation forces the wavefunction to instantaneously collapse into one of the two possible states.
Dirac’s relativistic wave equation (see Quantum Spin, above), predicts the values of the electron’s spin and agrees well, (but not exactly), with the results of the Stern-Gerlach experiment.
The magnitude of the electron’s spin is ½ (h / 2π), where h is Planck’s constant. The quantity h / 2π appears frequently when calculating quantised angular momentum, and is usually written in a shorthand form as ħ, pronounced h-bar. The electron’s spin can therefore be written as ½ħ, and, because of this, it is often referred to as a “spin-half” particle. The two possible states are referred to as “spin up” and “spin down”, depending on whether the electron’s spin is aligned parallel to the magnetic field of the Stern-Gerlach apparatus, causing the electron to move upwards towards the top magnet, or aligned opposite to the magnetic field, causing the electron to move downwards.
However, the value of the electron’s magnetic moment given by Dirac’s equation did not agree exactly with values obtained experimentally. It was only when the existence of virtual photons (see the Time-Energy Uncertainty Principle, above), predicted by the theory of quantum electrodynamics, was taken into account that the precise value could be calculated.
Other particles also have their own spin magnetic moments. The proton is also a spin-half particle. However, because the magnitude of a particle’s magnetic moment is inversely proportional to its mass, the magnetic moment of the proton is around one thousand times smaller than that of the electron. Even the neutron, although electrically neutral, has a magnetic moment and is also a spin-half particle. Both the proton and the neutron are made up of fundamental particles known as quarks (see particle physics). These are also spin-half particles and are responsible for the spin of both the proton and the neutron.
Particles such as the photon are also considered to possess quantum spin, although the photon does not possess charge and, hence, has no magnetic moment. However, due to angular momentum conservation laws in interactions with spin-half particles, such as the electron, the photon can be shown to have a quantum spin number of 1. This means that a photon can have the possible spin values of -1 or +1, corresponding to the two possible orthogonal polarisation states of light.
The Pauli Exclusion Principle
Independently of Dirac, Wolfgang Pauli had also explained the splitting of the beam in the Stern-Gerlach experiment as due to the spin magnetic moment of the electron. However, unlike Dirac, who derived the values of the electron’s spin from his relativistic wave equation, Pauli had introduced the mathematical description of quantum spin ‘by hand’ into his equations in order to describe the results of the Stern-Gerlach experiment in quantum mechanical terms.
Pauli used his theory to explain why particles with half-integer spin, known as fermions (after Enrico Fermi), cannot share the same position and quantum mechanical state. This is known as the Pauli exclusion principle and is the reason why particles with half-integer spin can be considered to behave as particles of matter and cannot share the same position in space, whereas particles with integer spin, such as the photon, can group together in large numbers and pass through each other.
Particles with integer spin are known as bosons (after Satyendra Nath Bose) and are considered to be the force-carrying particles in quantum field theory. For example, the photon, which mediates the electromagnetic field, the W and Z particles, which mediate the weak nuclear force, and the gluon, which mediates the strong nuclear force, are all spin-one particles.
Dirac’s relativistic wave equation was based on a quantum mechanical version of Einstein’s energy-momentum relation, derived from the special theory of relativity. This is an expanded form of the well-known equation E = mc², where E represents energy, m represents mass and c is the speed of light. E = mc² essentially states that the mass of any object is equivalent to energy. Since the speed of light squared is a very large number, 1 kilogram of mass is approximately equal to 9×10¹⁶ joules (or 90 petajoules) of energy.
The energy-momentum form of this equation includes a term denoting the energy equivalent to the mass of the particle at rest (m₀) and a second term denoting the kinetic energy of the particle due to its momentum (p). The sum of these two terms gives the square of the total energy possessed by the particle, i.e.:
E² = m₀²c⁴ + p²c²
Since the left-hand side of this equation is the square of the total energy, to calculate the value for energy (E) requires that the square root of the right-hand side of the equation be taken. This leads to two possible answers, since any square root has both a positive and a negative solution (e.g. the square root of 4 is either 2 or -2).
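The mass-energy arithmetic above can be checked numerically; a minimal sketch using standard SI values:

```python
import math

c = 2.99792458e8  # speed of light in m/s (exact by definition)

def total_energy(m0, p):
    """Positive square root of E^2 = m0^2*c^4 + p^2*c^2 (SI units)."""
    return math.sqrt((m0 * c**2) ** 2 + (p * c) ** 2)

# Mass-energy equivalence: 1 kilogram at rest is roughly 9e16 joules.
print(f"{total_energy(1.0, 0.0):.2e} J")  # 8.99e+16 J

# The same equation is also solved by the negative root, -total_energy(m0, p).
```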
Dirac’s quantum mechanical version of this equation therefore also had two solutions for the energy of a particle. The positive solutions can be considered to represent particles of ordinary matter, whereas the negative solutions can be thought of as representing particles of positive energy (and hence positive mass) but with the opposite charge. This oppositely charged matter came to be known as antimatter.
The existence of the electron’s antimatter counterpart (or antiparticle), named the positron, was confirmed experimentally in 1932 by Carl Anderson.
Most fundamental particles have corresponding antiparticles. However, the chargeless bosons, such as the photon, the Z particle, the Higgs boson (see particle physics) and the graviton – the hypothetical particle which is proposed to mediate the force of gravitation – are considered to be their own antiparticles.
When a particle of matter collides with a particle of antimatter they will “annihilate”, leaving only high-energy photons, which radiate away the energy of the particles’ mass and momentum. Other particles can also be produced, provided that the total energy of the collision is greater than the combined rest-mass energy of the resulting particles.
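For electron-positron annihilation at rest, for example, each of the two photons produced must carry away one electron’s rest-mass energy; a rough check using the approximate electron mass:

```python
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg (approximate CODATA value)
eV = 1.602176634e-19     # joules per electronvolt

# Rest-mass energy of a single electron, converted to kiloelectronvolts.
rest_energy_keV = m_e * c**2 / eV / 1e3
print(f"{rest_energy_keV:.0f} keV")  # 511 keV

# So each photon from electron-positron annihilation at rest carries 511 keV.
```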
Suppose two identical particles are created through a random radioactive decay process, such that they will always be created spinning in opposite directions, due to the conservation of angular momentum. Each particle’s spin is quantised, and can be in either a spin up or spin down state, as measured along any particular axis. The two particles fly off in opposite directions, but it is not possible to predict which particle goes in which direction. Suppose that you and your friend each intercept one of the two particles and measure its spin. There is a fifty-fifty probability of the particle that you intercepted having either spin up or spin down when you measure it. The same is true for your friend’s particle; however, you will always find that your particle has the opposite spin to your friend’s, no matter how many times this process is repeated.
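The statistics of this thought experiment can be mimicked with a toy simulation. Note that this models only the correlations, not the underlying quantum state – each pair is simply assigned opposite outcomes at creation, in the spirit of the hidden-variable picture discussed later:

```python
import random

def create_entangled_pair():
    """Toy pair source: conservation of angular momentum forces the two
    spins to be opposite; which particle gets which outcome is random."""
    yours = random.choice(["up", "down"])
    friends = "down" if yours == "up" else "up"
    return yours, friends

results = [create_entangled_pair() for _ in range(10_000)]

# Your own measurements look like fair coin flips (about half "up")...
ups = sum(1 for yours, _ in results if yours == "up")
print(f"{ups / len(results):.2f}")  # ~0.50

# ...yet every single pair is perfectly anti-correlated.
assert all(yours != friends for yours, friends in results)
```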
Fair enough; however, according to the standard interpretation of quantum mechanics, as with Schrödinger’s cat (see above), the spin of each particle isn’t defined until you measure it. Your particle exists in a superposition of states until you force it to take on just one of the two possible states by making an observation.
The particles’ states are said to be “entangled”, since their spin values are dependent upon each other, but neither is yet defined.
The problem with this is that, if the spin of your own particle suddenly takes on one distinct value when you measure it, your friend’s particle must take on the opposite spin value at precisely the same moment, no matter how far apart you are when you make your measurement. This means that the information on the spin value of your particle must somehow be instantaneously transmitted to your friend’s particle at the exact moment that you measure your particle’s spin. This implies that the information has somehow traveled between the two particles at an infinite speed. This of course would seem to violate the fundamental principle of the special theory of relativity, that nothing can travel faster than the speed of light.
If the spin of your friend’s particle were not set at precisely the same moment as your own, it would be possible for your friend to measure the spin value of their own particle before the information that you had measured yours had reached them. It would therefore be possible for you both to measure the same value for your particles’ spins, which would violate the conservation of spin angular momentum, which requires that the two particles have opposite spin values from the moment they are created.
Albert Einstein was among those who rejected this explanation of quantum entanglement, calling it “spooky action at a distance”. In 1935, Einstein, Boris Podolsky and Nathan Rosen co-authored a scientific paper which first pointed out that the standard quantum mechanical view of entanglement required a faster-than-light signal between particles. This problem became known as the EPR paradox, after the paper’s authors. Einstein preferred a “hidden variable” explanation, believing that, although quantum mechanics is a correct theory, it is incomplete: it does not take account of hidden internal quantities, which it treats as undefined until measured, but which he believed always have a hidden but defined value.
In the hidden-variable interpretation, it is simply as if the two particles both had their spin states defined when they were created. This is analogous to someone writing the words “Spin Up” on one piece of paper and “Spin Down” on another, sealing the two pieces of paper in separate envelopes and posting one to you and one to your friend. Until one of you opens their envelope, neither of you can know who has which piece of paper, but as soon as you examine your piece of paper, you instantly know what your friend’s reads – not very “spooky”, and requiring no action at a distance.
However, the standard quantum mechanical view – championed by Heisenberg and others – with superpositions of states, is more like a third person flipping two coins in the air, one towards you and one towards your friend. Since each coin is spinning fast, it cannot be said to be showing heads or tails until it is caught. However, if the states of the coins were somehow entangled, then when you caught your coin, it would be as if your friend’s coin instantaneously took on the opposite value, even though it was still in the air.
However, the theory of relativity does not actually forbid a faster than light signal between the particles. Only signals that carry useful information faster than light would violate the principle that cause must always come before effect (known as the principle of causality), which would lead to paradoxical situations and be effectively equivalent to travelling backwards in time. Since there is no possible way to make use of the signal between the particles to communicate any information faster than the speed of light, it does not cause a problem. You can’t tell whether your particle is still in a superposition of spin states or not (i.e. whether your friend has measured their particle’s spin yet), without making a measurement yourself, which forces it to adopt one value and breaks the entanglement, anyway. You can’t somehow monitor whether your particle has taken on a definite spin value yet and watch for the moment when your friend has measured their particle’s spin and forced yours out of its superposition of states, so there is no way your friend can use this to send a signal to you. Hence, causality is not violated, even though the means by which the two particles share their states instantaneously remains mysterious.
In 1964, John Stewart Bell, a physicist from Northern Ireland, proposed an experiment that could distinguish whether or not a hidden-variable theory could reflect the true nature of reality.
Consider the above scenario for the two entangled particles. In order to measure your particle’s spin, you can use a magnetic field, as described in the Stern-Gerlach experiment, above, and your friend can also use a magnetic field to measure the spin of their particle. If the magnetic field of your apparatus, and hence the axis along which you measure the spin of your particle, is exactly aligned with the magnetic field of your friend’s apparatus then, for each particle pair that is created, via the random radioactive decay process, you will measure the spin of your particle to be opposite to the spin of your friend’s particle. If you measure your particle as spin up, your friend will be certain to measure their particle as spin down, and vice versa, since the two particles are in a combined entangled state, due to the conservation of angular momentum when they were created.
If you repeat this process for each pair of entangled particles produced by the radioactive source, there will be a perfect anti-correlation between the spin measurements of your particles and the spin measurements of your friend’s particles. In other words, the spins of each pair of entangled particles will always be aligned opposite to each other, when measured along the same axis.
If you then turn your apparatus through 180 degrees, so that its magnetic field is pointing in the opposite direction to that of your friend’s apparatus, then the measurement of your particle’s spin will always be the same as the spin measurement of your friend’s particle, as measured along the opposite axis. When you measure your particle as spin up, your friend will also measure its entangled partner as spin up, and when you measure your particle as spin down, your friend will also measure theirs as spin down.
However, if you align the magnetic field of your apparatus at 90 degrees to your friend’s apparatus, then no correlation will be observed. The standard interpretation of quantum mechanics says that this is because the components of a particle’s spin along two perpendicular directions cannot be measured simultaneously beyond the limits of the Heisenberg uncertainty principle. The spin of the particle in the original “up-down” direction is a conjugate observable quantity to the particle’s spin in the perpendicular “left-right” direction. In other words, you cannot measure both quantities precisely at the same time, in the same way that you cannot precisely measure both a particle’s position and momentum at the same time. The measurement of your particle’s spin in this “left-right” direction can be considered to destroy the information about its spin in the original “up-down” direction, forcing it back into a superposition of the two possible states. This leads to a fifty-fifty chance that the measurement of your particle’s spin in this perpendicular direction will match the spin of your friend’s particle, i.e. there is no correlation between your and your friend’s particles’ spins when measured along perpendicular axes.
The definitions of up-down and left-right are completely arbitrary, of course, although for you to observe a perfect correlation, or anti-correlation, with your friend’s spin measurements, your apparatus must be aligned in the opposite direction, or in the same direction, respectively, to theirs.
If you align your apparatus so that its magnetic field points in the opposite direction to your friend’s, but then tilt the axis by only one degree, most of the particles you observe will still have their spins aligned in the same direction as your friend’s. However, the correlation will no longer be perfect: the majority of your particles will have the same spin state as your friend’s particles, but a small number will have opposite spin directions.
If your friend then also tilts their measuring apparatus by one degree in the opposite direction, the correlation between their particles’ spin and your own will drop even further. If a hidden variable theory reflects the true nature of reality, it would be expected that the number of particles you observe with the opposite spin to your friend’s would simply be double the number you observed when only your measuring apparatus was tilted with respect to the original axis. This is simply because your friend has tilted their measuring apparatus by the same angle as yours, but in the opposite direction. The combined angle between the measuring devices has doubled, so you would also expect the number of mismatches to double.
However, the theory of quantum mechanics predicts a different result. The Schrödinger equation says that when you tilt your measuring apparatus by a small angle, θ, the chance that you will observe a particle to have the opposite spin direction to your friend’s is given approximately by θ²/2; the absolute numbers here matter less than how they scale with angle. So, when you first rotated your measuring apparatus by one degree, the relative chance of a mismatch was 1²/2, which equals 0.5. After your friend has also rotated their apparatus by 1 degree in the opposite direction, the relative chance of a mismatch is 2²/2, which equals 2. This is four times greater than the probability of a mismatch before your friend had tilted their apparatus – double the mismatch predicted by the analysis for hidden-variable theories, above.
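The factor-of-four scaling can be checked against the standard quantum mechanical prediction for a spin-half singlet pair, in which the probability of a mismatch at a small tilt angle θ is sin²(θ/2) ≈ θ²/4. (The small-angle prefactor differs from the θ²/2 quoted above depending on convention, but the quadratic scaling – and hence the factor of four – is the same.) A minimal sketch:

```python
import math

def qm_mismatch(tilt_deg):
    """Quantum prediction for a spin-half singlet pair: the probability
    that a pair fails to show the expected (anti-)correlation when the
    detector axes are tilted tilt_deg away from perfect alignment."""
    return math.sin(math.radians(tilt_deg) / 2) ** 2

one_deg = qm_mismatch(1.0)  # you tilt your apparatus by 1 degree
two_deg = qm_mismatch(2.0)  # your friend tilts 1 degree the other way

# Quantum mechanics: doubling the relative tilt quadruples the mismatches...
print(f"{two_deg / one_deg:.3f}")  # close to 4

# ...whereas a local hidden-variable model can at most double them, since
# each side's 1-degree tilt contributes only its own 1-degree mismatch rate.
assert two_deg > 2 * one_deg
```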
Experimental tests of Bell’s theorem have been carried out, although it is usually more practical to measure the polarisations of entangled photons (for which an analogous but slightly more complicated version of Bell’s theorem applies) rather than particle spins. The results of these experiments are consistent with the predictions of the Schrödinger equation, ruling out many varieties of hidden-variable theories.
The conclusion that must be drawn from these tests of Bell’s theorem is that semi-classical versions of quantum mechanics, assuming hidden variables, cannot replicate the results of experiment and therefore are not consistent with the true nature of reality.
Possible Interpretations of Quantum Mechanics
So where does this leave the hope of determining a visualisable interpretation of the underlying mechanisms behind quantum physics?
Bell’s theorem ruled out the majority of hidden-variable theories of quantum mechanics, but still leaves a number of possible interpretations, including the following:
The Copenhagen Interpretation
The standard interpretation of quantum mechanics is also known as the Copenhagen interpretation. This term was coined by Heisenberg in the 1950s to refer to the theories of quantum mechanics developed at the Niels Bohr Institute in Copenhagen, Denmark, in the 1920s.
This is a nondeterministic, probabilistic view of quantum mechanics, which accepts that the true nature of reality is inherently unknowable. This interpretation considers that we should not attempt to visualise the processes behind quantum mechanics as we would for a classical theory. Heisenberg, himself, preferred to think that it was sufficient that a theory could be described mathematically for it to be considered visualisable.
The Many-Worlds Theory
The many-worlds theory postulates that, for every event with more than one possible outcome, the universe splits, so that all possible versions of reality are played out in parallel universes to our own.
This concept has been widely used by science fiction writers, and, as far as we know, there is nothing in the laws of physics that expressly prohibits this version of reality. However, there is, of course, currently no way to test this theory, and unless there is some way to interact with these parallel universes, we can never hope to verify their existence.
Superdeterminism
Superdeterminism attempts to avoid the problems of hidden-variable theories, exposed by Bell’s theorem, by suggesting that the outcomes of all events are predetermined. This would mean, however, that the notion of free will is merely an illusion.
The Computer Simulation Hypothesis
It has been suggested that what we perceive as reality is in fact akin to a computer simulation. This would remove the need to explain the mechanism behind quantum phenomena, since the result of every event would merely be a random assignment based on the mathematical calculation of the probability distribution of the wave function.
The De Broglie-Bohm Pilot-Wave Theory
In the 1920s, De Broglie suggested that the mysterious phenomenon of wave-particle duality could be explained if the motion of a particle is determined by a guiding pilot wave. De Broglie eventually gave up on his attempts to fully formulate this theory; however, it was subsequently revived and improved upon by David Bohm in the 1950s, with encouragement from Albert Einstein, becoming an alternative to the Copenhagen interpretation. The theory is fully consistent with experimental observation; indeed, John Stewart Bell (see Bell’s theorem, above) was himself a proponent of the De Broglie-Bohm theory.
For example, in Young’s double slit experiment, this theory says that the particle always passes through just one of the slits; however, the pilot wave passes through both and is diffracted. The particle’s position when it is observed on the screen is therefore determined by the diffracted pilot wave – as if the particle were “surfing” the pilot wave – although the particle does not have wavelike characteristics itself.
Although this theory does provide a possible way to visualise quantum mechanics in a more classical sense, pilot-wave theories do not provide any more information about a quantum mechanical system than theories based on the standard Copenhagen interpretation. Because of this, it is possible that we will never be able to experimentally confirm or rule out the existence of such pilot waves, so whether they have any real relevance to quantum theory is debatable. However, the same criticism can also be applied to any of the alternative theories listed above. Since the Copenhagen interpretation does not even attempt to describe a mechanism for quantum mechanical phenomena, De Broglie-Bohm theory might seem to be the least exotic of the possible explanations of the true nature of reality. It is even possible that testable differences between the predictions made by pilot-wave theories and the Copenhagen interpretation might arise at smaller scales. It has also been speculated that the theory might be the right direction for the development of a fully quantum mechanical interpretation of general relativity, which has, so far, eluded physicists working under the standard assumptions of quantum mechanics.
Interestingly, it has recently been shown by Yves Couder et al. that droplets steered by pilot waves on the surface of silicone oil exhibit behaviour analogous to quantum mechanical effects – such as quantised orbital levels, tunnelling and annihilation – but at much larger, laboratory scales.