Smolin, Lee. Einstein's Unfinished Revolution. New York: Penguin Press, 2019. ISBN 978-1-59420-619-1.
In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a maximum wavelength (colour of light) beyond which no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the rate of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.

This didn't make any sense. According to Maxwell's wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn't behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light, and what's more, there was a sharp cut-off wavelength above which no electrons would be emitted at all.

In 1905, in one of his “miracle year” papers, “On a Heuristic Viewpoint Concerning the Production and Transformation of Light”, Albert Einstein suggested a solution to the puzzle. He argued that light did not propagate as a continuous wave, but rather in discrete particles, or “quanta”, later named “photons”, whose energy was inversely proportional to the wavelength of the light (and hence directly proportional to its frequency). This neatly explained the behaviour of the photoelectric effect. Light with a wavelength longer than the cut-off point was transmitted by photons whose energy was too low to knock electrons out of the metal they illuminated, while light with a shorter wavelength carried photons energetic enough to liberate electrons. The intensity of the light was a measure of the number of photons in the beam, unrelated to the energy of the individual photons.
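
Einstein's picture is easy to check numerically. Here is a little Python sketch (the work function figure for sodium, about 2.28 eV, is approximate and merely illustrative):

```python
# Photon energy depends only on wavelength: E = h*c / lambda. Light whose
# photons carry less energy than the metal's work function cannot eject
# electrons, no matter how intense the beam.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

WORK_FUNCTION_EV = 2.28   # approximate value for sodium (illustrative)

for colour, wl in [("red", 650), ("yellow", 580),
                   ("green", 530), ("violet", 410)]:
    e = photon_energy_ev(wl)
    print(f"{colour:6s} {wl} nm: {e:.2f} eV -> emits: {e > WORK_FUNCTION_EV}")
```

Red photons at 650 nm carry only about 1.9 eV, short of the threshold, while violet photons at 410 nm carry about 3.0 eV and eject electrons, matching the behaviour described above.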

This paper became one of the cornerstones of the revolutionary theory of quantum mechanics, the complete working out of which occupied much of the twentieth century. Quantum mechanics underlies the standard model of particle physics, which is arguably the most thoroughly tested theory in the history of physics, with no experiment showing results which contradict its predictions since it was formulated in the 1970s. Quantum mechanics is necessary to explain the operation of the electronic and optoelectronic devices upon which our modern computing and communication infrastructure is built, and describes every aspect of physical chemistry.

But quantum mechanics is weird. Consider: if light consists of little particles, like bullets, then why, when you shine a beam of light on a barrier with two slits, do you get an interference pattern with bright and dark bands precisely as you get with, say, water waves? And if you send a single photon at a time and try to measure which slit it went through, you find it always went through one or the other, but then the interference pattern goes away. It seems that whether the photon behaves as a wave or a particle depends upon how you look at it. If you have an hour, here is grand master explainer Richard Feynman (who won his own Nobel Prize in 1965 for reconciling the quantum mechanical theory of light and the electron with Einstein's special relativity) exploring how profoundly weird the double slit experiment is.
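
The fringes come from adding amplitudes rather than probabilities. A minimal sketch (the slit separation and wavelength are arbitrary illustrative values):

```python
# Two-slit interference: superpose the complex amplitudes from the two
# slits and square the magnitude. Adding probabilities instead (as
# classical bullets would require) gives a featureless pattern.
import cmath, math

WAVELENGTH = 500e-9    # 500 nm light (illustrative)
SLIT_SEP = 10e-6       # 10 micron slit separation (illustrative)

def intensity(theta):
    """Relative intensity at angle theta when amplitudes are summed."""
    phase = 2 * math.pi * SLIT_SEP * math.sin(theta) / WAVELENGTH
    amp = 1 + cmath.exp(1j * phase)   # superpose the two paths
    return abs(amp) ** 2              # probability ~ |amplitude|^2

# Central fringe: the two paths are in phase (constructive, intensity 4).
print(intensity(0.0))
# First dark fringe: sin(theta) = lambda / (2 d), amplitudes cancel.
dark = math.asin(WAVELENGTH / (2 * SLIT_SEP))
print(intensity(dark))
```

Measuring which slit the photon went through destroys the definite phase relation between the two terms of `amp`, and with it the dark fringes.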

Fundamentally, quantum mechanics seems to violate the principle of realism, which the author defines as follows.

The belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world.

This has been part of the scientific worldview since antiquity and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn't possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists.
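
The uncertainty principle is quantitative: Δx·Δp ≥ ħ/2. A quick numerical illustration of what that limit implies for an electron confined to atomic dimensions (the confinement length is an illustrative figure):

```python
# Heisenberg's bound: delta_x * delta_p >= hbar / 2. Confining a particle
# tightly in position forces a large spread in momentum, hence velocity.
HBAR = 1.055e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def min_velocity_spread(delta_x_m):
    """Minimum velocity uncertainty (m/s) for a particle confined to delta_x."""
    delta_p = HBAR / (2 * delta_x_m)
    return delta_p / M_ELECTRON

# An electron confined to ~1 angstrom (roughly an atom's size):
print(min_velocity_spread(1e-10))   # hundreds of kilometres per second
```

This is not a statement about clumsy instruments: the bound holds no matter how the measurement is performed.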

From the formulation of the modern quantum theory in the 1920s, often called the Copenhagen interpretation after the location of the institute where one of its architects, Niels Bohr, worked, a number of eminent physicists including Einstein and Louis de Broglie were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a physical position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when its magnitude was squared) gave the probability of finding the particle at a given location.

In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and evolves according to the Schrödinger equation. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”. But that wasn't what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant.
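
The guidance rule of pilot wave theory can be written down compactly: in one dimension the particle's velocity is v = (ħ/m)·Im(ψ′(x)/ψ(x)). A minimal numerical sketch, using an electron and a plane wave as illustrative choices:

```python
# De Broglie-Bohm guidance rule in one dimension: the wave function psi
# tells the particle how fast to move at its (definite) position x.
# For a plane wave exp(i k x) the rule recovers v = hbar * k / m.
import cmath

HBAR = 1.055e-34   # reduced Planck constant, J*s
M = 9.109e-31      # electron mass, kg (illustrative choice of particle)

def guidance_velocity(psi, x, m=M, h=1e-12):
    """Pilot-wave velocity at x, using a numerical derivative of psi."""
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)
    return (HBAR / m) * (dpsi / psi(x)).imag

K = 1e10   # wavenumber, 1/m (roughly atomic scale, illustrative)
plane_wave = lambda x: cmath.exp(1j * K * x)
print(guidance_velocity(plane_wave, 0.0))   # ~ HBAR * K / M
```

Note the asymmetry the text goes on to discuss: ψ appears in the particle's equation of motion, but the particle's position appears nowhere in the equation governing ψ.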

In the early 1950s, David Bohm independently reinvented pilot wave theory, which he developed into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continues to explore the implications of his theory, working on extending it to be compatible with special relativity. From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton's third law of motion holds that every action has an equal and opposite reaction, and in Einstein's general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune from influence by the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm's theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers.

Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wave function due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations.

The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein's geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole. In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don't fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn't so: a simple experiment shows that a clock runs slower when it's in a gravitational field. Quantum mechanics says that it isn't possible to determine the position of a particle without its interacting with another particle, but general relativity requires knowledge of the precise positions of particles to determine how spacetime curves, which in turn governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”. In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution. General relativity is nonlinear: trying to disentangle the relationships of parts of the system quickly goes pear-shaped, and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces.
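
The clock claim is quantitative: in the weak-field approximation, two clocks separated by a height h tick at rates differing by a fraction g·h/c². A small sketch of the arithmetic (surface gravity and heights are illustrative figures):

```python
# Gravitational time dilation in the weak-field limit: a clock higher in
# the field runs faster by the fraction g*h / c^2 relative to one below.
G_ACCEL = 9.81   # Earth surface gravity, m/s^2
C = 2.998e8      # speed of light, m/s

def fractional_rate_shift(height_m):
    """Fractional frequency shift between clocks height_m apart."""
    return G_ACCEL * height_m / C**2

# One metre of height: about one part in 10^16, within reach of
# modern optical clocks.
print(fractional_rate_shift(1.0))

# Accumulated over a year, the upper clock gains a few nanoseconds:
SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(fractional_rate_shift(1.0) * SECONDS_PER_YEAR)
```

There is no universal tick rate to plug into the Schrödinger equation; which clock is “the” clock depends on where it sits in the field.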

Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light). Many people have tried and failed to “quantise general relativity”. Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realistic one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He suggests that candidate theories must contain a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object).

Smolin argues that what we perceive, at the human scale and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances. Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today's near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today.

This needn't forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (it's an example of a combinatorial explosion). You might try, for example, to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth. I suggested this almost twenty years ago as a test of Rupert Sheldrake's theory of morphic resonance, but (although doubtless Smolin would shun me for associating his theory with that one) it might produce interesting results.

The book concludes with a very personal look at the challenges facing a working scientist who has concluded the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:

There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn't puzzle me. Ever since I learned classical economics, I've made a habit of looking at the incentives in a system. When you look at academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905 with no academic career or reputation at risk you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today's institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today.

Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.

May 2019