Wednesday, April 18, 2012

String Theory

I touch on string theory again and again as I write about physics. String theory attempts to define particles of matter and force in terms of tiny fundamental structures called strings. Electrons, quarks, photons and other force-carrying bosons, for example, would all be Planck-length strings. These strings would distinguish themselves from each other in terms of vibrational frequency, attachment to other structures called branes, and whether they are open or closed. Here is how matter may ultimately be broken down into strings:

(adapted from & credit: MissMJ, Wikipedia)

I admit that I'm drawn to its mathematical elegance even though my understanding of its mathematics is limited. I find its potential for bringing relativity and quantum mechanics together into one unified theory of physics exciting. Theoretical physicist Michio Kaku, cofounder of string field theory (easy introduction to this theory) and best-selling author, has made great strides in helping the general public understand its promise.



I haven't explored in depth how the notion of forces and particles as strings came to be, why there is more than one string theory, or what the theory's merits and weaknesses are. In this article I attempt to really sink my teeth into string theory. Come join me.

All The Best Stories Seem to Start With Special Relativity . . .

Let me set the stage for you: It's 1930. Einstein's theory of special relativity is an enormous breakthrough in our understanding of space and time, and it all has to do with light. He realized that the speed of light was the same no matter how fast, in what direction, or where the light source was traveling. This means that photons of light will travel at 3 x 10^8 m/s, in a vacuum, whether they come from a flashlight shone out the front window of a space ship traveling at 3/4 light speed relative to an observer or from a flashlight held at rest relative to an observer. The key point of this description of light is the notion of one measurement being relative to another. Any observer will measure light speed to be 3 x 10^8 m/s, in a vacuum. Physicists call the speed of light Lorentz invariant, and that simply means it doesn't change from one observer's reference frame to another's.
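If you like to see the arithmetic, the relativistic velocity-addition formula (a standard result of special relativity, not anything unique to this article) shows why the spaceship example works out: light launched forward from a ship moving at v = 0.75c is still measured at exactly c by an outside observer.

$$ u = \frac{v + u'}{1 + \frac{v\,u'}{c^2}} = \frac{0.75c + c}{1 + \frac{(0.75c)(c)}{c^2}} = \frac{1.75c}{1.75} = c $$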

But if the speed of light doesn't change then how does the spaceship example make sense? Does light from the flashlight know how to slow just enough to maintain light speed? No. And this simple answer has huge implications! It means that time and space must change - they are not Lorentz invariant. This is exactly the point at which Einstein transcended Newton's concept of a universe operating within an unchanging space and with time as an external parameter.

We've Got Space and Time Worked Out . . .

When an object travels close to the speed of light, time (relative to an observer at rest) slows down. This is called time dilation. In fact, time even stops for an object traveling at light speed, such as a photon of light - it stops according to an observer at rest, while for the object itself time is ticking along as usual. That is the core concept of relativity. Space is also affected. If an observer at rest watches an object, let's say some kind of ultrafast motorbike whizzing past at close to light speed, that bike will appear squished flat - contracted along its direction of motion - to the observer. That effect is called length contraction. Time dilation and length contraction are two logical consequences of the Lorentz invariance of light speed. Weird as they are, they are both well verified through experiment. This 9-minute animated video may help you visualize special relativity:



From special relativity we get spacetime, a four-dimensional stretchy fabric, where time and space have an inversely proportional relationship to each other. As time is "stretched," space is "squeezed." As space is stretched, time is squeezed.
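To put rough numbers on the stretching and squeezing, here is a minimal Python sketch of the standard time-dilation and length-contraction formulas; the 0.75c spaceship and the 10 m object are just illustrative values:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor for an object moving at speed v (in m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.75 * C                 # the spaceship from the example above
g = gamma(v)

print(f"gamma at 0.75c = {g:.3f}")
print(f"1 s on the ship's clock takes {g:.3f} s for an observer at rest (time dilation)")
print(f"a 10 m object appears {10.0 / g:.3f} m long to that observer (length contraction)")
```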

 . . . Except for How Subatomic Particles Act in It.

Special relativity was a fantastic leap forward in understanding, but now there is a catch: Schrödinger's equation. This highly successful equation, one of the cornerstones of quantum mechanics (the physics of the very small), was formulated in 1925. It describes the motion of molecules, atoms and subatomic particles, much like Newton's second law of motion describes motion in classical mechanics. It very successfully describes a quantum system, and it, like special relativity, has been well verified through experiment.
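For reference, this is the time-dependent form of Schrödinger's equation for a single particle of mass m in a potential V; the wave function Ψ encodes everything we can know about the particle:

$$ i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t) \right] \Psi(\mathbf{r},t) $$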



The problem is that it isn't a relativistic theory; it doesn't take into account Lorentz transformations. Put another way, it can't describe particles traveling at or near the speed of light because it does not take into account how space and time behave at those speeds.

This Problem Ushers in A New Field of Study: Relativistic Quantum Mechanics

Out of this impasse came some major theoretical breakthroughs. A new field called relativistic quantum mechanics was born, based on two mathematical equations that attempt to deal with space and time on an equal footing: the Klein-Gordon equation and the Dirac equation. These two equations bring the "relativistic" into quantum mechanics.
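Written in compact form (and suppressing some conventions), the two equations look like this - the Klein-Gordon equation for spin-0 fields and the Dirac equation for spin-1/2 particles such as the electron:

$$ \left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2} \right) \phi = 0 \qquad \text{(Klein-Gordon)} $$

$$ \left( i\hbar\, \gamma^\mu \partial_\mu - mc \right) \psi = 0 \qquad \text{(Dirac)} $$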

Electromagnetism Is Worked Out First

From this new field of study came quantum electrodynamics (QED), pioneered by Richard Feynman and others in the 1940s. This theory was a huge breakthrough in physics. It describes how light and matter interact with each other (electromagnetism) through an exchange of virtual photons. It was the first solved relativistic quantum theory - in other words, the first theory in which the equations of quantum mechanics and special relativity agree with each other. It predicts, to extraordinary precision, phenomena such as the anomalous magnetic moment of the electron as well as the Lamb shift of the energy levels of hydrogen atoms.
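To give a flavour of that precision, here is a tiny Python sketch comparing just the first term of the QED series for the electron's anomalous magnetic moment (Schwinger's famous alpha over two pi) with the measured value; the constants are approximate and quoted only to a few digits:

```python
import math

alpha = 1 / 137.035999        # fine-structure constant (approximate)

# Leading-order QED correction to the electron's magnetic moment:
# a_e = (g - 2) / 2, whose first term in the perturbative series is alpha / (2*pi).
a_e_first_term = alpha / (2 * math.pi)
a_e_measured = 0.00115965218  # experimental value, quoted to a few digits

print(f"first QED term : {a_e_first_term:.8f}")
print(f"measured value : {a_e_measured:.8f}")
```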

First Accidental Sighting of String Theory

Relativistic quantum mechanics works great for electromagnetism, one of the four fundamental forces in the universe. The other three fundamental forces - strong, weak and gravity - haven't been nearly so cooperative. Physicists trying to describe the strong force in a relativistic quantum manner came up with something called a dual resonance model in the 1960s. Researchers realized that the mathematical description of the strong force took the shape of a 1-dimensional string. In fact the model was a quantum theory of a relativistic vibrating string! They hoped that these strings, corresponding to massless spin-1 and spin-2 virtual subatomic particles, could describe interactions between hadrons inside atomic nuclei. This string-based description made many predictions that were then soundly contradicted by experimental findings. Within a few years the scientific community lost interest in this so-called string theory of the strong force.

Electroweak Force and Strong Force Are Worked Out

Meanwhile, the weak force was successfully described by combining it with electromagnetism into the electroweak theory in the 1970s. The same wisdom was applied to the strong force soon after, resulting in another successful theory called quantum chromodynamics. Now three of the four fundamental forces could be described, all of them using Feynman's famous diagrams showcasing virtual subatomic force-carrying particles. Below is an example of a Feynman diagram, showing the radiation of a gluon when an electron (e-) and its antimatter twin, a positron (e+), annihilate. They produce a gamma photon (wavy blue line) that becomes a quark/antiquark pair. One of the particles of this pair then radiates a gluon (green wavy line).

(credit: Joel Holdsworth, Wikipedia)

Aside: Notice, for interest's sake, how antimatter particles move backward through time? At the quantum level, time moves in both directions, an action forbidden by the second law of thermodynamics. We'll be talking about this important law later on in this article. If you are curious about time, please see my article on the subject.

Electromagnetism is mediated by the virtual photon. The weak force is mediated by virtual W and Z bosons. And the strong force is mediated by virtual gluons. These theories are all successful relativistic quantum theories.

Now, What About Gravity?

This left pesky gravity. Many physicists have tried to formulate some kind of quantum gravity theory. None worked. As physicists would say, none were renormalizable. That means that unless you can tame the values that arise as infinities in a mathematical formulation (a process called renormalizing), you're hooped. It doesn't work.

In the framework of quantum field theory, an extension of quantum mechanics, the three fundamental force theories are described by first rank tensors (vector fields). A tensor is a geometric object; the stress-energy tensor, for example, describes the density and flux of energy and momentum in spacetime. Tensors are an all-around tool for describing matter, radiation and force fields. This is what a stress tensor looks like in a 3-dimensional Cartesian coordinate system:

(credit: Sanpaz, derivative work: TimothyRias, Wikipedia)

The mathematics of quantum field theory suggests that gravitation can be described in terms of something called the stress-energy tensor. This is a second rank tensor, compared to the first rank tensors of the other three fundamental forces. The field that couples to it corresponds to a massless spin-2 particle that could mediate the force of gravity, a particle analogous to the virtual photons, W and Z bosons and gluons. This massless spin-2 particle would "act" like gravitation because it interacts with the stress-energy tensor the same way a gravitational field does. This means that if you find a massless spin-2 particle, you can rest assured you've discovered the quantum field counterpart for gravity - the graviton. The problem is that gravitons would not be easy to find.
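In general relativity, the tie between gravity and the stress-energy tensor is already explicit: Einstein's field equations (shown here without the cosmological constant term) set spacetime curvature on the left equal to the energy and momentum content of spacetime, T, on the right. A quantum particle that couples to T in exactly this way would therefore mimic gravity:

$$ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} $$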

To a few physicists this spin-2 particle had an eerily familiar ring to it. Didn't that old dual resonance model say something about a massless spin-2 particle for the strong force?

So, our stage has been set, and this is where we pick up on how string theory came to be.

String Theory as a Relativistic Quantum Theory of Gravity. Or not.

Our current understanding of gravity is that it is an emergent property of spacetime. It is based on Einstein's extension of special relativity to his theory of general relativity, which describes gravity. Objects with mass curve spacetime, and we observe that curvature as gravity, visualized in this 4-minute video:



This understanding works great except at the extremes at both ends.

Singularities Don't Work

Gravity at the heart of black holes, the densest objects in the universe, is reduced to a singularity - infinite gravity confined to a single point. Singularities such as this one are generally a sign that there is a problem with the mathematics behind the theory: the theory is not renormalizable. At the other extreme, gravity is not even a variable within the quantum mechanics equations describing subatomic behaviour. Unless you are trying to describe the behaviour of collapsed matter (where the effects of gravity are enormous), leaving gravity, a relatively weak force, out of quantum descriptions is not a problem. But it also means that we can't describe the behaviour of particles moving near the speed of light in strong gravity, where relativistic effects (on time, length and gravity) become significant.

Black holes (formed from collapsed matter) are a unique laboratory where both quantum effects and significant gravitational effects merge into one phenomenon. Only a relativistic quantum theory of gravity could adequately describe a black hole. Incidentally, only such a theory could describe the beginning of the Big Bang as well. Like relativity, quantum mechanics does not permit singularities either - particles cannot inhabit a space smaller than their wavelengths. Both general relativity and quantum mechanics break down at black hole (and Big Bang) singularities.

Strings Don't Have Singularities

String theory gets around the singularity problems in a mathematically elegant way. It offers a smooth two-dimensional surface that is analogous to the Feynman diagrams describing the other three fundamental forces. Tiny 1-dimensional strings are described by loop integrals over this surface, with their minimum length being the Planck length, about 10^-35 m. Something called the Polyakov action describes how these strings want to contract to minimize their potential energy, like springs do, but conservation of energy keeps them from disappearing.
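Schematically (and suppressing some conventions), the Polyakov action adds up the "cost" of the two-dimensional worldsheet a string sweeps out; T is the string tension, the X's are the string's coordinates in spacetime, and h is the metric on the worldsheet itself:

$$ S_P = -\frac{T}{2} \int d^2\sigma \, \sqrt{-h}\, h^{ab}\, \partial_a X^\mu\, \partial_b X^\nu\, \eta_{\mu\nu} $$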

This construct avoids the zero-distance and infinite-momentum problems of such integrals taken over particle, or point, loops (the mathematical way of saying singularity). For strings, the relationship between distance and momentum doesn't break down at a singularity. It becomes a measure of string tension instead. Just like a guitar string, as the tension in a fundamental string is increased, the wave velocity and the normal-mode frequencies increase. In a guitar you hear a higher note. In a fundamental string you get a different particle.
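The guitar-string half of that analogy is ordinary classical physics, and a few lines of Python make the point: raise the tension and both the wave velocity and the harmonics rise. The numbers below are just illustrative guitar-like values, not anything fundamental:

```python
import math

def string_modes(tension, mass_per_length, length, n_modes=4):
    """Wave velocity and the first few normal-mode frequencies of an ideal
    string fixed at both ends (v = sqrt(T/mu), f_n = n * v / (2 * L))."""
    v = math.sqrt(tension / mass_per_length)
    return v, [n * v / (2 * length) for n in range(1, n_modes + 1)]

# Rough, guitar-like numbers chosen only to illustrate the trend.
for tension in (60.0, 90.0):  # newtons
    v, freqs = string_modes(tension, mass_per_length=0.005, length=0.65)
    print(f"T = {tension:5.1f} N -> v = {v:6.1f} m/s, harmonics (Hz):",
          [round(f, 1) for f in freqs])
```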

How To Build A Relativistic Quantum String Theory

The Polyakov action above works for nonrelativistic strings. But what if the wave velocity approaches light speed? You need to incorporate the formulas that describe Lorentz transformation properties. This gets tricky: think about a string vibrating. Its oscillation in space and time sweeps out a two-dimensional surface in spacetime. Physicists call that surface a world sheet, and the division of space and time on it depends on the observer (remember?). This diagram gives you an idea of what a worldsheet is. Notice the brane shown in blue; we will be discussing branes a little later on. Keep in mind this is a three-dimensional representation of 4-dimensional spacetime (one space dimension is left out so we can visualize the structures):

(credit: Stevertigo, en.wikipedia)

When physicists plug relativity into the formula describing strings, they discover that this fundamental string no longer resembles a guitar string. As it oscillates it's no longer tied down at either end; it travels freely through spacetime instead. They also discover there are two kinds of strings - open strings like this one, and closed strings. In closed strings, the boundary conditions are periodic. The mathematical solutions look like oscillations that can move around the string in one of two opposite directions. You can think of them more simply like this: the ends are floppy on open strings and joined into a loop on closed strings. On closed strings, you can have a right-mover closed string mode or a left-mover closed string mode, which we'll talk about later on.

So far, we have created a model for a classical relativistic string. Now we have to incorporate quantum mechanics. We have to make the string's momentum and position obey what are called quantum commutation relations. When we incorporate these relations, we get something called quantized string oscillator modes. These modes also happen to beautifully describe the quantum states of the mass and spin of particles in a relativistic quantum field theory. Particles, in string theory, are harmonic notes on fundamental 1-dimensional strings.
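The quantization step is the same move that turns classical mechanics into quantum mechanics: position and momentum stop being plain numbers and become operators that fail to commute. For a point particle the canonical relation is the familiar one below; for a string, an analogous relation is imposed on each of its oscillation modes (I'm leaving out the mode indices and conventions here):

$$ [\hat{x}, \hat{p}] = i\hbar $$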

Oops, Didn't Mean To Make All Those Extra Dimensions

This last part sounded very slick but there is a catch when you incorporate quantum mechanics into string theory. When you use the quantum mathematics above, you get, in addition to particles, quantum states with negative norm. Physicists call these bad ghosts, and good mathematical formulations don't have them. They aren't particles at all; they are in fact unphysical states, states with negative probability. If you increase the number of spacetime dimensions from the usual 4 to 26 (one time and 25 spatial), these unphysical states disappear. This means that the quantum mechanics is only consistent if spacetime has 26 dimensions. This sounds like exchanging one problem for another one. However, because the theory can be formulated so that the 22 excess dimensions fold up into a kind of compact manifold, leaving the familiar 3 dimensions of space and 1 dimension of time visible to the ordinary phenomena we (and physicists) encounter, the theory can still make sense.

One of the particle state solutions of a closed string with two units of spin is the massless graviton.

One Theory Becomes 4 Theories

The string theory I just described is called bosonic string theory. It was developed in the late 1960s and it was the first generic string theory formulated. As you will soon see, it doesn't describe real-life particles all that well, but it's a good toy theory, one that students usually learn first when they study string theory. Bosonic string theory gives rise to 4 different string theories, depending on how you choose the boundary conditions used to solve the equations of motion. Each of these closely related theories has a graviton particle in it. Each also has a tachyon, a hypothetical particle that moves faster than the speed of light (and backward through time). The tachyon corresponds to the lowest-energy ground state of the string. Unlike the graviton, tachyons, which break Lorentz invariance (the rules of special relativity), are not believed by most physicists to exist. Sorry Star Trek fans.

4 Theories Become Lots Of Theories

As I mentioned, bosonic string theory isn't realistic in terms of its particles. The particles it describes all have whole integer spins (0, 1, 2, etc.). These are called, well, no surprise, bosons. They are generally force-mediating virtual particles. In contrast, particles that make up familiar matter are called fermions. They all have half integer spins. Examples are the quarks and electrons inside atoms. Bosons (red squares) and fermions (purple and green squares) make up the elementary particles of the Standard Model in physics, shown here:
(credit: MissMJ, Wikipedia)

Adding fermions to the formulas gives you a whole new set of negative norm states, or bad ghosts. To get rid of these, you have to confine the number of spacetime dimensions to ten and you have to make the theory supersymmetric, so that there are equal numbers of bosons and fermions in the mix. Here too, several string theories arise as you choose various boundary conditions for the strings. This process is even more complex than it is for bosons. There is one high note though: all of these string theories contain the graviton, but none contain the problematic tachyon of the bosonic theories.

Superstring Theory's Handedness Problem

Supersymmetric string (also called superstring) theory, developed in the early 1970s, has a fermionic partner for every boson. So the supersymmetric partner of the graviton would be the gravitino, with spin 3/2. Some researchers think this particle might exist as a stable fermion with mass and might be a component of dark matter. However, at present both it and the graviton are theoretical particles.

Supersymmetric string theories come with their own set of problems. For example, Type IIA superstring theory incorporates the handedness that massless fermions exhibit. Allow me to use a particle called a neutrino to explain what handedness means: a neutrino, a real fermion with spin 1/2 and almost zero mass, can theoretically spin in one of two directions - with its spin axis in the same direction as its motion or with its spin axis in the opposite direction to its motion, as shown here:


Type IIA string theory can describe this handedness in terms of string oscillations that move in opposite directions. There is no problem with any of this right up to here: the theory predicts that every fermion has a partner of opposite handedness. In real life, however, this doesn't pan out. All observed neutrinos, fermions that travel at essentially light speed, are left-handed (for fermions with appreciable mass, such as electrons and quarks, handedness is not fixed because these particles travel at different speeds, all under light speed, and their handedness depends on the reference frame in which they are observed).

Type IIB superstring theory gives the two superspace coordinates these theories use the same handedness. This means that it predicts massless fermions without partners of opposite handedness, as we see in real life. But it also means there is no way to add a gauge symmetry to the theory. Gauge symmetry plays a fundamental role in particle physics: all the fundamental forces (except gravity) are expressed in terms of gauge symmetries, and that means we can't include any of them in Type IIB string theory.

Several Breakthroughs Lead to a New Unified Superstring Theory

As you have seen, none of these string theories gives us a complete description of reality, just various snippets of it. This led theorists to try something new. They separated the left and right oscillation modes of a string and treated them as two different theories. In 1984, they tried something crazy and it seemed to work: they realized a consistent theory could be made by combining a bosonic string theory moving in one direction with a supersymmetric theory with a single superspace coordinate moving in the opposite direction, yielding two new theories (the so-called heterotic string theories). In doing so, the 26 spacetime dimensions of the bosonic theory dissolve into the 10 spacetime dimensions of the supersymmetric theory. In 1985, physicists realized they could achieve N=1 supersymmetry. This is a good, or at least simplifying, thing - it means that each particle has just one supersymmetric copy of itself.

They also formulated a way to compact the six extra dimensions down into a microscopic structure called a Calabi-Yau manifold, a 3-dimensional section of which is represented here:
(credit: Lunch, en.wikipedia (this image originally appeared in Scientific American magazine))

You don't see these dimensions and they don't interact in ordinary physics (or maybe they do, you'll see what I mean a little later). This series of discoveries held such promise it was referred to as the first superstring revolution, with a cover story in Discover magazine (November 1986) devoted to it.

Welcome M-Theory (Membranes, Mothers, Monsters and Magic)

A second superstring revolution in the mid-1990s was ushered in by a new string theory called M-theory. Physicists found strong evidence that all the previous string theories could be considered as different limits of a single 11-dimensional theory called M-theory. This theory requires the inclusion of higher dimensional objects called D-branes. On this new class of objects, open strings can end with something called Dirichlet boundary conditions, hence the "D." D-branes add a new level of mathematical richness to the theory, opening up the possibility of building cosmological models with it. We can think of D-branes as objects localized in space, so a D0-brane is a single point, a D1-brane is a line, D2-branes are planes, and so on:


You can also have instantonic D-branes and these are localized in both space and in time (I love this phrase! It sounds like some new kind of zombie food).

Taking M-Theory Out For A First Spin, Tackling the Black Hole Entropy Puzzle

Black holes present a huge mystery to physicists. They appear to break the second law of thermodynamics, one of the most basic and important laws of science. According to this law, all processes tend toward increasing entropy, which is a measure of disorder in a closed system. Black holes seem to proceed in the opposite direction. All the objects that are swallowed by a black hole are lost from the universe and that makes the universe, a closed system, simpler. It decreases the universe's overall entropy. Black holes violate the law in another related way as well. A quantum mechanical version of this second law states that information in a closed system cannot be lost. Yet for atoms of matter that are sucked in, the quantum states of those atoms appear to be lost. Hawking radiation from black holes eventually carries energy back out into the universe, but it is generic in nature; it doesn't preserve the quantum information that went in. Physicists turned to an interesting formulation of string theory to help solve this paradox. Using this formulation, they can preserve quantum information in the universe even when it is sucked into black holes. The solution, which I'm going to try to describe next, is a complex mathematical journey, the kind of journey string theorists embark on every day.

Analyzing the D-branes in M-theory, physicists came up with something called the anti de Sitter/conformal field theory correspondence (AdS/CFT for short). Allow me to break down and explain this mouthful. You might recognize anti de Sitter space from my previous article called Holographic Universe (and that's a clue about the string theory solution you're going to follow here!). Anti de Sitter space is a hyperbolic spacetime (one of several spacetime geometries physicists use) that behaves as special relativity says it should. A field describes some quantity - a force such as gravity, for example - at every point in space. A conformal transformation is a little bit harder to explain. It works like this: Let's say you have a spherical globe of Earth and you want to stretch that out onto a flat surface, make a map in other words. If you don't want a bunch of bumps all over it you have to do a geometric conformal transformation. Even though landmasses may end up distorted on the map, the angles where latitude and longitude meet up will be preserved at 90°, just like on the original globe. In our case, we are dealing with something more complex - a mathematical conformal group (a bunch of transformations lumped together) that represents supersymmetry, along with another group called an internal symmetry group.

The two conformal groups have to "talk" somehow with the anti de Sitter spacetime mathematics, and from all this you will get a description of this special field theory. Correspondence does this job for us. Correspondence is a math term that relates two different things, like a translator between two mathematical languages. Here, we are going to correspond between anti de Sitter space, a description of relativistic curved spacetime, and the product of the two groups of transformations. Something very cool happens when we do this: If you take any representation of anti de Sitter space with a certain number of dimensions you get a corresponding conformal field theory that always has one less dimension in it. Does this sound familiar to you? This AdS/CFT correspondence is a concrete realization of the holographic principle, which describes all the information of three-dimensional space limited to, spread across, and preserved on the two-dimensional space of a black hole's event horizon. In this way, information is not lost from the universe when stuff is sucked in and the information paradox of black holes is solved.

M-Theory Might Be THE Theory (When it Gets Older)

The mathematics of M-theory (the M generally stands for "membrane" but you can call it mother, monster, matrix, mystery or magic - I'm partial to monster, considering its Frankenstein-ish origin) suggests something quite interesting: that string theory might not be about strings after all, that 1-dimensional strings may actually be slices of a 2-dimensional membrane vibrating in 11-dimensional space. In doing so, it goes even further to bring all the string theories together into one theory. In other words, some theorists believe that all the other string theories are simply mathematical descriptions of different angled views of the same thing. M-theory potentially brings them all under one umbrella, with the possibility that all phenomena in the universe could be described by one theory. Some physicists have great hope that M-theory will be the Theory of Everything they've long been looking for. One of them is physicist Michio Kaku. You can read his article on M-theory here.

M-theory is not a complete construction yet. So far, the theory has passed many rigorous tests for mathematical consistency, a good sign. But to be a truly successful theory, it must be able to predict phenomena which can then be experimentally confirmed, a vital missing ingredient. Here, string theory is a bit odd: as a scientist you usually come up against some new data, a new behaviour, process or structure for example, and you try to formulate a theory based on that new information. Then you design experiments to test your theory. Did it predict your new results? String theory, on the other hand, was born from a series of mathematical formulas. The string, the D-brane, the manifolds, the graviton all live strictly inside math. They have only been "seen" in formulas. In a way string theory is a breech birth, and now scientists are trying to fit the data to it. I wonder, as a string theorist, is it tempting to get lost in the beautiful math? Writing this article, I felt like I was drowning in it! String theory provides unequivocal proof that all aspiring physicists need to know their mathematics. At the end of the day, string theory, to be a successful scientific theory, must correlate with real phenomena. Well-known physicists such as Stephen Hawking, Edward Witten and Leonard Susskind believe that M-theory is a step in the right direction toward successfully describing nature at its most fundamental level. Other physicists, notably Richard Feynman and Sheldon Glashow, were not convinced that string theory can ever be verified at the energies currently available to us, and they questioned its validity.

Attempts To Confirm Strings, So Far

Some researchers recently looked to the Large Hadron Collider (LHC) for signs of supersymmetric particles. These would not be easy to create, as they are thought to exist only at energies comparable to those that existed right after the Big Bang, when all the symmetries of the universe were still intact. Finding them wouldn't prove string theory either, but it would provide a good chunk of circumstantial evidence in its favour. Unfortunately the results so far have been disappointing. Supersymmetry, like M-theory which encompasses it, has a certain simplicity, beauty and mathematical elegance going for it, but those things don't mean it's a realistic theory. In this way, these results might prove to be a cautionary tale for string theorists.

A team of researchers from Vienna is trying to actually "see" string theory by taking a very close look at how tiny variations in gravity act on ultra-cold, slow-moving neutrons confined in a tiny cavity. Neutrons created in a fission reactor are slowed down to just 5 m/s in a material called a moderator. They are then shot between two plates only 25 µm apart. The upper plate absorbs neutrons and the lower plate reflects them. As they travel through, they trace out an arc because the only force acting on them is gravity, just like a ball thrown sideways eventually hits the ground. When the neutrons bounce off the bottom plate, they can be absorbed by the top plate; in that case they don't reach the other end and are not detected. If the researchers vibrate the lower plate at very specific frequencies, they find that the number of neutrons detected drops at specific resonant frequencies - those plate frequencies boost the neutrons into higher energy quantum states. Using neutrons eliminates extraneous interactions, such as short-range electrical interactions, so that gravity's effect can theoretically be observed at the quantum level. Some researchers are looking for results that show a slight deviation from Newtonian gravity. Such a deviation could be the first direct evidence that gravity interacts with extra dimensions. It could also be evidence for axions, hypothetical particles that might make up dark matter in the universe. I have not been able to find definitive results associated with this experiment yet.

Meanwhile, another group of physicists at the LHC came up with an ingenious way to look for the extra hidden dimensions of string theory. It rests on creating tiny Planck-sized black holes by smashing protons together. These black holes would be very unstable and would immediately decay, releasing a slew of subatomic particles in the process. They figured out what kinds of particles would be created if the universe contained 10 or 11 dimensions. Even in massively energetic collisions of 4.5 TeV, no micro black holes formed. String theorists suspect that gravity should increase more rapidly with decreasing distance if there are dimensions beyond our usual 3 spatial dimensions. In theory a black hole can have a minimum mass of about 22 µg (the Planck mass). To concentrate that much equivalent energy in one place is beyond the capacity of our current colliders. However, in tiny, micro black hole-sized regions, the extra-dimensional gravitational boost means that a micro black hole could form at energies as low as the TeV range. The results are a setback for string theory, but not necessarily a knockout punch.
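That 22 µg figure is just the Planck mass, which you can check in a couple of lines of Python (the constants are rounded, so the result is approximate):

```python
import math

hbar = 1.054_571_8e-34   # reduced Planck constant, J*s
c = 2.997_924_58e8       # speed of light, m/s
G = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2

planck_mass_kg = math.sqrt(hbar * c / G)
print(f"Planck mass ~ {planck_mass_kg:.3e} kg ~ {planck_mass_kg * 1e9:.1f} micrograms")
```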

It is so technically daunting to test for the existence of behaviours and structures at the Planck scale, where strings and tiny Calabi-Yau manifolds of curled up extra dimensions are thought to live, that there may never be a way to directly confirm the existence of M-theory. That doesn't mean that the search for new experimental designs has slowed down at all, however.

Physicists can also approach confirmation from the opposite direction by solving the theory, reducing it down at low (everyday universe) energies into a theory of ordinary particles like electrons, protons and so on, which, if the theory is solved, should exactly match the slew of real-life particles out there. While this would not be physical confirmation in itself, a mathematically complete theory that could predict the known subatomic particles would be very compelling. Unfortunately, reaching a complete mathematical solution is predicted to be extremely difficult, if not impossible.

M-theory not only brings us the tantalizing possibility of describing quantum gravity, it provides a possible framework in which all particles and their interactions can be described. M-theory may even explain why gravity is such a weak fundamental force. Not all strings are confined to D-branes. The graviton is speculated to be a closed loop of string, and that means it is free to move about through spacetime. The other force-mediating particles are strings with endpoints that confine them to their D-branes. In this sense gravitons can "hide" among higher dimensions, so that gravity might be diluted by those extra dimensions. This might explain why it is so much weaker than the other three fundamental forces: in a 3-dimensional universe, gravitational attraction follows the inverse square law. When the distance between two objects is doubled, the gravitational attraction between them is reduced to 1/4. In 4 spatial dimensions, the law becomes an inverse cube, so doubling the distance reduces the attraction to 1/8 of the original, and so on. This is the basic logic behind the micro black hole experiment at the LHC, and to some extent behind the slow neutron experiment as well, both of which are described above.
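The dimensional bookkeeping is easy to check: if a force spreads out through all available space, its strength falls off as 1 over distance to the power (number of spatial dimensions minus 1). A few lines of Python show how quickly extra dimensions dilute a force when the distance doubles:

```python
def falloff_when_distance_doubles(spatial_dims):
    """Factor by which a force weakens when distance doubles, assuming it
    spreads through all spatial dimensions: F ~ 1 / r**(spatial_dims - 1)."""
    return 2 ** (spatial_dims - 1)

for d in (3, 4, 5, 6):
    print(f"{d} spatial dimensions: doubling the distance cuts the force to "
          f"1/{falloff_when_distance_doubles(d)}")
```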

It might be wise not to fall too much in love with the construct of spacetime dimensions, and perhaps it's a waste of time looking for them. Just as the strings in M-theory may not actually be strings, some physicists (John Schwarz and Paul Townsend, as mentioned in Dr. Kaku's article M-Theory: The Mother of all Strings) are now questioning the very idea of dimensions in M-theory. Dimensions emerge only as possible solutions to the mathematics, and they emerge only in a semi-classical context.

So this is where string theory leaves us, for now. M-theory is still very new. Its complex mathematics is not finished, and that leaves us in an awkward place. We have this astoundingly beautiful mathematical construct, but of what, exactly?

Friday, April 6, 2012

Holographic Universe

The idea that we, Earth, the stars, everything, live in a hologram seems stranger than any cooked up sci-fi fantasy, and yet that is exactly what a contemporary theory of the universe tells us.

Where Did this Weird Theory Come From?

The inspiration for this theory came from Stephen Hawking's work on black hole entropy. Entropy is often loosely described as the energy in a system that is unavailable to do work; in many ways it can be described as the state of disorder in a system. According to the second law of thermodynamics, the entropy of any closed system always increases or stays the same. Another principle in physics states that all physical laws should work the same everywhere in the universe. Black holes, like the one shown below, present a big problem because they appear to decrease entropy, and thus violate the second law of thermodynamics.

(Wikipedia copyright user: Alain r)

They reduce the entropy of the universe because the information encoded in objects that are sucked in is irretrievably lost.

What's just as strange about black hole entropy is that the maximum possible entropy of a black hole scales with its radius squared, not cubed, as if a black hole were a flat two-dimensional object. This means that all the information about all the objects that have ever fallen into a black hole somehow seems to be confined to the spherical surface of the event horizon. The event horizon is like the outer shell of a black hole. It is the point of no return, where even light itself cannot escape, and no one knows what lies within this shell. Current theory tells us that all the mass of the original collapsed star and all the objects that have been swallowed since are reduced to a radius of zero at a central point inside the horizon, called a singularity.
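That area scaling is captured by the Bekenstein-Hawking entropy formula, in which A is the area of the event horizon: the entropy grows with surface area rather than with enclosed volume.

$$ S_{BH} = \frac{k_B\, c^3\, A}{4\, G\, \hbar} $$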

The Holographic Principle solves two closely related important problems with black hole entropy. The first problem is that black holes decrease entropy, as described above. Second, they violate the law of conservation of information. This law, a more specific interpretation of the second law of thermodynamics, is most often described in a quantum sense. When an atom, for example, falls into a black hole, all of its information is lost. For an atom, that means its wave function, according to quantum mechanics, is gone from the universe. This loss violates an important principle in science - that information is conserved in the quantum sense.

Hawking showed us that black holes slowly radiate their energy away, they slowly evaporate in other words. According to the no-hair theorem in physics, this too is a problem. Hawking radiation should be completely independent of the material going into a black hole. It should be a mixed (think of it as generic) quantum state. Any particular initial quantum state of the material going into a black hole is therefore lost, and according to the law of information conservation, it can't be.

The problem of information loss can also be described by something called the entangled pure state situation. This is how it works: Let's say a photon has just converted into a particle pair in space (according to quantum theory this happens all the time) - an electron and a positron are spontaneously created. These two particles should quickly annihilate each other and release a new photon. But what if one particle is sucked into a black hole while the other one escapes? Half the information of the original photon (called a partial trace) is lost from a closed physical system (the universe). This also violates the second law of thermodynamics at the quantum level.

From Black Hole Riddles to Holograms

The information problem led to a huge battle between physicists, with Hawking and Kip Thorne on one side insisting that quantum information must be lost, and Leonard Susskind and Gerard 't Hooft on the other side insisting that that is impossible. Eventually, 't Hooft proposed a holographic theory as a solution to the problem and Susskind provided a string theory interpretation of that solution.

Hawking, recognizing the problem of information loss, suggested that quantum fluctuations on the event horizon could theoretically allow all information to escape from a black hole. As long as information comes back out, the information paradox is solved. From this came the idea of information being contained on the event horizon of a black hole - contained in a two-dimensional space, in other words - and that is the kernel at the center of the Holographic Principle, introduced here in this 3-minute video:


(from MICHAEL COULSON on Vimeo)

What is the Holographic Principle?

A hologram is a three-dimensional image encoded in two dimensions. Below is a mouse hologram, for example; two photos of it are taken from different views.


What's really interesting about thinking of the universe as a holographic projection is that it offers a possible description of quantum gravity, something physicists have been seeking for decades. String theory allows a lower-dimensional description of the universe, in which gravity emerges in a holographic way. This could account for why physicists haven't been able to find a force-carrier particle for gravity. The other three fundamental forces each have a gauge boson, a virtual particle that mediates a fundamental force. It also offers an explanation for why gravitational force is so much weaker than the other three forces. It is so weak, in fact, that it can be completely omitted from quantum equations.

Quantum mechanics describes the tiny subatomic world very well, and relativity describes the behaviours of massive objects in the vastness of space. The Holographic Principle could be the bridge between quantum mechanics and relativity that physicists have been searching for in their quest to find a single unified theory that can describe both the very small and the very big. String theory attempts to describe gravity as an emergent property of tiny fundamental vibrating strings. It doesn't attempt to describe gravity in terms of a force-mediating particle; instead, gravity is, in a sense, an illusion. String theory (or string theories, as there is a whole set of them right now) is based on an elegant set of mathematical formulae, but there are many physical phenomena it cannot describe and, so far, there is no way to test it. Gravity, as an emergent holographic illusion, could bridge two competing (and as of now mutually exclusive) theories of how the universe works:

(QFT stands for quantum field theory)

Black holes present a riddle to physicists because they require both a relativistic and a quantum description in order to understand them, and the Holographic Principle offers a potential way to describe them in which both quantum mechanics and relativity can be satisfied.

Physicist Juan Maldacena worked with a mathematical description of spacetime called anti de Sitter space to describe a holographic universe. To us, the universe seems infinite, yet we can deduce that it has some kind of boundary that has been expanding ever since the Big Bang. His mathematical metric resolves this apparent anomaly. He used a three-dimensional analogue of what is called the hyperbolic plane as the boundary, or surface, of our universe. This plane is two-dimensional but it is wildly twisted (and impossible to visualize). We, living within this bounded surface, don't notice the twisting. It is sort of like the distortion you see when looking at a global map, a 2-D representation of a 3-D world. His 3-D analogue of this plane, coupled with a fourth dimension, time, gives us a model holographic universe. String theory, describing the interior of the universe, has a sort of 2-D shadow on the inner boundary of it. Every fundamental particle has a 2-D counterpart on that boundary. Using this theory, you can describe any object in a gravitational field, whether it's a subatomic particle or a massive neutron star. You could describe a black hole. Try this 83-minute video from the 2010 World Science Festival called "Black Holes and Holographic Worlds," where the world's best physicists describe this topic for nonscientists:



As you can see, it has been very tempting to expand the holographic idea to the universe as a whole. In this sense, the universe is a two-dimensional information structure "painted on" the cosmological horizon, the edge of space analogous to the event horizon of a black hole. It even provides a new conceptualization of entropy - as the surface of the universe expands, more information can be stored on its 2-D surface and the entropy of the universe increases as a result.

According to this theory, our familiar three dimensions of space only work at the macroscopic scale and at low energies. When we look at very high-energy events or at the subatomic scale, or both simultaneously in the case of black holes, the underlying reality of a 2-dimensional space seems to become apparent. The event horizon of a black hole is, therefore, a peek at this inner reality of the universe. Leonard Susskind describes the Holographic Principle in his own words in this 13-minute interview:



In order for this theory to work, there must be a limit on information density. Entropy can be described as degrees of freedom in a system of matter and/or energy. There is an upper limit to the density of information that can be packed into a given volume, and that information can be translated to two spatial dimensions. As you subdivide matter into its atoms and then into its sub-particles and finally down into various fundamental particles, you increase the degrees of freedom. The Holographic Principle implies that at some point there is a limit to the subdivisions you can make, and at that point you reach some kind of fundamental particle containing a single bit of information (like the zeros and ones of a computer's binary language - or like a 1-dimensional string in string theory). And here is where the universe, according to the Holographic Principle, should ultimately break down into fundamental pixels of reality, like the pixels in a photographic image, in this case grains of spacetime calculated to be about a Planck length across.
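To get a feel for that information limit, here is a rough back-of-the-envelope Python sketch that counts roughly one bit per four Planck areas on a region's boundary surface (the Bekenstein-Hawking area scaling; I'm ignoring factors of order ln 2, and the radius of the observable universe is an assumed round number):

```python
import math

PLANCK_LENGTH = 1.616e-35     # metres (approximate)

def holographic_bits(radius_m):
    """Very rough upper bound on the information content of a spherical region,
    counted as roughly one bit per four Planck areas of its boundary surface
    (the Bekenstein-Hawking area scaling, ignoring factors of order ln 2)."""
    area = 4.0 * math.pi * radius_m ** 2
    return area / (4.0 * PLANCK_LENGTH ** 2)

for label, r in [("a 1 m sphere", 1.0),
                 ("the observable universe (radius ~4.4e26 m)", 4.4e26)]:
    print(f"{label}: ~{holographic_bits(r):.2e} bits")
```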

Searching for the Universe's Pixels First Try: Gravitational Wave Noise

Some scientists, such as Craig Hogan of Fermilab, believe that this graininess, equivalent to quantum fuzziness, can be scaled up across the holographic universe and detected as very minute gravitational waves. The GEO 600 in Germany is one of the most sensitive gravitational wave detectors built. Recall general relativity for a moment. Gravity bends spacetime and, as a result, it can shorten distances. Disturbances in spacetime caused by heavy-weight binary star systems made up of white dwarfs, neutron stars or black holes can ripple right across the universe as waves. The Hulse-Taylor binary, two neutron stars orbiting a common center of mass, has an orbital decay that is in exact agreement with the loss of energy through gravitational waves predicted by general relativity. These ripples, however, are expected to be very minute and none so far have been directly detected (by any instrument). The GEO 600 can detect relative changes in distance on the scale of 10^-21 - that's about the size of a single atom compared to the distance between the Sun and Earth! Along with minute gravitational waves, some of the noise in the GEO 600 may be holographic noise. Hogan's interpretation of the noise from GEO 600 caused quite a stir in late 2010. He suggested that the noise is scaled up Planck-length graininess, massively scaled up graininess. Planck length is on the scale of 10^-35 m, a difference of 10^14 between Planck length and the detector's sensitivity! Hogan claims that the noise should grow grainier across great distances, much like a low-res movie played on a screen that is too big. So, larger scale (10^-21 or more) changes in distance could be traced back to Planck-scale graininess at the edge of the universe, and could be detected by the GEO 600.

Unfortunately, it is very common for gravitational wave detectors to detect a noise background - they are extremely sensitive instruments. And physicists are still in the process of identifying and removing sources of noise. And, as mentioned, gravitational waves have yet to be directly detected.

Searching for the Universe's Pixels Second Try: Gamma Ray Polarization

Some physicists calculate holographic noise, if it exists, to be on a much smaller scale than any current instrument can measure, much smaller than Hogan's estimate. And so they have devised another ingenious way to look for evidence of holographic graininess in spacetime. The European Space Agency used its Integral gamma-ray observatory, launched in 2002, to look at gamma ray bursts. These are bursts of extremely high-energy gamma photons from events such as supernovae. When the photons travel through spacetime, their polarization (you can think of it as a twist) is slightly affected. A polarized gamma ray preferentially scatters in a direction perpendicular to the direction of polarization. If spacetime is smooth, as Einstein predicted, the polarization should remain random - there shouldn't be any difference between higher and lower energy photons no matter how far they travel. But if spacetime is grainy, as the Holographic Principle predicts, the degree of polarization of the gamma rays should depend on distance and energy. The detector should see random polarizations if Einstein was right, or a bias toward a particular polarization if the holographic theory is right. I'm going to leave you in suspense for a moment about what they found, in order to introduce you to another very interesting problem that the Holographic Principle might solve, called locality.

A Fascinating New Look at Reality

Hogan's hypothesis violates a tenet of special relativity called locality. This means that an object is directly influenced only by its immediate surroundings. At first thought, this might seem to be a deathblow against the Holographic Principle. But locality is already violated by a widely accepted (and experimentally verified) phenomenon called quantum entanglement. Let me give you an example to illustrate how this works: During nuclear decay processes, the events that take place must obey various conservation laws in physics. In the quantum world this means that the new particles generated as a particle decays must have specific quantum states. If a pair of particles is generated having a two-state spin, for example, one particle must be spin up and the other must be spin down. These particles are called an entangled pair. Let's say they fly away in opposite directions. Now here is the rub: when two objects (subatomic particles, molecules, and even diamonds have been observed to obey this!) interact physically and then become separated, each member of that pair is described by the same quantum description. That means their state is indefinite until it is measured, according to the Copenhagen interpretation of quantum mechanics. They are each in an equivalent state of quantum superposition. Much later, when they are across the universe from each other, one person measures the spin of one of these particles. There is a 50% chance it will be spin-up and a 50% chance it will be spin-down, no matter which one of the pair he measures. When one particle is measured, the quantum states of both particles collapse. If a second person then measures the second particle, its spin is 100% predictable - it will be the opposite spin of the first one measured. Let's say particle A collapsed into a spin up quantum state. How did particle B instantly get wind of that news from across the universe and collapse into a spin down state?
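Here is a toy Python simulation of that spin example. A word of caution: it reproduces only the statistics described above (50/50 outcomes, always opposite partners), not the deeper puzzle of how particle B "knows." A classical program can mimic these particular correlations because it carries a hidden variable around with each pair, which is exactly the kind of loophole Bell's theorem closes once more elaborate measurement choices are allowed.

```python
import random

def measure_entangled_pair():
    """Toy model of the spin example above: each pair is perfectly
    anti-correlated, but which member comes up 'up' is 50/50 random."""
    a = random.choice(["up", "down"])
    b = "down" if a == "up" else "up"   # the partner always reads opposite
    return a, b

results = [measure_entangled_pair() for _ in range(10_000)]
up_fraction = sum(1 for a, _ in results if a == "up") / len(results)
always_opposite = all(a != b for a, b in results)

print(f"particle A read spin-up {up_fraction:.1%} of the time")
print(f"every pair anti-correlated: {always_opposite}")
```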

Nonlocality implies some kind of across-the-universe instantaneous communication between two particles. That's a violation of special relativity, which states that nothing (starships, light or communication included) can exceed light speed. Experimental results have shown that any effects due to entanglement would have to travel at least thousands of times faster than the speed of light.

What does this mean? It suggests that either nonlocality operates in quantum physics or there are hidden variables we don't know about yet. Perhaps the measured spin of the particles is just one element of a larger, yet unknown, physical reality, or the assumption that we can measure a particle and collapse its spin into one definite state is not quite accurate. No transmission of information is possible - there is no force transmission fast enough to account for projecting information across space between two separate physical systems. The fact that the correlation happens anyway is deeply unsettling. It is not easy to live in a classical world looking out into a quantum mechanical world, to use the words of physicist John Stewart Bell, who proposed the entanglement experiment described above and formulated Bell's theorem based on it. These puzzles are taken up by the Bohm Interpretation of quantum mechanics. This interpretation gives non-locality a place in quantum mechanics, where all particles in the universe are able to instantaneously exchange information with all other particles. Basil Hiley, Professor of Physics, describes the challenge of thinking about particles and locality, and extends it to the conundrum of Heisenberg's uncertainty principle, in this fascinating 10-minute video:



The Bohm Interpretation, as I understand it, does not provide us with an easy 1-step answer to the problem of locality. Instead, it asks us to rethink the problem of wave-particle duality and challenge our assumptions about particle reality.

Does the Holographic Principle provide a viable answer to the nonlocality problem? Well, yes, in the sense that this principle implies a reality outside of spacetime, so that problems involving separation by space or time are transcended. Nonlocality means that no particle in this universe is separate from any other particle. An electron in Young's two-slit experiment, for example, seems to know beforehand where other electrons are going to be. According to this principle, we can view a particle such as an electron not as a material object moving through space but as something that unfolds out of a deeper level of reality that is outside of spacetime. In this reality, particles like electrons and photons can sniff out space before them. There is lots of evidence that they do just that. It's just that we can't observe their process because our perception of them is stuck in space and time.

The Holographic Principle, therefore, seems to solve two very tricky problems in physics - black hole entropy and nonlocality.

Experimental Proof of the Holographic Theory?

I left you in suspense over the gamma ray polarization experiment, so let's get back to it now. A gamma ray burst, such as the NASA artist's illustration below, is a random event and in order to detect and measure its photons, our detector (and Earth in general) must be in line with one of its bipolar jets (shown as yellow).


Luckily, such a burst happened in 2004 (well, for us anyway), called GRB 041219A. It was extremely bright and far away (300 million light years), making its data a perfect candidate to test the Holographic Principle.

No polarization difference was detected.

The Integral Gamma-ray Observatory detector is so precise it would be able to detect graininess down to a scale of 10^-48 m, which is 10^13 times smaller than the Planck length. Some researchers, attempting to restore the Holographic Principle, have suggested that gamma rays perhaps don't behave as expected in this grainy universe.

To conclude, no one has yet experimentally verified the Holographic Principle. So far, two experiments appear to refute it - specifically, they refute a measurable graininess, or pixelation, of spacetime. It might be that the assumption of underlying pixelation is itself incorrect, while the idea of three-dimensional spacetime as a projection of a two-dimensional universe still holds some validity. I suspect physicists right now are busy thinking of new ways to verify the principle.

The Next Step: Peering Past a Quantum-Uncertain Universe

When the Holographic Principle came to wide public attention around 2010, it captured the imagination of people worldwide. We can play with the bizarre possibility that we, and everything around us, are merely projections cast from some distant 2-dimensional screen, that we are ignorant of our true flatness as we live out our lives inside an enormous sphere at least 13.7 billion light years across, as projections from its inner surface. Certainly this idea has huge philosophical implications. Can you imagine God observing us as characters in a moving film projected over the surface of the universe? What does this mean for time? How about free will? This notion is at once absurd and alluring.

This is why most physicists refer to the Holographic Principle as a principle rather than a theory. It is a reasoned argument, a starting point on the way to a true scientific theory - a set of principles that can explain and predict phenomena.

This principle, perhaps best thought of as a work in progress, has enormous implications for how we understand spacetime, and it suggests possible solutions to some very vexing problems in physics. Rather than being abandoned by the scientific community because it has not yet been verified through experiment, it seems destined to be a jumping-off point in physics, one where our very understanding of the rules themselves will be challenged to the core.

Monday, April 2, 2012

Faster Than The Speed of Light: The Neutrino Story

Neutrinos in the News

I first talked about neutrinos in an article called All About the Particles in Physics in a paragraph called "The Enigmatic Neutrino."  In this article, I explore these mysterious particles in more detail and update you on the latest research.

In September 2011, scientists working at the OPERA experiment in Italy announced to the scientific world that they had clocked neutrinos, a type of tiny subatomic particle, traveling faster than the speed of light. This graphic gives you an idea of the experimental set-up:


Why is this such a big deal? For many of us, faster-than-light-speed particles don't strike us as all that mind-blowing. Star Trek has been making use of faster-than-light warp drive propulsion for decades. What is it about breaking the speed of light barrier that is so important? And did they actually do it? Allow me to set up the story for you.

First, What's a Neutrino?

Neutrinos are one of the most elusive subatomic particles in the universe. They have no charge and almost no mass, and that makes them almost impossible to observe and study.

On Earth, neutrinos come from several natural sources: from the decay of thorium and uranium in the Earth, from collisions between cosmic rays and atomic nuclei in the atmosphere, from supernovae and supernova remnants, and even from the Big Bang itself. However, most of Earth's neutrino bombardment originates in the Sun as solar radiation. Every second, about 65 billion solar neutrinos pass through every square cm of Earth! Billions are zooming through you right now. Because they almost never interact with your atoms, you don't sustain any damage to your cells. We generally detect no evidence of their existence whatsoever, so how do we know they're even there?

Neutrinos have been observed to interact only with the weak fundamental force. That is the force associated with nuclear decay and nuclear reactions. Neutrinos were first predicted to exist in 1931, when scientist Wolfgang Pauli, shown here, noticed that some undetectable particle must be carrying off a tiny amount of energy and momentum during certain radioactive decays.


In 1956, a particle was discovered that fit the neutrino's description.

Neutrinos pass right through the Earth undetected because they only very rarely interact with ordinary matter, and in practice only sufficiently energetic neutrinos can be detected at all. Among the most energetic neutrinos are those from supernova remnants, where cosmic rays are accelerated through a process called Fermi acceleration. Even these neutrinos are very hard to detect. Many detectors rely on an enormous volume of water or ice surrounded by photomultiplier tubes, all this just to catch a few neutrinos. This is what the inside of the Super-Kamiokande experiment looks like; the boat is included to show how enormous it is. The detector is 1000 m underground and contains 50,000 tons of ultrapure water, watched over by more than 11,000 photomultiplier tubes.

(copyright: Kamioka Observatory, ICRR, The University of Tokyo)

The detector works like this: very occasionally a neutrino will interact with an electron or a nucleus in a water molecule, and the collision will create a muon (another kind of subatomic particle) or an energetic electron in the water.

That muon or electron travels faster than light travels in water, creating the optical equivalent of a sonic boom: a cone of light.

The photomultiplier detector sees a well-defined ring. This light emission is called Cherenkov radiation. It's the blue glow you may have seen in photographs of submerged rods in nuclear reactors such as the Reed Research reactor in Oregon, shown here.


It is important to understand that the speed of light in water is about 0.75 c, or 3/4 of the speed of light in a vacuum. The charged particle that produces Cherenkov radiation travels faster than 0.75 c BUT NOT faster than the speed of light in a vacuum. If it did, physicists would have bigger fish to fry than faster-than-light neutrinos.
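
If you'd like to see where that 0.75 c figure comes from, here's a minimal sketch, assuming a refractive index of about 1.33 for water:

```python
# Illustrative Cherenkov threshold in water (my own numbers, not from Super-K).
c = 3.0e8          # speed of light in a vacuum, m/s
n_water = 1.33     # approximate refractive index of water

light_speed_in_water = c / n_water      # how fast light actually travels in water
threshold = light_speed_in_water / c    # fraction of c a charged particle must exceed

print(f"Light in water: {light_speed_in_water:.2e} m/s, or about {threshold:.2f} c")
# A charged particle faster than ~0.75 c in water emits Cherenkov light,
# while still traveling slower than c in a vacuum.
```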

The upshot of all of this is that neutrinos are very ephemeral barely-there subatomic particles, at least from our perspective.

When neutrinos were first discovered they were assumed to be massless, just like photons of light. Now, researchers believe they must have at least some mass because they oscillate between flavours as they travel. This is unique among subatomic particles: only a neutrino can start off as one kind, say an electron neutrino, and be detected at the end of its journey as another kind, say a muon or tau neutrino. Neutrinos come in three flavours - electron, muon and tau.

This discovery is linked with what was called the solar neutrino problem. For decades, scientists had known there were three different kinds of neutrinos, and, based on physicists' understanding of solar fusion, nuclear fusion in the Sun should create only electron neutrinos. Yet researchers consistently detected only about a third of the neutrinos they expected on Earth - understandably, since they were measuring only electron neutrinos. In 2001, scientists at the Sudbury Neutrino Observatory in Canada measured all three flavours (electron plus muon/tau neutrinos), found the missing two thirds, and thus solved the solar neutrino problem. This is what the odd-looking spherical Sudbury detector looks like:


It has since been turned off but during its operation it housed 1000 tons of heavy water and 9600 photomultiplier tubes.

According to the Standard Model in physics, a well-established and well-tested model for all subatomic particles, flavour oscillations imply that the neutrino masses must differ from one another. Neutrinos are also especially prone to changing flavour when they pass through matter. The amount of flavour mixing that occurs depends not on the masses themselves but on the differences between the squares of the masses (what physicists call Δm²).

If you are unsure what flavour actually means, you are in good company. It can be explained like this: neutrinos come in three flavours based on which charged particle they are paired with in weak interactions - electron neutrinos with electrons, muon neutrinos with muons and tau neutrinos with tau particles. Flavour is a quantum property of the particle, and in most processes it is conserved. In some processes, however, it is not, as in quark decays and neutrino oscillations. When a neutrino propagates, or travels, it is a mixture of all three flavours superimposed on each other at the same time, yet it can only interact as one specific flavour. Mathematically, it can be shown that if a neutrino's mass were zero, it would not be able to change flavours.
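
For the curious, the standard two-flavour approximation boils all of this down to one formula, and a few lines of Python show how it behaves. The mixing angle, mass-squared difference, distance and energy below are placeholder values I chose for illustration, not measured ones:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavour probability that a neutrino has changed flavour after length_km."""
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * delta_m2_ev2 * length_km / energy_gev) ** 2)

# Placeholder values for illustration only.
print(oscillation_probability(theta=math.pi / 4, delta_m2_ev2=2.5e-3,
                              length_km=500, energy_gev=1.0))
# Note: if delta_m2_ev2 were zero (identical or zero masses), the probability
# would always be zero -- no mass difference, no flavour change.
```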

This is not to say that the mass of a neutrino is known with any precision. However, cosmic behaviour puts a very narrow range on the possible values of neutrino mass. It must be very small, or these plentiful particles would have caused the universe to collapse in on itself long ago. In fact, the mass of a neutrino must be a tiny fraction of the mass of an electron. If you are wondering, neutrinos cannot be the dark matter physicists are seeking - they do not have enough mass to account for more than about 1% of the dark matter in the universe.

This sounds a bit complex, but the general idea is that most physicists believe neutrinos have mass, albeit very small. And that simple notion adds a level of strangeness to the idea of them approaching light speed. Why? Why is there a problem with mass and light speed?

Neutrino Velocity As an Open Question

If neutrinos are massless they should, according to special relativity, travel at the speed of light, just as photons do. If they have mass, they shouldn't even theoretically reach light speed.

Neutrinos are thought to have mass, and yet they have been clocked at essentially light speed in many observations! For example, 10 MeV neutrinos from a recent supernova called SN1987A, shown below, arrived traveling indistinguishably close to light speed. Actually, the supernova isn't really recent - it took place in the Large Magellanic Cloud about 160,000 years ago, and the neutrinos that blasted out of it simply reached Earth in 1987, within hours of the supernova's light.


In this NASA image you can see circumstellar rings around SN1987A, with the ejecta from the supernova explosion in the middle of the inner ring.

Now, what does 10 MeV mean? Subatomic particle masses and energies are most easily measured in electron volts. One electron volt is the amount of energy one electron gains when it is accelerated through a potential difference of one volt, and 10 MeV is 10 million electron volts. This value is the supernova neutrino's total mass-energy, which includes the energy of its motion. As Einstein discovered, mass and energy are two forms of the same thing: E = mc². This simple fact is very well documented by experimental evidence, it is one of the central foundations of the Standard Model in physics, and it is a central theme of our neutrino story.
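
If you'd like the unit conversion spelled out, here's a quick sketch using the standard value of one electron volt in joules (my own arithmetic, nothing specific to SN1987A):

```python
# Quick unit arithmetic for the 10 MeV figure.
eV = 1.602e-19                     # one electron volt in joules
neutrino_energy_eV = 10e6          # 10 MeV expressed in eV
electron_rest_energy_eV = 0.511e6  # electron rest mass-energy, ~0.511 MeV

print(f"10 MeV = {neutrino_energy_eV * eV:.2e} joules")
print(f"That is roughly {neutrino_energy_eV / electron_rest_energy_eV:.0f} times "
      "the rest mass-energy of an electron")
```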

Let's get back to it now: neutrinos have mass and yet seem to travel at light speed. How? The answer, at least for now, is in the numbers. No measurement of neutrino velocity is perfectly precise, though physicists can get very close. That tiny wiggle room is where neutrinos can have a tiny mass and yet not violate special relativity: they can travel just under light speed, and we simply don't have detectors that can resolve the minuscule difference between their speed and light speed (yet). For now, this places neutrino velocity well within the constraints of special relativity.

Light Speed is the Universe's Speed Limit

Why does special relativity put these constraints on mass and velocity?

Special relativity is all about the speed of light. Light travels at about 300,000,000 m/s in a vacuum no matter how fast the light source is moving; the speed of light is invariant. This fact came from Einstein's work with Maxwell's equations for electromagnetism. If you shine a flashlight through the front window of a spaceship traveling at 0.5 times light speed, the photons coming from it are still traveling at light speed, not 1.5 times light speed. This simple fact has huge consequences. It means that space and time must, as a result, be variable. Space and time, woven together into one entity called spacetime, can stretch and squeeze, and how measurements of space and time change from one moving observer to another is described mathematically by the Lorentz transformation. Observers moving at different velocities may measure different distances, elapsed times, and masses for the same object, but they will always measure the speed of light as 300,000,000 m/s, or c.
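
A few lines of Python make the Lorentz factor, the quantity behind time dilation and length contraction, concrete (the speeds below are ones I picked for illustration):

```python
import math

c = 3.0e8  # speed of light in a vacuum, m/s

def lorentz_factor(v):
    """Gamma, the factor by which time dilates for a clock moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for fraction in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {fraction} c  ->  gamma = {lorentz_factor(fraction * c):.2f}")
# gamma climbs without limit as v approaches c, which is why nothing with mass
# can be pushed all the way to light speed.
```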

According to special relativity, as an object with mass approaches light speed, its mass (as measured by an observer at rest) grows without limit. To accelerate an object to light speed, let alone past it, would therefore require infinite force: force = mass x acceleration. The mass we are talking about here is relative. Hopefully that will become clear in a moment as we return to the idea of mass as mass-energy. For a particle at rest, its mass-energy is equivalent to its rest mass, but particles tend to be in motion, so mass-energy also takes into account the momentum of the particle. How is mass relative, then? If you were traveling right alongside an SN1987A neutrino, you could in principle measure its mass and it would be some very, very small value (some researchers estimate it to be a few eV). That's its rest mass. But if you could measure its energy as it struck a detector on Earth (two detectors on Earth, considered to be at rest, did just that), its mass-energy would be considerably higher, about 10 MeV, because its velocity is close to c. Even 10 million eV is a relatively small energy for a particle, and that's because neutrinos have almost zero rest mass and because their velocity is very close to BUT NOT c (or the mass-energy would be infinite, right?). The table below compares mass to velocity:


Some people call this mass relativistic mass. It's perhaps more accurate to think in terms of energy and momentum. Physicists measure these differences as momentum differences (relativistic momentum is the Lorentz factor times mass times velocity) when they smash particles together in accelerators.
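
To see how a neutrino with a rest mass of only a few eV can carry 10 MeV and still obey the speed limit, here's a rough sketch. The 2 eV rest mass is an assumed, illustrative figure, not a measurement:

```python
# How close to c must an SN1987A neutrino travel to carry 10 MeV?
rest_mass_energy_eV = 2.0   # assumed rest mass-energy, ~2 eV (illustrative only)
total_energy_eV = 10e6      # ~10 MeV measured at the detector

gamma = total_energy_eV / rest_mass_energy_eV   # E = gamma x (rest mass-energy)
shortfall = 1.0 / (2.0 * gamma ** 2)            # approximates 1 - v/c for large gamma

print(f"gamma ~ {gamma:.0e},  1 - v/c ~ {shortfall:.0e}")
# The neutrino falls short of light speed by only a few parts in 10^14 --
# far too small a difference for any current experiment to resolve.
```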

You could put the mass-c situation (c stands for the speed of light) in other ways if you like. To accelerate a non-zero rest mass to c would take infinite time at any finite acceleration, or infinite acceleration over a finite time. However you put it, accelerating a non-zero rest mass to c requires infinite energy. This is the speed limit placed on all objects with mass, including neutrinos. If neutrinos were clocked traveling faster than c, then special relativity would have to be rewritten and particle physics as we know it would have to be revisited.

Even particles without mass, such as photons, CANNOT travel faster than light speed, according to special relativity. However, general relativity does not specifically prohibit faster-than-light travel (even for objects with mass), as long as the rules of special relativity aren't broken locally. We'll expand on this very cool notion (and other tricks) next.

So, How About Faster Than Light Speed?

Last September, physicists at the OPERA experiment in Italy detected neutrinos sent from CERN in Switzerland, a 731 km journey. They claimed the neutrinos arrived about 60 nanoseconds earlier than light itself would have. This result is in the red writing in the white rectangle in the first graphic of this article. The news sent the scientific world into a spin. This is where things get very interesting and where our story really begins.
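
Just to put that 60 nanoseconds in perspective, here's a quick back-of-the-envelope calculation (my own arithmetic, not OPERA's analysis):

```python
c = 3.0e8            # speed of light in a vacuum, m/s
distance = 731e3     # CERN to Gran Sasso baseline, metres
early = 60e-9        # claimed early arrival, seconds

light_time = distance / c
excess = early / light_time

print(f"Light travel time: {light_time * 1e3:.2f} milliseconds")
print(f"Implied speed excess: (v - c)/c ~ {excess:.1e}")
# Roughly 2.5e-5, i.e. about 0.0025 percent faster than light -- a tiny number,
# but an enormous one by the standards of special relativity.
```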

Particles With Mass Traveling Faster Than the Speed of Light?

This tentative finding had huge implications, of which the physicists involved were well aware. Here are just three examples of the implications of faster-than-light travel:

First, it violates special relativity, for all the reasons above. To be wrong about the postulates of special relativity would mean that we still don't understand the basic behaviours of subatomic particles, even though mountains of experimental evidence backs up the Standard Model.

Second, it brings up the possibility of time travel. How? Consider two events separated by so much space that not even light could get from one to the other in the time between them. Special relativity says that observers moving at different velocities can legitimately disagree about which of those events happened first. A faster-than-light signal is exactly the kind of thing that could connect such events, so in some equally valid frame of reference the signal would arrive before it was sent - in effect, it would be traveling backward through time. This is the basis for the concept that objects travelling faster than light speed also travel backward through time (a toy calculation after the third point below makes the ordering argument concrete). Could neutrinos be time travelers?

Third, is there something special about neutrinos that we just don't get yet? They are among the most recently discovered particles and, being so difficult to study, they remain elusive. As you have seen, physicists still don't know their rest mass with any precision. They also don't know their magnetic moment, another important descriptor of subatomic particles. Based on nuclear reactor results, however, physicists can say with some certainty that neutrinos have half-integer spin, like neutrons and protons do. They also carry energy and linear momentum (that's how they were first detected).
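
As promised under the second point, here is a toy calculation of how observers can disagree about the order of two events that light cannot connect. It uses the standard Lorentz time transformation; the distances, times and observer speeds are ones I invented for illustration:

```python
import math

c = 3.0e8  # speed of light in a vacuum, m/s

def t_prime(t, x, v):
    """Time of an event (t, x) as measured by an observer moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2)

# Event A: here and now. Event B: 1,000 km away, 1 millisecond later.
# Light needs about 3.3 ms to cover 1,000 km, so neither event can cause the other.
t_A, x_A = 0.0, 0.0
t_B, x_B = 1e-3, 1.0e6

for v in (0.0, 0.5 * c):
    order = "A before B" if t_prime(t_A, x_A, v) < t_prime(t_B, x_B, v) else "B before A"
    print(f"Observer moving at {v / c:.1f} c sees: {order}")
```

Run it and the observer at rest sees A happen first while the moving observer sees B happen first; a faster-than-light signal linking A to B would therefore run backward in time for one of them.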

Maybe neutrinos, and only neutrinos, can break some rules of special relativity. Having one type of particle that violates special relativity again brings the whole theory of special relativity into question, or at the very least it would require some tweaking to accommodate them.

The Speed of Light Can Be Broken - With Some Very Important Caveats

There are special cases where one could think of the speed of light barrier as being broken yet not breaking the rules of special relativity. Here are a few examples:

1. The uncertainty principle implies that energy can fluctuate over very brief intervals, so a virtual photon, for example, can momentarily exceed c in a vacuum. This is built into quantum electrodynamics, and it does allow particles, but only virtual particles, to travel backward in time in a formal sense. Here's the catch: virtual particles are detectable only as exchanges of force. Unlike real particles, they do not exist in any quantifiable or "capturable" way. We can register neutrinos on a detector. That makes them real particles, even if they are tricksy little elusive things.

2. A vacuum has energy, so it is also possible that if the vacuum energy were lowered, light could travel faster through such a vacuum, called a Casimir vacuum. This is how the theory works: even a perfect vacuum has energy because it is full of virtual particles popping in and out of existence all the time, and these particles carry some very small energy. As a photon of light travels through a vacuum, it interacts with virtual particles: it can be briefly absorbed, giving rise to a virtual electron-positron pair, which just as quickly annihilates and gives rise once again to a photon. The time the photon spends as an electron-positron pair holds its average velocity down to c. On the other hand, a photon traveling between two closely spaced Casimir plates won't be so encumbered, because there is not enough space between the plates to fit the wavelengths of many virtual particles. With fewer virtual particles to slow light down, it should travel ever so slightly faster than c, and the closer the plates, the higher the speed of light should be. The effect, however, is predicted to be extremely small, and as yet no experimental apparatus is sensitive enough to measure it.

3. Because spacetime is malleable, there is the possibility that spacetime itself could be stretched or warped so that an object carried along inside such a region could be observed to travel faster than light speed, even though within its own frame of reference it is not violating special relativity. This argument has been brought up to explain faster-than-light cosmic expansion, which, according to several lines of evidence, occurred soon after the Big Bang. Here, special relativity is not violated because it is the expansion of spacetime itself that exceeds c, not an object moving through it. NASA researchers are even working on the theory behind a rocket that could warp spacetime and therefore travel faster than light speed, at least from our vantage point here on Earth.

4. There is an attempt to modify special relativity into what is called doubly special relativity. In this version, the Planck length is also invariant. The Planck length is thought to be the smallest meaningful unit of length; it is derived from c and two other physical constants, the Planck constant and the gravitational constant (a quick calculation follows this list), and it does not change regardless of velocity (unlike any larger length measurement!). This tweaking of special relativity was motivated by recent work toward a theory of quantum gravity. Basic to this work is the idea that there is an ultimate minimum length, energy or volume, below which quantum fuzziness obscures any possible meaningful measurement. A consequence is that the speed of light becomes variable, with photon speed depending on photon energy. This idea has been criticized because the Lorentz transformation, the mathematical description of how spacetime measurements change between observers, is not itself an observable phenomenon, so it should not be held to a standard of observable measurement. Perhaps more importantly, photons of widely varied energies from recent gamma ray bursts have arrived at almost exactly the same time at detectors on Earth, which counts against the idea.
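
As promised above, here's the Planck length worked out from standard constants (my own arithmetic, using the reduced Planck constant, as is conventional):

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s

planck_length = math.sqrt(hbar * G / c ** 3)
print(f"Planck length ~ {planck_length:.2e} m")   # roughly 1.6e-35 m
```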

Physicists are not entirely out of their minds when they consider that neutrinos could travel faster than light speed under certain conditions that do not violate special relativity. But I think most would agree that such conditions are extremely unlikely in the case of the neutrinos traveling from Switzerland to Italy in the OPERA experiment. They weren't in a Casimir vacuum, nor were they passing through warped spacetime, for example.

Lesson #1: Always Verify Results

Perhaps not unexpectedly, the OPERA results have recently been discounted. Neutrinos officially DO NOT travel faster than light speed. At this I let out a sigh of relief mixed with a tiny bit of disappointment, and even a small laugh. After all, the initial OPERA results created a delightful buzz in physics that we may not soon enjoy again. A few days ago, two leaders of the OPERA team resigned after a vote of no confidence. Their resignations came after two major blows to the earlier findings. First, the anomalous neutrino timing has been traced to a faulty fibre-optic cable connection. And second, an independent experiment at the same laboratory, using the same neutrino beam from CERN, was unable to replicate the faster-than-light result.

The OPERA physicists stated, when they released their original findings, that they hoped doing so publicly would foster further inquiry and debate, and they were open about possible sources of error in the experiment. But the results they obtained were just so mind-blowing! With perfect hindsight, it seems obvious that they released their findings prematurely.

Does that really close the book on the neutrino story? Perhaps, but all the controversy and all the excitement betray a wonderful passion that is alive and well in science, don't they? And findings like this one challenge our understanding of the energy and motion of things right to the core - always a good thing.