A letter to my MP on impending science cuts

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.” – Margaret Mead

The future of science in the UK is under threat. What follows is a letter I’ve sent to my MP, Harriet Harman, asking her to support the Science is Vital campaign to try to dissuade the government from enacting cuts that would undermine not only our standing in the world, but our ability to make a strong, sustainable economic recovery.

If you care about science and you live in the UK, I urge you to do the same, and to get involved in and support the campaign. Feel free to plagiarise or adapt my letter below.

Dear Rt Hon. Ms Harman

I am writing to express my grave concern about the impending cuts to UK science funding the Government has signalled will be part of its comprehensive spending review. At a time when the USA, Germany, France, Japan, India and China are increasing their investments in scientific research, cutting our own investment threatens to turn the UK into an intellectual and economic backwater. As one of your constituents, I was hoping to call on your support in opposing this ill-advised and short-sighted course of action.

Specifically, I urge you to consider:

  • Challenging the Chancellor of the Exchequer to explain where he expects future economic growth to come from, if not from advances in science and technology.
  • Signing the Early Day Motion, EDM 767 (Science is Vital – http://bit.ly/edm767).
  • Signing the Science is Vital petition at http://scienceisvital.org.uk/sign-the-petition
  • And attending a lobby in Parliament on 12th October, 15.30, Committee Room 10.

As a physicist by training, I know first-hand the rapid and long-lasting detrimental impact that cuts of this size have on a country’s intellectual infrastructure. I and many of my colleagues left Australia in the late 1990s as a direct result of the Government’s decision to drastically reduce its funding of science. Few have since returned. In these challenging times, the UK can ill afford to drive away its best and brightest.

Vince Cable’s claim that it is just a matter of “doing more with less”, by focusing on projects that have obvious economic benefit, suggests a poor appreciation of the nature of scientific enquiry and how it seeds technological progress. No-one could have anticipated that investment in high-energy particle physics research would have led to the invention of the world wide web. Certainly, few if any of the momentous technological advances of the century, from the laser to the silicon chip, could have been conceived without discoveries made in the course of curiosity-driven research.

The Science is Vital coalition, along with the Campaign for Science and Engineering, is calling upon the Government to increase, or at least maintain, UK investment in science as a central plank of its plans for economic recovery. Without such commitment, we risk our international reputation, our share of the global high-tech industrial market, and our ability to respond to the many challenges we face as a nation.

I know that you are very busy, but I hope that you will be able to spare the time to meet me to discuss this issue in person on the 12th October. Either way, I hope that I can count on your support. I look forward to hearing from you.

Yours sincerely,

Dr Edmund Gerstner
Senior Editor, Nature Physics

Filed under Uncategorized

Thinking hyperlocal

Image courtesy of NASA

I’ve been thinking a lot about community journalism recently. Mostly inspired by Jay Rosen and Dave Winer’s Rebooting the News podcast. I’ve never previously paid much attention to local media. When I lived in Cambridge (until just over a year ago) the free weekly never made it as far as the coffee table before ending up in the recycling. The only time it really entered my consciousness was when I started dating a journalist from The Cambridge Evening News.

Then I moved to South London (where I’d lived 6 years before) and I began to miss it. Don’t know why. Clearly I had never needed it before. Maybe for a sense of community that I’d taken for granted and couldn’t any more. More likely it’s connected to the fact that I never felt as acute a sense of identity with Cambridge as I do with South London. I guess then it makes sense that I should want to be more embedded here and engaged with the people around me.

But I’m not particularly inspired by any of the local rags. When I read about my local area, it’s more likely to be in The New York Times.

So I was intrigued to hear what local news pundits had to say about the matter at tonight’s “What now for local and regional media in the UK?” event at The Frontline Club.

It turns out, surprise surprise, that none of them seem to have any idea.

Probably not their fault. Many of the panelists either conducted or (quasi-successfully) tendered for the Labour government’s Independently Funded News Consortia (IFNC) initiative — a scheme to try to find ways to fund regional and local news not focused on London. Most seemed to think it was a promising way forward, an opportunity to test a new model. This morning, the new Secretary of State for Culture, Olympics, Media and Sport, Jeremy Hunt, killed it.

Which turned the first half of the evening into an impromptu wake.

Yet there was constructive discussion to be had once they threw it open to the floor. Not that anyone in the audience had any better notion of where they’d be in five, or even two, years’ time. But some interesting ideas.

The most apposite (about 55 minutes into the video stream) was from a journalist from Brighton’s West Hill Whistler. Sweetly, she admitted no one read it, but that she quite enjoyed writing for it nonetheless. She then echoed exactly my own feelings about local press in Cambridge: when it comes to buying a local paper or a national paper, if you’re being asked to part with any money, most people will choose the national.

It turns out that another member of the audience (of barely a dozen or so) was not only familiar with The Whistler, but sang its praises. For him, the news from most of the areas that Brighton’s main regional newspaper, The Argus, covers is as relevant to him as local news from Tehran.

And that’s surely the point.

Moreover, you don’t need to sell subscriptions to turn a profit. One of the panelists, Mark Reeves, editor of The Business Desk, seemed to be having no problem generating revenue from advertising. Yes, its catchment is bigger than the West Hill area of Brighton. But with only a few thousand (admittedly sought-after) readers, it can’t be that far from the potential readership of The Whistler.

But, I digress. I don’t care about business models any more than the journalist from The Whistler.

What I do care about is connecting with my community. But which community? It seems shortsighted to think that my community is just where I live.

When it comes to my day job, my community is geographically global but topically hyperlocal… in the sense that the number of people interested in the esoteric cutting edge of physics is probably dwarfed by the number of people who live in Southwark. And then there are the communities of pinko politicos, or London science journalists, that I follow on twitter. And on and on.

Is it meaningful to distinguish the community that lives around me from the many other communities that I call my own? Is hyperlocal a significant category or just another category?

I really don’t know. But I’m keen to find out!

Filed under Journalism

Au revoir to Portland

Well, that’s it from me at the 2010 American Physical Society meeting in Portland. The week seems to have gone by much quicker than in previous years. I wonder if that has anything to do with twitter.

As always, I’ve learnt a lot, drunk a lot, and pressed the flesh of a lot of authors, referees and journalists. Was introduced to at least one potentially revolutionary idea (which I’m not going to tell any of you about until I’ve asked someone else whether it’s madness or genius). And lots of less-than-revolutionary, but still awesome, ideas.

In 7 years of attending March meetings, Portland was one of the nicest APS cities I’ve been to. But still, looking forward to being back in London.

Until next time, I think this says it all…

Filed under APSMar10, Physics

Postcards from the (conducting) edge

Every APS March meeting I try to get my head around something new. Last year in Pittsburgh it was supersolids. Now that I’m on the return trek home from this year’s meeting in Portland, I’m trying to work out how far I have managed to get with this year’s challenge, topological insulators.

By all accounts, topological insulators are set to be the next big thing in physics. They haven’t yet made a splash like graphene did at the 2006 meeting in Baltimore, but if the increase in submissions to Nature and Nature Physics in this area is anything to go by, they may soon come close. Even graphene took a few years to become stratospheric. And a colleague of mine suggested that ‘topological’ is the new ‘nano’: the favourite buzzword that authors add to their papers to try to make them sound more sexy.

There are good reasons for the excitement. They could enable low-loss spin currents to be harnessed for high-speed, low-power electronics. There is much talk about their use in high-efficiency thermoelectric systems for power generation and heat management. But most exciting is their potential to generate exotic quantum states — from Majorana fermions, which behave analogously to dark matter candidate particles known as axions, to braiding states, which could finally enable us to build quantum computers that don’t fall down the minute anyone thinks about sneezing.

But enough of the hype. What about the nuts and bolts?

I’m not going to pretend that I have anything close to a coherent understanding of what topological insulators are or how they work. The sight in the press room after the APS press conference on the subject was of a half-dozen journalists around the table trying to find a way to parse what the hell they were about for their readers. Of all the attempts so far, I think my colleague Geoff Brumfiel from Nature came the closest to getting the balance right.

First, the easy bit. A topological insulator in fact isn’t really much of an insulator at all. By definition, its surface is conducting. And although an ideal topological insulator has a bulk that is insulating, in practice many materials that people are working on aren’t bulk insulators at all, but semiconductors or semimetals. Thankfully, this needn’t be a problem. That’s because you can modify the bulk with doping (or similar) without affecting the all-important surface states, because these states are robust — one of the unique selling points of a topological insulator… actually, arguably the unique selling point.

Yulin Chen, from Stanford, showed some pretty compelling angle-resolved photoelectron spectra that show that when you dope bismuth telluride (Bi2Te3) — one of the materials that several groups are working on — with tin, you can switch it from an n-type semiconductor to an insulator while leaving the important topological surface states unchanged. This occurs at a doping concentration of 0.67% — which is freakin’ huge! It’s remarkable that the surface states are not utterly destroyed at such a level — to compare, parts-per-million doping turns silicon into garbage.
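To spell out just how huge (a back-of-envelope sketch of my own; the parts-per-million figure for silicon is an illustrative assumption, not from the talk):

```python
# Back-of-envelope comparison (my own numbers, not from Chen's talk):
# how does 0.67% tin doping compare with ppm-level doping in silicon?

sn_fraction = 0.0067   # ~0.67% Sn in Bi2Te3, as quoted in the talk
si_fraction = 1e-6     # assumed ppm-level dopant fraction in silicon

ratio = sn_fraction / si_fraction
print(f"0.67% is {sn_fraction * 1e6:.0f} ppm, "
      f"roughly {ratio:.0f} times a ppm-level silicon doping")
# prints: 0.67% is 6700 ppm, roughly 6700 times a ppm-level silicon doping
```

Three to four orders of magnitude more dopant than silicon tolerates, which is the point of the comparison.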

Anyway, I (and all my colleagues) have been throwing the word ‘topological’ around with gay abandon. Do we know what this means? Probably not. I certainly don’t — not in a deep way, anyway. But I might have an idea.

HEALTH WARNING: The explanation that follows (and what precedes, as well) is subject to change without warning. At best, it’s likely tangential to anything that resembles reality. Really, it’s just an exercise in me thinking out loud.

I’ve heard many explanations but none really resonate. It is often said that topological insulators are materials whose surface states are ‘topologically protected’. Okay, well that’s helpful, like, not at all!

The closest I think I’ve come to getting a feel for what it means for a state to be topologically protected comes from some things that Laurens Molenkamp (University of Würzburg) said in a press conference on Monday. Molenkamp was the first to demonstrate the existence of topologically protected surface states experimentally, in mercury telluride. In the press conference, he began by saying that in normal insulators the geometry (or did he say topology?) of the conduction-band states is s-like (like a Bohr atom), and that of the valence-band states is p-like (like the states of the electrons in the outer shell of a carbon atom).

Now, the electronic behaviour of an insulator or semiconductor is sensitively dependent on the distance in energy between the bottom of the conduction band, which always curves up in the band structure of a material, and the top of the valence band, which always curves down. And any perturbation to the material — such as impurities (deliberate or otherwise), defects, strain, changes in temperature, even magnetic or electric fields — tends to frig with the distance between them.

Sometimes this is useful. Most times it’s a pain in the ass. And for things like quantum computing, this sort of thing is a deal-killer. So what do topological insulators bring to the table?

According to Molenkamp, in certain materials made from heavy elements, the geometry (s-like and p-like) of the conduction and valence bands flip. And at the surface of some materials, they cross. And this is the key, I think.

If you move the uncrossed conduction and valence bands of a conventional material, you change the distance between their nearest points — that is, you change the thing that controls their behaviour.

But if you move the crossed conduction and valence bands at the surface of a topological insulator, they still remain crossed. The point where they cross might move a bit in k-space (that’s momentum-space — the inverse of normal space, which physicists like to describe electronic materials in, because it makes things simpler). But they’re not going to move further apart — they’re crossed!

And it is this crossing, apparently (I think), that not only makes topologically protected states robust, it also makes them wacky, and gives rise to the panoply of exotic behaviour that physicists are excited about.
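This contrast can be put in toy numerical form (my own construction, not anything Molenkamp showed — made-up gapped parabolic bands versus made-up crossed linear bands, in arbitrary units):

```python
# Toy sketch of why a band crossing is robust (my own illustration).
# "Conventional" gapped bands:  E±(k) = ±(gap/2 + k²)
# "Crossed" surface bands:      E±(k) = ±|k - k0|
# Perturbing the conventional bands changes their gap; perturbing the
# crossed bands merely moves the crossing point, and the gap stays zero.

def min_gap(e_plus, e_minus, ks):
    """Smallest energy separation between the two bands over a k grid."""
    return min(e_plus(k) - e_minus(k) for k in ks)

ks = [i * 0.01 - 2.0 for i in range(401)]   # k grid from -2 to 2

def conventional_gap(delta):
    # A perturbation delta pushes the parabolic band edges apart.
    return min_gap(lambda k: 0.5 + delta + k**2,
                   lambda k: -(0.5 + delta) - k**2, ks)

def crossed_gap(k0):
    # A perturbation can only shift the crossing point to k0.
    return min_gap(lambda k: abs(k - k0), lambda k: -abs(k - k0), ks)

print(round(conventional_gap(0.0), 3), round(conventional_gap(0.2), 3))  # prints: 1.0 1.4
print(round(crossed_gap(0.0), 3), round(crossed_gap(0.5), 3))            # prints: 0.0 0.0
```

The conventional gap grows from 1.0 to 1.4 under the perturbation, while the crossed bands stay gapless however far the crossing point moves.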

So there you go. I didn’t promise that you’d be able to write a paper on topological insulators after reading all this. I certainly wouldn’t claim that I could. But if I can pick up more bits and pieces at future meetings, perhaps by the time topological insulators are really huge, I might.

(Image credit: Ali Yazdani, Princeton University.)

Filed under Uncategorized

An uncertainty of physicists

On the first day of the APS meeting, I sent a shout out… or is it tweet out, for suggestions for the collective nouns of physicists.

First up was @StanCarey with a ‘measure’ of physicists.

Then @Cromacrox with a ‘condensate’ of physicists (my favourite for a time).

Which prompted @JonMButterworth to suggest “surely it’s an ‘interference’?”

Then @PhysicsTeo with a ‘matrix’? Or a ‘vector’, outside Starbucks at the convention center.

Then @CollectiveNouns RT’d and the trickle became a flow, including a ‘fizz’, a ‘collision’, an ‘approximation’, a ‘flux’ and several variants of ‘particles’ and ‘quanta’.

But the best from @DrPeterRodgers was “An ‘ensemble’ for when they are being serious. And a ‘gas’ for when they are not.”

Filed under APSMar10, Physics

APS Day Two — has graphene passed its Hubbert peak?

One of the notable things at this year’s APS is the lack of buzz about graphene. Ever since it made its APS debut in 2006 at the Baltimore meeting, the activity — and number of parallel sessions — on this material has grown and grown. The graphene audience in Denver 2007 was about twice that in Baltimore. And at New Orleans 2008, twice again. And last year in Pittsburgh, bigger again.

That’s a lot of growth. And of course unsustainable. So I figured it had to level out this year. But it’s more than levelled off: the buzz seems to be on the wane.

Why? Is it simply a matter of hype’s short life span?

There’s still plenty more to do. There are plenty of new results emerging and much about its electronic behaviour that we don’t understand. We haven’t hit any roadblocks in synthesizing or building devices from graphene (unlike nanotubes, whose potential now seems all but dead).

Maybe it’s the theorists?

When graphene hit the headlines, the theorists hit their blackboards. Most had been working on carbon nanotubes, which are essentially just rolled-up graphene sheets, so the theoretical tools they’d developed over nearly two decades were directly transferrable. And by 2006 new ideas in nanotubes were already starting to dry up. And literally hundreds of new graphene papers began rolling off theorists’ computers every week.

Although isolating graphene is as easy as peeling a ribbon of sticky tape from a chunk of graphite, it took a good year or two before significant numbers of experimental papers started coming. But when they did a similar flood emerged. And by New Orleans in 2008, pretty much the entire nanotube community — which was a large community — had moved into graphene. But the production of experimental results was never as prolific as that of theory.

So has graphene theory finally tapered off? Or have researchers found a new bandwagon? Topological insulators, perhaps?

My learned colleague Geoff Brumfiel has suggested that my spreading of rumours of graphene’s demise might be premature. He’s probably right. Here’s a photographic representation of how Andre Geim’s “Graphene Update” talk was received.

But I still maintain that nanotubes are looking decidedly ill. In searching for Geim’s talk, I accidentally ended up in the carbon nanotube session next door. And this is what it looked like.

Filed under APSMar10, Physics

Let apathy reign — teaching physicists to teach

Most times, physicists are an idealistic bunch. We study physics because of a strange and desperate yearning to know how things work. Not out of any particular desire to exploit this knowledge for fortune or fame. We just want to know. Perhaps because we never grew out of the adolescent/tourettic urge to ask “But why? … But why? … But why?”

And most physicists are just as keen to share what they know with others. Ask a physicist a question about the fundamental nature of the Universe and you will have a terrible time getting her to shut up.

But when it comes to teaching others how to spread the knowledge to which we’ve dedicated our lives, we suck. We *really* suck. Or so it would seem from the conclusions of the National Task Force on Teacher Education in Physics, formed to look at how physics teachers are taught in the US and to develop strategies for doing better.

On the first day of the APS conference, Valerie Otero painted a dismal picture of the state of physics teacher education in the US. Only a third of the 20,000 US high school physics teachers majored in physics or physics education. This may go some way to explaining why the 2006 Programme for International Student Assessment placed American 15-year-olds in the bottom third of OECD nations for scientific literacy.

And yet, a survey of physics majors who intended to go into high school teaching found they were discouraged from doing so by their professors, often told that they should pursue research rather than waste their talents on teaching.

Among the taskforce’s key findings were:

  1. Few physics departments are actively involved in training physics teachers. When approached, most seemed to feel this was somebody else’s problem.
  2. Of the few institutions that seemed to be doing a good job at teaching physicists to teach — demonstrated by their producing two or more physics teachers per year (really, that’s all it takes to be considered to be active in teacher education??!?) — *all* were driven by a single ‘champion’ dedicated to the cause. And with few exceptions, these champions get little to no support from their institutions. That is, the production of good teachers is not a factor they can include on promotion applications. And they get few additional resources for the job.
  3. Institutions that only award Bachelor’s and Master’s degrees are more likely to have active teacher training programs than those that also offer PhDs.
  4. Physics departments and Departments of Education within the same institution almost never talk, let alone collaborate.
  5. Programs do little to develop the physics-specific pedagogical expertise of teachers.
  6. Few programs provide support, resources, intellectual community or professional development for physics teachers.
  7. Few institutions offer coherent programs for the professional development of in-service teachers. Again, despite the fact that only a third of physics teachers have majored in physics.

So where do these professors think future physics majors are going to come from?!?

Or to put it more simply, WTF?

Filed under APSMar10, Physics

First forays into live blogging

It’s that time of the year for the American Physical Society March meeting, when 5000+ physicists from around the world descend on some poor unsuspecting American town (this year, Portland) and, well, talk physics.

Last year’s meeting in Pittsburgh was the first meeting I tried my hand at live-tweeting. And it was a good experience (for me, that is, I think it strained the good nature of my Facebook friends). Unexpectedly it helped me get much more out of the technical sessions — it’s amazing how much more attention you pay to what someone is saying when you’re trying to distill it for someone else. I’ll bet UN interpreters know better what’s going on in the world than any UN diplomat.

The only drawback is that 140 characters isn’t really an ideal format for physics reporting. Although I appreciate the discipline, it atomizes the flow too much to do the subject matter justice.

And so this year, I’m going to try live-blogging instead. The powers that be have not extended the free wifi into the meeting rooms. But I’ve managed to find a 3G, contract-free mobile broadband package that looks a good two orders of magnitude cheaper than data roaming on O2. So the only thing stopping me is Maxwell’s equations.

If anyone’s interested I’ll be pegging the sessions on Twitter but most of the content will be presented here.

To follow other tweeters from the meeting, the tag set for the meeting by @APSPhysics is #APSMar10. And Matteo Cavalleri and Dave Flanagan will be live-blogging at MaterialsViews.com.

Fun fun fun!

Filed under APSMar10, Physics


Continuing my quest to digitize my life so far comes a multimedia project I did on whales when I was eleven.

Reckon I should have done a few more takes with the audio (this was the second take). But happy with what I managed with a camera, a roll of slide film, a tape-recorder and a library card.

Filed under Science


More from the archives. This time an undergrad essay I wrote for Huw Price’s class Philosophy of Physics II, on the Copenhagen Interpretation of Quantum Mechanics, EPR Paradox, Bell’s Inequality and Alain Aspect’s experimental resolution of all three.

Causality, determinism and the classical view of the Universe

Newton, in the late seventeenth century, was the first to formulate a thorough mathematical formalism to accurately describe physical interactions observed both astronomically (planetary motion) and on the everyday scale (projectile motion). The philosophical implications of his theories were that the universe, and all the particles of which it was made, evolved in a completely deterministic and clearly defined manner. Given a complete description of the state of a given system (including the forces, velocities and positions of the elements that make it up), one could then predict its behaviour for the rest of time. This ability to predict the future, in principle, was only limited by the extent of one’s knowledge of the present, which in turn was only limited by the accuracy of one’s measuring instruments. Thus, it was believed that knowledge about any given system was only restricted by one’s technological ability to make measurements.
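The deterministic picture described above can be made concrete with a minimal sketch (my own illustration, not part of the essay): given the complete initial state of a projectile, its entire future follows by stepping the equations of motion forward.

```python
# A minimal sketch of Newtonian determinism (my own illustration):
# the present state of a projectile completely determines its future.

g = 9.81               # gravitational acceleration, m/s^2
dt = 1e-4              # time step, s

x, y = 0.0, 0.0        # initial position, m
vx, vy = 30.0, 40.0    # initial velocity, m/s

t = 0.0
while y >= 0.0:
    # Explicit Euler step: each state follows uniquely from the last.
    x += vx * dt
    y += vy * dt
    vy -= g * dt
    t += dt

print(f"lands at x = {x:.1f} m after t = {t:.2f} s")
# Exact result for comparison: range 2*vx*vy/g ≈ 244.6 m, flight time ≈ 8.15 s
```

In the Newtonian picture the only obstacle to such prediction is the accuracy of the initial data, which is exactly the assumption the Copenhagen interpretation later overturned.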

Although major inadequacies in Newton’s theories were identified and corrected by Albert Einstein’s special theory of relativity, they were mainly concerned with the nature of space-time, with the basic belief in causality and determinism in the universe maintained. This view of the world remained largely unchallenged until the advent of quantum mechanics and, in particular, what has come to be known as the Copenhagen Interpretation.

The Copenhagen interpretation of Quantum Mechanics

Prior to the 1920s and the development of quantum mechanics it was believed that atomic entities such as protons and electrons possessed strictly particle-like properties. However, in 1927, an experiment known as the double-slit experiment showed this view to be incomplete. This experiment involved projecting electrons at a screen with two closely spaced holes in it, and viewing the resulting pattern formed on a second screen behind it. If electrons existed only as particle-like entities, the resulting pattern would have consisted of two intensity maxima, one directly opposite each of the two holes. This was not the case – an interference pattern, typical of that resulting from similar experiments carried out with wave-like light, was seen. Thus, it was shown that electrons could display both particle-like properties and wave-like properties. This duality of character is known as the wave–particle dilemma.

Although Einstein’s 1905 paper on the photoelectric effect (the conclusions of which were proven experimentally in 1923 by Arthur Compton) had shown a similar wave–particle duality with respect to light, a further effect was observed in the two-slit experiment which was incomprehensible to a classical view of the world. The existence of the interference pattern requires each electron to pass through both holes of the first screen, thereby denying its particle-like character. However, if one uses a detecting instrument to determine which hole each electron goes through, without hindering its path, the interference pattern disappears leaving two single maxima on the viewing screen. What’s more, if the detecting instrument is left in the setup, but turned off, the pattern reappears. What seems to happen, in effect, is that electrons only allow themselves to be “seen” with either particle-like properties or wave-like properties, but not both.
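The interference pattern itself is easy to sketch (my own illustration, with assumed values for the electron wavelength, slit spacing and screen distance, none of which are from the essay): with both slits open, the screen intensity oscillates, giving many bright fringes rather than the two humps a purely particle-like picture predicts.

```python
import math

# Far-field two-slit interference sketch (my own illustration).
# All numbers below are assumed for the sake of the example.
lam = 50e-12   # electron de Broglie wavelength, m (assumed)
d = 1e-6       # slit spacing, m (assumed)
L = 1.0        # slit-to-screen distance, m (assumed)

def fringe_intensity(x):
    """Two-slit interference on the screen, ignoring the single-slit envelope."""
    return 4 * math.cos(math.pi * d * x / (lam * L)) ** 2

# Bright fringes repeat every lam*L/d = 50 micrometres:
for x_um in (0, 25, 50, 75, 100):
    print(f"x = {x_um:3d} um -> relative intensity {fringe_intensity(x_um * 1e-6):.2f}")
```

With which-path detection the cosine interference term vanishes, and the two single-slit contributions simply add, restoring the classical two-maximum pattern.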

Bohr, one of the founders of modern quantum mechanics, explained these results by what he called the principle of complementarity. This principle states that both theoretical pictures of fundamental particles (such as electrons and the like), as particles and as waves, are equally valid, complementary descriptions of the same reality. Neither description is complete in itself, but each is correct in appropriate circumstances; an electron should be considered as a wave in the case of the double-slit interference pattern, and as a particle in the case where it is being detected by a particle detector. From this, Bohr concluded that the results of any measurement are inherently related to the apparatus used to make the measurement. Moreover, he believed that, in a sense, the apparatus actually gives the property being measured to the particle: experiments designed to detect particles always detect particles; experiments designed to detect waves always detect waves.

Bohr’s principle of complementarity introduces an element of uncertainty into any measurement and, in effect, requires it. In a more specific experimental sense, Heisenberg derived a set of relations from the equations of quantum mechanics which show that any two non-commuting dynamic properties of a system (properties whose mathematical operators do not obey the commutativity relation a×b = b×a) cannot both be measured to arbitrary accuracy. An example of what is known as the Heisenberg Uncertainty Principle involves consideration of the momentum, p, and the position, x, of an elementary particle. This principle says that the accuracy of any measurement of the momentum and position of a particle, Δp and Δx respectively, is restricted by the relation Δp⋅Δx > ℏ (where ℏ is Planck’s constant divided by 2π). Thus, the momentum of a particle may be measured to arbitrary accuracy but with a sacrifice to the accuracy of any measurement of position, and vice versa. This relation can also be shown to apply for simultaneous measurements of energy, ΔE, and time, Δt.
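To put rough numbers to the relation (my own illustrative estimate, not part of the original essay): confining an electron to an atom-sized region of about one ångström implies a velocity uncertainty of order a million metres per second.

```python
# Rough numbers for Δp·Δx ≳ ℏ (my own illustrative estimate):
# an electron confined to an atomic-scale region has a huge velocity spread.

hbar = 1.054_571_817e-34   # reduced Planck constant, J·s
m_e = 9.109_383_7e-31      # electron mass, kg

dx = 1e-10                 # confinement to ~1 ångström (atomic scale)
dp = hbar / dx             # minimum momentum uncertainty implied by Δp·Δx ≳ ℏ
dv = dp / m_e              # corresponding velocity uncertainty

print(f"Δp ≈ {dp:.2e} kg·m/s, Δv ≈ {dv:.2e} m/s")
```

A spread of roughly 10⁶ m/s, which is why position and momentum cannot both be pinned down for an electron in an atom.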

The Heisenberg uncertainty principle can be considered a special case of Bohr’s more general complementarity principle, and together they provide the conceptual framework for what has come to be known as the Copenhagen interpretation of quantum mechanics.


Although the discussion so far has concentrated on the restrictions placed on experimental measurements by quantum mechanics, the implications of the Copenhagen interpretation go much further, challenging the previous (classical) conceptions of the nature of reality itself. It was the belief of Bohr and others of the Copenhagen school that uncertainty is not merely a mathematical artifact of quantum mechanics but a reflection of an inherent ambiguity in the physical reality of the universe.

A crucial result of this interpretation is that nothing exists in objective reality which is not contained in the mathematical formalism of quantum mechanics. In other words, a quantum system can only be said to possess a dynamical property if it is describable by a quantum state for which the property is assigned a probability of one. Properties such as the spin, charge and rest mass of a system (all of which have definite quantum numbers in the formalism) can be said to have a concrete reality, and can, in principle, be measured to any accuracy regardless of the situation. In contrast, essentially classical properties such as position and momentum cannot be said to exist at a quantum level in the same way as they do at a macroscopic level. This leads to the view that the reality of a measured property does not exist until a measurement is made, and it was this denial of objective reality, along with the fact that quantum mechanics only allows statistical predictions about the universe, which Einstein disputed. This led him to argue that quantum mechanics cannot reasonably be accepted as a complete description of the universe.

A useful analogy in considering the completeness of quantum mechanics is that of quantum mechanics with classical thermodynamics. Classical thermodynamics is successful in predicting the equilibrium properties of macroscopic systems, but is unable to describe such phenomena as thermal fluctuations, Brownian motion, and the like. A similar situation exists for quantum mechanics with respect to processes such as nuclear and subatomic particle decay. Quantum mechanics can predict the average decay time for a number of nuclei or subatomic particles, but is unable to predict the time for any single decay or to provide an explanation for fluctuations from the mean. Hence, it is conceivable that quantum mechanics is incomplete in the same way that classical thermodynamics is incomplete.

Acceptance of quantum mechanics requires that processes such as nuclear decay be considered as inherently acausal and indeterminate. Although this stance has been qualified (prompted by a 1964 paper by Vladimir Fock in response to Bohr’s interpretation of quantum mechanics) inasmuch as a ‘simple causality’, reflected in the well-defined natural laws that guide statistical outcomes, must exist, causality with respect to the display of macroscopic effects by a locally isolated quantum system does not. It is this that led to Einstein’s objection, “God does not play dice with the Universe!”

It was Einstein’s belief that some form of local reality and causality must exist, independent of spatially extended effects, and therefore, that quantum mechanics is an incomplete description of the universe. He believed that although quantum mechanics places a restriction on what can be measured directly, this does not necessarily imply a restriction on the actual physical reality of the dynamical properties of a system. Furthermore, he believed that it was possible to circumvent the uncertainty principle, and set out to show as much with a number of thought experiments. Using several such experiments, together with logical arguments based on `reasonable’ starting assumptions, Einstein attempted to show that quantum mechanics was indeed incomplete.

The EPR paradox

One of the first thought experiments proposed by Einstein to show a violation of uncertainty involved a radiation-filled box with a tiny hole and an aperture controlled by a highly accurate atomic clock. It was argued that, by allowing a single photon to leave the box at a time prescribed by the clock and finding the energy of the photon by weighing the setup both before and after, the uncertainty relation ΔE⋅Δt ≥ ℏ could be violated. It was shown by Bohr, however, that the classical assumptions made by Einstein could not necessarily be said to apply when considering the system on a quantum level. For example, to weigh the system before and after would require suspending it by a spring, or other such means, in a gravitational field. As the photon escapes, thereby changing the mass of the system, the spring will contract and the box will change its position in the field. By Einstein’s own theory of relativity, this change in gravitational potential will change the clock’s time frame and thus introduce an error into the time measurement. It therefore turns out that energy and time cannot both be measured to arbitrary accuracy, leaving the uncertainty principle intact.
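Bohr’s rebuttal can be sketched quantitatively (this is a standard reconstruction of his argument, not notation from the original exchange):

```latex
% Weighing the box to precision \Delta m over a time T requires the pointer's
% momentum spread to be smaller than the impulse gravity imparts on the mass
% difference, while position-momentum uncertainty limits the pointer:
\Delta p \lesssim g\,\Delta m\,T,
\qquad
\Delta x \gtrsim \frac{\hbar}{\Delta p} \gtrsim \frac{\hbar}{g\,\Delta m\,T}.
% Gravitational time dilation over the pointer's position uncertainty \Delta x
% then smears the clock reading:
\frac{\Delta T}{T} = \frac{g\,\Delta x}{c^{2}}
\;\Longrightarrow\;
\Delta T \gtrsim \frac{\hbar}{\Delta m\,c^{2}} = \frac{\hbar}{\Delta E}
\;\Longrightarrow\;
\Delta E\,\Delta T \gtrsim \hbar.
```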

The above example illustrates an important restriction placed by quantum mechanics on the way intuition may be used to predict the outcome of a hypothetical situation. Since both physicists and philosophers live in the macroscopic world, where a classical view is adequate to describe things, the basis and assumptions of such intuition must be closely scrutinised, and some sort of working rules must be applied to any thought experiment. For the majority of Einstein’s early work against quantum mechanics as a complete theory, it was inconsistencies in the initial assumptions which led to the rejection of his arguments, and not the logic of the arguments themselves.

Einstein eventually accepted criticisms of his early thought experiments which attempted to disprove the uncertainty relations by the direct measurement of the properties of a system. Instead, he pursued a different line of argument (and a different thought experiment) to show inconsistencies in quantum mechanics through its denial of objective reality (with respect to the dynamical properties of a system). It was this line of argument that formed the basis of the 1935 paper by Einstein, with Boris Podolsky and Nathan Rosen, on the incompleteness of quantum mechanics, which has come to be known as the EPR paradox.

The principal argument of the EPR paper was based on the claim that:

“if, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”

Accepting this claim, it was then shown that such predictions could be made for certain correlated systems, thereby proving the existence of some sort of objective reality, and contradicting quantum mechanics.

The basic thought experiment used to show this involved consideration of two particles, travelling in opposite directions, and originating from a single event (such as the radioactive decay of a nucleus) in such a way that their properties must be correlated. Quantum theory allows both the distance between the two particles and their total momentum to be known precisely, and so, by measuring the position or momentum of one particle, one can predict with certainty the position or momentum of the other. Since the distance between the two particles could be made arbitrarily large, it was argued that a measurement on one particle could not simultaneously have any effect on the other (by special relativity). Thus, a measurement carried out on one particle cannot create or in any way influence any property of the second particle, and so any property derived from such a measurement for the second particle must have an objective reality separate from the first, and must have existed before any measurement at all was made. This creates a paradox for quantum theory, which states that such properties do not exist until measured. It was therefore concluded that quantum mechanics must be an incomplete description of the universe.
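The technical point underpinning this is that the relative position and total momentum of the pair commute, so both can in principle be known exactly (a standard observation about the EPR state, added here for clarity):

```latex
% The cross terms [x_1, p_2] and [x_2, p_1] vanish because they act on
% different particles, leaving only the canonical commutators:
\left[\, x_{1}-x_{2},\; p_{1}+p_{2} \,\right]
= [x_{1},p_{1}] - [x_{2},p_{2}]
= i\hbar - i\hbar = 0 .
```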

Hidden variable theories

Physical theories which set out to prove, or otherwise require, the existence of objective reality with respect to the dynamical properties (or variables) of a system can be placed in the general category of hidden variable theories. Such theories subscribe to a belief in an underlying clockwork controlling the interactions which occur in our universe, one which only gives the appearance of uncertainty and unpredictability at the quantum level (through statistical variations). An example of this was Einstein’s belief in the existence of some deeper, as yet unknown, mechanism governing the radioactive decay of an individual, isolated nucleus.

Such theories obey strict causality and determinism, with particles having precise values of properties such as momentum and position irrespective of whether or not they are being observed or measured. As a result, they are in direct conflict with quantum mechanics. However, the EPR thought experiment in itself provided no concrete way of physically resolving this conflict, only a hypothesis. This remained the case until a 1964 paper by John Bell, which showed that the EPR paradox could be resolved by a set of experiments whose predicted outcomes under hidden variable theories differ from those of quantum mechanics.

Bell’s theorem

The effectively classical, `local realistic’ view of the universe which underlies most hidden variable theories is generally based on three fundamental assumptions: firstly, that there are real things that exist regardless of whether we look at them or not; secondly, that it is legitimate to draw conclusions from consistent observations or experiments; and thirdly, that no influence can propagate faster than the speed of light. It was from this stance that Bell carried out the argument of his 1964 paper.

Although Bell’s experiment differed from the original EPR experiment in that it involved consideration of different components of polarisation of two photons rather than momentum and position, the fundamental question about the objective reality of non-commuting properties still remained the central issue. Bell’s idea was to consider an atom with zero angular momentum that decays into two identical photons propagating in opposite directions. Because the initial angular momentum of the system is zero, the two photons must be polarised in the same direction, the physics of which is unequivocally accepted by both classical and quantum theory. The photons are then allowed to move apart and eventually travel through separate polarisers mounted in front of photodetectors. The angle of the polarisers can be set to either vertical or 60° to either side of the vertical.

The experiment proceeds, for each decay, by setting the angle of each of the two polarisers randomly to one of the three positions and recording detection or non-detection at each photodetector. If this procedure is carried out for a large number of decays, Bell showed, the resulting statistics predicted by local realistic (hidden variable) theories will differ from those predicted by quantum mechanics.
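As a toy illustration (not part of Bell’s paper), the procedure can be simulated by sampling each trial’s outcome directly from the quantum prediction, in which the probability that the two detectors agree is cos² of the angle between the polarisers:

```python
import math
import random

ANGLES = [-60.0, 0.0, 60.0]  # the three polariser positions, in degrees


def run_trials(n, rng):
    """Simulate n decays: each polariser is set randomly to one of the three
    positions, and agreement between the detectors is sampled with the
    quantum probability cos^2(angle between the polarisers)."""
    same = 0
    for _ in range(n):
        a = rng.choice(ANGLES)
        b = rng.choice(ANGLES)
        if rng.random() < math.cos(math.radians(a - b)) ** 2:
            same += 1
    return same / n


rate = run_trials(100_000, random.Random(42))
print(rate)  # close to 1/2, the quantum prediction; below the 5/9 bound
```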

Considering a large number of decays, quantum mechanics predicts that the two detectors will register the same outcome (either double detection or double non-detection) 100% of the time when the polarisers are in the same position (both at the same angle), and 25% of the time when their positions are 60° or 120° apart. Thus, averaging over the random selection of the nine possible polariser setting pairs, the quantum mechanical probability of identical detections is

P_QM = (3 × 1.0 + 6 × 0.25)/9 = 1/2

Hidden variable theories, however, give quite different statistical predictions; the argument proceeds as follows. Each of the photons is considered to possess its own instruction set (its hidden variables). Effectively these might take the form of three parameters which tell the photon whether or not it should be detected for each orientation of the polariser. An example of such a set is DNN, which tells the photon to register detection (D) if the polariser is in the first position (e.g. at 60° in one direction from the vertical) and non-detection (N) if it is in either of the other two. Now, since a hidden variable theory must also reproduce the fact that the same outcome occurs whenever both polarisers are in the same position, both photons must carry exactly the same instruction set. For the instruction sets DDD or NNN, the two detectors will register the same outcome regardless of their relative orientation, and for each of the other six possible instruction sets, five of the nine polariser orientations will register the same outcome. Therefore, even if no DDD or NNN instruction sets are ever emitted, we have a lower bound of 5/9 for the probability of identical responses, that is,

P_EPR ≥ 5/9

This leads to the relation P_EPR > P_QM, which is known as Bell’s theorem. By this theorem, the contradiction between the predictions of quantum mechanics and those of EPR hidden variable/local reality theories is made both obvious and, what is more, experimentally testable. It is also important to note that the condition for P_EPR above is not specific to the EPR argument but is a general result for all local hidden variable theories (the instruction sets introduced are a general requirement of any local hidden variable theory).
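The two predictions can also be checked directly with a short script (a sketch; the angle convention and D/N encoding follow the text above):

```python
import math
from itertools import product

ANGLES = [-60.0, 0.0, 60.0]  # the three polariser positions, in degrees

# Quantum mechanics: probability of identical outcomes is cos^2 of the
# angle between the polarisers, averaged over the nine setting pairs.
p_qm = sum(
    math.cos(math.radians(a - b)) ** 2 for a, b in product(ANGLES, repeat=2)
) / 9

# Hidden variables: both photons carry the same instruction set (one of the
# eight strings over D/N), and the detectors agree whenever the two settings
# index equal entries of that set.
def agree_fraction(instructions):
    return sum(
        instructions[i] == instructions[j]
        for i, j in product(range(3), repeat=2)
    ) / 9

p_hv_min = min(agree_fraction(s) for s in product("DN", repeat=3))

print(round(p_qm, 3))     # 0.5
print(round(p_hv_min, 3))  # 0.556, i.e. the 5/9 lower bound
```

The minimum over all eight instruction sets is attained by the six mixed sets (e.g. DNN), confirming the 5/9 bound derived in the text.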

Aspect’s experiment and its implications

In 1982, Alain Aspect and co-workers at Orsay, France, conducted what is arguably the most conclusive set of experiments so far to resolve the conflict between quantum mechanics and hidden variable theories. The results showed not only that the EPR inequality is violated, but that the predictions of quantum mechanics are confirmed.

The implication of these and the majority of similar results favouring quantum mechanics is that local hidden variable theories are incorrect, and that the assumptions of the EPR paper about the nature of the universe are flawed. In particular, the belief that quantum influences (which might enforce the uncertainty relations) between spatially separated particles cannot propagate faster than the speed of light, and that strict causality must exist for all interactions, must be rejected. The results confirm that dynamical properties such as photon polarisation (and, in other similar experiments, the direction of electron spin) do not have a concrete local reality for isolated, undisturbed particles, and imply some sort of superluminal interaction between correlated particles which ensures that the Heisenberg uncertainty principle is not violated. Thus, it can be argued that it was the fundamental assumptions of EPR hidden variable theories, such as those Bell began with, that led, through plausible arguments, to incorrect conclusions.

One can say that it is the classical belief in local objective reality, and thus in the laws of determinism and causality, that is incorrect and thus incomplete. As for quantum mechanics, although it would be logically incorrect to say that Aspect’s results prove it to be a complete theory of the universe (it is quantum theory that, through its predictions, logically implies the outcomes, and not vice versa), it is possible to say with modest confidence that any truly complete theory should contain the critical aspects of quantum theory, those to which Einstein and others objected. This leaves the door open for broader and more wide-reaching interpretations of the physical implications of quantum theory, such as the many-worlds interpretation and non-local hidden variable theories.

But by any standard, the Copenhagen interpretation of quantum mechanics still stands as the most complete formalism for describing the universe to date.

