Category Archives: Quantum mechanics

New Paper Dance

B. C. Stacey, “SIC-POVMs and Compatibility among Quantum States” [arXiv:1404.3774]:

An unexpected connection exists between compatibility criteria for quantum states and symmetric informationally complete POVMs. Beginning with Caves, Fuchs and Schack’s “Conditions for compatibility of quantum state assignments” [Phys. Rev. A 66 (2002), 062111], I show that a qutrit SIC-POVM studied in other contexts enjoys additional interesting properties. Compatibility criteria provide a new way to understand the relationship between SIC-POVMs and mutually unbiased bases, as calculations in the SIC representation of quantum states make clear. Along the way, I correct two mathematical errors in Caves, Fuchs and Schack’s paper. One error is a minor nit to pick, while the other is a missed opportunity.

One and One and One Make Three

Every once in a while, a bit of esoteric mathematics drifts into more popular view and leaves poor souls like me wondering, “Why?”

Why is this piece of gee-whizzery being waved about, when the popularized “explanation” of it is so warped as to be misleading? Is the goal of “popularizing mathematics” just to inflate the reader’s ego—the intended result being, “Look what I understand!”, or, worse, “Look at what those [snort] professional mathematicians are saying, and how obviously wrong it is”?

Today’s instalment (noticed by my friend Dr. SkySkull): the glib assertion going around that

$$ 1 + 2 + 3 + 4 + 5 + \cdots = -\frac{1}{12}. $$

It’s like an Upbuzzdomeworthy headline: These scientists added together all the counting numbers. You’ll never guess what happened next!

“This crazy calculation is actually used in physics,” we are solemnly assured.


The physics side of the story is, roughly, “Sometimes you’re doing a calculation and it looks like you’ll have to add up $$1+2+3+4+\cdots$$  and so on forever. Then you look more carefully and realize that you shouldn’t—something you neglected matters. It turns out that you can swap in $$-1/12$$ for the corrected calculation and get a good first stab at the answer. More specifically, swapping in $$-1/12$$ tells you the part of the answer which doesn’t depend on the particular details of the extra effect you originally neglected.”

For an example of this being done, see David Tong’s notes on quantum field theory, chapter 2, page 27. For the story as explained by a mathematician, see Terry Tao’s “The Euler-Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation.” As that title might hint, these do presume a certain level of background knowledge, but that’s kind of the point. This is an instance where the result itself requires at least moderate expertise to understand, unlike, say, the four-colour theorem, where the premise and the result are pretty easy to set out, and it’s the stuff in between which is much harder to follow.
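To see the “part of the answer which doesn’t depend on the details” claim in action, here is a minimal numerical sketch (my own toy example, with invented names; it follows the smoothed-sum viewpoint rather than reproducing any calculation from the sources above). Taming the divergent series with a smooth exponential cutoff splits the answer into a regulator-dependent piece and a universal constant:

```python
import math

def regulated_sum(s, terms=20000):
    """Sum n * exp(-n * s) for n = 1, 2, 3, ...: the divergent
    series 1 + 2 + 3 + ... tamed by a smooth cutoff of width 1/s."""
    return sum(n * math.exp(-n * s) for n in range(1, terms + 1))

# As the cutoff is removed (s -> 0), the regulated sum behaves like
#     1/s**2 - 1/12 + O(s**2):
# a divergent piece that depends on the regulator, plus a constant
# that does not. The constant is the -1/12 of the headlines.
s = 0.01
print(regulated_sum(s) - 1 / s**2)  # close to -1/12 = -0.0833...
```

Swapping in a different smooth cutoff function changes the divergent $$1/s^2$$ piece but not the $$-1/12$$; that regulator-independence is what the physics calculation actually relies on.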

ADDENDUM (19 January 2014): I’ve heard the argument in favour of this gee-whizzery that it “gets people excited about mathematics.” So what? A large number of people are misinformed; a tiny fraction of that population goes on to learn more and realize that they were, essentially, lied to. Getting people interested in mathematics is a laudable goal, but you need to pick your teaser-trailer examples more carefully.

And I see Terry Tao has weighed in himself with a clear note and some charming terminology.

Time Capsule

While looking through old physics books for alternate takes on my quals problems, I found a copy of Sir James Jeans’ Electricity and Magnetism (5th edition, 1925). It’s a fascinating time capsule of early views on relativity and what we now call the “old quantum theory,” that is, the attempt to understand atomic and molecular phenomena by adding some constraints to fundamentally classical physics. Jeans builds up Maxwellian electromagnetism starting from the assumption of the aether. Then, in chapter 20, which was added in the fourth edition (1919), he goes into special relativity, beginning with the Michelson–Morley experiment. Only after discussing many examples in detail does he, near the end of the chapter, say

If, then, we continue to believe in the existence of an ether we are compelled to believe not only that all electromagnetic phenomena are in a conspiracy to conceal from us the speed of our motion through the ether, but also that gravitational phenomena, which so far as is known have nothing to do with the ether, are parties to the same conspiracy. The simpler view seems to be that there is no ether. If we accept this view, there is no conspiracy of concealment for the simple reason that there is no longer anything to conceal.


Delayed Gratification

A post today by PZ Myers nicely expresses something which has been frustrating me about people who, in arguing over what can be a legitimate subject of “scientific” study, play the “untestable claim” card.

Their ideal is the experiment that, in one session, shoots down a claim cleanly and neatly. So let’s bring in dowsers who claim to be able to detect water flowing underground, set up control pipes and water-filled pipes, run them through their paces, and see if they meet reasonable statistical criteria. That’s science, it works, it effectively addresses an individual’s very specific claim, and I’m not saying that’s wrong; that’s a perfectly legitimate scientific experiment.

I’m saying that’s not the whole operating paradigm of all of science.

Plenty of scientific ideas are not immediately testable, or directly testable, or testable in isolation. For example: the planets in our solar system aren’t moving the way Newton’s laws say they should. Are Newton’s laws of gravity wrong, or are there other gravitational influences which satisfy the Newtonian equations but which we don’t know about? Once, it turned out to be the latter (the discovery of Neptune), and once, it turned out to be the former (the precession of Mercury’s orbit, which required Einstein’s general relativity to explain).

There are different mathematical formulations of the same subject which give the same predictions for the outcomes of experiments, but which suggest different new ideas for directions to explore. (E.g., Newtonian, Lagrangian and Hamiltonian mechanics; or density matrices and SIC-POVMs.) There are ideas which are proposed for good reason but hang around for decades awaiting a direct experimental test—perhaps one which could barely have been imagined when the idea first came up. Take directed percolation: a simple conceptual model for fluid flow through a randomized porous medium. It was first proposed in 1957. The mathematics necessary to treat it cleverly was invented (or, rather, adapted from a different area of physics) in the 1970s…and then forgotten…and then rediscovered by somebody else…connections with other subjects were made… Experiments were carried out on systems which almost behaved like the idealization, but always turned out to differ in some way… until 2007, when the behaviour was finally caught in the wild. And the experiment which finally observed a directed-percolation-class phase transition with quantitative exactness used a liquid crystal substance which wasn’t synthesized until 1969.
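Since directed percolation is so simple to state, it is also simple to play with. Here is a toy simulation of bond directed percolation in 1+1 dimensions (my own illustrative sketch, not code from any of the studies mentioned; the function names and parameters are invented). Each wet site independently wets each of its two downstream neighbours with probability $$p$$; for this lattice, the transition is known numerically to sit near $$p \approx 0.645$$.

```python
import random

def dp_step(active, p, rng):
    """One time step of 1+1D bond directed percolation: each wet
    site independently wets each of its two downstream neighbours
    with probability p."""
    nxt = set()
    for i in active:
        if rng.random() < p:
            nxt.add(i)      # down-left neighbour
        if rng.random() < p:
            nxt.add(i + 1)  # down-right neighbour
    return nxt

def survives(p, width=200, steps=200, seed=42):
    """Start from a fully wet row; return True if any activity
    remains after `steps` generations."""
    rng = random.Random(seed)
    active = set(range(width))
    for _ in range(steps):
        active = dp_step(active, p, rng)
        if not active:
            return False
    return True

# Well below the threshold the cluster dies out; well above, it percolates.
print(survives(0.3), survives(0.8))
```

Below the threshold, activity decays exponentially fast; above it, a finite fraction of sites stays wet forever. That sharp change in behaviour is the phase transition which the 2007 liquid-crystal experiment finally characterized quantitatively.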

You don’t need to go dashing off to quantum gravity to find examples of ideas which are hard to test in the laboratory, or where mathematics long preceded experiment. (And if you do, don’t forget the other applications being developed for the mathematics invented in that search.) Just think very hard about the water dripping through coffee grounds to make your breakfast.

Of Two Time Indices

In the appendix to a paper I am currently co-authoring, I recently wrote the following within a parenthetical excursus:

When talking of dynamical systems, our probability assignments really carry two time indices: one for the time our betting odds are chosen, and the other for the time the bet concerns.

A parenthesis in an appendix is already a pretty superfluous thing. Treating this as the jumping-off point for further discussion merits the degree of obscurity which only a lengthy post on a low-traffic blog can afford.


Bohr’s Horseshoe

Now and then, one hears physicist stories of uncertain origin. Take the case of Niels Bohr and his horseshoe. A short version goes like the following:

It is a bit like the story of Niels Bohr’s horseshoe. Upon seeing it hanging over a doorway someone said, “But Niels, I thought you didn’t believe horseshoes could bring good luck.” Bohr replied, “They say it works even if you don’t believe.” [source]

I find it interesting that nobody seems to know where this story comes from. The place where I first read it was a jokebook: Asimov’s Treasury of Humor (1971), which happens to be three years older than the earliest appearance Wikiquote knows about. In this book, Isaac Asimov tells a lot of jokes and offers advice on how to deliver them. The Bohr horseshoe, told at slightly greater length, is joke #80. Asimov’s commentary points out a difficulty with telling it:

To a general audience, even one that is highly educated in the humanities, Bohr must be defined — and yet he was one of the greatest physicists of all time and died no longer ago than 1962. But defining Bohr isn’t that easy; if it isn’t done carefully, it will sound condescending, and even the suspicion of condescension will cool the laugh drastically.

Note the light dusting of C. P. Snow. Asimov proposes the following solution.

If you despair of getting the joke across by using Bohr, use Einstein. Everyone has heard of Einstein and anything can be attributed to him. Nevertheless, if you think you can get away with using Bohr, then by all means do so, for all things being equal, the joke will then sound more literate and more authentic. Unlike Einstein, Bohr hasn’t been overused.

I find this, except for the last sentence, strangely appropriate in the context of quantum-foundations arguments.

Know Thy Audience?

D. W. Logan et al. have an editorial in PLoS Computational Biology giving advice for scientists who want to become active Wikipedia contributors. I was one, for a couple years (cue the “I got better”); judging from my personal experience, most of their advice is pretty good, save for item four:

Wikipedia is not primarily aimed at experts; therefore, the level of technical detail in its articles must be balanced against the ability of non-experts to understand those details. When contributing scientific content, imagine you have been tasked with writing a comprehensive scientific review for a high school audience. It can be surprisingly challenging explaining complex ideas in an accessible, jargon-free manner. But it is worth the perseverance. You will reap the benefits when it comes to writing your next manuscript or teaching an undergraduate class.

Come again?

Whether Wikipedia as a whole is “primarily aimed at experts” or not is irrelevant for the scientist wishing to edit the article on a particular technical subject. Plenty of articles — e.g., Kerr/CFT correspondence or Zamolodchikov c-theorem — have vanishingly little relevance to a “high school audience.” Even advanced-placement high-school physics doesn’t introduce quantum field theory, let alone renormalization-group methods, centrally extended Virasoro algebras and the current frontiers of gauge/gravity duality research. Popularizing these topics may be possible, although even the basic ideas like critical points and universality have been surprisingly poorly served in that department so far. While it’s pretty darn evident for these examples, the same problem holds true more generally. If you do try to set about that task, the sheer amount of new invention necessary — the cooking-up of new analogies and metaphors, the construction of new simplifications and toy examples, etc. — will run you slap-bang into Wikipedia’s No Original Research policy.

Even reducing a topic from the graduate to the undergraduate level can be a highly nontrivial task. (I was a beta-tester for Zwiebach’s First Course in String Theory, so I would know.) And, writing for undergrads who already have Maxwell and Schrödinger Equations under their belts is not at all the same as writing for high-school juniors (or for your poor, long-suffering parents who’ve long since given up asking what you learned in school today). Why not try that sort of thing out on another platform first, like a personal blog, and then port it over to Wikipedia after receiving feedback? Citing your own work in the third person, or better yet recruiting other editors to help you adapt your content, is much more in accord with the letter and with the spirit of Wikipedia policy, than is inventing de novo great globs of pop science.

Popularization is hard. When you make a serious effort at it, let yourself get some credit.

Know Thy Audience, indeed: sometimes, your reader won’t be a high-school sophomore looking for homework help, but rather a fellow researcher checking to see where the minus signs go in a particular equation, or a graduate student looking to catch up on the historical highlights of their lab group’s research topic. Vulgarized vagueness helps those expert readers not at all, and gives the homework-seeking student only a gentle illusion of learning. Precalculus students would benefit more if we professional science people worked on making articles like Trigonometric functions truly excellent than if we puttered around making up borderline Original Research about our own abstruse pet projects.

  • Logan DW, Sandal M, Gardner PP, Manske M, Bateman A, 2010 Ten Simple Rules for Editing Wikipedia. PLoS Comput Biol 6(9): e1000941. doi:10.1371/journal.pcbi.1000941

Textbook Cardboard and Physicist’s History

By the way, what I have just outlined is what I call a “physicist’s history of physics,” which is never correct. What I am telling you is a sort of conventionalized myth-story that the physicists tell to their students, and those students tell to their students, and is not necessarily related to the actual historical development, which I do not really know!

Richard Feynman

Back when Brian Switek was a college student, he took on the unenviable task of pointing out when his professors were indulging in “scientist’s history of science”: attributing discoveries to the wrong person, oversimplifying the development of an idea, retelling anecdotes which are more amusing than true, and generally chewing on the textbook cardboard. The typical response? “That’s interesting, but I’m still right.”

Now, he’s a palaeontology person, and I’m a physics boffin, so you’d think I could get away with pretending that we don’t have that problem in this Department, but I started this note by quoting Feynman’s QED: The Strange Theory of Light and Matter (1986), so that’s not really a pretence worth keeping up. When it comes to formal education, I only have systematic experience with one field; oh, I took classes in pure mathematics and neuroscience and environmental politics and literature and film studies, but I won’t presume to speak in depth about how those subjects are taught.

So, with all those caveats stated, I can at least sketch what I suspect to be a contributing factor (which other sciences might encounter to a lesser extent or in a different way).

Suppose I want to teach a classful of college sophomores the fundamentals of quantum mechanics. There’s a standard “physicist’s history” which goes along with this, which touches on a familiar litany of famous names: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Werner Heisenberg, Erwin Schrödinger. We like to go back to the early days and follow the development forward, because the science was simpler when it got started, right?

The problem is that all of these men were highly trained, professional physicists who were thoroughly conversant with the knowledge of their time — well, naturally! But this means that any one of them knew more classical physics than a modern college sophomore. They would have known Hamiltonian and Lagrangian mechanics, for example, in addition to techniques of statistical physics (calculating entropy and such). Unless you know what they knew, you can’t really follow their thought processes, and we don’t teach big chunks of what they knew until after we’ve tried to teach what they figured out! For example, if you don’t know thermodynamics and statistical mechanics pretty well, you won’t be able to follow why Max Planck proposed the blackbody radiation law he did, which was a key step in the development of quantum theory.
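To make the Planck example concrete (the formulas below are standard textbook material, not something from the post itself): Planck’s blackbody law for the spectral energy density,

$$ u(\nu, T) = \frac{8\pi h \nu^3}{c^3} \, \frac{1}{e^{h\nu / k_B T} - 1}, $$

reduces in the classical regime $$ h\nu \ll k_B T $$ to the Rayleigh–Jeans form $$ u \to (8\pi \nu^2 / c^3) \, k_B T $$, which diverges when integrated over all frequencies. Understanding why Planck’s counting of discrete energy quanta cures that divergence requires exactly the statistical mechanics (the entropy of distributing energy among oscillators) that a sophomore hasn’t seen yet.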

Consequently, any “historical” treatment at the introductory level will probably end up “conventionalized.” One has to step extremely carefully! Strip the history down to the point that students just starting to learn the science can follow it, and you might not be portraying the way the people actually did their work. That’s not so bad, as far as learning the facts and formulæ is concerned, but you open yourself up to all sorts of troubles when you get to talking about the process of science. Are we doing physics differently than folks did N or 2N years ago? If we are, or if we aren’t, is that a problem? Well, we sure aren’t doing it like they did in chapter 1 of this textbook here. . . .

Calloo! Callay!

My copy of Quantum Mechanics and Path Integrals by Feynman and Hibbs just arrived! If, say, David Griffiths’ textbook epitomizes the ordinary “vernacular” treatment of quantum mechanics, QMaPI is a classic unorthodox approach. Intended for students who already have a bit of background in the subject, it builds up the Lagrangian alternative to the Hamiltonian method, a highly useful idea when one goes on to study field theory, string theory or advanced statistical physics.
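For orientation, the central formula of that Lagrangian approach (standard material, sketched here rather than quoted from the book): the amplitude for a particle to go from $$a$$ to $$b$$ is a sum over all paths connecting them, each weighted by a phase set by the classical action,

$$ K(b, a) = \int \mathcal{D}x(t) \, e^{i S[x(t)] / \hbar}, \qquad S[x(t)] = \int_{t_a}^{t_b} L(x, \dot{x}, t) \, dt, $$

so the Lagrangian $$L$$, rather than the Hamiltonian, becomes the starting point, which is precisely what pays off later in field theory.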

For years, this book was only available in beat-up old library copies and illegal DJVU files from Lithuania, but now, Dover has brought forth a new edition. I’m not certain of this, but it appears the book was so heavily pre-ordered that it sold out the day it became available for purchase.

EDIT TO ADD: One erratum — on p. 364, Thorber should be Thornber.

Monday BPSDB: Null Physics

A fellow named Terry Witt has been advertising his self-published book, Our Undiscovered Universe, in places like Discover magazine and Scientific American. Unfortunately, the ad pages aren’t exactly peer-reviewed, or even cross-checked with a nearby grad student; being businesses, magazines naturally care about revenue. Upon examination, Our Undiscovered Universe turns out to be brimming over with crank physics and general nonsense. Ben Monreal, who was one of the intimidatingly smart people in the lab where I did my undergrad thesis, has weighed Witt’s “Null Physics” and found it wanting; his review of Our Undiscovered Universe is quite a good read.

Witt’s book starts with pseudomathematics before moving on to pseudophysics. As Ben explains,

Chapter 1 is where Witt lays out a series of “proofs” derived from what he calls the “Null Axiom”. That axiom is: “Existence sums to nonexistence” (pg. 28)—something that Witt calls self-evident after a page of invalid set theory. The central mistake, if I had to identify one, is the claim that “X does not exist” is the same as “everything except X exists”. This is utter baloney, whether in formal logic or in set theory or in daily experience.

Actually, as the book unfolds, Witt doesn’t appear to use this dead-in-the-water non-axiom for anything. He does, however, pile on more pseudomathematics:

Chapter 3 contains such gems as Theorem 3.1: “The Existence of Any Half of the Universe is Equal to the Nonexistence of the Other Half” (pg. 66) and Theorem 3.9: “The Time Required for Light to Traverse the Universe is Eternity, infinity/c” (pg. 72). I am not making this up. Witt throws around “infinity” as though it were an ordinary real number; he multiplies and divides by it, etc., with normal algebraic cancellation. This is complete nonsense; there are two centuries of mathematical thought figuring out the mathematical properties of infinity, and Witt’s approach is valid in exactly none of them. (Witt later explained on his online forum—currently disabled—that he’s reinvented all of the mathematics associated with “infinity”. His reasoning, if that’s what you call it, was that his new definition jibed with a grand idea about math being dependent on nature; it was an argument from incredulity.)

When Witt does finally get around to physics, five chapters into the book, he doesn’t do any better.

Quantum Woo, Part N

Time for a little BPSDB! The redoubtable Ben Goldacre has the dirt on Bill Nelson’s “QXCI machine,” a device for “bioenergetic health auditing,” a medical procedure well-known among specialists as an essential step in the surgical removal of cash from wallets. Best of all, though, is what QXCI stands for: Quantum Xrroid Consciousness Interface. Now, quantum physics has jack to do with consciousness, but more importantly, “quantum xrroid” just sounds. . . painful. Like a blood boil growing inside your X, if you know what I mean.

Maybe a “quantum xrroid” means that your X is in a superposition of inflamed and not inflamed and only settles on one or the other option when your doctor examines it.

(Incidentally, I met the redoubtable Ben Goldacre in Vegas a few weeks ago — and thereby would hang a tale, if he weren’t still hoarding the photo evidence.)

Quantum One

Michael Nielsen’s recent essay “Why the world needs quantum mechanics,” about the quintessential weirdness of quantum phenomena, provoked Dave Bacon to ask if there’s a better way to teach introductory courses in quantum physics. This question strikes a chord with me, since my first semester of college quantum — the class known as “8.04” — was rather remarkably dreadful.

It began with some fluff about early models of the atom, leaving out most of the ideas actually proposed in favor of a “textbook cardboard” version of the discoveries made in early TwenCen. If we can’t teach history well, why teach it at all? We’re certainly not promoting a genuine understanding of how science works if we only present a caricature of it. I doubt one could even instil an appreciation for the problems which Bohr, Heisenberg, Schrödinger and company solved in the years leading up to 1927: sophomore physics students can’t follow in their footsteps, because sophomore physics students don’t know as much classical physics as professional physicists of the 1920s did. To understand their starting point and the steps they took requires, oddly enough, subject material which even MIT undergrads don’t learn until later.