The online store of Marie “does it spark joy” Kondo is the new SkyMall catalogue.

The $60 “French Market” totebag looks flammable, so if you do strike sparks, you might have to douse the flames with water from the Balance Gem Water Bottle ($98).

And $156 for a “small cheese knife”? That, my friends, is going beyond SkyMall. We can’t stop here — this is *Williams-Sonoma country.*

And, of course, there’s the section of the shop devoted to — ting ting! — crystals. A tuning fork and a lump of rose quartz, $75 if you please.

Goddamn pseudoscientific “healing energy” crap for wine moms. Not only does it leech the respectability of science, but by aiming for that suburban market, it loses any chance of emotional depth. Nobody *actually* feels meaning or satisfaction from a SkyMall chakra stone, because of course they don’t. It just sits there. It does a worse job of just sitting there than a potted plant. The best you can do is try to convince yourself otherwise so you don’t admit you blew $75 feeding an exploitative industry that you could have spent paying off schoolchildren’s lunch debt or helping someone make the rent.

As long as you’re trading in bunkum, why not up your game and sell something with real weight and history to it? You want some *serious* Goop? How about a kit for performing some fucking divination by entrails. Can’t find Mr. Right? Learn the secrets of the heart by going through the liver.

The way I see it, the two big *Why?* questions about quantum mechanics are, first, why do we use the particular mathematical apparatus of quantum theory, as opposed to any alternative we might imagine? And second, why do we only find it necessary to work with the full perplexities of quantum physics some of the time? These two questions are related. In order to understand how imprecise measurements might wash out quantum weirdness, we need to characterize which features of quantum theory really are fundamentally weird. And this, in turn, requires separating deep principles from convenient conventions and illuminating the true core of the physics. My own research has focused on the first question, but the second is never too far from my mind.

Of course, I have a lot on my mind these days, but I don’t think I’m special in that regard.

If you ask me, a “quantum system” can be any part of nature that is subject to an agent’s inquiry. A “quantum measurement” is, in principle, any action that an agent takes upon a quantum system. The road between Boston’s City Hall and the Holocaust Memorial is a quantum system. When the police use their bicycles as battering rams against queer kids and street medics, running towards the trouble is a quantum measurement. Being threatened with pepper spray while secondhand exposure already stings the eyes and throat, one human thrown to the pavement in the intersection in front of you while another arrest happens on the sidewalk just behind you: that is an outcome of that measurement. Unsurprisingly, textbooks provide little guidance on casting that event into the algebraic formalism of density matrices, and in the moment, other types of expertise are more immediately useful.

I first encountered quantum physics in a serious way during the spring of my second year at university — 2003, that would have been. I did not particularly care about the conceptual or philosophical “foundations” of it until the summer of 2010. The interval in between encompassed six semesters of quantum mechanics and subjects dependent upon it, along with my first attempts to find a research problem in the area. Once my curiosity had been provoked, it took the better part of a year to find an “interpretation” of quantum mechanics that was at all satisfying, and longer than that to make the transition from “this is how a member of that school would answer that question” to “this is what I declare myself”. Part of that transition was my discovery that I could put my own stamp on the ideas: The concepts and the history provoked new mathematical questions, which I could approach with a background that nobody else had.

The interpretation I adopted was the *QBism* of Chris Fuchs and Rüdiger Schack, later joined by N. David Mermin.

**QBism** is an interpretation of quantum mechanics in which the ideas of *agent* and *experience* are fundamental. A “quantum measurement” is an act that an agent performs on the external world. A “quantum state” is an agent’s encoding of her own personal expectations for what she might experience as a consequence of her actions. Moreover, each measurement outcome is a personal event, an experience specific to the agent who incites it. Subjective judgments thus comprise much of the quantum machinery, but the formalism of the theory establishes the standard to which agents should strive to hold their expectations, and that standard for the relations among beliefs is as objective as any other physical theory.

That’s how we put it in the FAQ. Any physicist who is weird enough to endorse an interpretation of quantum mechanics will naturally get inquiries about it. Many of these, we get often enough that we try to compile good answers together into a nicely portable package — with the proviso that the project is ongoing, and some answers are not final, because if physics were easy, we’d be done by now.

There’s a question which seems particularly suited to answering in the blog format, though: “Why don’t you believe in the Many Worlds Interpretation?”

For starters, there is not *one* MWI, but many (perhaps fittingly), and none of them are compelling. What they all presume, I would like to explain. Moreover, we have every reason to think we *can* explain those features of quantum mechanics that the MWIs merely assert, and in so doing we can *really learn some physics.*

MWI evangelism relies upon a set of tropes that become less engaging with increased exposure. The evangelist will often insist that his MWI, which he may equate with “the” MWI, is “simple”. This simplicity is belied by the dozens if not hundreds of pages that he is willing to write about his MWI; for he must establish its glory in detail, and if he admits there are others, he must prove his superior to them all. “Simplicity”, rather like “elegance”, appears to be a burden of the beholder. (Nor is it clear that “simplicity”, however one might measure it, is necessarily a virtue. As Peter Shor asked, “Who promised us that Nature’s arcane rules / Would make sense to a merely human brain?”)

The evangelist is likely to say that “the” (his) MWI is a “natural” or perhaps even “inevitable” consequence of accepting that quantum physics is a correct theory. Stripped of rhetorical flourishes, the argument for naturalness becomes, “If wavefunctions are objective and always evolve unitarily, then wavefunctions are objective and always evolve unitarily.” It is a surprisingly empty business. Trying to add content to it, the evangelists disagree with one another; the “naturalness”, such as it was, is quickly used up. The topics of these disputes go by anodyne names, like “deriving the Born rule”. When their content is uncovered, they turn out to involve the very matter of how to wring meaningful probabilities out of the “interpretation”. In other words, they are about how to make the “interpretation” an actual scientific theory rather than a metaphysical ramble with no predictive traction. (For example, Sean Carroll’s preferred approach is based on an idea of Lev Vaidman; but Vaidman finds the key step in Carroll’s version “illegitimate”.) On this theme, I recommend essays by Adrian Kent, Huw Price and Carl Caves. I will cheerfully disagree with any of them on other points, no doubt, but their critiques have been, for me, the most stimulating. And I take no shame in recommending papers based on the enjoyable quality of their prose.

The MWI evangelist affects an air of audacity, but in more cases than not, his gospel betrays a deeply rooted ordinariness. His intuition finds no way for a theory to be “correct” unless its most obvious mathematical features are in blunt one-to-one correspondence with physical reality. Under his pose of radicality lies a trite obstinacy, a shallow imagination-like product that presents itself as depth and vision. Even when the evangelist is a scientist, during the encounter he conducts himself more like a fanboy for science than a seasoned practitioner of it.

(There is a certain emotional commonality between these conversations and those wherein a Silicon Valley type enthuses about biohacking, a trans woman replies, “Yes, check out my anti-androgen stockpile,” and the SV bro suddenly insists *Not like that!* Come to think of it, there may well be subreddits that provide all these varieties of STEM fanboyism simultaneously. And yes, my pronoun choices have been deliberate.)

Now, to be clear, I’ve been recounting my experiences with *evangelism*, which is qualitatively worse than *endorsement* or perhaps even a measure of *advocacy.* If you’re worried about being That Guy, you probably aren’t That Guy. Many physicists can hold this opinion or that with little consequence. The evangelist will go beyond, like a bad stand-up comedian still riffing on what was edgy in 1970, passing off a choice from the menu of lazy answers as daring and transgressive.

There is no “other branch of the wavefunction” where things turned out OK. There’s just this world, our world, full of careless actions and unintended consequences and science that, for all its power, always comes back to people.

Reading back over what I have written so far, it sounds more snarlish than I would like, but each individual piece says what I wanted, so I will offer my blueberry bread recipe and press on. Indeed, I might as well snarl about some other popular (or at least oft-mentioned) interpretational gambits while I’m here. Bohmian mechanics? More appealing to philosophers than to physicists; by now, sterile. Rovellian “relational quantum mechanics”? An exercise in transferring properties from the vertices of a graph to its edges, constantly backing away from its shot at grandeur. “The” Copenhagen interpretation? Again, there is not one of them, but a quarrelsome flock. Focusing on Bohr in particular, I found his philosophizing more subtle than he is often given credit for, yet ultimately unsatisfying even so. The venerable/hoary “Shut up and calculate”? As I wrote elsewhere, that is not a *stable* position, for it is vulnerable to perturbations by curiosity.

Even the most ascetic claim — the assertion to shut up and calculate with one mathematical formalism rather than another — is in some way a claim about the character of the world. Perhaps bound up with historical happenstance and social convention, but a claim about Nature nonetheless: Were the world a different way, would we not, after we shut up, calculate in a different fashion?

“Objective collapse models” might briefly be summarized as, *wavefunctions are objective, and when they grow too big they fall over.* I can sense a kind of second-hand intuitive appeal to this: In classical physics, we’re accustomed to dynamics becoming nonlinear when we push them hard. So, I can see how ideas in this general region might appeal to somebody else’s heuristics, even though to me, such schemes just modify an uninteresting idea of what quantum theory is about, and thus risk inheriting that dullness themselves. I do feel the romance in the possibility of gravity being the failure mode of quantum mechanics. If I were at a conference and a talk on that were on the program, I’d listen and maybe even think about ways I could turn their mathematics to my own nefarious purposes. Whereas if an MWI evangelist were next on the schedule, I’d skip the historical revisionism and leave the hotel to find a place that sells affordable coffee. Generally, I think the grand game of modifying quantum theory needs a better understanding of what principles quantum theory should be derived from — a project that is still very much a work in progress.

Every job interview wraps around to the “What do you believe are your greatest weaknesses?” bit, and it would be only fair for me to talk about the objections to QBism. The Stanford Encyclopedia of Philosophy gives a decent third-party take on the complaints that have been put on the record. The most insightful comments, though, in the sense of potentially generating new ideas and being intellectually fruitful, have come in much less formal places — e-mails, chats in hotel bars and the like. It may be a rewarding exercise to try doing the dinosaur-bone thing and reconstructing how those exchanges have gone based on FAQBism. I caution the student, however, that many of the questions we hear *most* frequently are difficult to say anything interesting in response to, because they are interrogating their preconceptions rather than anything we have actually said. (A tell for this, one that I was initially surprised by but have since found dependable, is when the interrogator fails to see how little QBism has to do with “Bayesian inference”; see question 9.)

Just about the time I was wrapping up my PhD, I had enough cash on hand that I could do whatever I wanted for about a year, provided I kept living like a grad student while doing it. I decided that my version of backpacking around Europe would be investigating the theory of optimal quantum measurements. This had been a side interest during my thesis work, and turning it into my primary focus was appealing. There were *lessons about reality* in those calculations, I felt, lessons at least important enough to be worth another year of rice bowls for Friday dinner.

Eventually, on the strength of what I found, I was able to get funding to carry on with the work, so it is what I am still doing. Nowadays, of course, one’s justification for conducting pure research has an element of agony to it. The only remaining options in our world are anger and despair; the latter means paralysis, but can we draw strength from the former without, in technical terms, poisoning our souls? A few interludes of mathematical beauty, here and there a perspective shift that makes a bit of old intellectual history newly charming, a few chances to nurture wild new ideas with colleagues and friends — is that enough?

*In other circumstances, publication would have seemed very premature. But in April 1940, who could be sure of a tomorrow?* —André Weil

*Yes, I definitely think we in general see things more clearly now.* —Greta Thunberg

The kind of mistake I was prone to making, and the flaws in the way mathematics was taught, meshed perfectly. Carelessness cost more in math than in anything else, on the whole. If I was writing a history essay and I happened to swap Elba and St. Helena, I might only get docked a couple points out of a hundred, or perhaps none at all if the teacher had too many papers to grade. But if I wandered away from my pre-algebra homework, and upon my return my garishly awful handwriting had turned absolute-value bars into ordinary parentheses, my calculations would be completely off from that point onward. Nor did any of my teachers pick up on my problem — “Blake, you’ve got to *be more careful*!” — which makes me suspect that they weren’t much better at identifying what went wrong for *other* students, either.

In history and to a large extent in science, I was able to get by all through middle and high school with what I had learned out of books and documentaries on my own. (I was extraordinarily lucky to have a family that already had plenty of books around, and the means and the sense to provide me with more as I packed their contents into my brains.) I don’t think I had to learn anything in school that came across as *wholly new.* Everything was at most an elaboration of a topic I had already seen, something I’d grasped from a Larry Gonick cartoon guide, let’s say, done up with a few more details that might just have been included for the sake of having homework problems to assign. Algebra and geometry and trig and calculus, though, came closer to asking for a genuine production on my part.

Techniques of checking one’s work, which might have helped me to become a bit more generally competent, were either not taught or not motivated. “Plug your value of $x$ back in and check” might have been the last step of a few algebra exercises, but only because it was part of the rubric, devised to add another thing that could be graded.

The weird thing is that I had a sense of the *importance* of the mathematics, of the *motivation* for it. I knew why Kepler had cared about sines and cosines — to hear the music of the spheres, to turn comets from signs of dread into those of wonder. Exponentials tracked the explosion of populations and the decay of radioisotopes, each ominous in its own way. The subject offered wonders of pure thought and marvels of application. At the time, schoolwork merely seemed disconnected from those treasures which I saw in secondhand outline. Now, in retrospect, it appears almost a parody of them.

B. C. Stacey, “On QBism and Assumption (Q)” [arXiv:1907.03805].

I correct two misapprehensions, one historical and one conceptual, in the recent literature on extensions of the Wigner’s Friend thought-experiment. Perhaps fittingly, both concern the accurate description of some quantum physicists’ beliefs by others.

Also available via SciRate.

I summarize a research program that aims to reconstruct quantum theory from a fundamental physical principle that, while a quantum system has no intrinsic hidden variables, it can be understood using a reference measurement. This program reduces the physical question of why the quantum formalism is empirically successful to the mathematical question of why complete sets of equiangular lines appear to exist in complex vector spaces when they do not exist in real ones. My primary goal is to clarify motivations, rather than to present a closed book of numbered theorems, and consequently the discussion is more in the manner of a colloquium than a PRL.

Also available via SciRate.

Coherence, treated as a resource in quantum information theory, is a basis-dependent quantity. Looking for states that have constant coherence under canonical changes of basis yields highly symmetric structures in state space. For the case of a qubit, we find an easy construction of qubit SICs (Symmetric Informationally Complete POVMs). SICs in dimensions 3 and 8 are also shown to be equicoherent.

Also available via SciRate.

When I was a prickly atheist teenager, I was rather confident that the people who put “evolution is just a theory” stickers in all our biology books would be Good Germans if given the chance.

Turns out? I was right.

If I had foreseen that organized atheism would descend into sexism and xenophobia, *then* I would give myself credit. Yes, pretty much as soon as I met a convention-ful of skeptics, I found myself ill at ease with the blithe acceptance of economic injustice, and vaguely surprised by how easily the cogs of critical thinking were disengaged once the conversation moved beyond “UFOs, aspirin commercials, and 35,000-year-old channelees”. I should have been more upset, and sooner.

When I was a university student, I wrote a sestina in the voice of Persephone, mourning her life, with the trick ending that she’s a goth girl and wishes she had eaten *more* of the pomegranate seeds so that she could groove on the underworld for a longer fraction of the year. This may be indicative of my type of indulgence then.

When I was a university student, I began a novel. Like any youngster who has just discovered layering and allusion — anagrams with doctorates — I went all in, under the spell of *Pale Fire* and Appel’s *Annotated Lolita* and elective courses on hypertext fiction. I am sure the result would in many places embarrass me now, though at least I am still fond of this sample. I doubt I had the stylistic control to make all of my attempts at subversion be more than recapitulations. In retrospect, one character seems, within the strictures of a “romance” subplot, to be discovering their own asexuality. A better writer would have done more with that. And the motives of my off-screen villains now feel a bit too armchair, too intellectualized, when simple misogynist fury would suffice.

Also, I underplayed climate change, treating it as a diegetic justification for a mild surrealism, a reason for the world to be reshaped — under a spell, again, this time of Borges’ “Death and the Compass”.

You see, I finished that novel in 2008, when plenty was wrong with the world, but matters were sufficiently good for sufficiently many that it felt we could make things *right,* if we only worked hard.

When I was younger, I found that “Holy Writ” was habitually obscure, frequently cruel and almost always in need of an editor. Now, as we are poised to inherit the heated wind, I would add that the best reason to know the story of Abraham nearly sacrificing Isaac is to appreciate the twist ending that Wilfred Owen gave it:

*But the old man would not so, but slew his son,
And half the seed of Europe, one by one.*

It is easy to argue that the founders of quantum mechanics made statements which are opaque and confusing. It is fair to say that their philosophical takes on the subject are not infrequently unsatisfying. We can all use reminders that human flaws and passions are a part of physics. So, it would be nice to have a popular book on these themes, one that makes no vital omissions, represents its sources accurately and lives up to its own ideals.

Sadly, we’re still waiting.

My primary claim is the following:

*We should really expunge the term “the Copenhagen interpretation” from our vocabularies.*

What Bohr thought was not what Heisenberg thought, nor was it what Pauli thought; there was no single unified “Copenhagen interpretation” worthy of the name. Indeed, the term does not enter the written literature until the 1950s, and that was mostly due to Heisenberg acting like he and Bohr were more in agreement back in the 1920s than they actually had been.

For Bohr, the “collapse of the wavefunction” (or the “reduction of the wave packet”, or whatever you wish to call it) was *not* a singular concept tacked on to the dynamics, but an essential part of what the quantum theory *meant*. He considered any description of an experiment as necessarily beginning and ending in “classical language”. So, for him, there was no problem with ending up with a measurement outcome that is just a classical fact: You introduce “classical information” when you specify the problem, so you end up with “classical information” as a result. “Collapse” is not a matter of the Hamiltonian changing stochastically or anything like that, as caricatures of Bohr would have it, but instead, it’s a question of what writing a Hamiltonian means. For example, suppose you are writing the Schrödinger equation for an electron in a potential well. The potential function $V(x)$ that you choose depends upon your experimental arrangement — the voltages you put on your capacitor plates, etc. In the Bohrian view, the description of how you arrange your laboratory apparatus is in “classical language”, or perhaps he’d say “ordinary language, suitably amended by the concepts of classical physics”. Getting a classical fact at your detector is just the necessary flipside of starting with a classical account of your source.

(Yes, Bohr was the kind of guy who would choose the yin-yang symbol as his coat of arms.)

To me, the clearest expression of all this from the man himself is a lecture titled “The causality problem in atomic physics”, given in Warsaw in 1938 and published in the proceedings, *New Theories in Physics,* the following year. This conference is notable for several reasons, among them the fact that Hans Kramers, speaking both for himself and on behalf of Heisenberg, suggested that quantum mechanics could break down at high energies. More than a decade after what we today consider the establishment of the quantum theory, the pioneers of it did not all trust it in their bones; we tend to forget that nowadays.

As to how Heisenberg disagreed with Bohr, and what all this has to do with decoherence, I refer to Camilleri and Schlosshauer.

Do I find the Bohrian position that I outlined above satisfactory? No, I do not. Perhaps the most important reason why, the reason that emotionally cuts the most deeply, is rather like the concern which Rudolf Haag raised while debating Bohr in the early 1950s:

I tried to argue that we did not understand the status of the superposition principle. Why are pure states described as [rays] in a complex linear space? Approximation or deep principle? Niels Bohr did not understand why I should worry about this. Aage Bohr tried to explain to his father that I hoped to get inspiration about the direction for the development of the theory by analyzing the existing formal structure. Niels Bohr retorted: “But this is very foolish. There is no inspiration besides the results of the experiments.” I guess he did not mean that so absolutely but he was just annoyed. […] Five years later I met Niels Bohr in Princeton at a dinner in the house of Eugene Wigner. When I drove him afterwards to his hotel I apologized for my precocious behaviour in Copenhagen. He just waved it away saying: “We all have our opinions.”

Why rays? Why complex linear space? I want to know too.

(He won’t; he’s too busy appropriating its respectability.)

Today, I learned that Boghossian’s institutional review board found that his actions in the “Grievance Studies” hoax constitute research misconduct.

**EDIT TO ADD:** Further commentary on the ethics of academic hoaxes.

- B. C. Stacey, “Sporadic SICs and Exceptional Lie Algebras”

Sometimes, mathematical oddities crowd in upon one another, and the exceptions to one classification scheme reveal themselves as fellow-travelers with the exceptions to a quite different taxonomy.

**UPDATE (30 March 2019):** Thanks to a kind offer by John Baez, we’re going through this material step-by-step over at a blog with a community, the *n*-Category Café:

J. B. DeBrota, C. A. Fuchs and B. C. Stacey, “Triply Positive Matrices and Quantum Measurements Motivated by QBism” [arXiv:1812.08762].

We study a class of quantum measurements that furnish probabilistic representations of finite-dimensional quantum theory. The Gram matrices associated with these Minimal Informationally Complete quantum measurements (MICs) exhibit a rich structure. They are “positive” matrices in three different senses, and conditions expressed in terms of them have shown that the Symmetric Informationally Complete measurements (SICs) are in some ways optimal among MICs. Here, we explore MICs more widely than before, comparing and contrasting SICs with other classes of MICs, and using Gram matrices to begin the process of mapping the territory of all MICs. Moreover, the Gram matrices of MICs turn out to be key tools for relating the probabilistic representations of quantum theory furnished by MICs to quasi-probabilistic representations, like Wigner functions, which have proven relevant for quantum computation. Finally, we pose a number of conjectures, leaving them open for future work.
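As a toy illustration of the Gram-matrix structure the abstract mentions (my own sketch, not code from the paper), here is a pure-Python computation of $G_{ij} = {\rm tr}(E_i E_j)$ for the tetrahedral qubit SIC: every diagonal entry comes out to $1/4$ and every off-diagonal entry to $1/12$, the equiangularity that distinguishes SICs among MICs.

```python
# Gram matrix G_ij = tr(E_i E_j) for the tetrahedral qubit SIC-POVM,
# whose effects are E_i = (1/4)(I + r_i . sigma) with the Bloch vectors
# r_i pointing to the corners of a regular tetrahedron.
import math

I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def effect(r):
    # One SIC outcome: E = (1/4)(I + r . sigma)
    M = [[I2[i][j] + r[0] * SX[i][j] + r[1] * SY[i][j] + r[2] * SZ[i][j]
          for j in range(2)] for i in range(2)]
    return [[0.25 * M[i][j] for j in range(2)] for i in range(2)]

s3 = 1 / math.sqrt(3)
tetra = [(s3, s3, s3), (s3, -s3, -s3), (-s3, s3, -s3), (-s3, -s3, s3)]
E = [effect(r) for r in tetra]

G = [[trace(matmul(E[i], E[j])).real for j in range(4)] for i in range(4)]
for i in range(4):
    for j in range(4):
        expected = 0.25 if i == j else 1 / 12
        assert abs(G[i][j] - expected) < 1e-12
print("Gram matrix: 1/4 on the diagonal, 1/12 everywhere else")
```

The equal off-diagonal overlaps are exactly the "triply positive" symmetry the paper exploits; generic MICs have messier Gram matrices.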

This is a sequel to our paper from May, and it contains one minor erratum for an article from 2013.

B. C. Stacey, “QBism and the Ithaca Desiderata” [arXiv:1812.05549].

In 1996, N. David Mermin proposed a set of desiderata for an understanding of quantum mechanics, the “Ithaca Interpretation”. In 2012, Mermin became a public advocate of QBism, an interpretation due to Christopher Fuchs and Ruediger Schack. Here, we evaluate QBism with respect to the Ithaca Interpretation’s six desiderata, in the process also evaluating those desiderata themselves. This analysis reveals a genuine distinction between QBism and the IIQM, but also a natural progression from one to the other.

So, if you’re in want of some commentary on quantum mechanics, here goes:

Gleason’s theorem begins with a brief list of postulates, which are conditions for expressing “measurements” in terms of Hilbert spaces. To each physical system we associate a complex Hilbert space, and each measurement corresponds to a resolution of the identity operator — in Gleason’s original version, to an orthonormal basis. The crucial assumption is that the probability assigned to a measurement outcome (i.e., to a vector in a basis) does not depend upon which basis that vector is taken to be part of. The probability assignments are “noncontextual,” as they say. The conclusion of Gleason’s argument is that any mapping from measurements to probabilities that satisfies his assumptions must take the form of the Born rule applied to some density operator. In other words, the theorem gives the set of valid states *and* the rule for calculating probabilities given a state.

(It is significantly easier to prove the POVM version of Gleason’s theorem, in which a “measurement” is not necessarily an orthonormal basis, but rather any resolution of the identity into positive semidefinite operators, $\sum_i E_i = I$. In this case, the result is that any valid assignment of probabilities to measurement outcomes, or “effects,” takes the form $p(E) = {\rm tr}(\rho E)$ for some density operator $\rho$. The math is easier; the conceptual upshot is the same.)
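The POVM version is easy to check numerically. Here is a minimal pure-Python sketch (the tetrahedral POVM and the particular density operator are my own illustrative choices, not anything from Gleason): it verifies that the effects resolve the identity, and that $p(E) = {\rm tr}(\rho E)$ yields a genuine probability distribution.

```python
# Check that a qubit POVM's effects sum to the identity and that the
# Born rule p(E) = tr(rho E) gives a valid probability distribution.
import math

I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def bloch(r):
    # (1/2)(I + r . sigma): a unit-trace Hermitian operator
    out = I2
    for c, P in zip(r, (SX, SY, SZ)):
        out = add(out, scale(c, P))
    return scale(0.5, out)

# Four Bloch vectors at the corners of a tetrahedron; they sum to zero,
# so the effects E_i = (1/2) bloch(r_i) resolve the identity.
s3 = 1 / math.sqrt(3)
tetra = [(s3, s3, s3), (s3, -s3, -s3), (-s3, s3, -s3), (-s3, -s3, s3)]
effects = [scale(0.5, bloch(r)) for r in tetra]

total = [[0, 0], [0, 0]]
for E in effects:
    total = add(total, E)
assert all(abs(total[i][j] - I2[i][j]) < 1e-12
           for i in range(2) for j in range(2))

rho = bloch((0.3, 0.2, 0.4))  # a valid density operator (|s| < 1)
probs = [trace(matmul(rho, E)).real for E in effects]
assert all(p >= 0 for p in probs)
assert abs(sum(probs) - 1) < 1e-12
```

Gleason's theorem runs this logic in reverse: any noncontextual probability assignment over such resolutions of the identity must arise from *some* density operator in exactly this way.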

I have a sneaking suspicion that a good many other attempted “derivations of the Born rule” really amount to little more than burying Gleason’s assumptions under a heap of dubious justifications. MGM (Masanes, Galley and Müller) don’t quite do that; what they present is more interesting.

They start with what they consider the “standard postulates” of quantum mechanics, which in their reckoning are five in number. Then they discard the last two and replace them with rules of a more qualitative character. Their central result is that the discarded postulates can be re-derived from those that were kept, plus the more qualitative-sounding conditions.

MGM say that the assumptions they keep are about state space, while the ones they discard are about measurements. But the equations in the three postulates that they keep could just as well be read as assumptions about measurements instead. Since they take measurement to be an operationally primitive notion — fine by me, anathema to many physicists! — this is arguably the better way to go. Then they add a postulate that has the character of noncontextuality: The probability of an event is independent of how that event is embedded into a measurement on a larger system. So, they work in the same setting as Gleason (Hilbert space), invoke postulates of the same nature, and arrive in the same place. The conclusion, if you take their postulates about complex vectors as referring to measurement outcomes, is that “preparations” are dual to outcomes, and outcomes occur with probabilities given by the Born rule, thereupon turning into new preparations.

Let’s treat this in a little more detail.

Here is the first postulate of what MGM take to be standard quantum mechanics:

To every physical system there corresponds a complex and separable Hilbert space $\mathbb{C}^d$, and the pure states of the system are the rays $\psi \in {\rm P}\mathbb{C}^d$.

We strike the words “pure states” and replace them with “sharp effects” — an equally undefined term at this point, which can only gain meaning in combination with other ideas later.

(I spend at least a little of every working day wondering why quantum mechanics makes use of complex numbers, so this already feels intensely arbitrary to me, but for now we’ll take it as read and press on.)

MGM define an “outcome probability function” as a mapping from rays in the Hilbert space $\mathbb{C}^d$ to the unit interval $[0,1]$. The abbreviation OPF is fine, but let’s read it instead as *operational preparation function.* The definition is the same: An OPF is a function ${\bf f}: {\rm P}\mathbb{C}^d \to [0,1]$. Now, though, it stands for the probability of obtaining the measurement outcome $\psi$, for each $\psi$ in the space ${\rm P}\mathbb{C}^d$ of sharp effects, given the preparation ${\bf f}$. All the properties of OPFs that they invoke can be justified equally well in this reading. If ${\bf f}(\psi) = 1$, then event $\psi$ has probability 1 of occurring given the preparation ${\bf f}$. For any two preparations ${\bf f}_1$ and ${\bf f}_2$, we can imagine performing ${\bf f}_1$ with probability $p$ and ${\bf f}_2$ with probability $1-p$, so the convex combination $p{\bf f}_1 + (1-p){\bf f}_2$ must be a valid preparation. And, given two systems, we can imagine that the preparation of one is ${\bf f}$ while the preparation of the other is ${\bf g}$, so the preparation of the joint system is some composition ${\bf f} \star {\bf g}$. And if measurement outcomes for separate systems compose according to the tensor product, and this $\star$ product denotes a joint preparation that introduces no correlations, then we can say that $({\bf f} \star {\bf g})(\psi \otimes \phi) = {\bf f}(\psi) {\bf g}(\phi)$. Furthermore, we can argue that the $\star$ product must be associative, ${\bf f} \star ({\bf g} \star {\bf h}) = ({\bf f} \star {\bf g}) \star {\bf h}$, and everything else that the composition of OPFs needs to satisfy in order to make the algebra go.
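These properties are easy to verify in the quantum case, where an OPF read as a preparation is represented by a density matrix $\rho$ and ${\bf f}(\psi) = \langle\psi|\rho|\psi\rangle$. The following pure-Python sketch (my own illustration; the particular states are arbitrary choices) checks the convex-combination property and the factorization of the $\star$ product on product effects:

```python
# OPFs-as-preparations: f(psi) = <psi| rho_f |psi>. Check that convex
# mixtures of OPFs are again OPFs, and that an uncorrelated joint
# preparation factorizes on product effects.
import cmath
import math

def quad_form(rho, v):
    # <v| rho |v>, real for Hermitian rho
    n = len(v)
    return sum(v[i].conjugate() * rho[i][j] * v[j]
               for i in range(n) for j in range(n)).real

def kron(A, B):
    # Tensor product of square matrices
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n)] for i in range(n)]

def tensor_vec(u, v):
    return [a * b for a in u for b in v]

def density(v):
    return [[a * b.conjugate() for b in v] for a in v]

# Two qubit preparations (OPFs), represented by density matrices.
f = density([1, 0])                          # |0><0|
g = density([1 / math.sqrt(2), 1 / math.sqrt(2)])  # |+><+|

# Convexity: p f_1 + (1-p) f_2 is again a valid OPF.
p = 0.3
mix = [[p * f[i][j] + (1 - p) * g[i][j] for j in range(2)]
       for i in range(2)]
psi = [math.cos(0.4), cmath.exp(1j * 0.7) * math.sin(0.4)]  # a ray in PC^2
assert abs(quad_form(mix, psi)
           - (p * quad_form(f, psi) + (1 - p) * quad_form(g, psi))) < 1e-12

# Star product: (f * g)(psi ⊗ phi) = f(psi) g(phi) when the joint
# preparation introduces no correlations (here, rho_f ⊗ rho_g).
phi = [math.cos(1.1), math.sin(1.1)]
joint = kron(f, g)
assert abs(quad_form(joint, tensor_vec(psi, phi))
           - quad_form(f, psi) * quad_form(g, phi)) < 1e-12
```

Associativity of $\star$ then follows from the associativity of the tensor product, which is what lets the algebra go through in either reading of "OPF".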

Ultimately, the same math has to work out, after we swap the words around, because the final structure is self-dual: The same set of rays ${\rm P}\mathbb{C}^d$ provides the extremal elements both of the state space and of the set of effects. So, if we take the dual of the starting point, we have to arrive in the same place by the end.

But is either choice of starting point more *natural*?

I find that focusing on the measurement outcomes is preferable when trying to connect with the Bell and Kochen–Specker theorems. In the former, the “preparation” is fixed, while in the latter, it can be arbitrary, but either way, we don’t say much *interesting* about it. The action lies in the choice of measurements, and how the rays that represent one measurement can interlock with those for another. So, from that perspective, putting the emphasis on the measurements and then deriving the state space is the more “natural” move. It puts the mathematics in conceptual and historical context.

That said, on a deeper level, I don’t find either choice all that compelling. To appreciate why, we need only look again at that arcane symbol, ${\rm P}\mathbb{C}^d$. That is the setting for the whole argument, and it is completely opaque. Why the complex numbers? Why throw away an overall phase? What is the meaning of “dimension,” and why does it scale multiplicatively when we compose systems? (A typical justification for this last point would be that if we have $n$ completely distinct options for the state of one system, and we have $m$ completely distinct options for the state of a second system, then we can pick one from each set for a total of $nm$ possibilities. But what are these options “completely distinct” with respect to, if we have not yet introduced the concept of measurement? Why should dimension be the quantity that scales in such a nice way, if we have no reason to care about vectors being orthogonal?) All of this cries out for a deeper understanding.
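The multiplicative bookkeeping in that parenthetical can be made concrete, with the caveat built in: a toy check in plain NumPy that pairing $n$ orthogonal options for one system with $m$ orthogonal options for another yields $nm$ orthogonal joint options. Note that “completely distinct” here already means “orthogonal”, which is exactly the measurement-laden notion the question is probing.

```python
import numpy as np
from itertools import product

d1, d2 = 2, 3
basis1 = np.eye(d1)  # d1 "completely distinct" options for system 1
basis2 = np.eye(d2)  # d2 "completely distinct" options for system 2

# Pair each option of system 1 with each option of system 2:
joint = np.array([np.kron(e, f) for e, f in product(basis1, basis2)])

# There are d1*d2 joint options, and they remain mutually orthogonal.
assert joint.shape == (d1 * d2, d1 * d2)
assert np.allclose(joint @ joint.T, np.eye(d1 * d2))
```

The check passes, but it only restates the puzzle: orthogonality did all the work, and orthogonality is precisely what has not yet been motivated at this stage of the argument.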

When I heard about *J. Con. Id.,* I couldn’t help thinking that I have myself supported some unpopular scientific opinions. A few times, that’s where my best professional judgment led me. When my colleagues and I have found ourselves in that position, we set forth our views by publishing … in *Nature.*

- M. J. Wade et al., “Multilevel and kin selection in a connected world,” *Nature* **463** (2010), E8–E9.
- N. David Mermin, “Physics: QBism puts the scientist back into science,” *Nature* **507** (2014), 421–423.

(I have to admit that the 2010 comment is not as strong as it could have been. It was a bit of a written-by-committee job, with all that that implies. I recommend that every young scientist go through that process … once. Better papers in the genre came later. And for my own part, I think I did a better job distinguishing all the confusing variants of terminology when I had more room to stretch, in Chapter 9 of arXiv:1509.02958.)

Over on his blog, Peter Woit quotes a scene from the imagination of John Horgan, whose *The End of Science* (1996) visualized physics falling into a twilight:

A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.

OK (*cracks knuckles*), a few points. First, “fretting about the meaning of quantum mechanics” has, historically, been damn important. A lot of quantum information theory came out of people doing exactly that, just with equations. The *productive* way of “fretting” involves plumbing the meaning of quantum mechanics by *finding what new capabilities quantum mechanics can give you.* Let’s take one of the least blue-sky applications of quantum information science: securing communications with quantum key distribution. Why trust the security of quantum key distribution? There’s a whole theory behind the idea, one which depends upon the quantum de Finetti theorem. Why is there a quantum de Finetti theorem in a form that physicists could understand and care about? Because Caves, Fuchs and Schack wanted to prove that the phrase “unknown quantum state” has a well-defined meaning for personalist Bayesians.

This example could be augmented with many others. (I selfishly picked one where I could cite my own collaborator.)

It’s illuminating to quote the passage from Horgan’s book just before the one that Woit did:

This is the fate of physics. The vast majority of physicists, those employed in industry and even academia, will continue to apply the knowledge they already have in hand—inventing more versatile lasers and superconductors and computing devices—without worrying about any underlying philosophical issues.

But there just isn’t a clean dividing line between “underlying philosophical issues” and “more versatile computing devices”! In fact, the foundational question of what “quantum states” really are overlaps with the question of which quantum computations can be emulated on a classical computer, and how some preparations are better *resources* for quantum computers than others. Flagrantly disregarding attempts to draw a boundary line between “foundations” and “applications” is my day job now, but quantum information was already getting going in earnest during the mid-1990s, so this isn’t a matter of hindsight. (Feynman wasn’t the first to talk about quantum computing, but he was certainly influential, and the motivations he spelled out were pretty explicitly foundational. Benioff, who preceded Feynman, was also interested in foundational matters, and even said as much while building quantum Hamiltonians for Turing machines.) And since Woit’s post was about judging whether a prediction held up or not, I feel pretty OK applying a present-day standard anyway.

In short: Meaning matters.

But then, Horgan’s book gets the Einstein–Podolsky–Rosen thought-experiment completely wrong, and I should know better than to engage with any book like that on the subject of what quantum mechanics might mean.

To be honest, Horgan is unfair to the Modern Language Association. Their convention program for January 2019 indicates a community that is actively engaged with the world, with sessions about the changing role of journalism, how the Internet has enabled a new kind of “public intellectual”, how to bring African-American literature into summer reading, the dynamics of organized fandoms, etc. In addition, they plainly advertise sessions as open to the public, which I can barely imagine a physics conference doing as more than a nominal gesture. Their public sessions include a film screening of a documentary about the South African writer and activist Peter Abrahams, as well as workshops on practical skills like how to cite sources. That’s not just valuable training, but also a topic that is actively evolving: How do you cite a tweet, or an archived version of a Wikipedia page, or a post on a decentralized social network like Mastodon?

Dragging the sciences for supposedly resembling the humanities has not grown more endearing since 1996.

All this came up in the context of physics being done by artificial intelligence. If anything, the idea of “machines replacing physicists” is *less* plausible to me now than it was two decades ago, because back then, there was at least a *chance* that AI would have had something to do with understanding how human minds work, rather than just throwing a bunch of GPUs at a problem and calling the result “machine learning”. This perspective is informed in part by long talks with a friend whose research area *is* machine learning, and who is quite dissatisfied with the common approach to it. Specifically, they work in computer vision, where the top-notch algorithms still identify the Queen’s crown as a shower cap and can be fooled into calling a panda a vulture. People have problems, but not *those* problems.

What research is it that has prompted the specter of the Machine solving the Theory of Everything? Honestly, the “machine learning in the string landscape” language sounds to me like a new coat of paint over a general approach that physicists have been using for as long as we’ve had computers. Pose a problem, get your grad students to feed it into the computer, obtain numerical results, see what conjectures those results suggest, and if you’re lucky, prove those conjectures. In this particular case, the conjectures eventually proven might not ultimately connect to experiment, but that’s the old problem of *quantum gravity being hard to study,* not a new problem about the way physics is being done. And in order to put a question to a computer, you have to (get your grad student to) phrase it very carefully. You can’t ask the computer for a heuristic argument based on conjectural features that a nonperturbative theory of quantum gravity might have, were we to know of one. You have to talk about structures you can define (structures that you might *guess* are relevant to a nonperturbative theory of quantum gravity). For example, you might define a Calabi–Yau space in terms of an affine cone over a toric variety, which you in turn define in terms of a convex lattice polygon, etc., eventually converting the problem into one that you can code up. There’s still the big gap between your work and experiment, and there’s still the lack of a well-defined over-arching theory, but you’ve made your little corner of “experimental mathematics” less vague.
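To make the “convert it into something you can code up” step concrete, here is a toy sketch, not any actual string-landscape pipeline: computing the basic lattice data of a convex lattice polygon via the shoelace formula and Pick’s theorem ($A = I + B/2 - 1$), the kind of combinatorial input from which a toric variety is built. The function name is illustrative.

```python
from math import gcd

def lattice_polygon_data(vertices):
    """For a simple lattice polygon with vertices listed in order, return
    (area, boundary_points, interior_points). Area comes from the shoelace
    formula; interior points from Pick's theorem: area = I + B/2 - 1."""
    n = len(vertices)
    twice_area = sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)
    )
    area = abs(twice_area) / 2
    # Lattice points on an edge from u to v number gcd(|dx|, |dy|),
    # counting one endpoint per edge:
    boundary = sum(
        gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
            abs(vertices[(i + 1) % n][1] - vertices[i][1]))
        for i in range(n)
    )
    interior = int(area - boundary / 2 + 1)
    return area, boundary, interior

# The triangle with vertices (1,0), (0,1), (-1,-1) appears in the toric
# description of P^2; it has exactly one interior lattice point (the
# origin), which is the "reflexive" condition:
print(lattice_polygon_data([(1, 0), (0, 1), (-1, -1)]))  # prints (1.5, 3, 1)
```

The gap between this kind of combinatorics and any experiment is exactly the gap the paragraph describes, but the computation itself is perfectly well-posed.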

You might even be moving science in a healthy direction, by taking some of the ideas that have grown under the “string theory” umbrella and making them less a matter for physicists, and more a concern for the people who like to map extraordinarily complex mathematical structures for their own sake — the people who, for example, stay up late thinking about the centralizers of the Monster group. That shift could be a mutually beneficial development.
