QBism and the Ithaca Desiderata

Time again for the New Paper Dance!

B. C. Stacey, “QBism and the Ithaca Desiderata” [arXiv:1812.05549].

In 1996, N. David Mermin proposed a set of desiderata for an understanding of quantum mechanics, the “Ithaca Interpretation” (IIQM). In 2012, Mermin became a public advocate of QBism, an interpretation due to Christopher Fuchs and Ruediger Schack. Here, we evaluate QBism with respect to the Ithaca Interpretation’s six desiderata, in the process also evaluating those desiderata themselves. This analysis reveals a genuine distinction between QBism and the IIQM, but also a natural progression from one to the other.

The State Space of Quantum Mechanics is Redundant

There was some water-cooler talk around the office this past week about a paper by Masanes, Galley and Müller that hit the arXiv, and I decided to write up my thoughts about it for ease of future reference. In short, I have no reason yet to think that the math is wrong, but what they present as a condition on states seems more naturally to me like a condition on measurement outcomes. Upon making this substitution, the Masanes, Galley and Müller result comes much closer to resembling Gleason’s theorem than they say it does.

So, if you’re wanting for some commentary on quantum mechanics, here goes:

Proceedings of the Royal Society of Smegheads

So, the news from a little while back was that a new Journal of Controversial Ideas is in the pipeline, with a big part of the motivation being to protect “academic freedom” from the (nonexistent) Campus Free Speech Crisis. If this sounds to you like a way for the hateful to spout toxic ravings about marginalized peoples from behind a screen of anonymity, then I’d say you have a low opinion of human nature, a low opinion that is entirely merited by the data. If it also sounds to you like a good way to part a mark from his dollar with “peer review” that amounts to a vanity pay-to-publish scheme, then I’d say your sense of cynicism is nicely calibrated.

When I heard about J. Con. Id., I couldn’t help thinking that I have myself supported some unpopular scientific opinions. A few times, that’s where my best professional judgment led me. When my colleagues and I have found ourselves in that position, we set forth our views by publishing … in Nature.

(I have to admit that the 2010 comment is not as strong as it could have been. It was a bit of a written-by-committee job, with all that that implies. I recommend that every young scientist go through that process … once. Better papers in the genre came later. And for my own part, I think I did a better job distinguishing all the confusing variants of terminology when I had more room to stretch, in Chapter 9 of arXiv:1509.02958.)

The Rise of Ironic Physics and/or Machine Physicists?

CONTENT ADVISORY: old-fashioned blog snarkery about broad trends in physics.

Over on his blog, Peter Woit quotes a scene from the imagination of John Horgan, whose The End of Science (1996) visualized physics falling into a twilight:

A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.

OK (*cracks knuckles*), a few points. First, “fretting about the meaning of quantum mechanics” has, historically, been damn important. A lot of quantum information theory came out of people doing exactly that, just with equations. The productive way of “fretting” involves plumbing the meaning of quantum mechanics by finding what new capabilities quantum mechanics can give you. Let’s take one of the least blue-sky applications of quantum information science: securing communications with quantum key distribution. Why trust the security of quantum key distribution? There’s a whole theory behind the idea, one which depends upon the quantum de Finetti theorem. Why is there a quantum de Finetti theorem in a form that physicists could understand and care about? Because Caves, Fuchs and Schack wanted to prove that the phrase “unknown quantum state” has a well-defined meaning for personalist Bayesians.

This example could be augmented with many others. (I selfishly picked one where I could cite my own collaborator.)

It’s illuminating to quote the passage from Horgan’s book just before the one that Woit did:

This is the fate of physics. The vast majority of physicists, those employed in industry and even academia, will continue to apply the knowledge they already have in hand—inventing more versatile lasers and superconductors and computing devices—without worrying about any underlying philosophical issues.

But there just isn’t a clean dividing line between “underlying philosophical issues” and “more versatile computing devices”! In fact, the foundational question of what “quantum states” really are overlaps with the question of which quantum computations can be emulated on a classical computer, and how some preparations are better resources for quantum computers than others. Flagrantly disregarding attempts to draw a boundary line between “foundations” and “applications” is my day job now, but quantum information was already getting going in earnest during the mid-1990s, so this isn’t a matter of hindsight. (Feynman wasn’t the first to talk about quantum computing, but he was certainly influential, and the motivations he spelled out were pretty explicitly foundational. Benioff, who preceded Feynman, was also interested in foundational matters, and even said as much while building quantum Hamiltonians for Turing machines.) And since Woit’s post was about judging whether a prediction held up or not, I feel pretty OK applying a present-day standard anyway.

In short: Meaning matters.

But then, Horgan’s book gets the Einstein–Podolsky–Rosen thought-experiment completely wrong, and I should know better than to engage with what any book like that says on the subject of what quantum mechanics might mean.

To be honest, Horgan is unfair to the Modern Language Association. Their convention program for January 2019 indicates a community that is actively engaged in the world, with sessions about the changing role of journalism, how the Internet has enabled a new kind of “public intellectuals”, how to bring African-American literature into summer reading, the dynamics of organized fandoms, etc. In addition, they plainly advertise sessions as open to the public, something I can barely imagine a physics conference doing beyond a nominal gesture. Their public sessions include a film screening of a documentary about the South African writer and activist Peter Abrahams, as well as workshops on practical skills like how to cite sources. That’s not just valuable training, but also a topic that is actively evolving: How do you cite a tweet, or an archived version of a Wikipedia page, or a post on a decentralized social network like Mastodon?

Dragging the sciences for supposedly resembling the humanities has not grown more endearing since 1996.

What I Do

At the moment, I’m taking a quick break from reading some rather dense mathematical prose, and I spent yesterday plugging away at a draft of my research group’s next technical publication. This led me to reflect on a lesson that I think a lot of science education leaves out: Even in a technical article, you have to have a story to carry the progression through. “These are all the boffo weird roadside attractions we found while proving the theorems in our last paper” is honest, but not adequate.

Our research project is the reconstruction of the mathematical formalism of quantum theory from physical principles. We tease apart the theory, identify what is robustly strange about it — for many more quantum phenomena can be emulated with classical stochasticity than is often appreciated — and try to build a new representation that brings the most remarkable features of the physics to the forefront. In special relativity, we have Einstein’s postulates, and the dramatic tension between them: Inertial observers can come to agree upon the laws of physics, but they cannot agree upon a standard of rest. In thermodynamics, we have the Four Laws, which come with their own dramatic tension, in that energy is conserved while entropy is nondecreasing. Both of these theories are expressed in terms of what agents can and cannot do, yet they are more than “mere” engineering, because they apply to all agents. Or, to say it another way, it is to the benefit of any agent to pick up the theory and use it as a guide.

What, then, is the analogue for quantum theory? If the textbook presentation of quantum physics is like the formulae for the Lorentz transform, with all those square roots and whatnot, or the Maxwell relations in thermo, with all those intermingling partial derivatives that we invent hacks about determinants to remember, what is quantum theory’s version of Einstein’s postulates or the Four Laws?

That’s the grandiose version, anyway. The reason I got invited to speak at an American Mathematical Society meeting is that the geometric structures that arise in this work are vexingly fascinating. You want Galois fields and Hilbert’s 12th problem? We’ve got ’em! How about sphere packing and unexpected octonions? We’ve got those, too! And the structure that leads down the latter path turns out, on top of that, to yield a new way of thinking about Mermin’s 3-qubit Bell inequality. It is all lovely, and it is all strange.

The SIC problem gives us the opportunity to travel all throughout mathematics, because, while the definition looks pretty small, the question is bigger on the inside.
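And that definition really does fit in a couple of lines. A SIC in dimension $d$ is a set of $d^2$ unit vectors $|\psi_j\rangle$ in $\mathbb{C}^d$ whose pairwise overlaps are all equal; rescaling the corresponding projectors gives a measurement (a POVM) that resolves the identity:

```latex
% Symmetry: all pairwise overlaps equal
\left| \langle \psi_j | \psi_k \rangle \right|^2 = \frac{1}{d+1}
\quad (j \neq k),
% Informational completeness: the rescaled projectors form a POVM
\qquad
\sum_{j=1}^{d^2} \frac{1}{d}\, |\psi_j\rangle\langle\psi_j| = I .
```

Whether such a set exists in every finite dimension is the open question; exact solutions are known only in particular dimensions, which is where the Galois fields and friends come in.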

New Paper Dance, Encore

This time, it’s another solo-author outing.

B. C. Stacey, “Is the SIC Outcome There When Nobody Looks?” [arXiv:1807.07194].

Informationally complete measurements are a dramatic discovery of quantum information science, and the symmetric IC measurements, known as SICs, are in many ways optimal among them. Close study of three of the “sporadic SICs” reveals an illuminating relation between different ways of quantifying the extent to which quantum theory deviates from classical expectations.

New Papers Dance

In spite of the “everything, etc.” that is life these days, I’ve managed to do a bit of science here and there, which has manifested as two papers. First, there’s the one about quantum physics, written with the QBism group at UMass Boston:

J. B. DeBrota, C. A. Fuchs and B. C. Stacey, “Symmetric Informationally Complete Measurements Identify the Essential Difference between Classical and Quantum” [arXiv:1805.08721].

We describe a general procedure for associating a minimal informationally-complete quantum measurement (or MIC) and a set of linearly independent post-measurement quantum states with a purely probabilistic representation of the Born Rule. Such representations are motivated by QBism, where the Born Rule is understood as a consistency condition between probabilities assigned to the outcomes of one experiment in terms of the probabilities assigned to the outcomes of other experiments. In this setting, the difference between quantum and classical physics is the way their physical assumptions augment bare probability theory: Classical physics corresponds to a trivial augmentation — one just applies the Law of Total Probability (LTP) between the scenarios — while quantum theory makes use of the Born Rule expressed in one or another of the forms of our general procedure. To mark the essential difference between quantum and classical, one should seek the representations that minimize the disparity between the expressions. We prove that the representation of the Born Rule obtained from a symmetric informationally-complete measurement (or SIC) minimizes this distinction in at least two senses—the first to do with unitarily invariant distance measures between the rules, and the second to do with available volume in a reference probability simplex (roughly speaking a new kind of uncertainty principle). Both of these arise from a significant majorization result. This work complements recent studies in quantum computation where the deviation of the Born Rule from the LTP is measured in terms of negativity of Wigner functions.
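For readers who want the comparison in symbols: writing $p(i)$ for the probabilities an agent assigns to the $d^2$ outcomes of a reference SIC measurement and $r(j|i)$ for the conditional probabilities of a subsequent experiment, the classical “trivial augmentation” and the SIC form of the Born Rule read, respectively,

```latex
% Law of Total Probability (classical, "trivial" augmentation):
q(j) = \sum_{i=1}^{d^2} p(i)\, r(j|i),
% SIC representation of the Born Rule:
q(j) = \sum_{i=1}^{d^2} \left[ (d+1)\, p(i) - \frac{1}{d} \right] r(j|i).
```

The two expressions differ only by the affine deformation $p(i) \mapsto (d+1)p(i) - 1/d$, and it is the size of this deformation that the paper’s theorems show the SIC minimizes.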

To get an overall picture of our results without diving into the theorem-proving, you can watch John DeBrota give a lecture about our work.

Second, there’s the more classical (in the physicist’s sense, if not the economist’s):

B. C. Stacey and Y. Bar-Yam, “The Stock Market Has Grown Unstable Since February 2018” [arXiv:1806.00529].

On the fifth of February, 2018, the Dow Jones Industrial Average dropped 1,175.21 points, the largest single-day fall in history in raw point terms. This followed a 666-point loss on the second, and another drop of over a thousand points occurred three days later. It is natural to ask whether these events indicate a transition to a new regime of market behavior, particularly given the dramatic fluctuations — both gains and losses — in the weeks since. To illuminate this matter, we can apply a model grounded in the science of complex systems, a model that demonstrated considerable success at unraveling the stock-market dynamics from the 1980s through the 2000s. By using large-scale comovement of stock prices as an early indicator of unhealthy market dynamics, this work found that abrupt drops in a certain parameter U provide an early warning of single-day panics and economic crises. Decreases in U indicate regimes of “high co-movement”, a market behavior that is not the same as volatility, though market volatility can be a component of co-movement. Applying the same analysis to stock-price data from the beginning of 2016 until now, we find that the U value for the period since 5 February is significantly lower than for the period before. This decrease entered the “danger zone” in the last week of May, 2018.
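The parameter $U$ itself comes from the earlier complex-systems model and isn’t defined in the abstract, so as a toy illustration only, here is a minimal sketch of the underlying notion of “co-movement”: the fraction of stocks moving in the majority direction on a given day. The function name, the synthetic data, and the shared-shock construction are all my own inventions for the sketch, not anything from the paper.

```python
import numpy as np

def comovement_fraction(returns):
    """Per-day fraction of stocks moving in the majority direction.

    `returns` is a (days, stocks) array of daily returns. Values near 1.0
    mean nearly all stocks moved together that day; values near 0.5 mean
    movements were evenly split between gainers and losers.
    """
    returns = np.asarray(returns, dtype=float)
    up = (returns > 0).sum(axis=1)      # number of stocks that rose each day
    n = returns.shape[1]
    return np.maximum(up, n - up) / n   # majority fraction, day by day

# Synthetic comparison: independent returns vs. returns with a common shock.
rng = np.random.default_rng(0)
independent = rng.normal(size=(250, 100))
herded = independent + 3.0 * rng.normal(size=(250, 1))  # shared market factor

print(comovement_fraction(independent).mean())  # ≈ 0.54, near the 1/2 floor
print(comovement_fraction(herded).mean())       # noticeably higher: herding
```

The point of the sketch is the qualitative one the abstract makes: high co-movement is a distinct phenomenon from high volatility, since the shared-shock series here co-moves strongly regardless of how large its idiosyncratic fluctuations are.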

Recent Advances in Packing

The weekend before last, I overcame my reluctance to travel and went to a mathematics conference, the American Mathematical Society’s Spring Central Sectional Meeting. I gave a talk in the “Recent Advances in Packing” session, spreading the word about SICs. My talk followed those by Steve Flammia and Marcus Appleby, who spoke about the main family of known SIC solutions while I covered the rest (the sporadic SICs). The co-organizer of that session, Dustin Mixon, has posted an overall summary and the speakers’ slides over at his blog.

No, That Viral Video Does Not Contain the Mathematical Secret of Reality

This is what I get for skimming an entertainment website for a momentary diversion.

So, everybody’s seen the cool new video, “‘Cantina Theme’ played by a pencil and a girl with too much time on her hands,” right?

And we’ve heard the claim, via Mashable and thence The AV Club, that the formula “can actually be used to determine the speed of light,” yes?

It’s a joke. The “proof” is words thrown into a box and filled with numbers so that nobody reads it too carefully. The algebra isn’t even right — hell, it does FOIL wrong — but that’s just a detail. I tried to think of a way to use it as a hook to explain some real science, as I’ve tried before upon occasion, but there just wasn’t any there there. The whole thing is goofing off.

Obvious goofing off, I would have thought. Somewhere south of a Star Trek Voyager technobabble speech. But no, never underestimate the ability of numbers to make a brain shut down.

Two Recent Items Concerning Wikipedia

A few years ago, I found a sentence in a Wikipedia page that irritated me so much, I wrote a 25-page article about it. Eventually, I got that article published in the Philosophical Transactions of the Royal Society. On account of all this, friends and colleagues sometimes send me news about Wikipedia, or point me to strange things they’ve found there. A couple such items have recently led me to Have Thoughts, which I share below.

This op-ed on the incomprehensibility of Wikipedia science articles puts a finger on a real problem, but its attempt at explanation assumes malice rather than incompetence. Yes, Virginia, the science and mathematics articles are often baffling and opaque. The Vice essay argues that the writers of Wikipedia’s science articles use the incomprehensibility of their prose as a shield to keep out the riffraff and maintain the “elite” status of their subject. I don’t buy it. In my opinion, this hypothesis does not account for the intrinsic difficulty of explaining science, nor for the incentive structures at work. Wikipedia pages grow by bricolage, small pieces of cruft accumulating over time. “Oh, this thing says [citation needed]. I’ll go find a citation to fill it in, while my coffee is brewing.” This is not conducive to clean pedagogy, or to a smooth transition from general-audience to specialist interest.

Have no doubt that a great many scientists are terrible at communication, but we can also imagine a world in which Wikipedia would attract the scientists that actually are good at communication.

There’s communication, and then there’s communication. (We scientists usually get formal training in neither.) I know quite a few scientists who are good at outreach. They work hard at it, because they believe it matters and they know that’s what it takes. Almost none of them have ever mentioned editing Wikipedia (even the one who used his science blog in his tenure portfolio). Thanks to the pressures of academia, the calculation always favors a mode of outreach where it’s easier to point to what you did, so you can get appropriate credit for it.

Thus, there might be a momentary impulse to make small-scale improvements, but there’s almost no incentive to effect changes that are structured on a larger scale — paragraphs, sections, organization among articles. This is a good incentive system for filling articles with technical minutiae, like jelly babies into a bag, but it’s not a way to plan a curriculum.

The piece in Vice says of a certain physics article,

I have no idea who the article exists for because I’m not sure that person actually exists: someone with enough knowledge to comprehend dense physics formulations that doesn’t also already understand the electroweak interaction or that doesn’t already have, like, access to a textbook about it.

You’d be surprised. It’s fairly common to remember the broad strokes of a subject but need a reference for the fiddly little details.

The essay also asserts: “Writers don’t just dip in, produce some Wikipedia copy, and bounce.”

I’m pretty sure this is … actually not borne out by the data? Like, many contributors just add little bits when they are strongly motivated, while the smaller active core of persistent editors cleans up the content, gets involved in article-improvement drives, wrangles behind the scenes, etc.

[EDIT TO ADD (24 November): To say it another way, both the distribution of edits per article and edits per editor are “fat tailed, which implies that even editors and articles with small numbers of edits should not be neglected.” Furthermore, most edits do not change an article’s length, or change it by only a small amount. The seeming tendency for “fewer editors gaining an ever more dominant role” is a real concern, but I doubt the opacity of technical articles is itself a tool of oligarchy. Indeed, I suspect that other factors contribute to the “core editor” group becoming more insular, one being the ease with which policies originally devised for good reasons can be weaponized.]

If you want “elitism,” you shouldn’t look in the technical prose on the project’s front end. Instead, you should go into the backroom. From what I’ve seen and heard, it’s very easy to run afoul of an editor who wants to lord over their tiny domain, and who will sling around policies and abbreviations and local jargon to get their way. Any transgression, or perceived transgression, is an excuse to revert.

Just take a look at “WP:PROF” — the “notability guideline” for evaluating whether a scholar merits a Wikipedia page. It’s almost 3500 words, laying out criteria and then expounding upon their curlicues. And if you create an article and someone else decides it should be deleted, you had better be familiar with the Guide to deletion (roughly 6700 words), which overlaps with the Deletion process documentation (another 4700 words). More than enough regulations for anyone to petulantly sling around until they get their way!

And on the subject of deletion, over on Mastodon the other day I got into a chat about the story of Günter Bechly, a paleontologist who went creationist and whose Wikipedia page was recently toasted. The incident was described by Haaretz thusly:

If Bechly’s article was originally introduced due to his scientific work, it was deleted due to his having become a poster child for the creationist movement.

I strongly suspect that it would have been deleted if it had been brought to anyone’s attention for any other reason, even if Bechly hadn’t gone creationist. His scientific work just doesn’t add up to what Wikipedia considers “notability,” the standard codified by the WP:PROF rulebook mentioned above. Nor were there adequate sources to write about his career in Wikipedia’s regulation flat, footnoted way. The project is clearly willing to have articles on creationists, if the claims in them can be sourced to their standards of propriety: Just look at their category of creationists! Bechly’s problem was that he was only mentioned in passing or written up in niche sources that were deemed unreliable.

If you poke around that deletion discussion for Bechly’s page, you’ll find it links to a rolling list of such discussions for “Academics and educators,” many of whom seem to be using Wikipedia as a LinkedIn substitute. It’s a mundane occurrence for the project.

And another thing about the Haaretz article. It mentions sockpuppets arriving to speak up in support of keeping Bechly’s page:

These one-time editors’ lack of experience became clear when they began voting in favor of keeping the article on Wikipedia – a practice not employed in the English version of Wikipedia since 2016, when editors voted to exchange the way articles are deleted for a process of consensus-based decision through discussion.

Uh, that’s been the rule since 2005 at least. Not the most impressive example of Journalisming.

To Thems That Have

Occasionally, I think of burning my chances of advancing in the physics profession — or, more likely, just burning my bridges with Geek Culture(TM) — by writing a paper entitled, “Richard Feynman’s Greatest Mistake”.

I did start drafting an essay I call “To Thems That Have, Shall Be Given More”. There are a sizable number of examples where Feynman gets credit for an idea that somebody else discovered first. It’s the rich-get-richer of science.

"no matter how gifted, you alone cannot change the world"