J. J. Thomson: (points at atom) pudding

“Oh, but it’s a source of inspiration!”

So, you’ve never been to a writers’ workshop, spent 30 minutes with the staff on the school literary magazine, seen the original “You’re the man now, dog!” scene, or had any other exposure to the thousand and one gimmicks invented over the centuries to get people to put one word after another.

“It provides examples for teaching the art of critique!”

Why not teach with examples, just hear me out here, by actual humans?

“Students can learn to write by rewriting the output!”

Am I the only one who finds passing off an edit of an unattributable mishmash as one’s own work to be, well, flagrantly unethical?

“You’re just yelling at a cloud! What’s next, calling for us to reject modernity and embrace tradition?”

I’d rather we built our future using the best parts of our present rather than the worst.

A qubit is a thing that one *prepares* and that one *measures.* The mathematics of quantum theory tells us how to represent these actions algebraically. That is, it describes the set of all possible preparations, the set of all possible measurements, and how to compute the probability of getting a particular result from a chosen measurement given a particular preparation. To do something interesting, one would typically work with multiple qubits together, but we will start with a single one. And we will begin with the simplest kind of measurement: the *binary* test. A binary test has two possible outcomes, which we can represent as 0 and 1, “plus” and “minus”, “ping” and “pong”, et cetera. In the lab, this could be sending an ion through a magnetic field and registering whether it swerved up or down; or, it could be sending a blip of light through a polarizing filter turned at a certain angle and registering whether there is or is not a flash. Or any of many other possibilities! The important thing is that there are two outcomes that we can clearly distinguish from each other.

For any physical implementation of a qubit, there are three binary measurements of special interest, which we can call the $X$ test, the $Y$ test and the $Z$ test. Let us denote the possible outcomes of each test by $+1$ and $-1$, which turns out to be a convenient choice. The *expected value* of the $X$ test is the average of these two possibilities, weighted by the probability of each. If we write $P(+1|X)$ for the probability of getting the $+1$ outcome given that we do the $X$ test, and likewise for $P(-1|X)$, then this expected value is $$ x = P(+1|X) \cdot (+1) + P(-1|X) \cdot (-1). $$ Because this is a weighted average of $+1$ and $-1$, it will always lie somewhere in the interval between them. If for example we are completely confident that an $X$ test will return the outcome $+1$, then $x = 1$. If instead we lay even odds on the two possible outcomes, then $x = 0$. Likewise, $$ y = P(+1|Y) \cdot (+1) + P(-1|Y) \cdot (-1), $$ and $$ z = P(+1|Z) \cdot (+1) + P(-1|Z) \cdot (-1). $$
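The arithmetic above is simple enough to check directly. Here is a minimal sketch (my own illustration, not from the post): a function computing the expected value of a $\pm 1$-valued test from the probability of the $+1$ outcome.

```python
# Expected value of a binary test whose outcomes are +1 and -1,
# given P(+1); since there are only two outcomes, P(-1) = 1 - P(+1).

def expected_value(p_plus: float) -> float:
    p_minus = 1.0 - p_plus
    return p_plus * (+1) + p_minus * (-1)

# Complete confidence in the +1 outcome:
print(expected_value(1.0))  # 1.0
# Even odds on the two outcomes:
print(expected_value(0.5))  # 0.0
```

Note that the answer simplifies to $2 P(+1|X) - 1$, so the expected value and the outcome probability carry exactly the same information.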

To specify the preparation of a single qubit, all we have to do is pick a value for $x$, a value for $y$ and a value for $z$. But not all combinations $(x,y,z)$ are physically allowed. The valid preparations are those for which the point $(x,y,z)$ lies on or inside the ball of radius 1 centered at the origin: $$ x^2 + y^2 + z^2 \leq 1. $$ We call this the *Bloch ball,* after the physicist Felix Bloch (1905–1983). The surface of the Bloch ball, at distance exactly 1 from the origin, is the *Bloch sphere.* The points where the axes intersect the Bloch sphere — $(1,0,0)$, $(-1,0,0)$, $(0,1,0)$ and so forth — are the preparations where we are perfectly confident in the outcome of one of our three tests. Points in the interior of the ball, not on the surface, imply uncertainty about the outcomes of all three tests. But look what happens: If I am perfectly confident of what will happen should I choose to do an $X$ test, then $x = \pm 1$, and the constraint $x^2 + y^2 + z^2 \leq 1$ forces my expected values $y$ and $z$ both to be zero, meaning that I am *completely uncertain* about what might happen should I choose to do either a $Y$ test or a $Z$ test. There is an inevitable tradeoff between levels of uncertainty, baked into the shape of the theory itself. One might even call that a matter… of principle.
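The Bloch-ball constraint is a one-liner to test. A hedged sketch (again my own, not the post's): checking whether a candidate triple of expected values is a physically allowed preparation.

```python
# A triple (x, y, z) of expected values is a valid qubit preparation
# exactly when it lies on or inside the Bloch ball: x^2 + y^2 + z^2 <= 1.

def is_valid_preparation(x: float, y: float, z: float) -> bool:
    return x**2 + y**2 + z**2 <= 1.0

print(is_valid_preparation(0, 0, 1))   # True: the "North Pole"
print(is_valid_preparation(1, 0, 0))   # True: certain X outcome, so y = z = 0
print(is_valid_preparation(1, 1, 0))   # False: certainty about both X and Y is forbidden
```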

We are now well-poised to improve upon the language in the news stories. The point that specifies the preparation of a qubit can be at the North Pole $(0,0,1)$, the South Pole $(0,0,-1)$, or anywhere in the ball between them. We have a whole continuum of ways to be intermediate between completely confident that the $Z$ test will yield $+1$ (all the way north) and completely confident that it will yield $-1$ (all the way south).

Now, there are other things one can do to a qubit. For starters, there are other binary measurements beyond just the $X$, $Y$ and $Z$ tests. Any pair of points exactly opposite each other on the Bloch sphere define a test, with each point standing for an outcome. The closer the preparation point is to an outcome point, the more probable that outcome. To be more specific, let’s write the preparation point as $(x,y,z)$ and the outcome point as $(x',y',z')$. Then the probability of getting that outcome given that preparation is $$ P = \frac{1}{2}(1 + x x' + y y' + z z'). $$
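That formula is worth playing with numerically. A minimal sketch of it (my own illustration): the probability of the outcome at one Bloch-sphere point, given a preparation at another.

```python
# Probability of the outcome at Bloch-sphere point (xp, yp, zp),
# given the preparation point (x, y, z): P = (1/2)(1 + x*xp + y*yp + z*zp).

def outcome_probability(prep, outcome):
    x, y, z = prep
    xp, yp, zp = outcome
    return 0.5 * (1 + x * xp + y * yp + z * zp)

# Preparation at the North Pole; the Z test outcome sitting at (0, 0, 1):
print(outcome_probability((0, 0, 1), (0, 0, 1)))   # 1.0 — certainty
# Same preparation, but the X test outcome at (1, 0, 0):
print(outcome_probability((0, 0, 1), (1, 0, 0)))   # 0.5 — complete uncertainty
```

As a sanity check, the probabilities for the two opposite outcome points $(x',y',z')$ and $(-x',-y',-z')$ always sum to 1, as they must.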

An interesting conceptual thing has happened here. We have encoded the preparation of a qubit by a set of expected values, i.e., a set of probabilities. Consequently, all those late-night jazz-cigarette arguments over what probability means will spill over into the arguments about what quantum mechanics means. Moreover, and not unrelatedly, we can ask, “Why *three* probabilities? Why is it the Bloch sphere, instead of the Bloch disc or the Bloch hypersphere?” It would be perfectly legitimate, mathematically, to require probabilities for only two tests in order to specify a preparation point, or to require more than three. That would not be quantum mechanics; the fact that three coordinates are needed to nail down the preparation of the simplest possible system is a structural fact of quantum theory. But is there a deeper truth from which that can be deduced?

One could go in multiple directions from here: What about tests with more than two outcomes? Systems composed of more than one qubit? Very quickly, the structures involved become more difficult to visualize, and familiarity with linear algebra — eigenvectors, eigenvalues and their friends — becomes a prerequisite. People have also tried a variety of approaches to understand what quantum theory might be derivable from. Any of those topics could justify something in between a blog post and a lifetime of study.

**SUGGESTED READINGS:**

- E. Rieffel and W. Polak, *Quantum Computing: A Gentle Introduction* (MIT Press, 2011), chapter 2
- J. Rau, *Quantum Theory: An Information Processing Approach* (Oxford University Press, 2021), section 3.3
- M. Weiss, “Python tools for the budding quantum bettabilitarian” (2022)

“Dead from the next pandemic? Dead from civil war? Dead from the combination pandemic-and-civil-war?”

“Preprints accepted by the

The Supreme Court today is drooling with eagerness to kill Biden’s vaccine-or-test mandate, on the legal rationale of “we declare that we can, so we will”. So, first, congratulations to Omicron. Second, this makes it even more plain that they’ll throttle the EPA on the same “fuck any regulators that want to actually regulate” basis, in a few months.

The Democrats will probably lose at least the House in November (the map isn’t turning out as gerrymandered as a lot of folks expected, but it’s still bad enough). That’s the chance of court reform gone, with a reactionary majority free to uphold theocracy, sabotage the vote, treat LGBT people as subhuman, attack labor rights, fuck over press freedom (if *Roe* is gone, *NYT v Sullivan* can hardly be safe).

The Democrats will lose the Senate in 2024 (the map will be terrible unless 2022 goes amazingly for them). Oh, and two years of Republicans running the House means two years of Benghazi!-ing, a shutdown or two, doing everything possible to make Trump president again. Did we mention that one factor in that nominally not-so-bad map has been “incumbent protection”, i.e., baking in the MAGA?

… OK, maybe Trump will be dead by then, or too ill to be propped up on two feet. DeSantis seems the most likely heir at the moment. But whatever.

And supposing Biden wins in ’24? Not a lot he’ll be able to do with both the Senate and (probably) the House against him.

Point is, we’re on a three-year train to Fuckedville while the planet cooks around us.

I’ve seen people cast about for analogies for what’s in progress/likely to be coming. Turkey under Erdogan? Hungary under Orban? The Time of Troubles? There always seems to be some ingredient that makes the analogy not quite match, for me, but not a single option on the table looks good.

What was that old Adam Smith line about there being “a great deal of ruin in a nation”? Right now, we’re in the middle of measuring just how much ruin there is.

(Apropos, how much ruin is there in a health-care system?)

Autocracy is here. It just isn’t evenly distributed, yet.

Nobody has answered “24x – 3”.

(Grandpa Stacey voice) I am disappointed.

As the kids say, “Like, quantum and subscribe!”

B. C. Stacey, *A First Course in the Sporadic SICs*. SpringerBriefs in Mathematical Physics volume 41 (2021).

This book focuses on the Symmetric Informationally Complete quantum measurements (SICs) in dimensions 2 and 3, along with one set of SICs in dimension 8. These objects stand out in ways that have earned them the moniker of “sporadic SICs”. By some standards, they are more approachable than the other known SICs, while by others they are simply atypical. The author forays into quantum information theory using them as examples, and explores their connections with other exceptional objects like the Leech lattice and integral octonions. The sporadic SICs take readers from the classification of finite simple groups to Bell’s theorem and the discovery that “hidden variables” cannot explain away quantum uncertainty.

While no one department teaches every subject to which the sporadic SICs pertain, the topic is approachable without too much background knowledge. The book includes exercises suitable for an elective at the graduate or advanced undergraduate level.

**ERRATA:**

In the preface, on p. v, there is a puzzling appearance of “in references [77–80]”. This is due to an error in the process of splitting the book into chapters available for separate downloads. These references are arXiv:1301.3274, arXiv:1311.5253, arXiv:1612.07308 and arXiv:1705.03483.

Page 6: “5799” should be “5779” (76 squared plus 3), and M. Harrison should be added to the list of co-credited discoverers. The most current list of known solutions, exact and numerical, is to my knowledge this presentation by Grassl.

Page 58: “Then there are 56 octavians” should be “Then there are 112 octavians”.

**EDIT TO ADD (8 September):** To my surprise, I was able to edit those notes in the direction of being a paper. A few items came out after my post which lifted the burden of discussing certain topics and let a theme come together. Accordingly, see arXiv:2109.03186.

And I’m all, “So, instead of testing *one* effective field theory by putting matter into extreme conditions, you’re testing … multiple … effective field theories … by putting matter into extreme conditions.” I’m making my astonished face, can’t you tell?

Bartleby the Scrivener: I would prefer not to

and… scene

- B. C. Stacey, “Canonical probabilities by directly quantizing thermodynamics” (PDF).

The idea is that Boltzmann’s rule $p(E_n) \propto e^{-E_n / k_B T}$ pops up really naturally when you ask for a rule that plays nicely with the *composing-together* of uncorrelated systems. This, in turn, gives a convenient expression to the idea that classical physics is what you get when you handle quantum systems sloppily.
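The composition property is easy to verify numerically. A quick check (my own illustration, not from the linked notes): the Boltzmann weight $e^{-E/k_B T}$ turns sums of energies into products of weights, which is exactly what composing uncorrelated systems requires.

```python
import math

# Unnormalized Boltzmann weight e^{-E/kT}, in units where k_B = 1.
def boltzmann_weight(E: float, kT: float = 1.0) -> float:
    return math.exp(-E / kT)

# For uncorrelated subsystems, the energies add, and the weight of the
# combined system factors into the product of the subsystem weights.
E1, E2 = 0.7, 1.3
combined = boltzmann_weight(E1 + E2)
product = boltzmann_weight(E1) * boltzmann_weight(E2)
print(math.isclose(combined, product))  # True
```

Up to rescaling, the exponential is the only continuous rule with this sum-to-product property, which is one way to see why Boltzmann's form is forced.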

It’s common to present “the Copenhagen interpretation” as a kind of dynamical collapse model, in which wavefunctions are ontic entities (like a sophomore’s picture of the electromagnetic field) that evolve according to the Schrödinger equation, except in moments of “measurement” that take place in unspecified conditions. This portrayal is typically intended to make “the Copenhagen interpretation” sound like a mutant form of Newtonian mechanics where $F = ma$ almost always, except at peculiar instants when $F$ suddenly becomes $ma/2$ and then switches back again. Of course, this is abhorrent and pathological.

When I was a child, my parents bought me a magnet from a museum gift shop. It had a long handle, likely made deliberately to resemble a magic wand, and as educational toys go, it served its function, since I went around poking all sorts of things to see if the magnet would grab them. I suspect this is a common enough type of learning experience. One discovers, for example, that it will pick up paperclips but not pennies. Having calibrated one’s understanding of the magnet, one can then use it as a tool — say, by telling which of two matchboxes is filled with paperclips, or that something is different about a wire coil connected to a battery versus one that is not.

What concerned Bohr himself was that this transition — between the calibration phase, when an object is under scrutiny, and its later use as a laboratory instrument — is conceptually nontrivial. First a lens is a strangely curved block of glass we must work to comprehend, and then it is a means to overthrow Aristotle. There are not two different dynamical laws, but two different *languages*.

Here’s how John Wheeler put it:

“Bohr stresses […] that the stick we hold can itself be an object of investigation, as when we run our fingers over its surface. The same stick, when grasped firmly and used to explore something else, becomes an extension of the observer or—when we depersonalize—a part of the measuring equipment. As we withdraw the stick from the one role, and recast it in the other role, we transpose the line of demarcation from one end of it to the other. The distinction between the probed and the probe, so evident at this scale of the everyday, is the without-which-nothing of every elementary phenomenon, of every closed quantum process.”

[From “Law Without Law”, in the Wheeler–Zurek collection, p. 206]

The commonalities and contrasts with QBism should be evident enough. Extension of the observer, yes; depersonalize to mere dead “equipment”, no, for it is the latter move that gets one into trouble with Wigner’s Friend. And, on a perhaps more practical level where the choice of research problems is concerned, Bohr takes the quantum formalism pretty much as given and leaves “the quantum principle” not explicitly defined.

It may also be illustrative to consider how Rovelli’s “Relational Quantum Mechanics” treats this point. I tentatively infer that Rovelli thinks giving a special role to an agent means imposing two different dynamical laws, one for systems of agent-type and another for all nonagent physical entities. Even if he doesn’t spell it out, that seems to be the mindset he operates with, and the background he relies upon. Of course, he balks at that dichotomy. I would, too!
