Venting

I confess myself a bit baffled by people who act like “how to interact with ChatGPT” is a useful classroom skill. It’s not a word processor or a spreadsheet; it doesn’t have documented, well-defined, reproducible behaviors. No, it’s not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It’s a bullshit fountain. Stop acting like you’re a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.

“Oh, but it’s a source of inspiration!”

So, you’ve never been to a writers’ workshop, spent 30 minutes with the staff on the school literary magazine, seen the original “You’re the man now, dog!” scene, or had any other exposure to the thousand and one gimmicks invented over the centuries to get people to put one word after another.

“It provides examples for teaching the art of critique!”

Why not teach with examples, just hear me out here, by actual humans?

“Students can learn to write by rewriting the output!”

Am I the only one who finds passing off an edit of an unattributable mishmash as one’s own work to be, well, flagrantly unethical?

“You’re just yelling at a cloud! What’s next, calling for us to reject modernity and embrace tradition?”

I’d rather we built our future using the best parts of our present rather than the worst.

A Picture for the Mind: the Bloch Ball

Now and then, stories will pop up in the news about the latest hot new thing in quantum computers. If the story makes any attempt to explain why quantum computing is special or interesting, it often recycles a remark along the lines of, “A quantum bit can be both 0 and 1 simultaneously.” This, well, ehhhhh… It’s rather like saying that Boston is at both the North Pole and the South Pole simultaneously. Something important has been lost. I figured I should take a stab at explaining what.

Our goal today is to develop a mental picture for a qubit, the basic unit that quantum computers are typically regarded as built out of. To be more precise, we will develop a mental picture for the mathematics of a qubit, not for how to implement one in the lab. There are many ways to do so, and getting into the details of any one method would, for our purposes today, be a distraction. Instead, we will be brave and face the issue on a more abstract level.

A qubit is a thing that one prepares and that one measures. The mathematics of quantum theory tells us how to represent these actions algebraically. That is, it describes the set of all possible preparations, the set of all possible measurements, and how to compute the probability of getting a particular result from a chosen measurement given a particular preparation. To do something interesting, one would typically work with multiple qubits together, but we will start with a single one. And we will begin with the simplest measurements, the binary ones. A binary test has two possible outcomes, which we can represent as 0 or 1, “plus” or “minus”, “ping” or “pong”, et cetera. In the lab, this could be sending an ion through a magnetic field and registering whether it swerved up or down; or, it could be sending a blip of light through a polarizing filter turned at a certain angle and registering whether there is or is not a flash. Or any of many other possibilities! The important thing is that there are two outcomes that we can clearly distinguish from each other.

For any physical implementation of a qubit, there are three binary measurements of special interest, which we can call the $X$ test, the $Y$ test and the $Z$ test. Let us denote the possible outcomes of each test by $+1$ and $-1$, which turns out to be a convenient choice. The expected value of the $X$ test is the average of these two possibilities, weighted by the probability of each. If we write $P(+1|X)$ for the probability of getting the $+1$ outcome given that we do the $X$ test, and likewise for $P(-1|X)$, then this expected value is $$ x = P(+1|X) \cdot (+1) + P(-1|X) \cdot (-1). $$ Because this is a weighted average of $+1$ and $-1$, it will always lie somewhere between those two values. If, for example, we are completely confident that an $X$ test will return the outcome $+1$, then $x = 1$. If instead we lay even odds on the two possible outcomes, then $x = 0$. Likewise, $$ y = P(+1|Y) \cdot (+1) + P(-1|Y) \cdot (-1), $$ and $$ z = P(+1|Z) \cdot (+1) + P(-1|Z) \cdot (-1). $$
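To make the bookkeeping concrete, here is a minimal sketch in Python (the function name is my own invention, nothing standard) that converts the probability of a $+1$ outcome into the corresponding expected value:

```python
def expected_value(p_plus):
    """Expected value of a +1/-1 test, given the probability of the +1 outcome."""
    p_minus = 1.0 - p_plus
    return p_plus * (+1) + p_minus * (-1)  # simplifies to 2 * p_plus - 1

x = expected_value(1.0)   # complete confidence in +1 gives x = 1
y = expected_value(0.5)   # even odds give y = 0
z = expected_value(0.25)  # long odds against +1 give z = -0.5
```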

To specify the preparation of a single qubit, all we have to do is pick a value for $x$, a value for $y$ and a value for $z$. But not all combinations $(x,y,z)$ are physically allowed. The valid preparations are those for which the point $(x,y,z)$ lies on or inside the ball of radius 1 centered at the origin: $$ x^2 + y^2 + z^2 \leq 1. $$ We call this the Bloch ball, after the physicist Felix Bloch (1905–1983). The surface of the Bloch ball, at distance exactly 1 from the origin, is the Bloch sphere. The points where the axes intersect the Bloch sphere — $(1,0,0)$, $(-1,0,0)$, $(0,1,0)$ and so forth — are the preparations where we are perfectly confident in the outcome of one of our three tests. Points in the interior of the ball, not on the surface, imply uncertainty about the outcomes of all three tests. But look what happens: If I am perfectly confident of what will happen should I choose to do an $X$ test, then $x = \pm 1$, and the constraint forces my expected values $y$ and $z$ to both be zero, meaning that I am completely uncertain about what might happen should I choose to do either a $Y$ test or a $Z$ test. There is an inevitable tradeoff between levels of uncertainty, baked into the shape of the theory itself. One might even call that a matter… of principle.
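For the skeptical reader, the same constraint fits in a two-line check (a sketch only; the function name is mine). Note how certainty about the $X$ test rules out any confidence at all about the other two:

```python
def is_valid_preparation(x, y, z, tol=1e-12):
    """A point (x, y, z) describes an allowed preparation iff it lies on or inside the unit ball."""
    return x**2 + y**2 + z**2 <= 1.0 + tol

print(is_valid_preparation(1, 0, 0))        # True: certain about X, even odds on Y and Z
print(is_valid_preparation(1, 0.1, 0))      # False: certainty about X leaves no room for confidence about Y
print(is_valid_preparation(0.5, 0.5, 0.5))  # True: an interior point, uncertain about all three tests
```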

[Figure: the Bloch ball, with the center point and the points where the axes intersect the outer sphere marked with dots.]

We are now well-poised to improve upon the language in the news stories. The point that specifies the preparation of a qubit can be at the North Pole $(0,0,1)$, the South Pole $(0,0,-1)$, or anywhere in the ball between them. We have a whole continuum of ways to be intermediate between completely confident that the $Z$ test will yield $+1$ (all the way north) and completely confident that it will yield $-1$ (all the way south).

Now, there are other things one can do to a qubit. For starters, there are other binary measurements beyond just the $X$, $Y$ and $Z$ tests. Any pair of points exactly opposite each other on the Bloch sphere defines a test, with each point standing for an outcome. The closer the preparation point is to an outcome point, the more probable that outcome. To be more specific, let’s write the preparation point as $(x,y,z)$ and the outcome point as $(x',y',z')$. Then the probability of getting that outcome given that preparation is $$ P = \frac{1}{2}(1 + x x' + y y' + z z'). $$
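That rule is a one-liner in the same sketchy spirit as before (again, the function name is just for illustration). The preparation is a point on or inside the Bloch ball, and the outcome is a point on the Bloch sphere:

```python
def outcome_probability(prep, outcome):
    """Probability of the outcome point (on the sphere), given the preparation point (in the ball)."""
    x, y, z = prep
    xp, yp, zp = outcome
    return 0.5 * (1 + x * xp + y * yp + z * zp)

# A preparation at the North Pole, measured along the Z axis:
print(outcome_probability((0, 0, 1), (0, 0, 1)))   # 1.0: certain
print(outcome_probability((0, 0, 1), (0, 0, -1)))  # 0.0: never
# The same preparation measured along the X axis gives even odds:
print(outcome_probability((0, 0, 1), (1, 0, 0)))   # 0.5
```

Taking the outcome points to be $(0,0,\pm 1)$ recovers the $Z$-test probabilities we started with.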

An interesting conceptual thing has happened here. We have encoded the preparation of a qubit by a set of expected values, i.e., a set of probabilities. Consequently, all those late-night jazz-cigarette arguments over what probability means will spill over into the arguments about what quantum mechanics means. Moreover, and not unrelatedly, we can ask, “Why three probabilities? Why is it the Bloch sphere, instead of the Bloch disc or the Bloch hypersphere?” It would be perfectly legitimate, mathematically, to require probabilities for only two tests in order to specify a preparation point, or to require more than three. That would not be quantum mechanics; the fact that three coordinates are needed to nail down the preparation of the simplest possible system is a structural fact of quantum theory. But is there a deeper truth from which that can be deduced?

One could go in multiple directions from here: What about tests with more than two outcomes? Systems composed of more than one qubit? Very quickly, the structures involved become more difficult to visualize, and familiarity with linear algebra — eigenvectors, eigenvalues and their friends — becomes a prerequisite. People have also tried a variety of approaches to understand what quantum theory might be derivable from. Any of those topics could justify something in between a blog post and a lifetime of study.

SUGGESTED READINGS:

  • E. Rieffel and W. Polak, Quantum Computing: A Gentle Introduction (MIT Press, 2011), chapter 2
  • J. Rau, Quantum Theory: An Information Processing Approach (Oxford University Press, 2021), section 3.3
  • M. Weiss, “Python tools for the budding quantum bettabilitarian” (2022)

Blunt Pessimism

I’m really not feeling that good about our ability to handle the next epidemic that comes our way. —BCS, January 2017

The Supreme Court today is drooling with eagerness to kill Biden’s vaccine-or-test mandate, on the legal rationale of “we declare that we can, so we will”. So, first, congratulations to Omicron. Second, this makes it even more plain that they’ll throttle the EPA on the same “fuck any regulators that want to actually regulate” basis, in a few months.

The Democrats will probably lose at least the House in November (the map isn’t turning out as gerrymandered as a lot of folks expected, but it’s still bad enough). That’s the chance of court reform gone, with a reactionary majority free to uphold theocracy, sabotage the vote, treat LGBT people as subhuman, attack labor rights, fuck over press freedom (if Roe is gone, NYT v Sullivan can hardly be safe).

The Democrats will lose the Senate in 2024 (the map will be terrible unless 2022 goes amazingly for them). Oh, and two years of Republicans running the House means two years of Benghazi!-ing, a shutdown or two, doing everything possible to make Trump president again. Did we mention that one factor in that nominally not-so-bad map has been “incumbent protection”, i.e., baking in the MAGA?

… OK, maybe Trump will be dead by then, or too ill to be propped up on two feet. DeSantis seems the most likely heir at the moment. But whatever.

And supposing Biden wins in ’24? Not a lot he’ll be able to do with both the Senate and (probably) the House against him.

Point is, we’re on a three-year train to Fuckedville while the planet cooks around us.

I’ve seen people cast about for analogies for what’s in progress/likely to be coming. Turkey under Erdogan? Hungary under Orban? The Time of Troubles? There always seems to be some ingredient that makes the analogy not quite match, for me, but not a single option on the table looks good.

What was that old Adam Smith line about there being “a great deal of ruin in a nation”? Right now, we’re in the middle of measuring just how much ruin there is.

(Apropos, how much ruin is there in a health-care system?)

Autocracy is here. It just isn’t evenly distributed, yet.

PEM-diss

So, there’s a joke going around BirdSite about how scientists said “the internet will revolutionize the sharing of information and eliminate barriers to communication”, and what we got is viral tweets asking for the solution to “4 + 8 x 3 – 7, no calculators!!”

Nobody has answered “24x – 3”.

(Grandpa Stacey voice) I am disappointed.

New Textbook

[Photo: copies of the textbook surrounded by Oaxacan carved wooden animals.]

B. C. Stacey, A First Course in the Sporadic SICs. SpringerBriefs in Mathematical Physics volume 41 (2021).

This book focuses on the Symmetric Informationally Complete quantum measurements (SICs) in dimensions 2 and 3, along with one set of SICs in dimension 8. These objects stand out in ways that have earned them the moniker of “sporadic SICs”. By some standards, they are more approachable than the other known SICs, while by others they are simply atypical. Using them as examples, the author forays into quantum information theory and explores their connections with other exceptional objects like the Leech lattice and the integral octonions. The sporadic SICs take readers from the classification of finite simple groups to Bell’s theorem and the discovery that “hidden variables” cannot explain away quantum uncertainty.

While no one department teaches every subject to which the sporadic SICs pertain, the topic is approachable without too much background knowledge. The book includes exercises suitable for an elective at the graduate or advanced undergraduate level.

ERRATA:

In the preface, on p. v, there is a puzzling appearance of “in references [77–80]”. This is due to an error in the process of splitting the book into chapters available for separate downloads. These references are arXiv:1301.3274, arXiv:1311.5253, arXiv:1612.07308 and arXiv:1705.03483.

Page 6: “5799” should be “5779” (76 squared plus 3), and M. Harrison should be added to the list of co-credited discoverers. The most current list of known solutions, exact and numerical, is, to my knowledge, this presentation by Grassl.

Page 58: “Then there are 56 octavians” should be “Then there are 112 octavians”.

Thoughts on “Relational Quantum Mechanics”

Recently, the far-flung QBism discussion group nominally centered at UMass Boston has been conversing about Carlo Rovelli’s relational interpretation of quantum mechanics. Trying to think all this through halfway clearly, I wrote some notes. They don’t seem to be moving in the direction of a paper, and they’re too chatty for the arXiv even by my standards, so this seems the best place to host them.

EDIT TO ADD (8 September): To my surprise, I was able to edit those notes in the direction of being a paper. A few items came out after my post which lifted the burden of discussing certain topics and let a theme come together. Accordingly, see arXiv:2109.03186.

Saturday Thought

One thing I just don’t get is people proclaiming “the End of Physics”. Like “the End of History”, it’s a very mockable phrase! Folks will be going, “Oh, our giant colliders haven’t found any surprises in years, and we never figured out an experiment to test string theory, so everyone’s drifting into quantum information and exotic condensed-matter physics, truly this is the sunset of an era.”

And I’m all, “So, instead of testing one effective field theory by putting matter into extreme conditions, you’re testing … multiple … effective field theories … by putting matter into extreme conditions.” I’m making my astonished face, can’t you tell?

Canonical Probabilities by Directly Quantizing Thermodynamics

I’ve had this derivation kicking around a while, and today seemed like as good a day as any to make a fuller write-up of it:

  • B. C. Stacey, “Canonical probabilities by directly quantizing thermodynamics” (PDF).

The idea is that Boltzmann’s rule $p(E_n) \propto e^{-E_n / k_B T}$ pops up really naturally when you ask for a rule that plays nicely with the composing-together of uncorrelated systems. This, in turn, gives a convenient expression to the idea that classical physics is what you get when you handle quantum systems sloppily.
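As a quick numerical sanity check of that composition property (not a substitute for the derivation in the write-up), the exponential form is exactly what turns “energies of uncorrelated systems add” into “their weights multiply”:

```python
import math

def boltzmann_weight(E, kT=1.0):
    """Unnormalized Boltzmann weight exp(-E / kT)."""
    return math.exp(-E / kT)

E1, E2 = 0.7, 1.9
combined = boltzmann_weight(E1 + E2)
factored = boltzmann_weight(E1) * boltzmann_weight(E2)
assert math.isclose(combined, factored)  # the weight of the composite system factorizes
```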

More on Bohr

This post carries further on in the vein of my earlier writings on how the way most physicists talk about “the Copenhagen interpretation of quantum mechanics” is largely ahistorical.

It’s common to present “the Copenhagen interpretation” as a kind of dynamical collapse model, in which wavefunctions are ontic entities (like a sophomore’s picture of the electromagnetic field) that evolve according to the Schrödinger equation, except in moments of “measurement” that take place in unspecified conditions. This portrayal is typically intended to make “the Copenhagen interpretation” sound like a mutant form of Newtonian mechanics where $F = ma$ almost always, except at peculiar instants when $F$ suddenly becomes $ma/2$ and then switches back again. Of course, this is abhorrent and pathological.

When I was a child, my parents bought me a magnet from a museum gift shop. It had a long handle, likely made deliberately to resemble a magic wand, and as educational toys go, it served its function, since I went around poking all sorts of things to see if the magnet would grab them. I suspect this is a common enough type of learning experience. One discovers, for example, that it will pick up paperclips but not pennies. Having calibrated one’s understanding of the magnet, one can then use it as a tool — say, by telling which of two matchboxes is filled with paperclips, or that something is different about a wire coil connected to a battery versus one that is not.

What concerned Bohr himself was that this transition — between the calibration phase, when an object is under scrutiny, and its later use as a laboratory instrument — is conceptually nontrivial. First a lens is a strangely curved block of glass we must work to comprehend, and then it is a means to overthrow Aristotle. There are not two different dynamical laws, but two different languages.

Here’s how John Wheeler put it:

“Bohr stresses […] that the stick we hold can itself be an object of investigation, as when we run our fingers over its surface. The same stick, when grasped firmly and used to explore something else, becomes an extension of the observer or—when we depersonalize—a part of the measuring equipment. As we withdraw the stick from the one role, and recast it in the other role, we transpose the line of demarcation from one end of it to the other. The distinction between the probed and the probe, so evident at this scale of the everyday, is the without-which-nothing of every elementary phenomenon, of every closed quantum process.”

[From “Law Without Law”, in the Wheeler–Zurek collection, p. 206]

The commonalities and contrasts with QBism should be evident enough. Extension of the observer, yes; depersonalize to mere dead “equipment”, no, for it is the latter move that gets one into trouble with Wigner’s Friend. And, on a perhaps more practical level where the choice of research problems is concerned, Bohr takes the quantum formalism pretty much as given and leaves “the quantum principle” not explicitly defined.

It may also be illustrative to consider how Rovelli’s “Relational Quantum Mechanics” treats this point. I tentatively infer that Rovelli thinks giving a special role to an agent means imposing two different dynamical laws, one for systems of agent-type and another for all nonagent physical entities. Even if he doesn’t spell it out, that seems to be the mindset he operates with, and the background he relies upon. Of course, he balks at that dichotomy. I would, too!

What’s Wrong with this Sting Operation?

To the extent that academic peer review is good for anything, it is optimized to catch honest mistakes. It is weaker against deliberate fraud and stubborn denial. Science has a presumption of fair play, a sense that the natural world isn’t a cheater. If you want to explain how a “psychic” operates, you’re better off asking a magician than a physicist.

Nearly two decades ago now, there was a dust-up when a couple of French TV personalities got a clutch of physics and mathematics papers published, and even received PhDs, and their “work” turned out to be nonsense. (The Wikipedia article on l’affaire Bogdanov is currently not terrible, and it contains more pointers to details than almost anyone could honestly desire.) The news stories about the incident really played up the “even the physicists can’t tell if the papers are nonsense or not” angle. That rather oversells the case, though. I read the Bogdanovs’ “Topological field theory of the initial singularity of spacetime” when I was a first-year grad student, and I could see through it. If you know what a Lagrangian is, and the fog doesn’t intimidate you, then you can tell something is wrong. If you don’t know what a Lagrangian is, you’re probably not reading theoretical physics papers yet.

So, what went wrong?

Education, Inadvertent and Otherwise

That feeling when it’s 3 in the morning and you’re watching an old PBS documentary aimed at grade-school kids and the mill workers are going on strike while Sumner declares that industries of the North are complicit in the slave economy of the South, and you’re like yes, exactly!

We’d all be so much better off, had the lessons of fourth grade only stuck.

(Also, the voice actor for the engineer/architect type character in a lot of those David Macaulay adaptations was Brian Blessed, which is pretty nice.)

"no matter how gifted, you alone cannot change the world"