The Rise of Ironic Physics and/or Machine Physicists?

CONTENT ADVISORY: old-fashioned blog snarkery about broad trends in physics.

Over on his blog, Peter Woit quotes a scene from the imagination of John Horgan, whose The End of Science (1996) visualized physics falling into a twilight:

A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.

OK (*cracks knuckles*), a few points. First, “fretting about the meaning of quantum mechanics” has, historically, been damn important. A lot of quantum information theory came out of people doing exactly that, just with equations. The productive way of “fretting” involves plumbing the meaning of quantum mechanics by finding what new capabilities quantum mechanics can give you. Let’s take one of the least blue-sky applications of quantum information science: securing communications with quantum key distribution. Why trust the security of quantum key distribution? There’s a whole theory behind the idea, one which depends upon the quantum de Finetti theorem. Why is there a quantum de Finetti theorem in a form that physicists could understand and care about? Because Caves, Fuchs and Schack wanted to prove that the phrase “unknown quantum state” has a well-defined meaning for personalist Bayesians.
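
For the curious, the shape of their result, in my own quick paraphrase (so blame me for any sloppiness), is this: if an agent assigns an exchangeable state to $N$ systems, meaning the assignment is symmetric under permutations and extendable to any larger number of systems, then it can be written uniquely as

$$ \rho^{(N)} = \int \mathrm{d}\sigma \, P(\sigma) \, \sigma^{\otimes N}, $$

a mixture of product states $\sigma^{\otimes N}$ weighted by a probability density $P(\sigma)$ over density operators. That decomposition is what licenses a personalist Bayesian to talk about an “unknown state” $\sigma$ in the first place, and it is the same structure that the security analyses of quantum key distribution lean on.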

This example could be augmented with many others. (I selfishly picked one where I could cite my own collaborator.)

It’s illuminating to quote the passage from Horgan’s book just before the one that Woit did:

This is the fate of physics. The vast majority of physicists, those employed in industry and even academia, will continue to apply the knowledge they already have in hand—inventing more versatile lasers and superconductors and computing devices—without worrying about any underlying philosophical issues.

But there just isn’t a clean dividing line between “underlying philosophical issues” and “more versatile computing devices”! In fact, the foundational question of what “quantum states” really are overlaps with the question of which quantum computations can be emulated on a classical computer, and how some preparations are better resources for quantum computers than others. Flagrantly disregarding attempts to draw a boundary line between “foundations” and “applications” is my day job now, but quantum information was already getting going in earnest during the mid-1990s, so this isn’t a matter of hindsight. (Feynman wasn’t the first to talk about quantum computing, but he was certainly influential, and the motivations he spelled out were pretty explicitly foundational. Benioff, who preceded Feynman, was also interested in foundational matters, and even said as much while building quantum Hamiltonians for Turing machines.) And since Woit’s post was about judging whether a prediction held up or not, I feel pretty OK applying a present-day standard anyway.
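
To make the “emulated on a classical computer” part concrete, here is a toy sketch of mine (not anyone’s production code): brute-force statevector simulation, which works fine for a handful of qubits and blows up exponentially after that. For contrast, the Gottesman–Knill theorem says that circuits built from Clifford gates alone, like the Hadamard and CNOT below, acting on standard initial states can be simulated efficiently even at large sizes; it takes extra “magic” resources to get beyond that, which is one place the resource question shows up.

```python
import numpy as np

# Brute-force statevector emulation of a tiny quantum circuit. Memory grows
# as 2**n, which is why this approach only gets you so far.

def apply_gate(state, gate, targets, n):
    """Apply a k-qubit gate (a 2^k x 2^k matrix) to the given target qubits."""
    k = len(targets)
    psi = state.reshape([2] * n)                     # one tensor index per qubit
    psi = np.tensordot(gate.reshape([2] * (2 * k)), psi,
                       axes=(list(range(k, 2 * k)), list(targets)))
    # tensordot leaves the gate's output indices in front; put them back in place.
    psi = np.moveaxis(psi, list(range(k)), list(targets))
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

n = 2
state = np.zeros(2 ** n)
state[0] = 1.0                                  # start in |00>
state = apply_gate(state, H, [0], n)            # Hadamard on qubit 0
state = apply_gate(state, CNOT, [0, 1], n)      # entangle them: a Bell state
print(np.round(state, 3))                       # amplitudes on |00> and |11> only
```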

In short: Meaning matters.

But then, Horgan’s book gets the Einstein–Podolsky–Rosen thought-experiment completely wrong, and I should know better than to engage with what any book like that has to say on the subject of what quantum mechanics might mean.

To be honest, Horgan is unfair to the Modern Language Association. Their convention program for January 2019 indicates a community that is actively engaged in the world, with sessions about the changing role of journalism, how the Internet has enabled a new kind of “public intellectual”, how to bring African-American literature into summer reading, the dynamics of organized fandoms, etc. In addition, they plainly advertise sessions as open to the public, something I can only barely imagine a physics conference doing as more than a nominal gesture. Their public sessions include a film screening of a documentary about the South African writer and activist Peter Abrahams, as well as workshops on practical skills like how to cite sources. That’s not just valuable training, but also a topic that is actively evolving: How do you cite a tweet, or an archived version of a Wikipedia page, or a post on a decentralized social network like Mastodon?

Dragging the sciences for supposedly resembling the humanities has not grown more endearing since 1996.

All this came up in the context of physics being done by artificial intelligence. If anything, the idea of “machines replacing physicists” is less plausible to me now than it was two decades ago, because back then, there was at least a chance that AI would have had something to do with understanding how human minds work, rather than just throwing a bunch of GPUs at a problem and calling the result “machine learning”. This perspective is informed in part by long talks with a friend whose research area is machine learning, and who is quite dissatisfied with the common approach to it. Specifically, they work in computer vision, where the top-notch algorithms still identify the Queen’s crown as a shower cap and can be fooled into calling a panda a vulture. People have problems, but not those problems.
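
For anyone who hasn’t seen the “fooled” trick up close: the textbook version is the fast gradient sign method, where you nudge every pixel slightly in the direction that increases the classifier’s loss. Here is a minimal sketch, using a random, untrained stand-in model purely to show the mechanics; the published demonstrations use real trained vision networks, and the model, sizes, and epsilon below are just placeholders.

```python
import torch
import torch.nn as nn

# Fast-gradient-sign-method sketch with a stand-in classifier (random weights).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "photo"
label = torch.tensor([3])                             # its nominal class

# One gradient step on the *input*: move each pixel a little in the direction
# that most increases the classification loss.
loss = loss_fn(model(image), label)
loss.backward()
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# With a trained network and a suitable epsilon, the predicted class flips
# even though the two images look identical to a human.
print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial).argmax(dim=1).item())
```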

What research is it that has prompted the specter of the Machine solving the Theory of Everything? Honestly, the “machine learning in the string landscape” language sounds to me like a new coat of paint over a general approach that physicists have been using for as long as we’ve had computers. Pose a problem, get your grad students to feed it into the computer, obtain numerical results, see what conjectures those results suggest, and if you’re lucky, prove those conjectures. In this particular case, the conjectures eventually proven might not ultimately connect to experiment, but that’s the old problem of quantum gravity being hard to study, not a new problem about the way physics is being done. And in order to put a question to a computer, you have to (get your grad student to) phrase it very carefully. You can’t ask the computer for a heuristic argument based on conjectural features that a nonperturbative theory of quantum gravity might have, were we to know of one. You have to talk about structures you can define (structures that you might guess are relevant to a nonperturbative theory of quantum gravity). For example, you might define a Calabi–Yau space in terms of an affine cone over a toric variety, which you in turn define in terms of a convex lattice polygon, etc., eventually converting the problem into one that you can code up. There’s still the big gap between your work and experiment, and there’s still the lack of a well-defined over-arching theory, but you’ve made your little corner of “experimental mathematics” less vague.
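
To give a flavor of that last “coding up” step without any real toric machinery, here is a toy sketch of mine (the polygon and the claims in the comments are illustrative, not anyone’s actual research code): a convex lattice polygon is literally just a list of integer vertices, and combinatorial questions about it, like how many lattice points it contains or whether the origin is its only interior point (which, in two dimensions, is what picks out the reflexive polygons), turn into a few lines of code.

```python
from itertools import product

def lattice_points(vertices):
    """All lattice points inside or on a convex lattice polygon.

    vertices: integer (x, y) pairs in counterclockwise order. A point lies
    inside or on the polygon iff it is on the non-negative side of every
    directed edge (a cross-product test).
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    points = []
    for x, y in product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1)):
        if all((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0
               for (x1, y1), (x2, y2) in edges):
            points.append((x, y))
    return points

def interior_points(vertices):
    """Lattice points strictly inside the polygon (strict cross-product test)."""
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    return [(x, y) for x, y in lattice_points(vertices)
            if all((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) > 0
                   for (x1, y1), (x2, y2) in edges)]

if __name__ == "__main__":
    # The triangle with vertices (1,0), (0,1), (-1,-1); the fan over its faces
    # is, if my toric geometry is right, the fan of the projective plane.
    triangle = [(1, 0), (0, 1), (-1, -1)]
    print(len(lattice_points(triangle)), "lattice points in total")  # 4
    print("interior points:", interior_points(triangle))             # [(0, 0)]
```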

You might even be moving science in a healthy direction, by taking some of the ideas that have grown under the “string theory” umbrella and making them less a matter for physicists, and more a concern for the people who like to map extraordinarily complex mathematical structures for their own sake — the people who, for example, stay up late thinking about the centralizers of the Monster group. That shift could be a mutually beneficial development.