Category Archives: String Theory

The Rise of Ironic Physics and/or Machine Physicists?

CONTENT ADVISORY: old-fashioned blog snarkery about broad trends in physics.

Over on his blog, Peter Woit quotes a scene from the imagination of John Horgan, whose The End of Science (1996) visualized physics falling into a twilight:

A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.

OK (*cracks knuckles*), a few points. First, “fretting about the meaning of quantum mechanics” has, historically, been damn important. A lot of quantum information theory came out of people doing exactly that, just with equations. The productive way of “fretting” involves plumbing the meaning of quantum mechanics by finding what new capabilities quantum mechanics can give you. Let’s take one of the least blue-sky applications of quantum information science: securing communications with quantum key distribution. Why trust the security of quantum key distribution? There’s a whole theory behind the idea, one which depends upon the quantum de Finetti theorem. Why is there a quantum de Finetti theorem in a form that physicists could understand and care about? Because Caves, Fuchs and Schack wanted to prove that the phrase “unknown quantum state” has a well-defined meaning for personalist Bayesians.

This example could be augmented with many others. (I selfishly picked one where I could cite my own collaborator.)

It’s illuminating to quote the passage from Horgan’s book just before the one that Woit did:

This is the fate of physics. The vast majority of physicists, those employed in industry and even academia, will continue to apply the knowledge they already have in hand—inventing more versatile lasers and superconductors and computing devices—without worrying about any underlying philosophical issues.

But there just isn’t a clean dividing line between “underlying philosophical issues” and “more versatile computing devices”! In fact, the foundational question of what “quantum states” really are overlaps with the question of which quantum computations can be emulated on a classical computer, and how some preparations are better resources for quantum computers than others. Flagrantly disregarding attempts to draw a boundary line between “foundations” and “applications” is my day job now, but quantum information was already getting going in earnest during the mid-1990s, so this isn’t a matter of hindsight. (Feynman wasn’t the first to talk about quantum computing, but he was certainly influential, and the motivations he spelled out were pretty explicitly foundational. Benioff, who preceded Feynman, was also interested in foundational matters, and even said as much while building quantum Hamiltonians for Turing machines.) And since Woit’s post was about judging whether a prediction held up or not, I feel pretty OK applying a present-day standard anyway.

In short: Meaning matters.

But then, Horgan’s book gets the Einstein–Podolsky–Rosen thought-experiment completely wrong, and I should know better than to engage with what any book like that says on the subject of what quantum mechanics might mean.

To be honest, Horgan is unfair to the Modern Language Association. Their convention program for January 2019 indicates a community that is actively engaged with the world, with sessions about the changing role of journalism, how the Internet has enabled a new kind of “public intellectual”, how to bring African-American literature into summer reading, the dynamics of organized fandoms, etc. In addition, they plainly advertise sessions as open to the public, something I can only barely imagine a physics conference doing as more than a nominal gesture. Their public sessions include a film screening of a documentary about the South African writer and activist Peter Abrahams, as well as workshops on practical skills like how to cite sources. That’s not just valuable training, but also a topic that is actively evolving: How do you cite a tweet, or an archived version of a Wikipedia page, or a post on a decentralized social network like Mastodon?

Dragging the sciences for supposedly resembling the humanities has not grown more endearing since 1996.
Continue reading The Rise of Ironic Physics and/or Machine Physicists?

Delayed Gratification

A post today by PZ Myers nicely expresses something which has been frustrating me about people who, in arguing over what can be a legitimate subject of “scientific” study, play the “untestable claim” card.

Their ideal is the experiment that, in one session, shoots down a claim cleanly and neatly. So let’s bring in dowsers who claim to be able to detect water flowing underground, set up control pipes and water-filled pipes, run them through their paces, and see if they meet reasonable statistical criteria. That’s science, it works, it effectively addresses an individual’s very specific claim, and I’m not saying that’s wrong; that’s a perfectly legitimate scientific experiment.
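(For concreteness — a minimal sketch of my own, with made-up numbers rather than data from any actual dowsing trial — the “reasonable statistical criteria” can be as plain as a binomial tail probability: count the hits and ask how often pure guessing would do at least that well.)

```python
# A sketch with hypothetical numbers: 20 runs, one water-filled pipe hidden
# among five, and a dowser who calls 9 of them correctly.
from math import comb

def binomial_tail(successes, trials, p_chance):
    """One-sided p-value: probability of at least this many hits by guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

n_runs = 20       # hypothetical number of runs
p_guess = 1 / 5   # one live pipe among five: the pure-guessing hit rate
hits = 9          # hypothetical number of correct calls

p_value = binomial_tail(hits, n_runs, p_guess)
print(f"P(at least {hits}/{n_runs} correct by guessing) = {p_value:.4f}")
# Agree on a cutoff beforehand (say 0.01) and you get the clean, one-session
# verdict described above.
```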

I’m saying that’s not the whole operating paradigm of all of science.

Plenty of scientific ideas are not immediately testable, or directly testable, or testable in isolation. For example: the planets in our solar system aren’t moving the way Newton’s laws say they should. Are Newton’s laws of gravity wrong, or are there other gravitational influences which satisfy the Newtonian equations but which we don’t know about? Once, it turned out to be the latter (the discovery of Neptune), and once, it turned out to be the former (the precession of Mercury’s orbit, which required Einstein’s general relativity to explain).

There are different mathematical formulations of the same subject which give the same predictions for the outcomes of experiments, but which suggest different new ideas for directions to explore. (E.g., Newtonian, Lagrangian and Hamiltonian mechanics; or density matrices and SIC-POVMs.) There are ideas which are proposed for good reason but hang around for decades awaiting a direct experimental test—perhaps one which could barely have been imagined when the idea first came up. Take directed percolation: a simple conceptual model for fluid flow through a randomized porous medium. It was first proposed in 1957. The mathematics necessary to treat it cleverly was invented (or, rather, adapted from a different area of physics) in the 1970s…and then forgotten…and then rediscovered by somebody else…connections with other subjects were made… Experiments were carried out on systems which almost behaved like the idealization, but always turned out to differ in some way… until 2007, when the behaviour was finally caught in the wild. And the experiment which finally observed a directed-percolation-class phase transition with quantitative exactness used a liquid crystal substance which wasn’t synthesized until 1969.
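(If you want to play with the idealization itself, rather than the decades-long hunt for it in the lab, directed percolation fits in a few lines of code. Here is a minimal sketch of my own — bond directed percolation on a square lattice in 1+1 dimensions, the textbook toy version, not the geometry of the liquid-crystal experiment.)

```python
# A minimal sketch, not code from any of the papers alluded to above:
# bond directed percolation on a (1+1)-dimensional square lattice. Each wet
# site tries to wet its down-left and down-right neighbours through bonds
# that are independently open with probability p. The threshold for this
# lattice is known to be p_c ~ 0.6447: below it the wet cluster dies out,
# above it the cluster can keep spreading indefinitely.
import random

def run_dp(p, width=401, steps=200, seed=1):
    random.seed(seed)
    wet = [False] * width
    wet[width // 2] = True              # a single wet seed site
    history = []
    for _ in range(steps):
        new = [False] * width
        for i, site_is_wet in enumerate(wet):
            if site_is_wet:
                if i > 0 and random.random() < p:            # down-left bond
                    new[i - 1] = True
                if i + 1 < width and random.random() < p:    # down-right bond
                    new[i + 1] = True
        wet = new
        history.append(sum(wet))
        if not any(wet):
            break
    return history

for p in (0.55, 0.6447, 0.75):
    h = run_dp(p)
    print(f"p = {p:.4f}: survived {len(h)} steps, {h[-1]} wet sites at the end")
```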

You don’t need to go dashing off to quantum gravity to find examples of ideas which are hard to test in the laboratory, or where mathematics long preceded experiment. (And if you do, don’t forget the other applications being developed for the mathematics invented in that search.) Just think very hard about the water dripping through coffee grounds to make your breakfast.

Of Predators and Pomerons

Consider the Lagrangian density

\[ \mathcal{L}(\tilde{\phi},\phi) = \tilde{\phi}\left(\partial_t + D_A(r_A - \nabla^2)\right)\phi - u\,\tilde{\phi}(\tilde{\phi} - \phi)\phi + \tau\,\tilde{\phi}^2\phi^2. \]

Particle physicists of the 1970s would recognize this as the Lagrangian for a Reggeon field theory with triple- and quadruple-Pomeron interaction vertices. In the modern literature on theoretical ecology, it encodes the behaviour of a spatially distributed predator-prey system near the predator extinction threshold.
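(A quick gloss, mine rather than anything quoted from either literature: ignore fluctuations by setting the response field [tex]\tilde{\phi}[/tex] to zero in the equation of motion, and the Lagrangian above boils down to the mean-field rate equation

\[ \partial_t \phi = D_A \nabla^2 \phi - D_A r_A \phi - u \phi^2 , \]

which has a nonzero steady state [tex]\phi^\ast = -D_A r_A / u[/tex] only when [tex]r_A < 0[/tex]. The “extinction threshold” is the line [tex]r_A = 0[/tex]; the remaining vertices are noise terms that shift and reshape that transition once fluctuations in low dimensions have their say.)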

Such is the perplexing unity of mathematical science: formula X appears in widely separated fields A and Z. Sometimes, this is a sign that a common effect is at work in the phenomena of A and those of Z; or, it could just mean that scientists couldn’t think of anything new and kept doing whatever worked the first time. Wisdom lies in knowing which is the case on any particular day.

[Reposted from the archives, in the light of John Baez’s recent writings.]

Know Thy Audience?

D. W. Logan et al. have an editorial in PLoS Computational Biology giving advice for scientists who want to become active Wikipedia contributors. I was one, for a couple years (cue the “I got better”); judging from my personal experience, most of their advice is pretty good, save for item four:

Wikipedia is not primarily aimed at experts; therefore, the level of technical detail in its articles must be balanced against the ability of non-experts to understand those details. When contributing scientific content, imagine you have been tasked with writing a comprehensive scientific review for a high school audience. It can be surprisingly challenging explaining complex ideas in an accessible, jargon-free manner. But it is worth the perseverance. You will reap the benefits when it comes to writing your next manuscript or teaching an undergraduate class.

Come again?

Whether Wikipedia as a whole is “primarily aimed at experts” or not is irrelevant for the scientist wishing to edit the article on a particular technical subject. Plenty of articles — e.g., Kerr/CFT correspondence or Zamolodchikov c-theorem — have vanishingly little relevance to a “high school audience.” Even advanced-placement high-school physics doesn’t introduce quantum field theory, let alone renormalization-group methods, centrally extended Virasoro algebras and the current frontiers of gauge/gravity duality research. Popularizing these topics may be possible, although even the basic ideas like critical points and universality have been surprisingly poorly served in that department so far. While it’s pretty darn evident for these examples, the same problem holds true more generally. If you do try to set about that task, the sheer amount of new invention necessary — the cooking-up of new analogies and metaphors, the construction of new simplifications and toy examples, etc. — will run you slap-bang into Wikipedia’s No Original Research policy.

Even reducing a topic from the graduate to the undergraduate level can be a highly nontrivial task. (I was a beta-tester for Zwiebach’s First Course in String Theory, so I would know.) And, writing for undergrads who already have Maxwell and Schrödinger Equations under their belts is not at all the same as writing for high-school juniors (or for your poor, long-suffering parents who’ve long since given up asking what you learned in school today). Why not try that sort of thing out on another platform first, like a personal blog, and then port it over to Wikipedia after receiving feedback? Citing your own work in the third person, or better yet recruiting other editors to help you adapt your content, is much more in accord with the letter and with the spirit of Wikipedia policy, than is inventing de novo great globs of pop science.

Popularization is hard. When you make a serious effort at it, let yourself get some credit.

Know Thy Audience, indeed: sometimes, your reader won’t be a high-school sophomore looking for homework help; far more often, it will be a fellow researcher checking to see where the minus signs go in a particular equation, or a graduate student looking to catch up on the historical highlights of their lab group’s research topic. Vulgarized vagueness helps the latter readers not at all, and gives the former only a gentle illusion of learning. Precalculus students would benefit more if we professional science people worked on making articles like Trigonometric functions truly excellent than if we puttered around making up borderline Original Research about our own abstruse pet projects.

ARTICLE COMMENTED UPON

  • Logan DW, Sandal M, Gardner PP, Manske M, Bateman A, 2010 Ten Simple Rules for Editing Wikipedia. PLoS Comput Biol 6(9): e1000941. doi:10.1371/journal.pcbi.1000941

Curses Curses Squared, Google Books Edition

And speaking of things which we couldn’t even have complained about a few short years ago, you know what bugs me? Reading through a chapter of something interesting — say, David Berenstein’s “Large-N field theories and geometry” — and having pages missing from the middle, at the whim of Google Books. It’s like a game to test one’s knowledge: if page 260 ends with the Polyakov action in the conformal gauge, and page 263 has what looks to be a Virasoro constraint in light-cone coordinates, what could have gone between? Of course, this doesn’t work so well if the missing page has something new you’d like to learn . . . .
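(Purely to illustrate the game, and emphatically not a reconstruction of the actual missing pages: the bookends would look something like the Polyakov action in conformal gauge and the constraints that generate the Virasoro algebra, up to conventions,

\[ S = \frac{1}{4\pi\alpha'} \int d\tau \, d\sigma \left( \dot{X}^2 - X'^2 \right), \qquad \left( \dot{X} \pm X' \right)^2 = 0 . \]

Whatever lives on the pages in between is left as an exercise for the reader without a library card.)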

Interlude, with Cat

Want to know why I never get anything done? It’s not just because I find myself volunteered to write a one-act musical entitled Harry Crocker and the Plot of Holes. It’s also because Sean Carroll linked to a whole bunch of physics blogs, mine included, thereby obligating me to read through all their archives. In the backblog of High Energy Mayhem, I found a pointer to a talk by Krishna Rajagopal (my professor for third-term quantum — small world) on applying gauge/gravity duality to strongly coupled liquids like RHIC’s quark-gluon soups and cold fermionic atoms tuned to a Feshbach resonance. It still counts as “work” if the videos I’m watching online are about science, right? Look, if you use the “Flash presentation” option, it plays the video in one box and shows the slides in another! (Seriously, that’s a simple idea, and a very cool one.)

Anyway, while I stuff my head with ideas I barely have the background to understand, and while I’m revising a paper so that it (superficially) meets PNAS standards, and while I try to re-learn the kinetic theory I forgot after that exam a few years back. . . Here’s a cat!

\"Extra credit\"? Professor Cat is amused.

(This one is for Zeno, and was recaptioned from here.)

Currently Reading

Random fun items currently floating up through the arXivotubes include the following. Exercise: find the shortest science-fiction story which can connect all these e-prints, visiting each node only once.

Robert H. Brandenberger, “String Gas Cosmology” (arXiv:0808.0746).

String gas cosmology is a string theory-based approach to early universe cosmology which is based on making use of robust features of string theory such as the existence of new states and new symmetries. A first goal of string gas cosmology is to understand how string theory can effect the earliest moments of cosmology before the effective field theory approach which underlies standard and inflationary cosmology becomes valid. String gas cosmology may also provide an alternative to the current standard paradigm of cosmology, the inflationary universe scenario. Here, the current status of string gas cosmology is reviewed.

Dimitri Skliros, Mark Hindmarsh, “Large Radius Hagedorn Regime in String Gas Cosmology” (arXiv:0712.1254, to be published in Phys. Rev. D).
Continue reading Currently Reading

String Theory and Atomic Physics

Physics, as Clifford Johnson recently reminded us, has a strongly pragmatic side to its personality: “If that ten dimensional scenario describes your four dimensional physics and helps you understand your experiments, and there’s no sign of something simpler that’s doing as good a job, what do you care?” As that “ten dimensional” bit might suggest, the particular subject in question involves string theory, and whether tools from that field can be applied in places where they were not originally expected to work. From one perspective, this is almost like payback time: the first investigations of string theory, back in the 1970s, were trying to understand nuclear physics, and only later were their results discovered to be useful in attacking the quantum gravity problem. Now that the mathematical results of quantum-gravity research have been turned around and applied to nuclear physics again, it’s like coming home — déjà vu, with a twist.

This is quintessential science history: tangled up, twisted around and downright weird. Naturally, I love it.

Shamit Kachru (Stanford University) has an article on this issue in the American Physical Society’s new online publication, called simply Physics, a journal intended to track trends and illustrate highlights of interdisciplinary research. Kachru’s essay, “Glimmers of a connection between string theory and atomic physics,” does not focus on the nuclear physics applications currently being investigated, but rather explores a more recent line of inquiry: the application of string theory to phase transitions in big aggregates of atoms. Screwing around with lithium atoms in a magnetic trap is, by most standards, considerably more convenient than building a giant particle accelerator, so if you can get your math to cough up predictions, you can test them with a tabletop experiment.

(Well, maybe you’ll need a large table.)

If you’ve grown used to hearing string theory advertised as a way to solve quantum gravity, this might sound like cheating. Justification-by-spinoff is always a risky approach. It’s as if NASA said, “We’re still stalled on that going-to-the-Moon business, but — hey — here’s TANG!” But, if your spinoff involves something like understanding high-temperature superconductivity, one might argue that a better analogy would be trying for the Moon and getting weather satellites and GPS along the way.

Moreover, one should not forget that without Tang, we could not have invented the Buzzed Aldrin.

Meanwhile, on the Intertubes

The evilutionary superscientist P-Zed has been trying to drive the riffraff away from his website by writing about biology. First we had “Epigenetics,” and now we’ve got “Snake segmentation.” Meanwhile, Clifford Johnson is telling us about “Atoms and Strings in the Laboratory” (with bonus musical accompaniment). Stick around for stupid questions from me in the comments!

(Everything I know is really just the sum total of answers I’ve received for stupid questions.)

The Necessity of Mathematics

Today, everything from international finance to teenage sexuality flows on a global computer network which depends upon semiconductor technology which, in turn, could not have been developed without knowledge of the quantum principles of solid-state physics. Today, we are damaging our environment in ways which require all our fortitude and ingenuity just to comprehend, let alone resolve. More and more people are becoming convinced that our civilization requires wisdom in order to survive, the sort of wisdom which can only come from scientific literacy; thus, an increasing number of observers are trying to figure out why science has been taught so poorly and how to fix that state of affairs. Charles Simonyi draws a distinction between those who merely “popularize” a science and those who promote the public understanding of it. We might more generously speak of bad popularizers and good ones, but the distinction between superficiality and depth is a real one, and we would do well to consider what criteria separate the two.

Opinions on how to communicate science are as diverse as the communicators. In this Network age, anyone with a Web browser and a little free time can join the conversation and become part of the problem — or part of the solution, if you take an optimistic view of these newfangled media. Certain themes recur, and tend to drive people into one or another loose camp of like-minded fellows: what do you do when scientific discoveries clash with someone’s religious beliefs? Why do news stories sensationalize or distort scientific findings, and what can we do about it? What can we do when the truth, as best we can discern it, is simply not politic?

Rather than trying to find a new and juicy angle on these oft-repeated questions, this essay will attempt to explore another direction, one which I believe has received insufficient attention. We might grandiosely call this a foray into the philosophy of science popularization. The topic I wish to explore is the role mathematics plays in understanding and doing science, and how we disable ourselves if our “explanations” of science do not include mathematics. The fact that too many people don’t know statistics has already been mourned, but the problem runs deeper than that. To make my point clear, I’d like to focus on a specific example, one drawn from classical physics. Once we’ve explored the idea in question, extensions to other fields of inquiry will be easier to make. To make life as easy as possible, we’re going to step back a few centuries and look at a development which occurred when the modern approach to natural science was in its infancy.

Our thesis will be the following: that if one does not understand or refuses to deal with mathematics, one has fatally impaired one’s ability to follow the physics, because not only are the ideas of the physics expressed in mathematical form, but also the relationships among those ideas are established with mathematical reasoning.

This is a strong assertion, and a rather pessimistic one, so we turn to a concrete example to investigate what it means. Our example comes from the study of planetary motion and begins with Kepler’s Three Laws.

KEPLER’S THREE LAWS

Johannes Kepler (1571–1630) discovered three rules which described the motions of the planets. He distilled them from years’ worth of data collected by his contemporary, the Danish astronomer Tycho Brahe (1546–1601). The story of their professional relationship is one of clashing personalities, set against a backdrop of aristocracy, ruin and war. From that drama, we boil away the biography and extract some items of geometry:
Continue reading The Necessity of Mathematics

What Can the LHC Tell Us?

What can the LHC tell us, and how long will we have to wait to find out?

Over at Symmetry Breaking, David Harris has a timeline for when the amount of data gathered at the LHC will be large enough to detect particular exciting bits of physics which we expect might be lurking in wait, at high-energy realms we can’t currently reach. (The figures come from Abe Seiden’s presentation at the April 2008 meeting of the American Physical Society.) Assuming the superconducting cables — all 7000 kilometers of them! — get chilled down to their operating temperatures by mid-June and particles start whirling around the ring on schedule after that, then we could hope to spot the Higgs boson as early as 2009.
Continue reading What Can the LHC Tell Us?

An Unusual Occurrence

So there I was, quietly standing in Lobby 10, queuing to buy myself and a few friends advance tickets to Neil Gaiman’s forthcoming speech at MIT, when a strange odor obtruded onto my awareness. “That’s odd,” thought I, “it smells like backstage at my high school’s auditorium. [snif snif] Or the bathroom at Quiz Bowl state finals. . . And it’s not even 4:20. Something very unusual is going on, here on this university campus.”

I became aware of a, well, perhaps a presence would be the best way to describe it: the sort of feeling which people report when their temporal and parietal lobes are stimulated by magnetic fields. Something tall and imposing was standing. . . just. . . over. . . my. . . right. . . shoulder! But when I turned to see, I saw nothing there.

Feeling a little perturbed, I bought my tickets and tried to shrug it off. Not wanting to deal with the wet and yucky weather currently sticking down upon Cambridge, I descended the nearest staircase and began to work my way eastward through MIT’s tunnel system, progressing through the “zeroth floors” of the classroom and laboratory buildings, heading for Kendall Square and the T station. Putting my unusual experience in the ticket queue out of my mind, I returned to contemplating the junction of physics and neuroscience:

“So, based on the power-law behavior of cortical avalanches, we’d guess that the cortex is positioned at a phase transition, a critical point between, well, let’s call them quiescence and epileptic madness. This would allow the cortex to sustain a wide variety of functional patterns. . . but at a critical point, the Wilson-Cowan equations should yield a conformal field theory in two spatial dimensions. . . .

“But if you reinterpret the classical partition function as a quantum path integral, a field theory in 2D becomes a quantum field theory in one spatial and one temporal dimension. And the central charge of the quantum conformal field theory is equal to the normalized entropy density. . . so we should be able to apply gauge/gravity duality and model the cortex as a black hole in anti-de Sitter spacetime —”

Suddenly, a tentacle wrapped around my chest, and constricted, and pulled, and lifted — not up, but in a direction I had never moved before. Like a square knocked out of Flatland, I had been displaced.
Continue reading An Unusual Occurrence

The Dark Universe

Digging through my drafts pile to find something to post that doesn’t require too much extra writing, I found that I hadn’t yet released this item into the tubes. After The Halting Oracle and The Leech Lattice comes the third volume in our saga of good fantasy-novel titles, Lambda and The Dark Universe.

A few weeks back, Edward Kolb gave a series of talks at CERN on dark matter and dark energy, and how they fit into the standard “ΛCDM” model of the Universe. The abstract is as follows:

According to the standard cosmological model, 95% of the present mass density of the universe is dark: roughly 70% of the total in the form of dark energy and 25% in the form of dark matter. In a series of four lectures, I will begin by presenting a brief review of cosmology, and then I will review the observational evidence for dark matter and dark energy. I will discuss some of the proposals for dark matter and dark energy, and connect them to high-energy physics. I will also present an overview of an observational program to quantify the properties of dark energy.

Kolb’s presentations are, I found, entertaining and informative. At least, I laughed at his jokes — take that as you will. Much of the technical content can also be found in written form in, e.g., Cliff Burgess’ “Lectures on Cosmic Inflation and its Potential Stringy Realizations” (2007).

(Tip o’ the fedora to Jester.)

UPDATE (5 March): the newest figures, from the five-year WMAP results, are that the Universe is 72.1% dark energy, 23.3% dark matter, and 4.62% — everything else.

Carnival of Mathematics #24

The twenty-fourth Carnival of Mathematics is online at Ars Mathematica. To Ars, 24 is a special number because the Leech lattice lives in 24 dimensions (and, really, what could be creepier than a lattice of leeches? — methinks that could be the title for the sequel to The Halting Oracle). It’s also interesting to the secretive, cultish cabal of quantum gravity research, since it’s 26 (the critical dimension for bosonic string theory) minus 2 (the dimensionality of the string world-sheet). These two occurrences are actually related, although there’s a reason the relationship falls in a domain called Moonshine theory. . . .

Quote of the Moment

Jacques Distler:

The quark gluon plasma studied at RHIC is the least viscous fluid known to man.

Just how non-viscous is it? Well, the folks in the STAR Collaboration tell us that for the quark-gluon plasma, the ratio of shear viscosity to entropy density, [tex]\eta / s[/tex], is more than one hundred times smaller than that of water.
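(A bit of context, my gloss rather than Distler’s: the yardstick behind comparisons like this is the Kovtun–Son–Starinets value computed via gauge/gravity duality, conjectured to be a lower bound on the ratio,

\[ \frac{\eta}{s} \geq \frac{\hbar}{4 \pi k_B} , \]

which strongly coupled theories with gravity duals saturate. Room-temperature water sits a few hundred times above that value; the quark-gluon plasma appears to sit within shouting distance of it.)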

Barton Zwiebach has some videos on this subject which may serve as a good introduction for those with a moderate physics background.