
Category Archives: Statistical mechanics

“You’ll get so preoccupied with equations that you forget to eat!” #BadWaysToPromoteScienceToYoungWomen
Read More »

A post today by PZ Myers nicely expresses something which has been frustrating me about people who, in arguing over what can be a legitimate subject of “scientific” study, play the “untestable claim” card.

Their ideal is the experiment that, in one session, shoots down a claim cleanly and neatly. So let’s bring in dowsers who claim to be able to detect water flowing underground, set up control pipes and water-filled pipes, run them through their paces, and see if they meet reasonable statistical criteria. That’s science, it works, it effectively addresses an individual’s very specific claim, and I’m not saying that’s wrong; that’s a perfectly legitimate scientific experiment.
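(To put a number on “reasonable statistical criteria”: here is a minimal sketch in Python, with wholly made-up trial counts, of the kind of test one would run; the question is just whether the dowser’s hit rate could plausibly be luck.)

```python
# A minimal sketch of a "reasonable statistical criterion" for the dowsing
# test described above. The trial counts here are made up for illustration;
# a real protocol would fix them, and the threshold, in advance.
from scipy.stats import binomtest

n_trials = 20    # pipes presented, half water-filled (hypothetical)
n_correct = 13   # dowser's correct calls (hypothetical)
chance = 0.5     # probability of a correct call by blind guessing

# One-sided test: is the hit rate significantly better than guessing?
result = binomtest(n_correct, n_trials, chance, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")  # about 0.13 here: consistent with luck
```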

I’m saying that’s not the whole operating paradigm of all of science.

Plenty of scientific ideas are not immediately testable, or directly testable, or testable in isolation. For example: the planets in our solar system aren’t moving the way Newton’s laws say they should. Are Newton’s laws of gravity wrong, or are there other gravitational influences which satisfy the Newtonian equations but which we don’t know about? Once, it turned out to be the latter (the discovery of Neptune), and once, it turned out to be the former (the precession of Mercury’s orbit, which required Einstein’s general relativity to explain).

There are different mathematical formulations of the same subject which give the same predictions for the outcomes of experiments, but which suggest different new ideas for directions to explore. (E.g., Newtonian, Lagrangian and Hamiltonian mechanics; or density matrices and SIC-POVMs.) There are ideas which are proposed for good reason but hang around for decades awaiting a direct experimental test—perhaps one which could barely have been imagined when the idea first came up. Take directed percolation: a simple conceptual model for fluid flow through a randomized porous medium. It was first proposed in 1957. The mathematics necessary to treat it cleverly was invented (or, rather, adapted from a different area of physics) in the 1970s…and then forgotten…and then rediscovered by somebody else…connections with other subjects were made… Experiments were carried out on systems which almost behaved like the idealization, but always turned out to differ in some way… until 2007, when the behaviour was finally caught in the wild. And the experiment which finally observed a directed-percolation-class phase transition with quantitative exactness used a liquid crystal substance which wasn’t synthesized until 1969.
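(If you would like to watch that kind of transition happen on your own computer, here is a toy sketch of 1+1-dimensional bond directed percolation. It is my own illustration, nothing to do with the liquid-crystal experiment, and the critical probability quoted in the comments is the standard numerical value for this lattice.)

```python
# Toy sketch of 1+1D bond directed percolation. Each active site tries to
# activate each of its two forward neighbours with probability p. Below the
# critical point (p_c ~ 0.6447 for this lattice), activity dies out; above
# it, a finite fraction of sites remains active.
import numpy as np

def dp_density(p, width=200, steps=400, seed=0):
    rng = np.random.default_rng(seed)
    active = np.ones(width, dtype=bool)            # start fully active
    for _ in range(steps):
        from_left = np.roll(active, 1) & (rng.random(width) < p)
        from_right = np.roll(active, -1) & (rng.random(width) < p)
        active = from_left | from_right            # periodic boundaries
    return active.mean()

for p in (0.55, 0.6447, 0.75):
    print(p, dp_density(p))   # final density of active sites
```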

You don’t need to go dashing off to quantum gravity to find examples of ideas which are hard to test in the laboratory, or where mathematics long preceded experiment. (And if you do, don’t forget the other applications being developed for the mathematics invented in that search.) Just think very hard about the water dripping through coffee grounds to make your breakfast.

T. Biancalani, D. Fanelli and F. Di Patti (2010), “Stochastic Turing patterns in the Brusselator model”, Physical Review E 81, 4: 046215, arXiv:0910.4984 [cond-mat.stat-mech].

Abstract:

A stochastic version of the Brusselator model is proposed and studied via the system size expansion. The mean-field equations are derived and shown to yield organized Turing patterns within a specific region of parameters. When determining the Turing condition for instability, we pay particular attention to the role of cross-diffusive terms, often neglected in the heuristic derivation of reaction-diffusion schemes. Stochastic fluctuations are shown to give rise to spatially ordered solutions, sharing the same quantitative characteristics as the mean-field Turing scenario in terms of excited wavelengths. Interestingly, the region of parameters yielding the stochastic self-organization is wider than that determined via the conventional Turing approach, suggesting that the condition for spatial order to appear can be less stringent than customarily believed.
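(For anyone who wants to poke at the deterministic skeleton underneath all this, here is a minimal sketch of the textbook Brusselator rate equations; the paper’s stochastic, spatial version is much richer, and the parameter values below are arbitrary.)

```python
# Minimal sketch of the deterministic Brusselator rate equations, the
# mean-field skeleton behind the stochastic model discussed above.
from scipy.integrate import solve_ivp

a, b = 1.0, 3.0   # when b > 1 + a**2, the fixed point loses stability (Hopf)

def brusselator(t, state):
    x, y = state
    return [a - (b + 1.0) * x + x**2 * y,   # dx/dt
            b * x - x**2 * y]               # dy/dt

sol = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0])
print(sol.y[:, -1])  # oscillates on a limit cycle rather than settling down
```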

See also the commentary by Mehran Kardar.

A. Franceschini et al. (2011), “Transverse Alignment of Fibers in a Periodically Sheared Suspension: An Absorbing Phase Transition with a Slowly Varying Control Parameter”, Physical Review Letters 107, 25: 250603. DOI: 10.1103/PhysRevLett.107.250603.

Abstract: Shearing solutions of fibers or polymers tends to align the fibers or polymers in the flow direction. Here, non-Brownian rods subjected to oscillatory shear align perpendicular to the flow while the system undergoes a nonequilibrium absorbing phase transition. The slow alignment of the fibers can drive the system through the critical point and thus promote the transition to an absorbing state. This picture is confirmed by a universal scaling relation that collapses the data with critical exponents that are consistent with conserved directed percolation.

Last October, a paper I co-authored hit the arXivotubes (1110.3845, to be specific). This was, on reflection, one of the better things which happened to me last October. (It was, as the song sez, a lonesome month in a rather immemorial year.) Since then, more relevant work from other people has appeared. I’m collecting pointers here, most of them to freely available articles.

I read this one a while ago in non-arXiv preprint form, but now it’s on the arXiv. M. Raghib et al. (2011), “A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics”, Journal of Mathematical Biology 62, 5: 605–53. arXiv:1202.6092 [q-bio].

Abstract: The pervasive presence of spatial and size structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based on mean densities (local or global) only. Individual-based models (IBMs) were introduced over the last decade in an attempt to overcome this limitation by following explicitly each individual in the population. Although the IBM approach has been quite insightful, the capability to follow each individual usually comes at the expense of analytical tractability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tractability and a higher degree of biological realism. Continuum approximations of these stochastic processes distill their fundamental properties, but they often result in infinite hierarchies of moment equations. We use the principle of constrained maximum entropy to derive a closure relationship for one such hierarchy truncated at second order, using normalization and the product densities of first and second orders as constraints. The resulting ‘maxent’ closure is similar to the Kirkwood superposition approximation, but it is complemented with previously unknown correction terms that depend on the area for which third-order correlations are irreducible. This region also serves as a validation check, since it can only be found if the assumptions of the closure are met. Comparisons between simulations of the point process, alternative heuristic closures, and the maxent closure show significant improvements in the ability of the maxent closure to predict equilibrium values for mildly aggregated spatial patterns.

Read More »

In the appendix to a paper I am currently co-authoring, I recently wrote the following within a parenthetical excursus:

When talking of dynamical systems, our probability assignments really carry two time indices: one for the time our betting odds are chosen, and the other for the time the bet concerns.

A parenthesis in an appendix is already a pretty superfluous thing. Treating this as the jumping-off point for further discussion merits the degree of obscurity which only a lengthy post on a low-traffic blog can afford.

Read More »

The question came up while discussing the grand canonical ensemble the other day of just where the word fugacity came from. Having a couple people in the room who received the “benefits of a classical education” (Gruber 1988), we guessed that the root was the Latin fugere, “to flee” — the same verb which appears in the saying tempus fugit. Turns out, the Oxford English Dictionary sides with us, stating that fugacity was formed from fugacious plus the common +ty suffix, and that fugacious (meaning “apt to flee away”) goes back to the Latin root we’d guessed.

Gilbert N. Lewis appears to have introduced the word in “The Law of Physico-Chemical Change”, which appeared in the Proceedings of the American Academy of Arts and Sciences 37 (received 6 April 1901).
Read More »

On occasion, somebody voices the idea that in year [tex]N[/tex], physicists thought they had everything basically figured out, and that all they had to do was compute more decimal digits. I won’t pretend to know whether this is actually true for any values of [tex]N[/tex] — when did one old man’s grumpiness become the definitive statement about a scientific age? — but it’s interesting that not every physicist with an interest in history has supported the claim.

One classic illustration of how the old guys with the beards knew their understanding of physics was incomplete involves the specific heats of gases. How much does a gas warm up when a given amount of energy is poured into it? The physics of the 1890s was unable to resolve this problem. The solution, achieved in the next century, required quantum mechanics, but the problem was far from unknown in the years before 1900. Quoting Richard Feynman’s Lectures on Physics (1964), volume 1, chapter 40, with hyperlinks added by me:
Read More »

In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy has largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model.
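(For the curious: that model is built on the Hopf normal form with noise added. Here is a minimal sketch in the same spirit, written for this post rather than copied from the Azimuth Project page, with arbitrary parameter values.)

```python
# Minimal sketch of a stochastic Hopf bifurcation model: the Hopf normal
# form dz = ((mu + i*omega) z - |z|^2 z) dt + sigma dW, integrated with the
# Euler-Maruyama method. Not the Azimuth code itself; parameters arbitrary.
import numpy as np

def stochastic_hopf(mu=0.1, omega=1.0, sigma=0.05, dt=0.01, steps=10_000, seed=1):
    rng = np.random.default_rng(seed)
    z = 0.01 + 0.0j
    trajectory = np.empty(steps, dtype=complex)
    for i in range(steps):
        drift = (mu + 1j * omega) * z - abs(z) ** 2 * z
        noise = sigma * np.sqrt(dt) * (rng.normal() + 1j * rng.normal())
        z = z + drift * dt + noise
        trajectory[i] = z
    return trajectory

traj = stochastic_hopf()
print(abs(traj[-1000:]).mean())  # hovers near sqrt(mu) above the bifurcation
```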

I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books.

D. W. Logan et al. have an editorial in PLoS Computational Biology giving advice for scientists who want to become active Wikipedia contributors. I was one, for a couple years (cue the “I got better”); judging from my personal experience, most of their advice is pretty good, save for item four:

Wikipedia is not primarily aimed at experts; therefore, the level of technical detail in its articles must be balanced against the ability of non-experts to understand those details. When contributing scientific content, imagine you have been tasked with writing a comprehensive scientific review for a high school audience. It can be surprisingly challenging explaining complex ideas in an accessible, jargon-free manner. But it is worth the perseverance. You will reap the benefits when it comes to writing your next manuscript or teaching an undergraduate class.

Come again?

Whether Wikipedia as a whole is “primarily aimed at experts” or not is irrelevant for the scientist wishing to edit the article on a particular technical subject. Plenty of articles — e.g., Kerr/CFT correspondence or Zamolodchikov c-theorem — have vanishingly little relevance to a “high school audience.” Even advanced-placement high-school physics doesn’t introduce quantum field theory, let alone renormalization-group methods, centrally extended Virasoro algebras and the current frontiers of gauge/gravity duality research. Popularizing these topics may be possible, although even the basic ideas like critical points and universality have been surprisingly poorly served in that department so far. While it’s pretty darn evident for these examples, the same problem holds true more generally. If you do try to set about that task, the sheer amount of new invention necessary — the cooking-up of new analogies and metaphors, the construction of new simplifications and toy examples, etc. — will run you slap-bang into Wikipedia’s No Original Research policy.

Even reducing a topic from the graduate to the undergraduate level can be a highly nontrivial task. (I was a beta-tester for Zwiebach’s First Course in String Theory, so I would know.) And, writing for undergrads who already have Maxwell and Schrödinger Equations under their belts is not at all the same as writing for high-school juniors (or for your poor, long-suffering parents who’ve long since given up asking what you learned in school today). Why not try that sort of thing out on another platform first, like a personal blog, and then port it over to Wikipedia after receiving feedback? Citing your own work in the third person, or better yet recruiting other editors to help you adapt your content, is much more in accord with the letter and with the spirit of Wikipedia policy, than is inventing de novo great globs of pop science.

Popularization is hard. When you make a serious effort at it, let yourself get some credit.

Know Thy Audience, indeed: often, your reader won’t be a high-school sophomore looking for homework help; they’re far more likely to be a fellow researcher checking to see where the minus signs go in a particular equation, or a graduate student looking to catch up on the historical highlights of their lab group’s research topic. Vulgarized vagueness helps those readers not at all, and gives the sophomore only a gentle illusion of learning. Precalculus students would benefit more if we professional science people worked on making articles like Trigonometric functions truly excellent than if we puttered around making up borderline Original Research about our own abstruse pet projects.

ARTICLE COMMENTED UPON

  • Logan DW, Sandal M, Gardner PP, Manske M, Bateman A (2010), “Ten Simple Rules for Editing Wikipedia”, PLoS Comput Biol 6(9): e1000941. doi:10.1371/journal.pcbi.1000941

By Gad, the future is an amazing place to live.

Where else could you buy this?

Self-Organized Criticality: Now on a Mug!

Or this?

The Zachary Karate Club network: If your method doesn’t work on this, then go home.

(Via Clauset and Shalizi, naturally.)

I have a confession to make: Once, when I had to give a talk on network theory to a seminar full of management people, I wrote a genetic algorithm to optimize the Newman-Girvan Q index and divide the Zachary Karate Club network into modules before their very eyes. I made Movie Science happen in the real world; peccavi.
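(For anyone who wants to re-stage the stunt, here is a toy reconstruction, not my original talk code: a bare-bones genetic algorithm maximizing the Newman-Girvan modularity Q over two-module splits of the karate club.)

```python
# Toy genetic algorithm for two-module splits of the Zachary Karate Club,
# with Newman-Girvan modularity Q as the fitness. A reconstruction of the
# stunt described above, not the original code.
import random
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()
nodes = list(G.nodes())

def fitness(genome):
    groups = [{n for n, g in zip(nodes, genome) if g == k} for k in (0, 1)]
    if not all(groups):
        return -1.0                                  # penalize empty modules
    return modularity(G, groups)

def evolve(pop_size=60, generations=200, mutation_rate=0.03):
    pop = [[random.randint(0, 1) for _ in nodes] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            pa, pb = random.sample(survivors, 2)
            cut = random.randrange(1, len(nodes))
            child = pa[:cut] + pb[cut:]              # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                 # bit-flip mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)

genome, q = evolve()
print(f"best two-module Q = {q:.3f}")  # the famous split scores about 0.37
```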

By the way, what I have just outlined is what I call a “physicist’s history of physics,” which is never correct. What I am telling you is a sort of conventionalized myth-story that the physicists tell to their students, and those students tell to their students, and is not necessarily related to the actual historical development, which I do not really know!

Richard Feynman

Back when Brian Switek was a college student, he took on the unenviable task of pointing out when his professors were indulging in “scientist’s history of science”: attributing discoveries to the wrong person, oversimplifying the development of an idea, retelling anecdotes which are more amusing than true, and generally chewing on the textbook cardboard. The typical response? “That’s interesting, but I’m still right.”

Now, he’s a palaeontology person, and I’m a physics boffin, so you’d think I could get away with pretending that we don’t have that problem in this Department, but I started this note by quoting Feynman’s QED: The Strange Theory of Light and Matter (1986), so that’s not really a pretence worth keeping up. When it comes to formal education, I only have systematic experience with one field; oh, I took classes in pure mathematics and neuroscience and environmental politics and literature and film studies, but I won’t presume to speak in depth about how those subjects are taught.

So, with all those caveats stated, I can at least sketch what I suspect to be a contributing factor (which other sciences might encounter to a lesser extent or in a different way).

Suppose I want to teach a classful of college sophomores the fundamentals of quantum mechanics. There’s a standard “physicist’s history” which goes along with this, which touches on a familiar litany of famous names: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Werner Heisenberg, Erwin Schrödinger. We like to go back to the early days and follow the development forward, because the science was simpler when it got started, right?

The problem is that all of these men were highly trained, professional physicists who were thoroughly conversant with the knowledge of their time — well, naturally! But this means that any one of them knew more classical physics than a modern college sophomore. They would have known Hamiltonian and Lagrangian mechanics, for example, in addition to techniques of statistical physics (calculating entropy and such). Unless you know what they knew, you can’t really follow their thought processes, and we don’t teach big chunks of what they knew until after we’ve tried to teach what they figured out! For example, if you don’t know thermodynamics and statistical mechanics pretty well, you won’t be able to follow why Max Planck proposed the blackbody radiation law he did, which was a key step in the development of quantum theory.
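(For reference, the law in question is [tex]u(\nu, T) = \frac{8 \pi h \nu^3}{c^3} \frac{1}{e^{h\nu / k_B T} - 1}[/tex], the energy density per unit frequency of radiation in thermal equilibrium; extracting its low- and high-frequency limits is an exercise in exactly the statistical mechanics our sophomore hasn’t had yet.)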

Consequently, any “historical” treatment at the introductory level will probably end up “conventionalized.” One has to step extremely carefully! Strip the history down to the point that students just starting to learn the science can follow it, and you might not be portraying the way the people actually did their work. That’s not so bad, as far as learning the facts and formulæ is concerned, but you open yourself up to all sorts of troubles when you get to talking about the process of science. Are we doing physics differently than folks did N or 2N years ago? If we are, or if we aren’t, is that a problem? Well, we sure aren’t doing it like they did in chapter 1 of this textbook here. . . .

I noticed this one when it first hit the arXivotubes a while back; now that it’s been officially published, it caught my eye again.

G. Rozhnova and A. Nunes (2010), “Population dynamics on random networks: simulations and analytical models”, Eur. Phys. J. B 74, 2: 235–42. arXiv:0907.0335.

Abstract: We study the phase diagram of the standard pair approximation equations for two different models in population dynamics, the susceptible-infective-recovered-susceptible model of infection spread and a predator-prey interaction model, on a network of homogeneous degree [tex]k[/tex]. These models have similar phase diagrams and represent two classes of systems for which noisy oscillations, still largely unexplained, are observed in nature. We show that for a certain range of the parameter [tex]k[/tex] both models exhibit an oscillatory phase in a region of parameter space that corresponds to weak driving. This oscillatory phase, however, disappears when [tex]k[/tex] is large. For [tex]k=3, 4[/tex], we compare the phase diagram of the standard pair approximation equations of both models with the results of simulations on regular random graphs of the same degree. We show that for parameter values in the oscillatory phase, and even for large system sizes, the simulations either die out or exhibit damped oscillations, depending on the initial conditions. We discuss this failure of the standard pair approximation model to capture even the qualitative behavior of the simulations on large regular random graphs and the relevance of the oscillatory phase in the pair approximation diagrams to explain the cycling behavior found in real populations.
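(A crude way to see the simulation side of this for oneself: below is a toy discrete-time SIRS model on a random regular graph. It is my own sketch with arbitrary per-step probabilities, not the authors’ continuous-time setup or their pair-approximation machinery.)

```python
# Toy discrete-time SIRS dynamics on a random regular graph of degree k.
# My own sketch with arbitrary per-step probabilities; not the authors'
# continuous-time model or pair approximation.
import numpy as np
import networkx as nx

def sirs_step(G, state, rng, beta=0.2, gamma=0.1, rho=0.05):
    new = dict(state)
    for node in G:
        if state[node] == "S":
            infected = sum(state[m] == "I" for m in G[node])
            if rng.random() < 1 - (1 - beta) ** infected:
                new[node] = "I"                  # infection by neighbours
        elif state[node] == "I" and rng.random() < gamma:
            new[node] = "R"                      # recovery
        elif state[node] == "R" and rng.random() < rho:
            new[node] = "S"                      # loss of immunity
    return new

rng = np.random.default_rng(2)
G = nx.random_regular_graph(4, 1000, seed=2)     # homogeneous degree k = 4
state = {n: ("I" if rng.random() < 0.05 else "S") for n in G}
for t in range(200):
    state = sirs_step(G, state, rng)
print({s: sum(v == s for v in state.values()) for s in "SIR"})
```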

Want to know why I never get anything done? It’s not just because I find myself volunteered to write a one-act musical entitled Harry Crocker and the Plot of Holes. It’s also because Sean Carroll linked to a whole bunch of physics blogs, mine included, thereby obligating me to read through all their archives, and in the backblog of High Energy Mayhem I found a pointer to a talk by Krishna Rajagopal (my professor for third-term quantum — small world) on applying gauge/gravity duality to strongly coupled liquids like RHIC’s quark-gluon soups and cold fermionic atoms tuned to a Feshbach resonance. It still counts as “work” if the videos I’m watching online are about science, right? Look, if you use the “Flash presentation” option, it plays the video in one box and shows the slides in another! (Seriously, that’s a simple idea which is a very cool thing.)

Anyway, while I stuff my head with ideas I barely have the background to understand, and while I’m revising a paper so that it (superficially) meets PNAS standards, and while I try to re-learn the kinetic theory I forgot after that exam a few years back. . . Here’s a cat!

\"Extra credit\"? Professor Cat is amused.

(This one is for Zeno, and was recaptioned from here.)

Physics, as Clifford Johnson recently reminded us, has a strongly pragmatic side to its personality: “If that ten dimensional scenario describes your four dimensional physics and helps you understand your experiments, and there’s no sign of something simpler that’s doing as good a job, what do you care?” As that “ten dimensional” bit might suggest, the particular subject in question involves string theory, and whether tools from that field can be applied in places where they were not originally expected to work. From one perspective, this is almost like payback time: the first investigations of string theory, back in the 1970s, were trying to understand nuclear physics, and only later were their results discovered to be useful in attacking the quantum gravity problem. Now that the mathematical results of quantum-gravity research have been turned around and applied to nuclear physics again, it’s like coming home — déjà vu, with a twist.

This is quintessential science history: tangled up, twisted around and downright weird. Naturally, I love it.

Shamit Kachru (Stanford University) has an article on this issue in the American Physical Society’s new online publication, called simply Physics, a journal intended to track trends and illustrate highlights of interdisciplinary research. Kachru’s essay, “Glimmers of a connection between string theory and atomic physics,” does not focus on the nuclear physics applications currently being investigated, but rather explores a more recent line of inquiry: the application of string theory to phase transitions in big aggregates of atoms. Screwing around with lithium atoms in a magnetic trap is, by most standards, considerably more convenient than building a giant particle accelerator, so if you can get your math to cough up predictions, you can test them with a tabletop experiment.

(Well, maybe you’ll need a large table.)

If you’ve grown used to hearing string theory advertised as a way to solve quantum gravity, this might sound like cheating. Justification-by-spinoff is always a risky approach. It’s as if NASA said, “We’re still stalled on that going-to-the-Moon business, but — hey — here’s TANG!” But, if your spinoff involves something like understanding high-temperature superconductivity, one might argue that a better analogy would be trying for the Moon and getting weather satellites and GPS along the way.

Moreover, one should not forget that without Tang, we could not have invented the Buzzed Aldrin.

The evilutionary superscientist P-Zed has been trying to drive the riffraff away from his website by writing about biology. First we had “Epigenetics,” and now we’ve got “Snake segmentation.” Meanwhile, Clifford Johnson is telling us about “Atoms and Strings in the Laboratory” (with bonus musical accompaniment). Stick around for stupid questions from me in the comments!

(Everything I know is really just the sum total of answers I’ve received for stupid questions.)