# Less Heteronormative Homework

A few weeks ago, I found an old physics book on a colleague’s “miscellaneous” shelf: University of Chicago Graduate Problems in Physics, by Cronin, Greenberg and Telegdi (Addison-Wesley, 1967). It looked like fun, so I started working through some of it.

Physics problems age irregularly. Topics fall out of vogue as the frontier of knowledge moves on, and sometimes, the cultural milieu of the time when the problem was written pokes through. Take the first problem in the “statistical physics” chapter. It begins, “A young man, who lives at location $A$ of the city street plan shown in the figure, walks daily to the home of his fiancee…”

No, no, no, that just won’t do any more. Let us set up the problem properly:

Asami is meeting Korra for lunch downtown. Korra is $E$ blocks east and $N$ blocks north of Asami, on the rectangular street grid of downtown Republic City. Because Asami is eager to meet Korra, her path never doubles back. That is, each move Asami takes must bring her closer to Korra on the street grid. How many different routes can Asami take to meet Korra?

Solution below the fold.
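In the meantime, the small cases are easy to check by brute force. Here is a quick sketch (my own, not from the textbook) that counts the monotone paths directly and spot-checks them against a binomial coefficient:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def routes(e, n):
    """Count monotone lattice paths from (0,0) to (e, n):
    every step goes one block east or one block north."""
    if e == 0 or n == 0:
        return 1  # only one route: a straight line
    return routes(e - 1, n) + routes(e, n - 1)

# Spot-check against C(E+N, E), which counts the ways to place
# the E eastward moves among the E+N total moves.
for E, N in [(1, 1), (3, 2), (5, 5)]:
    assert routes(E, N) == math.comb(E + N, E)

print(routes(5, 5))  # → 252
```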

# Multiscale Structure via Information Theory

We have scienced:

B. Allen, B. C. Stacey and Y. Bar-Yam, “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708].

We develop a general formalism for representing and understanding structure in complex systems. In our view, structure is the totality of relationships among a system’s components, and these relationships can be quantified using information theory. In the interest of flexibility we allow information to be quantified using any function, including Shannon entropy and Kolmogorov complexity, that satisfies certain fundamental axioms. Using these axioms, we formalize the notion of a dependency among components, and show how a system’s structure is revealed in the amount of information assigned to each dependency. We explore quantitative indices that summarize system structure, providing a new formal basis for the complexity profile and introducing a new index, the “marginal utility of information”. Using simple examples, we show how these indices capture intuitive ideas about structure in a quantitative way. Our formalism also sheds light on a longstanding mystery: that the mutual information of three or more variables can be negative. We discuss applications to complex networks, gene regulation, the kinetic theory of fluids and multiscale cybernetic thermodynamics.
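On that last mystery, the canonical example fits in a few lines: take two independent fair bits and their XOR, and the three-variable mutual information comes out negative. A quick numerical check (my own sketch of the standard inclusion-exclusion form, not code from the paper):

```python
import itertools, math

# Joint distribution of (X, Y, Z) with X, Y fair independent bits
# and Z = X XOR Y; each of the four (x, y) pairs has probability 1/4.
joint = {(x, y, x ^ y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}

def entropy(indices):
    """Shannon entropy (in bits) of the marginal on the given coordinates."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in indices)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

# Inclusion-exclusion form of the tri-variate mutual information:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z)
I3 = (entropy((0,)) + entropy((1,)) + entropy((2,))
      - entropy((0, 1)) - entropy((0, 2)) - entropy((1, 2))
      + entropy((0, 1, 2)))
print(I3)  # → -1.0
```

Any two of the variables are independent, yet any two determine the third; the negative sign is how the formalism registers that purely three-way dependency.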

There’s much more to do, but for the moment, let this indicate my mood:

“You’ll get so preoccupied with equations that you forget to eat!” #BadWaysToPromoteScienceToYoungWomen

# Delayed Gratification

A post today by PZ Myers nicely expresses something which has been frustrating me about people who, in arguing over what can be a legitimate subject of “scientific” study, play the “untestable claim” card.

Their ideal is the experiment that, in one session, shoots down a claim cleanly and neatly. So let’s bring in dowsers who claim to be able to detect water flowing underground, set up control pipes and water-filled pipes, run them through their paces, and see if they meet reasonable statistical criteria. That’s science, it works, it effectively addresses an individual’s very specific claim, and I’m not saying that’s wrong; that’s a perfectly legitimate scientific experiment.

I’m saying that’s not the whole operating paradigm of all of science.

Plenty of scientific ideas are not immediately testable, or directly testable, or testable in isolation. For example: the planets in our solar system aren’t moving the way Newton’s laws say they should. Are Newton’s laws of gravity wrong, or are there other gravitational influences which satisfy the Newtonian equations but which we don’t know about? Once, it turned out to be the latter (the discovery of Neptune), and once, it turned out to be the former (the precession of Mercury’s orbit, which required Einstein’s general relativity to explain).

There are different mathematical formulations of the same subject which give the same predictions for the outcomes of experiments, but which suggest different new ideas for directions to explore. (E.g., Newtonian, Lagrangian and Hamiltonian mechanics; or density matrices and SIC-POVMs.) There are ideas which are proposed for good reason but hang around for decades awaiting a direct experimental test—perhaps one which could barely have been imagined when the idea first came up. Take directed percolation: a simple conceptual model for fluid flow through a randomized porous medium. It was first proposed in 1957. The mathematics necessary to treat it cleverly was invented (or, rather, adapted from a different area of physics) in the 1970s…and then forgotten…and then rediscovered by somebody else…connections with other subjects were made… Experiments were carried out on systems which almost behaved like the idealization, but always turned out to differ in some way… until 2007, when the behaviour was finally caught in the wild. And the experiment which finally observed a directed-percolation-class phase transition with quantitative exactness used a liquid crystal substance which wasn’t synthesized until 1969.
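The directed-percolation idealization itself takes only a few lines to simulate. A toy sketch (my own construction: bond DP started from a single wet site, with the critical point for this geometry at roughly p_c ≈ 0.6447; the particular p values below are illustrative):

```python
import random

def dp_survival(p, width=100, steps=100, trials=20, seed=1):
    """Fraction of runs in which bond directed percolation, started
    from a single wet site, remains active after `steps` time slices
    on a periodic row of `width` sites."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        active = {width // 2}
        for _ in range(steps):
            nxt = set()
            for s in active:
                # each wet site feeds its two downstream neighbors
                # through independently open bonds
                if rng.random() < p:
                    nxt.add(s)
                if rng.random() < p:
                    nxt.add((s + 1) % width)
            active = nxt
            if not active:
                break
        if active:
            survived += 1
    return survived / trials

# Well below the critical point, activity dies out;
# well above it, it almost always survives.
print(dp_survival(0.3), dp_survival(0.9))
```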

You don’t need to go dashing off to quantum gravity to find examples of ideas which are hard to test in the laboratory, or where mathematics long preceded experiment. (And if you do, don’t forget the other applications being developed for the mathematics invented in that search.) Just think very hard about the water dripping through coffee grounds to make your breakfast.

T. Biancalani, D. Fanelli and F. Di Patti (2010), “Stochastic Turing patterns in the Brusselator model”, Physical Review E 81, 4: 046215, arXiv:0910.4984 [cond-mat.stat-mech].

Abstract:

A stochastic version of the Brusselator model is proposed and studied via the system size expansion. The mean-field equations are derived and shown to yield organized Turing patterns within a specific region of parameters. When determining the Turing condition for instability, we pay particular attention to the role of cross-diffusive terms, often neglected in the heuristic derivation of reaction-diffusion schemes. Stochastic fluctuations are shown to give rise to spatially ordered solutions, sharing the same quantitative characteristics as the mean-field Turing scenario in terms of excited wavelengths. Interestingly, the region of parameters yielding the stochastic self-organization is wider than that determined via the conventional Turing approach, suggesting that the condition for spatial order to appear can be less stringent than customarily believed.

A. Franceschini et al. (2011), “Transverse Alignment of Fibers in a Periodically Sheared Suspension: An Absorbing Phase Transition with a Slowly Varying Control Parameter” Physical Review Letters 107, 25: 250603. DOI: 10.1103/PhysRevLett.107.250603.

Abstract: Shearing solutions of fibers or polymers tends to align the fibers or polymers in the flow direction. Here, non-Brownian rods subjected to oscillatory shear align perpendicular to the flow while the system undergoes a nonequilibrium absorbing phase transition. The slow alignment of the fibers can drive the system through the critical point and thus promote the transition to an absorbing state. This picture is confirmed by a universal scaling relation that collapses the data with critical exponents that are consistent with conserved directed percolation.

Last October, a paper I co-authored hit the arXivotubes (1110.3845, to be specific). This was, on reflection, one of the better things which happened to me last October. (It was, as the song sez, a lonesome month in a rather immemorial year.) Since then, more relevant work from other people has appeared. I’m collecting pointers here, most of them to freely available articles.

I read this one a while ago in non-arXiv preprint form, but now it’s on the arXiv. M. Raghib et al. (2011), “A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics”, Journal of Mathematical Biology 62, 5: 605–53. arXiv:1202.6092 [q-bio].

Abstract: The pervasive presence of spatial and size structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based on mean densities (local or global) only. Individual-based models (IBMs) were introduced over the last decade in an attempt to overcome this limitation by following explicitly each individual in the population. Although the IBM approach has been quite insightful, the capability to follow each individual usually comes at the expense of analytical tractability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tractability and a higher degree of biological realism. Continuum approximations of these stochastic processes distill their fundamental properties, but they often result in infinite hierarchies of moment equations. We use the principle of constrained maximum entropy to derive a closure relationship for one such hierarchy truncated at second order, using normalization and the product densities of first and second orders as constraints. The resulting ‘maxent’ closure is similar to the Kirkwood superposition approximation, but it is complemented with previously unknown correction terms that depend on the area for which third order correlations are irreducible. This region also serves as a validation check, since it can only be found if the assumptions of the closure are met. Comparisons between simulations of the point process, alternative heuristic closures, and the maxent closure show significant improvements in the ability of the maxent closure to predict equilibrium values for mildly aggregated spatial patterns.

# Of Two Time Indices

In the appendix to a paper I am currently co-authoring, I recently wrote the following within a parenthetical excursus:

When talking of dynamical systems, our probability assignments really carry two time indices: one for the time our betting odds are chosen, and the other for the time the bet concerns.

A parenthesis in an appendix is already a pretty superfluous thing. Treating this as the jumping-off point for further discussion merits the degree of obscurity which only a lengthy post on a low-traffic blog can afford.

# Fugacity

The question came up while discussing the grand canonical ensemble the other day of just where the word fugacity came from. Having a couple people in the room who received the “benefits of a classical education” (Gruber 1988), we guessed that the root was the Latin fugere, “to flee” — the same verb which appears in the saying tempus fugit. Turns out, the Oxford English Dictionary sides with us, stating that fugacity was formed from fugacious plus the common +ty suffix, and that fugacious (meaning “apt to flee away”) goes back to the Latin root we’d guessed.

Gilbert N. Lewis appears to have introduced the word in “The Law of Physico-Chemical Change”, which appeared in the Proceedings of the American Academy of Arts and Sciences 37 (received 6 April 1901).

# “More Decimal Digits”

On occasion, somebody voices the idea that in year $$N$$, physicists thought they had everything basically figured out, and that all they had to do was compute more decimal digits. I won’t pretend to know whether this is actually true for any values of $$N$$ — when did one old man’s grumpiness become the definitive statement about a scientific age? — but it’s interesting that not every physicist with an interest in history has supported the claim.

One classic illustration of how the old guys with the beards knew their understanding of physics was incomplete involves the specific heats of gases. How much does a gas warm up when a given amount of energy is poured into it? The physics of the 1890s was unable to resolve this problem. The solution, achieved in the next century, required quantum mechanics, but the problem was far from unknown in the years before 1900. Quoting Richard Feynman’s Lectures on Physics (1964), volume 1, chapter 40, with hyperlinks added by me:

In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy has largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model.
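That Hopf model is a nice excuse for a few lines of code. The following is a sketch in the same spirit, not the actual Azimuth Project code: Euler–Maruyama integration of the Hopf normal form with additive noise, with parameter names and values of my own choosing.

```python
import math, random

def stochastic_hopf(mu, omega=1.0, sigma=0.05, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama integration of the noisy Hopf normal form:
        dx = (mu*x - omega*y - (x^2+y^2)*x) dt + sigma dW1
        dy = (omega*x + mu*y - (x^2+y^2)*y) dt + sigma dW2
    Returns the time-averaged radius over the second half of the run."""
    rng = random.Random(seed)
    x, y = 0.1, 0.0
    radii = []
    sqdt = math.sqrt(dt)
    for step in range(steps):
        r2 = x * x + y * y
        dx = (mu * x - omega * y - r2 * x) * dt + sigma * sqdt * rng.gauss(0, 1)
        dy = (omega * x + mu * y - r2 * y) * dt + sigma * sqdt * rng.gauss(0, 1)
        x, y = x + dx, y + dy
        if step >= steps // 2:
            radii.append(math.sqrt(x * x + y * y))
    return sum(radii) / len(radii)

# Below the bifurcation (mu < 0) the noise jitters around the origin;
# above it (mu > 0) the trajectory settles near the limit cycle of
# radius sqrt(mu).
print(stochastic_hopf(-0.5), stochastic_hopf(0.5))
```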

I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books.

# Know Thy Audience?

D. W. Logan et al. have an editorial in PLoS Computational Biology giving advice for scientists who want to become active Wikipedia contributors. I was one, for a couple years (cue the “I got better”); judging from my personal experience, most of their advice is pretty good, save for item four:

Wikipedia is not primarily aimed at experts; therefore, the level of technical detail in its articles must be balanced against the ability of non-experts to understand those details. When contributing scientific content, imagine you have been tasked with writing a comprehensive scientific review for a high school audience. It can be surprisingly challenging explaining complex ideas in an accessible, jargon-free manner. But it is worth the perseverance. You will reap the benefits when it comes to writing your next manuscript or teaching an undergraduate class.

Come again?

Whether Wikipedia as a whole is “primarily aimed at experts” or not is irrelevant for the scientist wishing to edit the article on a particular technical subject. Plenty of articles — e.g., Kerr/CFT correspondence or Zamolodchikov c-theorem — have vanishingly little relevance to a “high school audience.” Even advanced-placement high-school physics doesn’t introduce quantum field theory, let alone renormalization-group methods, centrally extended Virasoro algebras and the current frontiers of gauge/gravity duality research. Popularizing these topics may be possible, although even the basic ideas like critical points and universality have been surprisingly poorly served in that department so far. While it’s pretty darn evident for these examples, the same problem holds true more generally. If you do try to set about that task, the sheer amount of new invention necessary — the cooking-up of new analogies and metaphors, the construction of new simplifications and toy examples, etc. — will run you slap-bang into Wikipedia’s No Original Research policy.

Popularization is hard. When you make a serious effort at it, let yourself get some credit.

Know Thy Audience, indeed: sometimes, your reader won’t be a high-school sophomore looking for homework help, but a fellow researcher checking to see where the minus signs go in a particular equation, or a graduate student looking to catch up on the historical highlights of their lab group’s research topic. Vulgarized vagueness helps the latter readers not at all, and gives the former only a gentle illusion of learning. Precalculus students would benefit more if we professional science people worked on making articles like Trigonometric functions truly excellent than if we puttered around making up borderline Original Research about our own abstruse pet projects.

ARTICLE COMMENTED UPON

• Logan DW, Sandal M, Gardner PP, Manske M, Bateman A, 2010 Ten Simple Rules for Editing Wikipedia. PLoS Comput Biol 6(9): e1000941. doi:10.1371/journal.pcbi.1000941

# Complexity Swag

By Gad, the future is an amazing place to live.

Where else could you buy this?

Or this?

(Via Clauset and Shalizi, naturally.)

I have a confession to make: Once, when I had to give a talk on network theory to a seminar full of management people, I wrote a genetic algorithm to optimize the Newman-Girvan Q index and divide the Zachary Karate Club network into modules before their very eyes. I made Movie Science happen in the real world; peccavi.
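For the record, the objective in that stunt is easy to state: the Newman-Girvan modularity Q compares the fraction of edges inside communities against the expectation in a degree-preserving null model. A sketch using a toy graph and a crude mutation-only search (not the actual genetic algorithm, and not the karate-club data):

```python
import random

def modularity(edges, community):
    """Newman-Girvan modularity Q for an undirected graph given as an
    edge list and a dict mapping node -> community label."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # fraction of edges falling inside communities...
    e_in = sum(1 for u, v in edges if community[u] == community[v]) / m
    # ...minus the expected such fraction under the configuration-model null
    expected = sum(
        (sum(degree[n] for n in community if community[n] == c) / (2 * m)) ** 2
        for c in set(community.values()))
    return e_in - expected

def mutate_search(edges, nodes, n_groups=2, steps=2000, seed=0):
    """Mutation-only hill climb toward a high-Q partition: a crude
    stand-in for the genetic algorithm described in the text."""
    rng = random.Random(seed)
    best = {n: rng.randrange(n_groups) for n in nodes}
    best_q = modularity(edges, best)
    for _ in range(steps):
        trial = dict(best)
        trial[rng.choice(nodes)] = rng.randrange(n_groups)
        q = modularity(edges, trial)
        if q > best_q:
            best, best_q = trial, q
    return best, best_q

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part, q = mutate_search(edges, nodes=list(range(6)))
print(q)  # with luck, Q = 5/14 ≈ 0.357 for the planted two-triangle split
```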

# Textbook Cardboard and Physicist’s History

By the way, what I have just outlined is what I call a “physicist’s history of physics,” which is never correct. What I am telling you is a sort of conventionalized myth-story that the physicists tell to their students, and those students tell to their students, and is not necessarily related to the actual historical development, which I do not really know!

Richard Feynman

Back when Brian Switek was a college student, he took on the unenviable task of pointing out when his professors were indulging in “scientist’s history of science”: attributing discoveries to the wrong person, oversimplifying the development of an idea, retelling anecdotes which are more amusing than true, and generally chewing on the textbook cardboard. The typical response? “That’s interesting, but I’m still right.”

Now, he’s a palaeontology person, and I’m a physics boffin, so you’d think I could get away with pretending that we don’t have that problem in this Department, but I started this note by quoting Feynman’s QED: The Strange Theory of Light and Matter (1986), so that’s not really a pretence worth keeping up. When it comes to formal education, I only have systematic experience with one field; oh, I took classes in pure mathematics and neuroscience and environmental politics and literature and film studies, but I won’t presume to speak in depth about how those subjects are taught.

So, with all those caveats stated, I can at least sketch what I suspect to be a contributing factor (which other sciences might encounter to a lesser extent or in a different way).

Suppose I want to teach a classful of college sophomores the fundamentals of quantum mechanics. There’s a standard “physicist’s history” which goes along with this, which touches on a familiar litany of famous names: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Werner Heisenberg, Erwin Schrödinger. We like to go back to the early days and follow the development forward, because the science was simpler when it got started, right?

The problem is that all of these men were highly trained, professional physicists who were thoroughly conversant with the knowledge of their time — well, naturally! But this means that any one of them knew more classical physics than a modern college sophomore. They would have known Hamiltonian and Lagrangian mechanics, for example, in addition to techniques of statistical physics (calculating entropy and such). Unless you know what they knew, you can’t really follow their thought processes, and we don’t teach big chunks of what they knew until after we’ve tried to teach what they figured out! For example, if you don’t know thermodynamics and statistical mechanics pretty well, you won’t be able to follow why Max Planck proposed the blackbody radiation law he did, which was a key step in the development of quantum theory.
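The flavor of that background knowledge is easy to demonstrate numerically: compare the Rayleigh-Jeans spectrum, which classical equipartition demands, with the law Planck proposed. A quick sketch of my own, with rounded constants:

```python
import math

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K
c = 3.0e8       # speed of light, m/s

def planck(nu, T):
    """Planck spectral energy density u(nu), in J s / m^3."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """The classical equipartition prediction, which diverges when
    integrated over all frequencies (the ultraviolet catastrophe)."""
    return 8 * math.pi * nu**2 * k * T / c**3

T = 300.0
low, high = 1e9, 1e15  # a radio frequency vs. an ultraviolet one, in Hz
print(planck(low, T) / rayleigh_jeans(low, T))    # ≈ 1: classical limit holds
print(planck(high, T) / rayleigh_jeans(high, T))  # ≪ 1: high modes frozen out
```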

Consequently, any “historical” treatment at the introductory level will probably end up “conventionalized.” One has to step extremely carefully! Strip the history down to the point that students just starting to learn the science can follow it, and you might not be portraying the way the people actually did their work. That’s not so bad, as far as learning the facts and formulæ is concerned, but you open yourself up to all sorts of troubles when you get to talking about the process of science. Are we doing physics differently than folks did N or 2N years ago? If we are, or if we aren’t, is that a problem? Well, we sure aren’t doing it like they did in chapter 1 of this textbook here. . . .

I noticed this one when it first hit the arXivotubes a while back; now that it’s been officially published, it caught my eye again.

G. Rozhnova and A. Nunes, “Population dynamics on random networks: simulations and analytical models” Eur. Phys. J. B 74, 2 (2010): 235–42. arXiv:0907.0335.

Abstract: We study the phase diagram of the standard pair approximation equations for two different models in population dynamics, the susceptible-infective-recovered-susceptible model of infection spread and a predator-prey interaction model, on a network of homogeneous degree $$k$$. These models have similar phase diagrams and represent two classes of systems for which noisy oscillations, still largely unexplained, are observed in nature. We show that for a certain range of the parameter $$k$$ both models exhibit an oscillatory phase in a region of parameter space that corresponds to weak driving. This oscillatory phase, however, disappears when $$k$$ is large. For $$k=3, 4$$, we compare the phase diagram of the standard pair approximation equations of both models with the results of simulations on regular random graphs of the same degree. We show that for parameter values in the oscillatory phase, and even for large system sizes, the simulations either die out or exhibit damped oscillations, depending on the initial conditions. We discuss this failure of the standard pair approximation model to capture even the qualitative behavior of the simulations on large regular random graphs and the relevance of the oscillatory phase in the pair approximation diagrams to explain the cycling behavior found in real populations.
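Even the well-mixed limit of the SIRS model shows the damped-oscillation behavior the abstract describes. A minimal sketch (mean-field only, not the pair approximation studied in the paper; the parameter values are my own illustrative choices):

```python
def sirs_trajectory(beta=2.0, gamma=1.0, delta=0.1, dt=0.01, steps=20_000):
    """Forward-Euler integration of the mean-field SIRS equations
        dS/dt = delta*R - beta*S*I
        dI/dt = beta*S*I - gamma*I
        dR/dt = gamma*I - delta*R
    Returns the infective fraction I(t) at each step."""
    s, i, r = 0.99, 0.01, 0.0
    series = []
    for _ in range(steps):
        ds = delta * r - beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i - delta * r
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        series.append(i)
    return series

traj = sirs_trajectory()
# The infective fraction overshoots, then spirals in toward the endemic
# equilibrium I* = delta*(1 - gamma/beta)/(gamma + delta): damped rather
# than sustained oscillations, as in the simulations the paper discusses.
i_star = 0.1 * (1 - 0.5) / 1.1
print(traj[-1], i_star)  # these two nearly coincide by the end of the run
```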

# Interlude, with Cat

Want to know why I never get anything done? It’s not just because I find myself volunteered to write a one-act musical entitled Harry Crocker and the Plot of Holes. It’s also because Sean Carroll linked to a whole bunch of physics blogs, mine included, thereby obligating me to read through all their archives, and in the backblog of High Energy Mayhem I found a pointer to a talk by Krishna Rajagopal (my professor for third-term quantum — small world) on applying gauge/gravity duality to strongly coupled liquids like RHIC’s quark-gluon soups and cold fermionic atoms tuned to a Feshbach resonance. It still counts as “work” if the videos I’m watching online are about science, right? Look, if you use the “Flash presentation” option, it plays the video in one box and shows the slides in another! (Seriously, that’s a simple idea which is a very cool thing.)

Anyway, while I stuff my head with ideas I barely have the background to understand, and while I’m revising a paper so that it (superficially) meets PNAS standards, and while I try to re-learn the kinetic theory I forgot after that exam a few years back. . . Here’s a cat!

(This one is for Zeno, and was recaptioned from here.)