Category Archives: Plectics


In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy has largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model.
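The Azimuth code itself lives over there, but the general recipe is simple enough to sketch. Here is a minimal, hypothetical version (the function name and parameters are my own, not the Azimuth Project's): Euler–Maruyama integration of the Hopf normal form with additive noise, where the parameter $\mu$ carries the system through the bifurcation.

```python
import math
import random

def stochastic_hopf(mu, omega=1.0, sigma=0.1, dt=1e-3, steps=50_000, seed=42):
    """Euler-Maruyama integration of the Hopf normal form with additive noise:

        dx = (mu*x - omega*y - x*(x^2 + y^2)) dt + sigma dW1
        dy = (omega*x + mu*y - y*(x^2 + y^2)) dt + sigma dW2

    For mu < 0 the deterministic system has a stable fixed point at the
    origin; for mu > 0 it has a stable limit cycle of radius sqrt(mu).
    """
    rng = random.Random(seed)
    x, y = 0.1, 0.0
    sqdt = math.sqrt(dt)  # noise scales with the square root of the step
    traj = []
    for _ in range(steps):
        r2 = x * x + y * y
        x += (mu * x - omega * y - x * r2) * dt + sigma * sqdt * rng.gauss(0, 1)
        y += (omega * x + mu * y - y * r2) * dt + sigma * sqdt * rng.gauss(0, 1)
        traj.append((x, y))
    return traj

# Above the bifurcation, the time-averaged radius should hover near sqrt(mu):
traj = stochastic_hopf(mu=1.0)
tail = traj[len(traj) // 2:]
mean_r = sum(math.hypot(x, y) for x, y in tail) / len(tail)
print(round(mean_r, 2))  # close to 1.0
```

Below the bifurcation the trajectory just jitters around the origin; above it, noise-blurred oscillations settle near the deterministic limit cycle.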

I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books.

REPOST: Scathing Review Fail

A discussion elsewhere on the ‘tubes this morning reminded me of this, so I decided to dig it out of my archives. Short version: people complaining that something sounds silly got it coming right back at them because they have no clue what they’re talking about.

I haven’t yet seen the remake of The Day the Earth Stood Still. Generally speaking, I haven’t been terribly speedy about seeing movies as they come out; sometimes, I just wait until they’re available on mplayer. The reviews have not been kind, but on the flipside, not all the reviews have been particularly insightful. To wit, here is Alonso Duralde at

The new “Day” can’t be bothered to include the thought-provoking dialogue of the original, choosing instead to bury the audience with special effects that are visually impressive but no substitute for an actual script. And what words do remain are so exquisitely awful that they provide some of the season’s biggest laughs.

OK, bring it.

My personal favorite? Astro-biologist Helen Benson (Jennifer Connelly) takes alien Klaatu (Keanu Reeves) to see a Nobel Prize-winning scientist and notes that her colleague was honored “for his work in biological altruism.” What would that entail, exactly? Helping frogs cross the street?

The sound you hear is my palm hitting my forehead, rather emphatically, followed by a howl from deep within my thorax: “Learn to [expletive deleted] Google, you [anatomically uncomplimentary compound noun]!” Just because Chris Tucker of the Daily Mail can’t do a simple web search doesn’t give you an excuse.

Claudia Puig at USA Today is no better:

What, exactly, would that entail? It sounds like something Cleese and his fellow Monty Python wits might have dreamed up.

You ignorant [epithet derived from SF television show]. Why don’t you go [verb unsuitable for a family blog] with Dave White, who apparently thinks that the mere mention of an actual scientific subject makes a movie instant MST3K fodder.

In Scientific American, Michael Shermer gives the movie a mostly positive review, and indicates that “biological altruism” is a real subject. Kenneth Turan of the LA Times is also mostly happy with the film, and he doesn’t crack wise about the “biological altruism” business, though I’m not sure about his grasp of it:

Aside from Klaatu and Gort, the “Day” team claims to have retained the original’s snappy catchphrase, “Klaatu barada nikto,” but it’s so hard to hear that viewers will be forgiven if they miss it. Also still around is the charming blackboard scene, in which Klaatu solves an equation for Professor Barnhardt (John Cleese), a man smart enough to have won the nonexistent but indisputably high-minded Nobel Prize for biological altruism.

Supposing that Barnhardt did work in the field of kin recognition, evolutionary ecology or some such topic which was honoured with a Nobel Prize, it wouldn’t be “the Nobel Prize for biological altruism”, but rather the Nobel Prize in Physiology or Medicine or, possibly, Economics (if Barnhardt’s research focused on, say, evolutionary game theory).

MTV’s Kurt Loder calls the film “boldly mediocre” but says that “biological altruism” is “a very Pythonian name for an actual subject of scientific inquiry”. Stephen D. Greydanus has a similar attitude. The recapper at the Agony Booth was also underwhelmed, by this part and by the rest of the movie:

Helen explains that Karl won the Nobel “for his work in biological altruism.” This sounds like something goofy they made up to make Karl sound noble, but in fact it’s a real field of philosophic study that investigates why, in times of limited resources, individual organisms throughout the animal kingdom occasionally produce fewer offspring (which, in Darwinian terms, is self-abnegation) for the good of the community. Which is great, but since it’s not explained, most of the audience is left to think that it’s something goofy the filmmakers made up.

So, I guess you can still dislike the movie after you’ve looked up the relevant science.

Colloquium on Complex Networks

I might be going to this, because it’s in the neighbourhood and I suppose I ought to see what colourful examples other people use in these situations, having given similar talks a couple times myself.

MIT Physics Department Colloquium: Jennifer Chayes

“Interdisciplinarity in the Age of Networks”

Everywhere we turn these days, we find that dynamical random networks have become increasingly appropriate descriptions of relevant interactions. In the high tech world, we see mobile networks, the Internet, the World Wide Web, and a variety of online social networks. In economics, we are increasingly experiencing both the positive and negative effects of a global networked economy. In epidemiology, we find disease spreading over our ever growing social networks, complicated by mutation of the disease agents. In problems of world health, distribution of limited resources, such as water, quickly becomes a problem of finding the optimal network for resource allocation. In biomedical research, we are beginning to understand the structure of gene regulatory networks, with the prospect of using this understanding to manage the many diseases caused by gene mis-regulation. In this talk, I look quite generally at some of the models we are using to describe these networks, and at some of the methods we are developing to indirectly infer network structure from measured data. In particular, I will discuss models and techniques which cut across many disciplinary boundaries.

9 September 2010, 16:15 o’clock, Room 10-250.

Complexity Swag

By Gad, the future is an amazing place to live.

Where else could you buy this?

Self-Organized Criticality: Now on a Mug!

Or this?

The Zachary Karate Club network: If your method doesn't work on this, then go home.

(Via Clauset and Shalizi, naturally.)

I have a confession to make: Once, when I had to give a talk on network theory to a seminar full of management people, I wrote a genetic algorithm to optimize the Newman-Girvan Q index and divide the Zachary Karate Club network into modules before their very eyes. I made Movie Science happen in the real world; peccavi.
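For the curious, the quantity my genetic algorithm chewed on is easy to state. Here is a hypothetical minimal implementation of the Newman-Girvan modularity $Q$ (demonstrated on a toy two-triangle graph, not the karate club itself); the GA simply searched the space of partitions for the one maximizing this number.

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph:

        Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)

    `edges` is a list of (u, v) pairs; `communities` is a list of node sets.
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    comm = {node: c for c, nodes in enumerate(communities) for node in nodes}
    # Fraction of edges inside communities, minus the fraction expected
    # under the degree-preserving (configuration-model) null.
    e_in = sum(1 for u, v in edges if comm[u] == comm[v]) / m
    expected = sum((sum(degree[n] for n in nodes) / (2 * m)) ** 2
                   for nodes in communities)
    return e_in - expected

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
bad = modularity(edges, [{0, 1, 2, 3, 4, 5}])
print(good > bad)  # True: the natural split scores higher
```

Any optimizer will do for the search itself — greedy agglomeration, simulated annealing, or a genetic algorithm like mine.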

How Not to be a Network-Theory n00b

Copied from my old ScienceBlogs site to test out the mathcache JavaScript tool.

Ah, complex networks: manufacturing centre for the textbook cardboard of tomorrow!

When you work in the corner of science where I do, you hear a lot of “sales talk” — claims that, thanks to the innovative research of so-and-so, the paradigms are shifting under the feet of the orthodox. It’s sort of a genre convention. To stay sane, it helps to have an antidote at hand (“The paradigm works fast, Dr. Jones!”).

For example, everybody loves “scale-free networks”: collections of nodes and links in which the probability that a node has $k$ connections falls off as a power-law function of $k$. In the jargon, the “degree” of a node is the number of links it has, so a “scale-free” network has a power-law degree distribution.
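As a sketch of where such degree distributions can come from (a generic toy model, not a claim about any particular system): preferential attachment, in which each newcomer links to existing nodes with probability proportional to their degree, generates exactly this kind of heavy tail.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a network by preferential attachment: each new node links to
    m distinct existing nodes chosen with probability proportional to
    degree. Returns the list of node degrees."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes. The `targets` list
    # holds each node once per unit of degree, so a uniform choice from it
    # is a degree-proportional choice.
    targets = []
    edges = []
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            edges.append((u, v))
            targets += [u, v]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return degree

deg = preferential_attachment(5000)
# Heavy tail: every node has at least m links, but a few hubs
# acquire degrees far above that minimum.
print(min(deg), max(deg))
```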
Continue reading How Not to be a Network-Theory n00b

Currently (Re)reading

I noticed this one when it first hit the arXivotubes a while back; now that it’s been officially published, it caught my eye again.

G. Rozhnova and A. Nunes, “Population dynamics on random networks: simulations and analytical models” Eur. Phys. J. B 74, 2 (2010): 235–42. arXiv:0907.0335.

Abstract: We study the phase diagram of the standard pair approximation equations for two different models in population dynamics, the susceptible-infective-recovered-susceptible model of infection spread and a predator-prey interaction model, on a network of homogeneous degree $k$. These models have similar phase diagrams and represent two classes of systems for which noisy oscillations, still largely unexplained, are observed in nature. We show that for a certain range of the parameter $k$ both models exhibit an oscillatory phase in a region of parameter space that corresponds to weak driving. This oscillatory phase, however, disappears when $k$ is large. For $k = 3, 4$, we compare the phase diagram of the standard pair approximation equations of both models with the results of simulations on regular random graphs of the same degree. We show that for parameter values in the oscillatory phase, and even for large system sizes, the simulations either die out or exhibit damped oscillations, depending on the initial conditions. We discuss this failure of the standard pair approximation model to capture even the qualitative behavior of the simulations on large regular random graphs and the relevance of the oscillatory phase in the pair approximation diagrams to explain the cycling behavior found in real populations.

Currently Reading

Random fun items currently floating up through the arXivotubes include the following. Exercise: find the shortest science-fiction story which can connect all these e-prints, visiting each node only once.

Robert H. Brandenberger, “String Gas Cosmology” (arXiv:0808.0746).

String gas cosmology is a string theory-based approach to early universe cosmology which is based on making use of robust features of string theory such as the existence of new states and new symmetries. A first goal of string gas cosmology is to understand how string theory can effect the earliest moments of cosmology before the effective field theory approach which underlies standard and inflationary cosmology becomes valid. String gas cosmology may also provide an alternative to the current standard paradigm of cosmology, the inflationary universe scenario. Here, the current status of string gas cosmology is reviewed.

Dimitri Skliros, Mark Hindmarsh, “Large Radius Hagedorn Regime in String Gas Cosmology” (arXiv:0712.1254, to be published in Phys. Rev. D).
Continue reading Currently Reading

Squaring Numbers Near Fifty

And now, a brief break from non-blogging:

Today, I’d like to start with a specific example and move on to a general point. The specific example is a way to approximate the squares of numbers and then refine those approximations to get exact answers, and the general point concerns the place such techniques should have in mathematics education.

My last calculator broke years ago, so when I have to do a spot of ciphering, I have to work the answer out in my head or push a pencil. (If the calculation involves more numbers than can fit on the back of an envelope, then it’s probably a data-analysis job which is being done on a computer anyway.) Every once in a while, the numbers teach you a lesson, in their own sneaky way.

It’s easy to square a smallish multiple of 10. We all learned our times tables, so squaring a number from 1 to 9 is a doddle, and the two factors of 10 just shift the decimal point over twice. Thus, $50^2$ is 2500, no thinking needed.

Now, what if we want to square an integer which is near 50? We have a trick for this, a stunt which first yields an answer “close enough for government work,” and upon refinement gives the exact value. (I use the “close enough for government work” line advisedly, as this was a trick Richard Feynman learned from Hans Bethe while they were calculating the explosive power of the first atomic bomb, at Los Alamos.) To get your first approximation, find the difference between your number and 50, and add that many hundreds to 2500. The correction, if you need it, is to add the difference squared. Thus, $48^2$ is roughly 2300 and exactly 2304, while $53^2$ is roughly 2800 and exactly 2809.
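In code, the steps above amount to the following (a toy illustration of the trick, nothing more):

```python
def square_near_fifty(n):
    """Square a number near 50 by the Bethe trick: start from 50^2 = 2500,
    add 100 per unit of difference, then add the difference squared."""
    d = n - 50
    approx = 2500 + 100 * d   # "close enough for government work"
    exact = approx + d * d    # the correction term makes it exact
    return approx, exact

print(square_near_fifty(48))  # (2300, 2304)
print(square_near_fifty(53))  # (2800, 2809)
```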

I wouldn’t advise teaching this as “the way to multiply,” first because its applicability is limited and second because it’s, well, arcane. What a goofy sequence of steps! Surely, if we’re drilling our children on an algorithm, it should be one which works on any numbers you give it. The situation changes, though, after you’ve seen a little algebra, and you realize where this trick comes from. It’s just squaring a binomial:
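Writing the number as $50 + d$, where $d$ is the (possibly negative) distance from fifty:

```latex
(50 + d)^2 = 50^2 + 2 \cdot 50 \, d + d^2 = 2500 + 100d + d^2
```

The first two terms are the quick estimate — 2500 plus $d$ hundreds — and the $d^2$ is the correction.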
Continue reading Squaring Numbers Near Fifty

The Strident and The Shrill

Richard Dawkins and PZ Myers had a lengthy, informal chat during the 2008 American Atheists conference in Minneapolis, and a recording of their conversation is now available on DVD and in the video tubes. They discuss the fight against pseudoscience as well as several interesting topics in good science.

I did my best to summarize the kin-vs.-group business in this book review. Among the “glimmerings” which suggest there’s a better way to think about some evolutionary processes (a name for that better way still to be found) are, I think, the epidemiological simulations in which the fitness of a genotype is clearly a function of ecology, and thus strongly time-dependent, so that existing analysis techniques are likely to fail. Assuming this kind of thing happens in the real world, it might be better to speak of “extending the evolutionarily stable strategies concept” or “temporally extended phenotypes” than to have yet another largely semantic argument over “group selection.”

Also of note:

When Dawkins spoke at the first artificial life conference in Los Alamos, New Mexico, in 1987, he delivered a paper on “The Evolution of Evolvability.” This essay argues that evolvability is a trait that can be (and has been) selected for in evolution. The ability to be genetically responsive to the environment through such a mechanism as, say, sex, has an enormous impact on one’s evolutionary fitness. Dawkins’s paper has become essential reading in the artificial life community.

Anyway, on with the show.

P-Zed wrote an introduction to allometry a little over a year ago.
Continue reading The Strident and The Shrill

Currently Reading

Xiaotie Deng and Li-Sha Huang (2006), “On the complexity of market equilibria with maximum social welfare” Information Processing Letters 97, 1: 4–11 [DOI] [PDF].

We consider the computational complexity of the market equilibrium problem by exploring the structural properties of the Leontief exchange economy. We prove that, for economies guaranteed to have a market equilibrium, finding one with maximum social welfare or maximum individual welfare is NP-hard. In addition, we prove that counting the number of equilibrium prices is #P-hard.

Found by citation-hopping from here.

What Science Blogs Can’t Do

No cosmic law says that when you gaze into your navel, you have to like what you find.

My thesis is that it’s not yet possible to get a science education from reading science blogs, and a major reason is that bloggers lack the incentive to write the kinds of posts which are necessary. Furthermore, when we think in terms of incentive and motivation, the limitations upon the effects of online science writing become disquietingly clear. The problem, phrased without too much exaggeration, is that science blogs cannot teach science, nor can they change the world.


Notice how short the “basic concepts in science” list is, compared to the “basic concepts” which we know are the foundation of our fields? It has eleven entries — count them — for all of physics. Translated into lectures, that might be a couple weeks of class time. Chemistry is even worse off, and while the biology section is big, it’s also remarkably scatter-shot. Such introductory lessons as get written don’t get catalogued, and thus become damnably difficult to find again.

And, the problem hardly stops there. As the magician Andrew Mayne recently pointed out,

People only know what they can understand. There’s a lot of great information out there, but not enough is being done to make it widely accessible to the masses. Most science entries in Wikipedia read like they’re written by graduate students for other graduate students. Even the basic science stuff is written that way.

We need to put ourselves into the perspective of someone who hasn’t had the science exposure that we’ve had and find ways to help make this information more accessible.

Why is introductory material so poorly represented?

Well, what do we science bloggers write about, anyway? This is how I caricature what I see:

0. Fun posts about random non-science stuff — entertaining, humanizing, but not the subject I’m focusing on right now.

1. Reactions to creationists and other pseudo-scientists.

2. Reactions to stories in the mainstream media, often in the “My God, how did they screw up so badly” genre.

3. Reports on peer-reviewed research.
Continue reading What Science Blogs Can’t Do

Genetics of Brain Evolution

Even buried as I am under a stack of PDFs talking about PDEs, I would be remiss if I didn’t point out some juicy videos describing actual, factual cutting-edge science, namely the talks from Rockefeller’s recent evolution symposium. I’m currently in the middle of Bruce T. Lahn’s (U Chicago) talk, “Probing Human Brain Evolution at the Genetic Level.” Click here and scroll down to find the link. What could be more appropriate for an elitist bastard than an explanation of genes which control brain size?

(Thanks go out to Abbie.)

Currently Reading

The most dangerous aspect of being trapped in the digital library’s virtual basement stacks is that you don’t want to come out.

Simon A. Levin (1992), “The Problem of Pattern and Scale in Ecology” Ecology 73, 6: pp. 1943–67. [JSTOR] [PDF].

It is argued that the problem of pattern and scale is the central problem in ecology, unifying population biology and ecosystems science, and marrying basic and applied ecology. Applied challenges, such as the prediction of the ecological causes and consequences of global climate change, require the interfacing of phenomena that occur on very different scales of space, time, and ecological organization. Furthermore, there is no single natural scale at which ecological phenomena should be studied; systems generally show characteristic variability on a range of spatial, temporal, and organizational scales. The observer imposes a perceptual bias, a filter through which the system is viewed. This has fundamental evolutionary significance, since every organism is an “observer” of the environment, and life history adaptations such as dispersal and dormancy alter the perceptual scales of the species, and the observed variability. It likewise has fundamental significance for our own study of ecological systems, since the patterns that are unique to any range of scales will have unique causes and biological consequences. The key to prediction and understanding lies in the elucidation of mechanisms underlying observed patterns. Typically, these mechanisms operate at different scales than those on which the patterns are observed; in some cases, the patterns must be understood as emerging from the collective behaviors of large ensembles of smaller scale units. In other cases, the pattern is imposed by larger scale constraints. Examination of such phenomena requires the study of how pattern and variability change with the scale of description, and the development of laws for simplification, aggregation, and scaling. Examples are given from the marine and terrestrial literatures.

György Szabó, Gábor Fáth (2007), “Evolutionary games on graphs” Physics Reports 446, 4–6: 97–216. [DOI] [arXiv].

Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner’s Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games.
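The review’s theme — graph structure shaping game dynamics — is easy to toy with. Here is a hypothetical minimal model of my own (imitate-the-best dynamics on a ring, standard Prisoner’s Dilemma payoffs $T=5$, $R=3$, $P=1$, $S=0$), not anything taken from the paper:

```python
def pd_payoff(a, b, R=3, S=0, T=5, P=1):
    """Payoff to a player using strategy a against strategy b
    in a standard Prisoner's Dilemma (T > R > P > S)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

def imitation_step(strategies):
    """One synchronous 'imitate the best' update on a ring: every player
    plays both neighbours, then copies the strategy of whichever player
    in its neighbourhood (self included) scored strictly highest."""
    n = len(strategies)
    payoff = [pd_payoff(strategies[i], strategies[(i - 1) % n]) +
              pd_payoff(strategies[i], strategies[(i + 1) % n])
              for i in range(n)]
    new = []
    for i in range(n):
        best, best_pay = strategies[i], payoff[i]
        for j in ((i - 1) % n, (i + 1) % n):
            if payoff[j] > best_pay:
                best, best_pay = strategies[j], payoff[j]
        new.append(best)
    return new

ring = ['C', 'C', 'C', 'D', 'C', 'C']
print(imitation_step(ring))  # ['C', 'C', 'D', 'D', 'D', 'C'] — the defector spreads
```

Even this crude sketch shows the point: who you play against (and who you can imitate) is set by the graph, and that changes which strategies prosper.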

Sébastien Lion, Minus van Baalen (2007), “From Infanticide to Parental Care: Why Spatial Structure Can Help Adults Be Good Parents” American Naturalist 170: E26–E46. [HTML] [PDF].
Continue reading Currently Reading

Currently Reading

Juan A. Bonachela, Haye Hinrichsen, Miguel A. Muñoz, “Entropy estimates of small data sets” J. Phys. A: Math. Theor. 41 (2008). arXiv: 0804.4561.

Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new “balanced estimator” for entropy functionals (Shannon, Rényi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator out-performs other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad specially for biological and digital data series.

As an exercise, discuss the relation of this approach to the coincidence-based methods of Ma, Bialas et al.
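The small-sample bias in question is easy to exhibit numerically. Here is a toy demonstration of my own, using only the naive plug-in estimator (the paper’s balanced estimator is not implemented here):

```python
import math
import random

def naive_entropy(samples):
    """Plug-in (maximum-likelihood) Shannon entropy estimate, in nats."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts.values())

rng = random.Random(1)
true_h = math.log(8)  # exact entropy of a uniform distribution on 8 outcomes
# Average the estimate over many short series of 20 draws each:
est = sum(naive_entropy([rng.randrange(8) for _ in range(20)])
          for _ in range(2000)) / 2000
print(est < true_h)  # True: the naive estimator systematically underestimates
```

The classic Miller–Madow correction adds $(m-1)/2N$ to compensate the leading bias; the balanced estimator of the paper aims to do better still on short series.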

Liveblagging: Geoffrey West

I’m sitting in MIT’s lecture hall 34-101, where a Venerable Personage is introducing today’s physics colloquium speaker, Geoffrey West (Santa Fe Institute). Like most colloquium speakers (or so it seems to me) West has a string of academic honors to his name; perhaps more unusual is his membership in Time magazine’s “100 most influential people” list, for which he was profiled by Murray Gell-Mann. (At that, he had more luck than Richard Dawkins.) West’s talk will concern scaling laws in living systems, and its abstract is as follows:

Life is very likely the most complex phenomenon in the Universe manifesting an extraordinary diversity of form and function over an enormous range. Yet, many of its most fundamental and complex phenomena scale with size in a surprisingly simple fashion. For example, metabolic rate scales as the 3/4-power of mass over 27 orders of magnitude from complex molecules up to the largest multicellular organisms. Similarly, time-scales, such as lifespans and growth-rates, increase with exponents which are typically simple powers of 1/4. It will be shown how these “universal” 1/4 power scaling laws follow from fundamental properties of the networks that sustain life, leading to a general quantitative, predictive theory that captures the essential features of many diverse biological systems. Examples will include animal and plant vascular systems, growth, cancer, aging and mortality, sleep, DNA nucleotide substitution rates. These ideas will be extended to social organisations: to what extent are these an extension of biology? Is a city, for example, “just” a very large organism? Analogous scaling laws reflecting underlying social network structure point to general principles of organization common to all cities, but, counter to biological systems, the pace of social life systematically increases with size. This has dramatic implications for growth, development and sustainability: innovation and wealth creation that fuel social systems, if left unchecked, potentially sow the seeds for their inevitable collapse.
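To make the quoted 3/4-power concrete (simple arithmetic of my own, not a claim from the talk):

```python
def kleiber_ratio(m1, m2, exponent=0.75):
    """Predicted ratio of whole-organism metabolic rates under a
    Kleiber-type power law, B proportional to M^(3/4)."""
    return (m1 / m2) ** exponent

# A 10,000-fold difference in mass predicts only a 1,000-fold difference
# in metabolic rate; per gram of tissue, the larger organism burns slower,
# as M^(-1/4).
print(round(kleiber_ratio(10_000, 1)))  # 1000
```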

Now, let’s see if I can keep up!


“I think it’s patently obvious that I’m not one of the hundred most influential people in the world,” West says, “which should be obvious after I’ve finished my talk.” There follows an amount of fumbling as West and the distinguished personage try to turn on the overhead projector — “We need an experimentalist!” — before the big red button is found, and the projector screen glows into life.
Continue reading Liveblagging: Geoffrey West

Connections, Episode 10

This is the sort of thing which tends to get taken off the Network once the Powers Which Be notice that it exists, so we should enjoy it now. Here and there, in chunks of different sizes, we can find James Burke’s original Connections (1978) TV series. Embedded on this page is the tenth and last episode, “Yesterday, Tomorrow and You.” I could say many things about it, but for now, I’ll just note that “network robustness” has become a subject of quantitative investigation, that I can’t escape the feeling the arguments which perennially perturb the science-blogging orbit still aren’t addressing the points which Burke raised thirty years ago, and that you can’t go wrong with Ominous Latin Chanting.

Continue reading Connections, Episode 10