Category Archives: Network theory

On the arXivotubes

If I didn’t have to be modifying and debugging a network-growth simulation today, I’d be reading these papers. The first is not too far from one subject I’m researching right now. Axelsen et al. (arXiv:0711.2208) write about “a tool-based view of regulatory network topology”:

The relationship between the regulatory design and the functionality of molecular networks is a key issue in biology. Modules and motifs have been associated to various cellular processes, thereby providing anecdotal evidence for performance based localization on molecular networks. To quantify structure-function relationship we investigate similarities of proteins which are close in the regulatory network of the yeast Saccharomyces cerevisiae. We find that the topology of the regulatory network show weak remnants of its history of network reorganizations, but strong features of co-regulated proteins associated to similar tasks. This suggests that local topological features of regulatory networks, including broad degree distributions, emerge as an implicit result of matching a number of needed processes to a finite toolbox of proteins.

The second, Gaume and Forterre’s “A viscoelastic deadly fluid in carnivorous pitcher plants” (arXiv:0711.4724, also in PLoS ONE), is pertinent to my eventual life goal of building an army of atomic super-plants.
Continue reading On the arXivotubes

Yawn: More Abuse of the Quantum

Binocular rivalry is a phenomenon which occurs when conflicting information is presented to each of our two eyes, and the brain has to cope with the contradiction. Instead of seeing a superimposition or “average” of the two, our perceptual machinery entertains both possibilities in turn, randomly flickering from one to the other. This presents an interesting way to stress-test our visual system and see how vision works. Unfortunately, talk of “perception” leads to talk of “consciousness,” and once “consciousness” has been raised, an invocation of quantum mechanics can’t be too far behind.

I’m late to join the critical party surrounding E. Manousakis’ paper, “Quantum theory, consciousness and temporal perception: Binocular rivalry,” recently uploaded to the arXiv and noticed by Mo at Neurophilosophy. Manousakis applies “quantum theory” (there’s a reason for those scare quotes) to the problem of binocular rivalry and from this hat pulls a grandiose claim that quantum physics is relevant for human consciousness.


First, we observe that there is a healthy literature on this phenomenon, work done by computational neuroscience people who aren’t invoking quantum mechanics in their explanations.

Second, one must carefully distinguish a model of a phenomenon which actually uses quantum physics from a model in which certain mathematical tools are applicable. Linear algebra is a mathematical tool used in quantum physics, but describing a system with linear algebra does not make it quantum-mechanical. Long division and the extraction of square roots can also appear in the solution of a quantum problem, but this does not make dividing 420 lollipops among 25 children a correlate of quantum physics.

Just because the same equation applies doesn’t mean the same physics is at work. An electrical circuit containing a capacitor, an inductor and a resistor obeys the same differential equation as a mass on a spring: capacitance corresponds to “springiness,” inductance to inertia and resistance to friction. This does not mean that an electrical circuit is the same thing as a rock glued to a slinky.
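To make the analogy concrete, here is a minimal sketch (with illustrative, made-up coefficient values) showing that integrating the damped-oscillator equation with corresponding parameters gives numerically identical trajectories, even though the physics differs:

```python
# An RLC circuit and a damped mass on a spring obey the same ODE,
#   L q'' + R q' + q/C = 0   <->   m x'' + c x' + k x = 0,
# under the mapping L <-> m, R <-> c (friction), 1/C <-> k (stiffness).
# All numbers below are illustrative, not measured values.

def simulate(inertia, damping, stiffness, x0=1.0, v0=0.0, dt=1e-4, steps=50000):
    """Integrate a*x'' + b*x' + c*x = 0 with simple Euler steps."""
    x, v = x0, v0
    trace = []
    for i in range(steps):
        acc = -(damping * v + stiffness * x) / inertia
        x += v * dt
        v += acc * dt
        if i % 5000 == 0:
            trace.append(round(x, 12))
    return trace

circuit = simulate(inertia=2.0, damping=0.5, stiffness=40.0)  # L, R, 1/C
spring = simulate(inertia=2.0, damping=0.5, stiffness=40.0)   # m, c, k
```

Feed in the same coefficients and you get the same solution, term for term — which is precisely why sharing an equation doesn't make a circuit a rock glued to a slinky.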


One interesting thing about this paper is that the hypothesis is really only half quantum, at best. In fact, three of the four numbers fed into Manousakis’ hypothesis pertain to a classical phenomenon, and here’s why:

Manousakis invokes the formalism of the quantum two-state system, saying that the perception of (say) the image seen by the left eye is one state and that from the right eye is the other. The upshot of this is that the probability of seeing the illusion one way — say, the left-eye version — oscillates over time as

[tex]P(t) = \cos^2(\omega t),[/tex]

where [tex]\omega[/tex] is some characteristic frequency of the perceptual machinery. The oscillation is always going, swaying back and forth, but every once in a while, it gets “observed,” which forces the brain into either the left-eye or the right-eye state, from which the oscillation starts again.

The quantum two-state system just provides an oscillating probability of favoring one perception, one which goes as the square of [tex]\cos(\omega t)[/tex]. Three of the four parameters fed into the Monte Carlo simulation actually pertain to how often this two-state system is “observed” and “collapsed”. These parameters describe a completely classical pulse train — click, click, click, pause, click click click click, etc.
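For concreteness, here is a toy Monte Carlo along these lines. This is my own sketch of the scheme, not Manousakis' actual simulation, and every parameter value is illustrative rather than fitted:

```python
import math
import random

def simulate_rivalry(omega=1.0, rate=0.5, t_max=500.0, seed=1):
    """Toy version of the two-state picture: between 'observations'
    (a classical pulse train, here Poissonian with the given rate),
    the probability of remaining in the current percept oscillates as
    cos^2(omega * dt); each observation collapses the state, and the
    oscillation restarts from whichever percept it collapsed into."""
    rng = random.Random(seed)
    t = 0.0
    last_switch = 0.0
    state = 0                  # 0 = left-eye percept, 1 = right-eye percept
    durations = []             # dominance durations, the measured quantity
    while t < t_max:
        dt = rng.expovariate(rate)        # wait for the next observation
        t += dt
        p_stay = math.cos(omega * dt) ** 2
        if rng.random() > p_stay:         # collapse flips the percept
            durations.append(t - last_switch)
            last_switch = t
            state = 1 - state
    return durations

durations = simulate_rivalry()
```

Note that only `omega` touches the "quantum" part; the observation times are a purely classical point process, which is the half-classical character of the hypothesis described above.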

What’s more, the classical part is the higher-level one, the one which intrudes on the low-level processing. Crudely speaking, it’s like saying there’s a quantum two-state system back in the visual cortex, but all the processing up in the prefrontal lobes is purely classical.
Continue reading Yawn: More Abuse of the Quantum

Currently Reading

Oliver Johnson, Christophe Vignat (2006). Some results concerning maximum Renyi entropy distributions.

We consider the Student-t and Student-r distributions, which maximise Renyi entropy under a covariance condition. We show that they have information-theoretic properties which mirror those of the Gaussian distributions, which maximise Shannon entropy under the same condition. We introduce a convolution which preserves the Renyi maximising family, and show that the Renyi maximisers are the case of equality in a version of the Entropy Power Inequality. Further, we show that the Renyi maximisers satisfy a version of the heat equation, motivating the definition of a generalized Fisher information.

Luciano da F. Costa, Francisco A. Rodrigues, Gonzalo Travieso, P. R. Villas Boas (2006). Characterization of complex networks: A survey of measurements.
Continue reading Currently Reading

Words You Can’t Say in an Obituary

This is from the Washington Post obit of Norman Mailer:

After the war ended, he served with occupation forces in Japan, then returned to the United States in May 1946. He spent the rest of the year in a bungalow near Provincetown, Mass., transmuting his military experiences into “The Naked and the Dead.”

His publisher, Stanley Rinehart, insisted that he clean up the language in the book. Mailer complied by inventing the word “fug” as a substitute for an expletive that the publisher found offensive.

Now. . . .
Continue reading Words You Can’t Say in an Obituary

ICCS: Time-Dependent Networks

Yesterday, the International Conference on Complex Systems wrapped up with five talks on networks. For me, the most interesting was that by Dan Braha, who spoke about what happens when you analyze a system as a network which changes over time, instead of the aggregate network formed by lumping all the timesteps together. Imagine a system made out of a whole pile of parts. At time [tex]t[/tex], part number [tex]i[/tex] might or might not be interacting with part number [tex]j[/tex], which we could represent as a time-varying matrix [tex]C_{ij}(t)[/tex]. Many studies of network-related phenomena obscure the time-dependence part. For example, in a living cell, genes are switching on and off, concentrations of enzymes are going up and down, and all sorts of stuff is changing over time. You can mix proteins A, B and C in a test tube; perhaps A bonds both to B and to C. You’d then draw an interaction network with links connecting A to B and to C — but what if B and C are never present in the cell at the same time?
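A toy illustration of the point, using made-up interaction data: a node can look like a hub in the aggregate network even though, at any single timestep, it never has more than one partner.

```python
# Hypothetical data: a network as a sequence of per-timestep edge sets,
# versus the aggregate formed by lumping all timesteps together.
snapshots = [
    {("A", "B")},          # t = 0: A interacts with B
    {("A", "C")},          # t = 1: A interacts with C
    {("B", "C")},          # t = 2: B interacts with C
]

aggregate = set().union(*snapshots)

def degree(edges, node):
    return sum(node in e for e in edges)

# In the aggregate, A looks like a hub of degree 2...
print(degree(aggregate, "A"))                  # 2
# ...but at any single timestep its degree is at most 1.
print(max(degree(s, "A") for s in snapshots))  # 1
```

This is exactly the protein-mixing scenario above: the aggregate happily draws links from A to both B and C, even if B and C never coexist.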

Braha and company looked at a collection of e-mails sent over 113 days, exchanged among 57,138 users. (The data comes from arXiv:cond-mat/0201476v2, published five years ago in Phys. Rev. E, and were gathered at Kiel University.) A node is an individual e-mail address, and a link is established when a message is sent from one address to another. They found, among other things, that whether or not a particular node is a “hub” changes over time: popular today, an outcast tomorrow. Moreover, a node which is in the top 1000 most connected on one day may or may not be in the top 1000 for the aggregate network. Furthermoreover, when the window of aggregation is gradually increased — from one day to two days, to a week, up to the entire time period — the similarity to the total aggregate network increases, as you’d expect, but without any threshold.

In the last few minutes of his talk, Braha did a brief overview of a related investigation, in which they studied a “social network” derived from Bluetooth devices. If my Bluetooth gizmo is within two meters of yours, we’ll call that a link. The network of Bluetooth devices will naturally change over time, so we can do the same comparison between the graphs observed at short timesteps to the graph formed by aggregating all connections. During the Q&A session afterwards — before I had to, ironically enough, run off to find my cell phone — I pointed out something which it appears Braha hadn’t fully grasped.
Continue reading ICCS: Time-Dependent Networks

Don’t Make Baby Gauss Cry

Cosma Shalizi writes of “Power-Law Distributions in Empirical Data”:

Because this is, of course, what everyone ought to do with a computational paper, we’ve put our code online, so you can check our calculations, or use these methods on your own data, without having to implement them from scratch. I trust that I will no longer have to referee papers where people use GnuPlot to draw lines on log-log graphs, as though that meant something, and that in five to ten years even science journalists and editors of Wired will begin to get the message.

Mark Liberman is not optimistic (we’ve got a long way to go).

Among several important take-home points, I found the following particularly amusing:
Continue reading Don’t Make Baby Gauss Cry

Preferential Attachment in Python

A few posts ago, I mentioned the model of network growth by preferential attachment. This is a big enough topic in network theory that it’s worth poking in detail. I discussed some of the subject’s history in this paper (1.2 Mb PDF, plus presentation), which they tell me passed some stage of review and will appear in the print volume of the conference proceedings, i.e., an overpriced Springer book nobody will actually buy. But in addition to learning the names and dates involved in the story, we would like to play around with the ideas ourselves.

The other day, I realized how easy this would be, and now that I’ve actually presented this to a classroom full of people, I’d like to write about it. Today, I’ll present a Python program for growing a network by the “rich get richer” effect known as preferential attachment.
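As a taste, here is a minimal sketch of the idea — not necessarily the program presented in the post, just the core "rich get richer" rule in a few lines:

```python
import random

def preferential_attachment(n, m=1, seed=None):
    """Grow a graph in which each new node attaches to m existing nodes
    chosen with probability proportional to their degree.  A standard
    trick: sample uniformly from a list of edge endpoints, since each
    node appears in that list once per unit of degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]            # seed graph: a single edge
    endpoints = [0, 1]          # node i appears deg(i) times in this list
    for new in range(2, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend([new, t])
    return edges

edges = preferential_attachment(1000, seed=42)
```

Tallying the degrees of the resulting graph and plotting their distribution is a good way to see the heavy tail emerge.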
Continue reading Preferential Attachment in Python

Power-law Distributions in Empirical Data

Throughout many fields of science, one finds quantities which behave (or are claimed to behave) according to a power-law distribution. That is, one quantity of interest, y, scales as another number x raised to some exponent:

[tex] y \propto x^{-\alpha}.[/tex]

Power-law distributions made it big in complex systems when it was discovered (or rather re-discovered) that a simple procedure for growing a network, called “preferential attachment,” yields networks in which the probability of finding a node with exactly k other nodes connected to it falls off as k to some exponent:

[tex]p(k) \propto k^{-\gamma}.[/tex]

The constant γ is typically found to be between 2 and 3. Now, from my parenthetical remarks, the Gentle Reader may have gathered that the story is not quite a simple one. There are, indeed, many complications and subtleties, one of which is an issue which might sound straightforward: how do we know a power-law distribution when we see one? Can we just plot our data on a log-log graph and see if it falls on a straight line? Well, as Eric and I are fond of saying, “You can hide a multitude of sins on a log-log graph.”
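The standard alternative to drawing lines on log-log graphs is a maximum-likelihood fit. For a continuous power law with lower cutoff [tex]x_{\min}[/tex], the estimator is [tex]\hat{\alpha} = 1 + n / \sum_i \ln(x_i/x_{\min})[/tex], which can be sketched in a few lines (the synthetic-data check below uses inverse-CDF sampling):

```python
import math
import random

def mle_exponent(data, x_min):
    """Maximum-likelihood estimator for a continuous power law:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)), fitted to the tail
    x >= x_min -- rather than least squares on a log-log plot."""
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

# Sanity check on synthetic data drawn from p(x) ~ x^{-2.5}, x >= 1,
# via the inverse CDF x = u^{-1/(alpha - 1)}:
random.seed(0)
sample = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(100000)]
alpha_hat = mle_exponent(sample, x_min=1.0)
print(alpha_hat)   # close to 2.5
```

On data like these, a naive least-squares fit to the binned log-log histogram can be off by a good deal more — one of the multitude of sins in question.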

Via Dave Bacon comes word of a review article on this very subject. Clauset, Shalizi and Newman offer us “Power-law distributions in empirical data” (7 June 2007), whose abstract reads as follows:
Continue reading Power-law Distributions in Empirical Data

Info Theory and DNA/Evolution

We’re reviewing the IEEE papers.

After reviewing the first batch of papers, we came up with some questions to answer in the future, in order of difficulty/open-ness:

1. Given the robustness of a code (e.g. due to a many-to-one codon->AA mapping), can we calculate bounds on the channel capacity of such a code? How does the empirical R (info transmission rate) of the codon->AA code compare with the empirical C (channel capacity, e.g. from mutation rates)?

2. How does the length/entropy/complexity of code-bits (e.g. parity bits, non-message bits used for error correcting) relate to the complexity of the error-correcting task, and e.g. the entropy and length of the data-bit sections (e.g. the actual message you’re sending) to satisfy R≤C?

3. Is the von Neumann entropy,
[tex]H = -{\rm Tr\,} \rho\log\rho = -\sum_i \lambda_i\log\lambda_i, [/tex]
where {λi} are the eigenvalues of the density matrix ρ, useful for discussing network robustness? (There’s a paper where they use Kolmogorov-Sinai/Shannon entropy to do this, which Blake has somewhere…) If so, then can we apply this to a genetic-regulatory network, and tie in the error-correcting or homeostatic abilities of such a network with VNE or other network metrics?

Next week:

We meet Monday at BU for group theory. Ben will be discussing SU(2) and SO(3) from Artin. Friday, Blake will present on the symmetry-breaking-in-genetics paper, and possibly on the information-theoretic considerations for BLAST.

BLAKE SEZ: I believe the paper which caught my eye was L. Demetrius and T. Manke (15 February 2005), “Robustness and network evolution — an entropic principle,” Physica A 346 (3–4): 682–96.

Emotions of Small-World Networks

Neil Gaiman:

I’m in the UK right now, and it’s a long way away, and I’m reading about what happened in newspapers (because I don’t turn on TVs in hotel rooms. I don’t know why this is, but I don’t), still managing to think of this as something that happened, tragically, to Other People. And then I see this, and my heart sinks, because this is the Michael Bishop who I met in 1999 when we were Guests of Honour at World Horror, whose son was a Sandman fan and oh god, and then I click on this, and I get my nose rubbed hard and painfully in the fact that there are no Other People. It’s just us.

Friday fun in Network Theory

Via Backreaction, I just heard about this intriguing exploration in applied network theory:

For the first time, sociologists have mapped the romantic and sexual relationships of an entire high school over 18 months, providing evidence that these adolescent networks may be structured differently than researchers previously thought.

The results showed that, unlike many adult networks, there was no core group of very sexually active people at the high school. There were not many students who had many partners and who provided links to the rest of the community.

Instead, the romantic and sexual network at the school created long chains of connections that spread out through the community, with few places where students directly shared the same partners with each other. But they were indirectly linked, partner to partner to partner. One component of the network linked 288 students — more than half of those who were romantically active at the school — in one long chain.

Out of about 1,000 students in the school, the researchers interviewed 832 and found that slightly more than half reported having had sexual intercourse. They found 63 “simple pairs”, i.e., students whose only pairings were with each other. 189 students (35% of the romantically active population) belonged to networks containing three or fewer nodes. And then, if you want some really interesting topology,

The most striking feature of the network was a single component that connected 52 percent (288) of the romantically involved students at Jefferson. This means student A had relations with student B, who had relations with student C and so on, connecting all 288 of these students.

While this component is large, it has numerous short branches and is very broad – the two most distant individuals are 37 steps apart. (Or to use a currently popular term, there were 37 degrees of separation between the two most-distant students.)

“From a student’s perspective, a large chain like this would boggle the mind,” [Ohio State professor James] Moody said. “They might know that their partner had a previous partner. But they don’t think about the fact that this partner had a previous partner, who had a partner, and so on.

“What this shows, for the first time, is that there are many of these links in a chain, going far beyond what anyone could see and hold in their head.”

This caught my eye because I’ve actually dabbled in networks, studying protein structures using “motifs” — small sub-graphs with particular connection patterns whose preponderances we examine statistically. I’d be interested in getting the actual connection data, running them through MFinder and checking out their superfamily values.
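For the flavor of what a motif census involves, here is a toy stand-in for what tools like MFinder do at scale: enumerate three-node sub-graphs of an undirected graph and classify the connected ones as paths (two edges) or triangles (three edges).

```python
from itertools import combinations

def triad_census(edges):
    """Count connected three-node sub-graphs in an undirected graph:
    'path' (exactly two edges among the trio) versus 'triangle' (all
    three edges present).  Brute force -- fine for toy graphs only."""
    nodes = {n for e in edges for n in e}
    edgeset = {frozenset(e) for e in edges}
    counts = {"path": 0, "triangle": 0}
    for trio in combinations(sorted(nodes), 3):
        k = sum(frozenset(pair) in edgeset for pair in combinations(trio, 2))
        if k == 2:
            counts["path"] += 1
        elif k == 3:
            counts["triangle"] += 1
    return counts

census = triad_census([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(census)   # {'path': 2, 'triangle': 1}
```

A motif analysis then asks whether such counts are over- or under-represented relative to randomized graphs with the same degree sequence — which is where the statistics (and the romantic-network data) would come in.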