Category Archives: Plectics

As We Would Not Actually Think

There’s an aspect of Vannevar Bush’s “memex” that, I think, would still be difficult to achieve with current software, and that is its intensely personal character. The memex that his 1945 essay “As We May Think” imagined was to be “an enlarged intimate supplement to [the user’s] memory.”

A modern analogue would have to be something like a personal wiki, hybridized with a social-media platform. Every post you make is intended to be retrievable: cross-indexed, hyperlinked. Imagine that every time you posted to your Mastodon instance, the post were also added as a page to your own MediaWiki setup, and that you could share pages from your MediaWiki with just a few clicks, sending any set of them you wish to another Mastodon user. Instead of just sharing a news story, you could pull up every news story you ever shared, along with whatever comments you made about them, and all the ways you had decided to tag them.

It’s not beyond what software can do, but we haven’t generally worked toward what Vannevar Bush had in mind. There wasn’t supposed to be just one memex for everybody.

The bits and pieces are present, but there hasn’t been the drive to put them together into a readily usable package. We have software for sharing personal records and observations (social media), and we have platforms for making association trails (Wikipedia, TV Tropes and the like). But the memex that Bush envisioned was an individual possession that facilitated social exchanges. In slogan form: the memex was like building your own Wikipedia, with adjustable privacy settings, one blog or microblog post at a time.
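As a purely illustrative sketch (every name here is hypothetical; this is not the API of MediaWiki, Mastodon or any real software), the heart of such a system is just a tag-indexed store of posts, so that any old "association trail" can be pulled up on demand:

```python
from collections import defaultdict


class Memex:
    """Toy sketch of a memex-style store: every post is kept,
    tagged, and retrievable later by any of its tags."""

    def __init__(self):
        self.posts = []                  # every post, in chronological order
        self.by_tag = defaultdict(list)  # tag -> indices of posts so tagged

    def post(self, text, tags=()):
        """Record a post and cross-index it under each of its tags."""
        idx = len(self.posts)
        self.posts.append({"text": text, "tags": set(tags)})
        for tag in tags:
            self.by_tag[tag].append(idx)
        return idx

    def trail(self, tag):
        """Return every post ever filed under `tag`, comments and all."""
        return [self.posts[i] for i in self.by_tag[tag]]
```

Sharing a trail with another user would then just be a matter of serializing the list that `trail()` returns; the hard part, as the post above suggests, has always been the social plumbing, not the data structure.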

New Papers Dance

In spite of the “everything, etc.” that is life these days, I’ve managed to do a bit of science here and there, which has manifested as two papers. First, there’s the one about quantum physics, written with the QBism group at UMass Boston:

J. B. DeBrota, C. A. Fuchs and B. C. Stacey, “Symmetric Informationally Complete Measurements Identify the Essential Difference between Classical and Quantum” [arXiv:1805.08721].

We describe a general procedure for associating a minimal informationally-complete quantum measurement (or MIC) and a set of linearly independent post-measurement quantum states with a purely probabilistic representation of the Born Rule. Such representations are motivated by QBism, where the Born Rule is understood as a consistency condition between probabilities assigned to the outcomes of one experiment in terms of the probabilities assigned to the outcomes of other experiments. In this setting, the difference between quantum and classical physics is the way their physical assumptions augment bare probability theory: Classical physics corresponds to a trivial augmentation — one just applies the Law of Total Probability (LTP) between the scenarios — while quantum theory makes use of the Born Rule expressed in one or another of the forms of our general procedure. To mark the essential difference between quantum and classical, one should seek the representations that minimize the disparity between the expressions. We prove that the representation of the Born Rule obtained from a symmetric informationally-complete measurement (or SIC) minimizes this distinction in at least two senses—the first to do with unitarily invariant distance measures between the rules, and the second to do with available volume in a reference probability simplex (roughly speaking a new kind of uncertainty principle). Both of these arise from a significant majorization result. This work complements recent studies in quantum computation where the deviation of the Born Rule from the LTP is measured in terms of negativity of Wigner functions.
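To put the comparison in symbols (using the standard notation from the earlier QBism literature, not anything specific to this paper): if the reference measurement is a SIC on a d-dimensional system, with P(i) the probabilities for its outcomes and P(j|i) the conditional probabilities for the outcomes of a subsequent experiment, then the two rules being compared read

\[ q(j) = \sum_{i=1}^{d^2} P(i)\, P(j \mid i) \quad \text{(Law of Total Probability)}, \]

\[ q(j) = \sum_{i=1}^{d^2} \left[ (d+1)\, P(i) - \frac{1}{d} \right] P(j \mid i) \quad \text{(Born Rule, SIC representation)}. \]

The theorems in the paper make precise the sense in which this affine modification of the LTP is the smallest one that quantum theory permits.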

To get an overall picture of our results without diving into the theorem-proving, you can watch John DeBrota give a lecture about our work.

Second, there’s the more classical (in the physicist’s sense, if not the economist’s):

B. C. Stacey and Y. Bar-Yam, “The Stock Market Has Grown Unstable Since February 2018” [arXiv:1806.00529].

On the fifth of February, 2018, the Dow Jones Industrial Average dropped 1,175.21 points, the largest single-day fall in history in raw point terms. This followed a 666-point loss on the second, and another drop of over a thousand points occurred three days later. It is natural to ask whether these events indicate a transition to a new regime of market behavior, particularly given the dramatic fluctuations — both gains and losses — in the weeks since. To illuminate this matter, we can apply a model grounded in the science of complex systems, a model that demonstrated considerable success at unraveling the stock-market dynamics from the 1980s through the 2000s. By using large-scale comovement of stock prices as an early indicator of unhealthy market dynamics, this work found that abrupt drops in a certain parameter U provide an early warning of single-day panics and economic crises. Decreases in U indicate regimes of “high co-movement”, a market behavior that is not the same as volatility, though market volatility can be a component of co-movement. Applying the same analysis to stock-price data from the beginning of 2016 until now, we find that the U value for the period since 5 February is significantly lower than for the period before. This decrease entered the “danger zone” in the last week of May, 2018.

Multiscale Structure of More-than-Binary Variables

When I face a writing task, my two big failure modes are either not starting at all and dragging my feet indefinitely, or writing far too much and having to cut it down to size later. In the latter case, my problem isn’t just that I go off on tangents. I try to answer every conceivable objection, including those that only I would think of. As a result, I end up fighting a rhetorical battle that only I know about, and the prose that emerges is not just overlong, but arcane and obscure. Furthermore, if the existing literature on a subject is confusing to me, I write a lot in the course of figuring it out, and so I end up with great big expository globs that I feel obligated to include with my reporting on what I myself actually did. That’s why my PhD thesis set the length record for my department by a factor of about three.

I have been experimenting with writing scientific pieces that are deliberately bite-sized to begin with. The first such experiment that I presented to the world, “Sporadic SICs and the Normed Division Algebras,” was exactly two pages long in its original form. The version that appeared in a peer-reviewed journal was slightly longer; I added a paragraph of context and a few references.

My latest attempt at a mini-paper (articlet?) is based on a blog post from a few months back. I polished it up, added some mathematical details, and worked in a comparison with other research that was published since I posted that blog item. The result is still fairly short:

Multiscale Structure, Information Theory, Explosions

I’d like to talk a bit about using information theory to quantify the intuition that a complex system exhibits structure at multiple scales of organization. My friend and colleague Ben Allen wrote an introduction to this a while ago:

Ben’s blog post is a capsule introduction to this article that he and I wrote with Yaneer Bar-Yam:

I also cover this topic, as well as a fair bit of background on how to relate probability and information, in my PhD thesis:

In this post, I’ll carry the ideas laid out in these sources a little bit farther in a particular direction.

Multiscale Structure in Eco-Evolutionary Dynamics

I finally have my thesis in a shape that I feel like sharing. Yes, this took over three months after my committee gave their approval. Blame my desire to explain the background material, and the background to the background….

In a complex system, the individual components are neither so tightly coupled or correlated that they can all be treated as a single unit, nor so uncorrelated that they can be approximated as independent entities. Instead, patterns of interdependency lead to structure at multiple scales of organization. Evolution excels at producing such complex structures. In turn, the existence of these complex interrelationships within a biological system affects the evolutionary dynamics of that system. I present a mathematical formalism for multiscale structure, grounded in information theory, which makes these intuitions quantitative, and I show how dynamics defined in terms of population genetics or evolutionary game theory can lead to multiscale organization. For complex systems, “more is different,” and I address this from several perspectives. Spatial host–consumer models demonstrate the importance of the structures which can arise due to dynamical pattern formation. Evolutionary game theory reveals the novel effects which can result from multiplayer games, nonlinear payoffs and ecological stochasticity. Replicator dynamics in an environment with mesoscale structure relates to generalized conditionalization rules in probability theory.

The idea of natural selection “acting at multiple levels” has been mathematized in a variety of ways, not all of which are equivalent. We will face down the confusion, using the experience developed over the course of this thesis to clarify the situation.

(PDF, arXiv:1509.02958)

My Year in Publications

This is, apparently, a time for reflection. What have I been up to?

And so this is Korrasmas
Things have been Done
Kuvira is fallen
A new ‘ship just begun

Kor-ra-sa-mi
We all knew it
Kor-ra-sa-mi
now-ow-ow-owwwwwww

Well, other than watching cartoons?

At the very beginning of 2014, I posted a substantial revision of “Eco-Evolutionary Feedback in Host–Pathogen Spatial Dynamics,” which we first put online in 2011 (late in the lonesome October of my most immemorial year, etc.).

In January, Chris Fuchs and I finished up an edited lecture transcript, “Some Negative Remarks on Operational Approaches to Quantum Theory.” My next posting was a solo effort, “SIC-POVMs and Compatibility among Quantum States,” which made for a pretty good follow-on, and picked up a pleasantly decent number of scites.

Then, we stress-tested the arXiv.

By mid-September, Ben Allen, Yaneer Bar-Yam and I had completed “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems,” a work very long in the cooking.

Finally, I rang in December with “Von Neumann was Not a Quantum Bayesian,” which demonstrates conclusively that I can write 24 pages with 107 references in response to one sentence on Wikipedia.

Lacking Tonka

Dawkins claims that Hölldobler has “no truck with group selection”. Yet Wilson and Hölldobler (2005) propose, in the first sentence of their abstract, that “group selection is the strong binding force in eusocial evolution”. Later, Hölldobler (with Reeve) voiced support for the “trait-group selection and individual selection/inclusive fitness models are interconvertible” attitude. Hölldobler’s book with Wilson, The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies (2008), maintains this tone. Quoting from page 35:

It is important to keep in mind that mathematical gene-selectionist (inclusive fitness) models can be translated into multilevel selection models and vice versa. As Lee Dugatkin, Kern Reeve, and several others have demonstrated, the underlying mathematics is exactly the same; it merely takes the same cake and cuts it at different angles. Personal and kin components are distinguished in inclusive fitness theory; within-group and between-group components are distinguished in group selection theory. One can travel back and forth between these theories with the point of entry chosen according to the problem being addressed.

This is itself a curtailed perspective, whose validity is restricted to a narrow class of implementations of the “multilevel selection” idea. (Yeah, the terminology in this corner of science is rather confused, which doesn’t make talking about it easier.) Regardless, I cannot think of a way in which this can be construed as having “no truck with group selection”. The statement “method A is no better or worse than method B” is a far cry from “method A is worthless and only method B is genuinely scientific”.

If Dawkins has some personal information to which the published record is not privy, that’s fine, but even if that were the case, his statements could not be taken as a fair telling of the story.

EDIT TO ADD (21 November 2014): I forgot this 2010 solo-author perspective by Hölldobler, printed in Social Behaviour: Genes, Ecology and Evolution (T. Székely et al., eds). Quoting from page 127:

I was, and continue to be, intrigued by the universal observation that wherever social life in groups evolved on this planet, we encounter (with only a few exceptions) a striking correlation: the more tightly organized within-group cooperation and cohesion, the stronger the between-group discrimination and hostility. Ants, again, are excellent model systems for studying the transition from primitive eusocial systems, characterized by considerable within-group reproductive competition and conflict, and poorly developed reciprocal communication and cooperation, and little or no between-group competition, on one side, to the ultimate superorganisms (such as the gigantic colonies of the Atta leafcutter ants) with little or no within-group conflict, pronounced caste systems, elaborate division of labour, complex reciprocal communication, and intense between-group competition, on the other side (Hölldobler & Wilson 2008 [the book quoted above]).

And, a little while later, on p. 130:

In such advanced eusocial organisations the colony effectively becomes a main target of selection […] Selection therefore optimises caste demography, patterns of division of labour and communication systems at the colony level. For example, colonies that employ the most effective recruitment system to retrieve food, or that exhibit the most powerful colony defence against enemies and predators, will be able to raise the largest number of reproductive females and males each year and thus will have the greatest fitness within the population of colonies.

Google Scholar Irregularities

Google Scholar is definitely missing citations to my papers.

The cited-by results for “Some Negative Remarks on Operational Approaches to Quantum Theory” [arXiv:1401.7254] on Google Scholar and on INSPIRE are completely nonoverlapping. Google Scholar can tell that “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708] cites “Eco-Evolutionary Feedback in Host–Pathogen Spatial Dynamics” [arXiv:1110.3845] but not that it cites My Struggles with the Block Universe [arXiv:1405.2390]. Meanwhile, the SAO/NASA Astrophysics Data System catches both.

This would be a really petty thing to complain about, if people didn’t seemingly rely on such metrics.

EDIT TO ADD (17 November 2014): Google Scholar also misses that David Mermin cites MSwtBU in his “Why QBism is not the Copenhagen interpretation and what John Bell might have thought of it” [arXiv:1409.2454]. This may have something to do with Google Scholar being worse at detecting citations in footnotes than in endnotes.

Multiscale Structure via Information Theory

We have scienced:

B. Allen, B. C. Stacey and Y. Bar-Yam, “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708].

We develop a general formalism for representing and understanding structure in complex systems. In our view, structure is the totality of relationships among a system’s components, and these relationships can be quantified using information theory. In the interest of flexibility we allow information to be quantified using any function, including Shannon entropy and Kolmogorov complexity, that satisfies certain fundamental axioms. Using these axioms, we formalize the notion of a dependency among components, and show how a system’s structure is revealed in the amount of information assigned to each dependency. We explore quantitative indices that summarize system structure, providing a new formal basis for the complexity profile and introducing a new index, the “marginal utility of information”. Using simple examples, we show how these indices capture intuitive ideas about structure in a quantitative way. Our formalism also sheds light on a longstanding mystery: that the mutual information of three or more variables can be negative. We discuss applications to complex networks, gene regulation, the kinetic theory of fluids and multiscale cybernetic thermodynamics.
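As a small illustration of the negativity mentioned at the end of the abstract (this is a standard textbook example, not code from the paper): for three bits where Z = X XOR Y, each pair of variables is independent, yet any two together determine the third, and the three-way mutual information comes out to −1 bit.

```python
from itertools import product
from math import log2


def entropy(joint, coords):
    """Shannon entropy (bits) of the marginal of `joint` over `coords`."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in coords)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)


# Joint distribution of (X, Y, Z): X, Y fair coins, Z = X XOR Y.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}


def H(*coords):
    return entropy(joint, coords)


# Inclusion-exclusion form of the three-variable mutual information:
tri_info = H(0) + H(1) + H(2) - H(0, 1) - H(0, 2) - H(1, 2) + H(0, 1, 2)
# tri_info comes out to -1.0: the "shared" information at the triple
# overlap is negative, because the dependency only exists at scale 2.
```

In the language of the paper, the XOR system has all its structure in pairwise-and-higher dependencies and none in any single component, which is exactly the kind of situation a multiscale formalism has to accommodate.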

There’s much more to do, but for the moment, let this indicate my mood:

Delayed Gratification

A post today by PZ Myers nicely expresses something which has been frustrating me about people who, in arguing over what can be a legitimate subject of “scientific” study, play the “untestable claim” card.

Their ideal is the experiment that, in one session, shoots down a claim cleanly and neatly. So let’s bring in dowsers who claim to be able to detect water flowing underground, set up control pipes and water-filled pipes, run them through their paces, and see if they meet reasonable statistical criteria. That’s science, it works, it effectively addresses an individual’s very specific claim, and I’m not saying that’s wrong; that’s a perfectly legitimate scientific experiment.

I’m saying that’s not the whole operating paradigm of all of science.

Plenty of scientific ideas are not immediately testable, or directly testable, or testable in isolation. For example: the planets in our solar system aren’t moving the way Newton’s laws say they should. Are Newton’s laws of gravity wrong, or are there other gravitational influences which satisfy the Newtonian equations but which we don’t know about? Once, it turned out to be the latter (the discovery of Neptune), and once, it turned out to be the former (the precession of Mercury’s orbit, which required Einstein’s general relativity to explain).

There are different mathematical formulations of the same subject which give the same predictions for the outcomes of experiments, but which suggest different new ideas for directions to explore. (E.g., Newtonian, Lagrangian and Hamiltonian mechanics; or density matrices and SIC-POVMs.) There are ideas which are proposed for good reason but hang around for decades awaiting a direct experimental test—perhaps one which could barely have been imagined when the idea first came up. Take directed percolation: a simple conceptual model for fluid flow through a randomized porous medium. It was first proposed in 1957. The mathematics necessary to treat it cleverly was invented (or, rather, adapted from a different area of physics) in the 1970s…and then forgotten…and then rediscovered by somebody else…connections with other subjects were made… Experiments were carried out on systems which almost behaved like the idealization, but always turned out to differ in some way… until 2007, when the behaviour was finally caught in the wild. And the experiment which finally observed a directed-percolation-class phase transition with quantitative exactness used a liquid crystal substance which wasn’t synthesized until 1969.
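For concreteness, here is a minimal simulation sketch of the directed-percolation idea (my own illustration, not any published code): active sites on a ring each try to activate their two downstream neighbours with probability p, and once all activity dies out, the system is stuck in the absorbing state forever.

```python
import random


def simulate_dp(p, n_sites=200, n_steps=200, seed=42):
    """Minimal 1+1-dimensional bond directed percolation sketch.

    Each active site independently tries to activate each of its two
    downstream neighbours with probability p at every time step.
    Returns the final density of active sites.
    """
    rng = random.Random(seed)
    active = [True] * n_sites  # start from a fully active lattice
    for _ in range(n_steps):
        new = [False] * n_sites
        for i, is_active in enumerate(active):
            if not is_active:
                continue
            for j in (i, (i + 1) % n_sites):  # the two downstream bonds
                if rng.random() < p:
                    new[j] = True
        active = new
        if not any(active):
            break  # absorbing state reached: no fluctuation can revive it
    return sum(active) / n_sites
```

Below a critical probability (around p ≈ 0.64 for this bond geometry), the activity always dies into the absorbing state; above it, a finite density survives. Locating that transition precisely in a real experiment is exactly what took until 2007.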

You don’t need to go dashing off to quantum gravity to find examples of ideas which are hard to test in the laboratory, or where mathematics long preceded experiment. (And if you do, don’t forget the other applications being developed for the mathematics invented in that search.) Just think very hard about the water dripping through coffee grounds to make your breakfast.

Modern Evolutionary Theory Reading List

The following is a selection of interesting papers on the theory of evolutionary dynamics. One issue addressed is that of “levels of selection” in biological evolution. I have tried to arrange them in an order such that the earlier ones provide a good context for the ones listed later.

I’ve met, corresponded with and in a couple cases collaborated with authors of these papers, but I’ve had no input on writing or peer-reviewing any of them.

Currently Reading

T. Biancalani, D. Fanelli and F. Di Patti (2010), “Stochastic Turing patterns in the Brusselator model,” Physical Review E 81, 4: 046215, arXiv:0910.4984 [cond-mat.stat-mech].

Abstract:

A stochastic version of the Brusselator model is proposed and studied via the system size expansion. The mean-field equations are derived and shown to yield to organized Turing patterns within a specific parameters region. When determining the Turing condition for instability, we pay particular attention to the role of cross-diffusive terms, often neglected in the heuristic derivation of reaction-diffusion schemes. Stochastic fluctuations are shown to give rise to spatially ordered solutions, sharing the same quantitative characteristic of the mean-field based Turing scenario, in term of excited wavelengths. Interestingly, the region of parameter yielding to the stochastic self-organization is wider than that determined via the conventional Turing approach, suggesting that the condition for spatial order to appear can be less stringent than customarily believed.

See also the commentary by Mehran Kardar.

Currently Reading

A. Franceschini et al. (2011), “Transverse Alignment of Fibers in a Periodically Sheared Suspension: An Absorbing Phase Transition with a Slowly Varying Control Parameter,” Physical Review Letters 107, 25: 250603. DOI: 10.1103/PhysRevLett.107.250603.

Abstract: Shearing solutions of fibers or polymers tends to align fiber or polymers in the flow direction. Here, non-Brownian rods subjected to oscillatory shear align perpendicular to the flow while the system undergoes a nonequilibrium absorbing phase transition. The slow alignment of the fibers can drive the system through the critical point and thus promote the transition to an absorbing state. This picture is confirmed by a universal scaling relation that collapses the data with critical exponents that are consistent with conserved directed percolation.

Adaptive Networks

In network science, one can study the dynamics of a network — nodes being added or removed, edges being rewired — or the dynamics on the network — spins flipping from up to down in an Ising model, traffic flow along subway routes, an infection spreading through a susceptible population, etc. These have often been studied separately, on the rationale that they occur at different timescales. For example, the traffic load on the different lines of the Boston subway network changes on an hourly basis, but the plans to extend the Green Line into Medford have been deliberated since World War II.

In the past few years, increasing attention has been focused on adaptive networks, in which the dynamics of and the dynamics on can occur at comparable timescales and feed back on one another. Useful references:

Of Predators and Pomerons

Consider the Lagrangian density

\[ \mathcal{L}(\tilde{\phi},\phi) = \tilde{\phi}\left(\partial_t + D_A(r_A - \nabla^2)\right)\phi - u\tilde{\phi}(\tilde{\phi} - \phi)\phi + \tau \tilde{\phi}^2\phi^2. \]

Particle physicists of the 1970s would recognize this as the Lagrangian for a Reggeon field theory with triple- and quadruple-Pomeron interaction vertices. In the modern literature on theoretical ecology, it encodes the behaviour of a spatially distributed predator-prey system near the predator extinction threshold.

Such is the perplexing unity of mathematical science: formula X appears in widely separated fields A and Z. Sometimes, this is a sign that a common effect is at work in the phenomena of A and those of Z; or, it could just mean that scientists couldn’t think of anything new and kept doing whatever worked the first time. Wisdom lies in knowing which is the case on any particular day.

[Reposted from the archives, in the light of John Baez’s recent writings.]