Category Archives: Information Theory

New Paper Dance

M. Appleby, C. A. Fuchs, B. C. Stacey and H. Zhu, “Introducing the Qplex: A Novel Arena for Quantum Theory,” arXiv:1612.03234 [quant-ph] (2016).

Abstract:

We reconstruct quantum theory starting from the premise that, as Asher Peres remarked, “Unperformed experiments have no results.” The tools of modern quantum information theory, and in particular the symmetric informationally complete (SIC) measurements, provide a concise expression of how exactly Peres’s dictum holds true. That expression is a constraint on how the probability distributions for outcomes of different, mutually exclusive experiments mesh together, a type of constraint not foreseen in classical thinking. Taking this as our foundational principle, we show how to reconstruct the formalism of quantum theory in finite-dimensional Hilbert spaces. Along the way, we derive a condition for the existence of a $d$-dimensional SIC.

Also available through SciRate, where I have a whole profile.

New Paper Dance

B. C. Stacey, “Geometric and Information-Theoretic Properties of the Hoggar Lines” (2016), arXiv:1609.03075 [quant-ph].

We take a tour of a set of equiangular lines in eight-dimensional Hilbert space. This structure defines an informationally complete measurement, that is, a way to represent all quantum states of three-qubit systems as probability distributions. Investigating the shape of this representation of state space yields a pattern of connections among a remarkable spread of mathematical constructions. In particular, studying the Shannon entropy of probabilistic representations of quantum states leads to an intriguing link between the questions of real and of complex equiangular lines. Furthermore, we will find relations between quantum information theory and mathematical topics like octonionic integers and the 28 bitangents to a quartic curve.
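
For readers meeting this for the first time: a SIC in dimension d is a set of d² equiangular lines, and a quantum state ρ can be recovered from its SIC probabilities via ρ = Σi [(d+1)p(i) − 1/d] Πi. The paper's arena is the Hoggar SIC in dimension 8; the sketch below instead uses the familiar dimension-2 tetrahedron (my own toy example, not anything from the paper) to show the representation at work:

import numpy as np

# Qubit SIC: four Bloch vectors at the corners of a regular tetrahedron.
bloch = np.array([[ 1,  1,  1],
                  [ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]]) / np.sqrt(3)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(n):
    # Rank-1 projector (I + n.sigma)/2 for a unit Bloch vector n.
    return 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)

Pi = [projector(n) for n in bloch]

# Equiangularity: Tr(Pi_i Pi_j) = 1/(d+1) = 1/3 for all i != j.
for i in range(4):
    for j in range(i + 1, 4):
        assert np.isclose(np.trace(Pi[i] @ Pi[j]).real, 1 / 3)

# A random qubit state, represented as a SIC probability vector...
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho).real

d = 2
p = np.array([np.trace(rho @ P).real / d for P in Pi])  # sums to 1

# ...and recovered from it: rho = sum_i [(d+1) p(i) - 1/d] Pi_i.
rho_back = sum(((d + 1) * pi - 1 / d) * P for pi, P in zip(p, Pi))
assert np.allclose(rho, rho_back)

Nothing here is special to d = 2 except the explicit tetrahedron; the reconstruction formula is the same in any dimension where a SIC exists.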

All things told, SIC-POVMs are just about the damnedest things I’ve ever studied in mathematics.

(Also listed on SciRate.)

Multiscale Structure via Information Theory

We have scienced:

B. Allen, B. C. Stacey and Y. Bar-Yam, “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708].

We develop a general formalism for representing and understanding structure in complex systems. In our view, structure is the totality of relationships among a system’s components, and these relationships can be quantified using information theory. In the interest of flexibility we allow information to be quantified using any function, including Shannon entropy and Kolmogorov complexity, that satisfies certain fundamental axioms. Using these axioms, we formalize the notion of a dependency among components, and show how a system’s structure is revealed in the amount of information assigned to each dependency. We explore quantitative indices that summarize system structure, providing a new formal basis for the complexity profile and introducing a new index, the “marginal utility of information”. Using simple examples, we show how these indices capture intuitive ideas about structure in a quantitative way. Our formalism also sheds light on a longstanding mystery: that the mutual information of three or more variables can be negative. We discuss applications to complex networks, gene regulation, the kinetic theory of fluids and multiscale cybernetic thermodynamics.
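
On that last "longstanding mystery": the canonical minimal example is a pair of independent fair coins together with their XOR, for which the three-variable mutual information works out to −1 bit. A quick Python check (my illustration, not an example from the paper):

import numpy as np
from itertools import product

# X and Y are independent fair coins; Z = X XOR Y.
joint = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

def H(indices):
    # Shannon entropy (bits) of the marginal on the given coordinates.
    marginal = {}
    for outcome, prob in joint.items():
        key = tuple(outcome[i] for i in indices)
        marginal[key] = marginal.get(key, 0.0) + prob
    return -sum(q * np.log2(q) for q in marginal.values() if q > 0)

# Inclusion-exclusion form of the tripartite mutual information:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ).
I3 = (H([0]) + H([1]) + H([2])
      - H([0, 1]) - H([0, 2]) - H([1, 2])
      + H([0, 1, 2]))
print(I3)  # -1.0: every pair is independent, yet the triple is not

Equivalently, I(X;Y;Z) = I(X;Y) − I(X;Y|Z) = 0 − 1: learning Z makes the other two variables informative about each other.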

There’s much more to do, but for the moment, let this indicate my mood:

#WhyTheQuantum

One day, I’ll be able to explain the story behind how I got into this, but looking back on all the oddities of it, I’m not sure that a medium other than manga could do it justice.

My Struggles with the Block Universe [arXiv:1405.2390]

Christopher A. Fuchs, Maximilian Schlosshauer (foreword), Blake C. Stacey (editor)

This document is the second installment of three in the Cerro Grande Fire Series. Like its predecessor arXiv:quant-ph/0105039, “Notes on a Paulian Idea,” it is a collection of letters written to various friends and colleagues, most of whom regularly circuit this archive. The unifying theme of all the letters is that each has something to do with the quantum. Particularly, the collection chronicles the emergence of Quantum Bayesianism as a robust view of quantum theory, eventually evolving into the still-more-radical “QBism” (with the B standing for no particular designation anymore), as it took its most distinctive turn away from various Copenhagen Interpretations. Included are many anecdotes from the history of quantum information theory: for instance, the story of the origin of the terms “qubit” and “quantum information” from their originator’s own mouth, a copy of a rejection letter written by E. T. Jaynes for one of Rolf Landauer’s original erasure-cost principle papers, and much more. Specialized indices are devoted to historical, technical, and philosophical matters. More roundly, the document is an attempt to provide an essential ingredient, unavailable anywhere else, for turning QBism into a live option within the vast spectrum of quantum foundational thought.

As the comment field says, “CAUTION, do not unthinkingly print from a printer: 2,348 pages, 4 indices, 6 figures, with extensive hyperlinking.”

MSwtBU was originally submitted to the arXiv on 10 May 2014, the anniversary of the predecessor volume and before that of the Cerro Grande Fire, which started the whole business. To my knowledge, it is the longest item currently on the arXiv.

omg 2000+ pages. There goes my free time.
— Dave Bacon, via Twitter

Currently Reading

Xiaotie Deng and Li-Sha Huang (2006), "On the complexity of market equilibria with maximum social welfare," Information Processing Letters 97, 1: 4–11 [DOI] [PDF].

We consider the computational complexity of the market equilibrium problem by exploring the structural properties of the Leontief exchange economy. We prove that, for economies guaranteed to have a market equilibrium, finding one with maximum social welfare or maximum individual welfare is NP-hard. In addition, we prove that counting the number of equilibrium prices is #P-hard.

Found by citation-hopping from here.

Currently Reading

Juan A. Bonachela, Haye Hinrichsen and Miguel A. Muñoz, "Entropy estimates of small data sets," J. Phys. A: Math. Theor. 41 (2008). arXiv:0804.4561.

Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new “balanced estimator” for entropy functionals (Shannon, Rényi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator out-performs other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad specially for biological and digital data series.

As an exercise, discuss the relation of this approach to the coincidence-based methods of Ma, Bialas et al.
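
To get a feel for the bias problem the paper is attacking, here is a small Python comparison of the naive plug-in estimator against the standard Miller-Madow correction on short samples; these are the usual baselines, not the paper's balanced estimator.

import numpy as np

rng = np.random.default_rng(42)
K = 10                     # alphabet size
H_true = np.log2(K)        # uniform source: H = log2(10) ≈ 3.32 bits

def plugin_entropy(counts):
    # Naive estimator: plug empirical frequencies into Shannon's formula.
    freqs = counts / counts.sum()
    nz = freqs[freqs > 0]
    return -np.sum(nz * np.log2(nz))

def miller_madow(counts):
    # First-order bias correction: add (K_observed - 1)/(2 N ln 2) bits.
    N = counts.sum()
    K_obs = np.count_nonzero(counts)
    return plugin_entropy(counts) + (K_obs - 1) / (2 * N * np.log(2))

N = 20  # a deliberately short series: only twice the alphabet size
samples = [rng.multinomial(N, np.full(K, 1 / K)) for _ in range(10_000)]
print("true:        ", H_true)
print("plug-in mean:", np.mean([plugin_entropy(c) for c in samples]))
print("Miller-Madow:", np.mean([miller_madow(c) for c in samples]))

The plug-in average comes out systematically low; the correction recovers most, though not all, of the gap.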

Categorical Information Theory

Ben Allen is now on the arXivotubes, with a category-theoretic arithmetic of information.

The concept of information has found application across the sciences. However, the conventional measures of information are not appropriate for all situations, and a more general mathematical concept is needed. In this work we give axioms that characterize the arithmetic of information, i.e. the way that pieces of information combine with each other. These axioms allow for a general notion of information functions, which quantify the information transmitted by a communication system. In our formalism, communication systems are represented as category-theoretic morphisms between information sources and destinations. Our framework encompasses discrete, continuous, and quantum information measures, as well as familiar mathematical functions that are not usually associated with information. We discuss these examples and prove basic results on the general behavior of information.

It looks like a discussion about this is starting over at the n-Category Café. If I didn’t have to spend today cutting down a 12-page paper to eight pages for an overpriced book of conference proceedings which nobody will read, I’d totally be writing more about it!

Aaronson on Turing, Gödel and Searle

Computer-science guru Scott Aaronson has begun to put lecture notes for his “Great Ideas In Theoretical Computer Science” course online. (That’s 6.080 in MIT-speak: words are for the weak.) Prepared by the students and edited by the instructors, the notes are terse but clear and enjoyable. I particularly like lecture 6, “Minds and Machines,” of which the following is a taste:

We already established that no Turing machine exists that can solve the halting problem. If we had a proof system that could analyze any Turing machine and prove whether it halted or did not halt, we could use this system to solve the halting problem by brute force (also known as the “British Museum algorithm”) by trying every possible string that might be a proof. You would either terminate and find a proof that it halts or terminate and find a proof that it doesn’t halt. If this proof system were sound and complete, it would violate the unsolvability of the halting problem. Therefore, there is no such sound and complete proof system.
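
As a sketch in Python of the brute-force search (is_valid_proof stands for the hypothetical sound-and-complete proof checker; the argument's conclusion is that no such checker can exist, so it appears here only as a stub):

from itertools import count, product

def is_valid_proof(candidate: str, statement: str) -> bool:
    # Checker for a hypothetical sound and complete proof system.
    # The argument shows no such system exists, hence the stub.
    raise NotImplementedError

def all_strings(alphabet="01"):
    # Enumerate every finite string, shortest first ("British Museum").
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def decide_halting(machine: str) -> bool:
    # If is_valid_proof existed, this would solve the halting problem:
    # try every candidate string until one settles the question.
    for candidate in all_strings():
        if is_valid_proof(candidate, f"{machine} halts"):
            return True
        if is_valid_proof(candidate, f"{machine} does not halt"):
            return False

Soundness makes any proof the search finds trustworthy; completeness guarantees the search terminates. Together they would make decide_halting a halting decider, which is impossible.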


Shorter Uncommon Descent

Tyler DiPietro finds a fresh example of steaming creationist nonsense at the weblog of Dembski and sycophants, Uncommon Descent. I summarize for the busy reader:

Because running an image of a face through a software filter makes it look less like a face, Darwin was a fuddy-duddy and materialism is on its last legs.

The cream on the cocoa is, however, this bit, from a commenter:

I’m not sure about the CSI [complex specified information] decreasing, however. Anyone able to get values of the CSI for each picture?

Gee, given that “complex specified information” has never been consistently defined, I wonder how one could compute it.

Currently Reading

Oliver Johnson and Christophe Vignat (2006), "Some results concerning maximum Rényi entropy distributions."

We consider the Student-t and Student-r distributions, which maximise Rényi entropy under a covariance condition. We show that they have information-theoretic properties which mirror those of the Gaussian distributions, which maximise Shannon entropy under the same condition. We introduce a convolution which preserves the Rényi maximising family, and show that the Rényi maximisers are the case of equality in a version of the Entropy Power Inequality. Further, we show that the Rényi maximisers satisfy a version of the heat equation, motivating the definition of a generalized Fisher information.

Luciano da F. Costa, Francisco A. Rodrigues, Gonzalo Travieso and P. R. Villas Boas (2006), "Characterization of complex networks: A survey of measurements."

Info Theory and DNA/Evolution

We’re reviewing the IEEE papers.

After reviewing the first batch of papers, we came up with some questions to answer in the future, in order of difficulty/open-ness:

1. Given the robustness of a code (e.g., due to a many-to-one codon->AA mapping), can we calculate bounds on the channel capacity of such a code? How does the empirical R (information transmission rate) of the codon->AA code compare with the empirical C (channel capacity, e.g., estimated from mutation rates)? (A toy capacity calculation appears after this list.)

2. How does the length/entropy/complexity of the code-bits (e.g., parity bits and other non-message bits used for error correction) relate to the complexity of the error-correcting task, and to the entropy and length of the data-bit sections (the actual message being sent), given the requirement R ≤ C?

3. Is the von Neumann entropy,
[tex]H = -{\rm Tr\,}(\rho\log\rho) = -\sum_i \lambda_i\log\lambda_i, [/tex]
where {λi} are the eigenvalues of the density matrix ρ, useful for discussing network robustness? (There’s a paper where they use Kolmogorov-Sinai/Shannon entropy to do this, which Blake has somewhere…) If so, can we apply this to a genetic-regulatory network, and tie the error-correcting or homeostatic abilities of such a network to the VNE or other network metrics?
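
Two of the quantities above are easy to play with numerically. Here is a minimal Python sketch, under two loudly-flagged assumptions: treating point mutation as a binary symmetric channel is only a toy stand-in for the "capacity from mutation rates" of question 1, and using the trace-normalized graph Laplacian as a density matrix is one convention from the network-entropy literature, not something fixed by the discussion above.

import numpy as np

def bsc_capacity(p):
    # Capacity (bits per channel use) of a binary symmetric channel
    # with flip probability p: C = 1 - H2(p).
    if p == 0.0 or p == 1.0:
        return 1.0
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h2

def von_neumann_entropy(rho, base=2.0):
    # H(rho) = -Tr(rho log rho) = -sum_i lambda_i log lambda_i,
    # computed from the eigenvalues of the trace-one matrix rho.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log(evals)) / np.log(base))

# Toy "mutation channel": one error per 10^4 bases.
print(bsc_capacity(1e-4))  # about 0.9985 bits per base

# Toy network: trace-normalized Laplacian of a four-node graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # adjacency matrix
L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian
print(von_neumann_entropy(L / np.trace(L)))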

Next week:

We meet Monday at BU for group theory. Ben will be discussing SU(2) and SO(3) from Artin. Friday, Blake will present on the symmetry-breaking-in-genetics paper, and possibly on the information-theoretic considerations for BLAST.

BLAKE SEZ: I believe the paper which caught my eye was L. Demetrius and T. Manke (15 February 2005), "Robustness and network evolution — an entropic principle," Physica A 346 (3–4): 682–96.

Entropy for Non-Majors

Every once in a while (well, actually, pretty frequently) I see a post out there in the Blagopelago which makes me feel bad about ranting so much and discussing science so little. Today’s entry in this category is Jacques Distler’s treatment of Boltzmann entropy. He explains his motivation as follows:

This semester, I’ve been teaching a Physics for non-Science majors (mostly Business School students) class.

Towards the end of the semester, we turned to Thermodynamics and, in particular, the subject of Entropy. The textbook had a discussion of ideal gases and of heat engines and whatnot. But, somewhere along the line, they made a totally mysterious leap to Boltzmann’s definition of Entropy. As important as Boltzmann’s insight is, it was presented in a fashion totally disconnected from Thermodynamics, or anything else that came before.

So, equipped with the Ideal Gas Law, and a little baby kinetic theory, I decided to see if I could present the argument leading to Boltzmann’s definition.
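
One standard version of that bridge (a sketch of the usual textbook argument, not necessarily Distler's exact route) uses the isothermal doubling of a gas's volume. The ideal gas law gives the thermodynamic entropy change

[tex]\Delta S = \int \frac{\delta Q}{T} = N k_B \ln \frac{2V}{V} = N k_B \ln 2, [/tex]

while counting configurations, each of the N molecules independently has twice as much room to be in, so the number of microstates grows by a factor of 2^N, and

[tex]k_B \ln \frac{W_2}{W_1} = k_B \ln 2^N = N k_B \ln 2. [/tex]

The two answers agree exactly when S = k_B ln W, which is Boltzmann's definition.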


A Lower Bound

This is how Carl Sagan begins the introduction to The Varieties of Scientific Experience.

In these lectures I would like, following the wording of the Gifford Trust, to tell you something of my views on what at least used to be called natural theology, which, as I understand it, is everything about the world not supplied by revelation. This is a very large subject, and I will necessarily have to pick and choose topics. I want to stress that what I will be saying are my own personal views on this boundary area between science and religion. The amount that has been written on the subject is enormous, certainly more than 10 million pages, or roughly 10¹¹ bits of information. That’s a very low lower limit. And nevertheless no one can claim to have read even a tiny fraction of that body of literature or even a representative fraction. So it is only in the hope that much that has been written is unnecessary to be read that one can approach the subject at all.

This arithmetic does, I think, shed an interesting light on the Courtier’s Reply.

It’s rather common practice in some domains of the Blagnet to list one’s current mood or the music to which one is currently listening. I don’t have a handy collection of mood-marking icons (and if I did, I’d break them, because I’m a proud iconoclast), but I should note that my stereo is currently playing Infected Mushroom’s 2003 album Converting Vegetarians. Stored in MP3 format, the two discs of this album occupy 239,317,690 bytes of hard-drive real estate. That’s roughly 1.9 × 10⁹ bits, about one fiftieth of Sagan’s estimate for all theological writing, for two and a half hours of music. So, one hundred CDs of psy-trance (which could in principle include Goa and Suomisaundi) would take us into the regime of natural theology, content-wise.
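
Just to check the arithmetic in Python:

album_bytes = 239_317_690          # two discs of Converting Vegetarians
album_bits = album_bytes * 8       # 1,914,541,520 ≈ 1.9e9 bits
theology_bits = 1e11               # Sagan's lower limit on the literature

print(theology_bits / album_bits)  # ≈ 52 two-disc albums, i.e. ~100 discs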

The Psychedelic Mind Expander lists 636 different CDs released during 2006 alone, lumping together the Ambient, Breaks, Drum & Bass, Goa, Progressive, Psychedelic, Techno and Trance sub-genres (I get the feeling nobody else knows how to assign these labels, either — has anybody actually conducted a blind discrimination test between Drum & Bass and Neurofunk?).

It’s no wonder I have a hard time keeping up with divinity studies. Good thing we have theologians like Ishkur to organize the information for us.