New Paper Dance, Encore

This time, it’s another solo-author outing.

B. C. Stacey, “Is the SIC Outcome There When Nobody Looks?” [arXiv:1807.07194].

Informationally complete measurements are a dramatic discovery of quantum information science, and the symmetric IC measurements, known as SICs, are in many ways optimal among them. Close study of three of the “sporadic SICs” reveals an illuminating relation between different ways of quantifying the extent to which quantum theory deviates from classical expectations.
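
For those who haven’t run into the term before, a quick gloss in my own words (a paraphrase, not a quotation from the paper): a SIC in dimension $d$ is a set of $d^2$ unit vectors $|\psi_i\rangle$ in $\mathbb{C}^d$ that are equiangular,

$|\langle \psi_i | \psi_j \rangle|^2 = \frac{1}{d+1} \quad \text{for } i \neq j.$

Rescaling the projectors onto these vectors by $1/d$ gives a set of operators $E_i = \frac{1}{d} |\psi_i\rangle\langle\psi_i|$ that sum to the identity, i.e., a valid quantum measurement, and one whose outcome probabilities pin down the quantum state uniquely. That last property is what “informationally complete” means.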

New Papers Dance

In spite of the “everything, etc.” that is life these days, I’ve managed to do a bit of science here and there, which has manifested as two papers. First, there’s the one about quantum physics, written with the QBism group at UMass Boston:

J. B. DeBrota, C. A. Fuchs and B. C. Stacey, “Symmetric Informationally Complete Measurements Identify the Essential Difference between Classical and Quantum” [arXiv:1805.08721].

We describe a general procedure for associating a minimal informationally-complete quantum measurement (or MIC) and a set of linearly independent post-measurement quantum states with a purely probabilistic representation of the Born Rule. Such representations are motivated by QBism, where the Born Rule is understood as a consistency condition between probabilities assigned to the outcomes of one experiment in terms of the probabilities assigned to the outcomes of other experiments. In this setting, the difference between quantum and classical physics is the way their physical assumptions augment bare probability theory: Classical physics corresponds to a trivial augmentation — one just applies the Law of Total Probability (LTP) between the scenarios — while quantum theory makes use of the Born Rule expressed in one or another of the forms of our general procedure. To mark the essential difference between quantum and classical, one should seek the representations that minimize the disparity between the expressions. We prove that the representation of the Born Rule obtained from a symmetric informationally-complete measurement (or SIC) minimizes this distinction in at least two senses—the first to do with unitarily invariant distance measures between the rules, and the second to do with available volume in a reference probability simplex (roughly speaking a new kind of uncertainty principle). Both of these arise from a significant majorization result. This work complements recent studies in quantum computation where the deviation of the Born Rule from the LTP is measured in terms of negativity of Wigner functions.
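
To give a taste of what “minimizing the disparity between the expressions” means, here is the comparison in condensed form (standard QBist notation written from memory, not an excerpt from the paper). Let $p(i)$ be the probabilities an agent assigns to the $d^2$ outcomes of a reference SIC measurement, and let $r(j|i)$ be the probabilities for the outcomes of some other experiment, conditional on SIC outcome $i$. If the SIC measurement is actually performed in between, the Law of Total Probability applies:

$q(j) = \sum_{i=1}^{d^2} p(i)\, r(j|i).$

If it is not performed, the Born Rule in its SIC representation says instead

$q(j) = \sum_{i=1}^{d^2} \left[ (d+1)\, p(i) - \frac{1}{d} \right] r(j|i).$

The quantum rule is the classical one with an affine distortion of the reference probabilities, and the theorems in the paper concern why, among all MICs, a SIC makes that distortion as gentle as possible.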

To get an overall picture of our results without diving into the theorem-proving, you can watch John DeBrota give a lecture about our work.

Second, there’s the more classical (in the physicist’s sense, if not the economist’s):

B. C. Stacey and Y. Bar-Yam, “The Stock Market Has Grown Unstable Since February 2018” [arXiv:1806.00529].

On the fifth of February, 2018, the Dow Jones Industrial Average dropped 1,175.21 points, the largest single-day fall in history in raw point terms. This followed a 666-point loss on the second, and another drop of over a thousand points occurred three days later. It is natural to ask whether these events indicate a transition to a new regime of market behavior, particularly given the dramatic fluctuations — both gains and losses — in the weeks since. To illuminate this matter, we can apply a model grounded in the science of complex systems, a model that demonstrated considerable success at unraveling the stock-market dynamics from the 1980s through the 2000s. By using large-scale comovement of stock prices as an early indicator of unhealthy market dynamics, this work found that abrupt drops in a certain parameter U provide an early warning of single-day panics and economic crises. Decreases in U indicate regimes of “high co-movement”, a market behavior that is not the same as volatility, though market volatility can be a component of co-movement. Applying the same analysis to stock-price data from the beginning of 2016 until now, we find that the U value for the period since 5 February is significantly lower than for the period before. This decrease entered the “danger zone” in the last week of May, 2018.
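
The parameter $U$ comes from fitting a collective-behavior model to the daily distribution of stock co-movement, a procedure I won’t try to reproduce here. But the raw ingredient is easy to state: each day, ask what fraction of stocks moved in the same direction. A toy version in Python (my own simplification for illustration, not the fitting procedure from the paper):

    import numpy as np

    def daily_comovement(returns):
        """Fraction of stocks moving with the majority each day.

        returns: array of shape (n_days, n_stocks) of daily returns.
        This is only a crude proxy for the co-movement statistic the
        paper models; values near 0.5 mean stocks move independently,
        while sustained values near 1.0 mean the market moves as one,
        the "high co-movement" regime.
        """
        frac_up = (returns > 0).mean(axis=1)
        return np.maximum(frac_up, 1.0 - frac_up)

    # Toy usage, with random noise standing in for real price data:
    rng = np.random.default_rng(seed=42)
    fake_returns = rng.normal(loc=0.0, scale=0.01, size=(250, 500))
    print(daily_comovement(fake_returns).mean())  # close to 0.5 for independent stocks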

Recent Advances in Packing

The weekend before last, I overcame my reluctance to travel and went to a mathematics conference, the American Mathematical Society’s Spring Central Sectional Meeting. I gave a talk in the “Recent Advances in Packing” session, spreading the word about SICs. My talk followed those by Steve Flammia and Marcus Appleby, who spoke about the main family of known SIC solutions while I covered the rest (the sporadic SICs). The co-organizer of that session, Dustin Mixon, has posted an overall summary and the speakers’ slides over at his blog.

No, That Viral Video Does Not Contain the Mathematical Secret of Reality

This is what I get for skimming an entertainment website for a momentary diversion.

So, everybody’s seen the cool new video, “‘Cantina Theme’ played by a pencil and a girl with too much time on her hands,” right?

And we’ve heard the claim, via Mashable and thence The AV Club, that the formula “can actually be used to determine the speed of light,” yes?

It’s a joke. The “proof” is words thrown into a box and filled with numbers so that nobody reads it too carefully. The algebra isn’t even right — hell, it does FOIL wrong — but that’s just a detail. I tried to think of a way to use it as a hook to explain some real science, as I’ve tried before upon occasion, but there just wasn’t any there there. The whole thing is goofing off.

Obvious goofing off, I would have thought. Somewhere south of a Star Trek: Voyager technobabble speech. But no, never underestimate the ability of numbers to make a brain shut down.

Two Recent Items Concerning Wikipedia

A few years ago, I found a sentence in a Wikipedia page that irritated me so much, I wrote a 25-page article about it. Eventually, I got that article published in the Philosophical Transactions of the Royal Society. On account of all this, friends and colleagues sometimes send me news about Wikipedia, or point me to strange things they’ve found there. A couple such items have recently led me to Have Thoughts, which I share below.

This op-ed on the incomprehensibility of Wikipedia science articles puts a finger on a real problem, but its attempt at explanation assumes malice rather than incompetence. Yes, Virginia, the science and mathematics articles are often baffling and opaque. The Vice essay argues that the writers of Wikipedia’s science articles use the incomprehensibility of their prose as a shield to keep out the riffraff and maintain the “elite” status of their subject. I don’t buy it. In my opinion, this hypothesis does not account for the intrinsic difficulty of explaining science, nor for the incentive structures at work. Wikipedia pages grow by bricolage, small pieces of cruft accumulating over time. “Oh, this thing says [citation needed]. I’ll go find a citation to fill it in, while my coffee is brewing.” This is not conducive to clean pedagogy, or to a smooth transition from general-audience to specialist interest.

Have no doubt that a great many scientists are terrible at communication, but we can also imagine a world in which Wikipedia would attract the scientists that actually are good at communication.

There’s communication, and then there’s communication. (We scientists usually get formal training in neither.) I know quite a few scientists who are good at outreach. They work hard at it, because they believe it matters and they know that’s what it takes. Almost none of them have ever mentioned editing Wikipedia (even the one who used his science blog in his tenure portfolio). Thanks to the pressures of academia, the calculation always favors a mode of outreach where it’s easier to point to what you did, so you can get appropriate credit for it.

Thus, there might be a momentary impulse to make small-scale improvements, but there’s almost no incentive to effect changes that are structured on a larger scale — paragraphs, sections, organization among articles. This is a good incentive system for filling articles with technical minutiae, like jelly babies into a bag, but it’s not a way to plan a curriculum.

The piece in Vice says of a certain physics article,

I have no idea who the article exists for because I’m not sure that person actually exists: someone with enough knowledge to comprehend dense physics formulations that doesn’t also already understand the electroweak interaction or that doesn’t already have, like, access to a textbook about it.

You’d be surprised. It’s fairly common to remember the broad strokes of a subject but need a reference for the fiddly little details.

Writers don’t just dip in, produce some Wikipedia copy, and bounce.

I’m pretty sure this is … actually not borne out by the data? Like, many contributors just add little bits when they are strongly motivated, while the smaller active core of persistent editors clean up the content, get involved in article-improvement drives, wrangle behind the scenes, etc.

[EDIT TO ADD (24 November): To say it another way, both the distribution of edits per article and edits per editor are “fat tailed, which implies that even editors and articles with small numbers of edits should not be neglected.” Furthermore, most edits do not change an article’s length, or change it by only a small amount. The seeming tendency for “fewer editors gaining an ever more dominant role” is a real concern, but I doubt the opacity of technical articles is itself a tool of oligarchy. Indeed, I suspect that other factors contribute to the “core editor” group becoming more insular, one being the ease with which policies originally devised for good reasons can be weaponized.]

If you want “elitism,” you shouldn’t look in the technical prose on the project’s front end. Instead, you should go into the backroom. From what I’ve seen and heard, it’s very easy to run afoul of an editor who wants to lord over their tiny domain, and who will sling around policies and abbreviations and local jargon to get their way. Any transgression, or perceived transgression, is an excuse to revert.

Just take a look at “WP:PROF” — the “notability guideline” for evaluating whether a scholar merits a Wikipedia page. It’s almost 3500 words, laying out criteria and then expounding upon their curlicues. And if you create an article and someone else decides it should be deleted, you had better be familiar with the Guide to deletion (roughly 6700 words), which overlaps with the Deletion process documentation (another 4700 words). More than enough regulations for anyone to petulantly sling around until they get their way!

And on the subject of deletion, over on Mastodon the other day I got into a chat about the story of Günter Bechly, a paleontologist who went creationist and whose Wikipedia page was recently toasted. The incident was described by Haaretz thusly:

If Bechly’s article was originally introduced due to his scientific work, it was deleted due to his having become a poster child for the creationist movement.

I strongly suspect that it would have been deleted if it had been brought to anyone’s attention for any other reason, even if Bechly hadn’t gone creationist. His scientific work just doesn’t add up to what Wikipedia considers “notability,” the standard codified by the WP:PROF rulebook mentioned above. Nor were there adequate sources to write about his career in Wikipedia’s regulation flat, footnoted way. The project is clearly willing to have articles on creationists, if the claims in them can be sourced to their standards of propriety: Just look at their category of creationists! Bechly’s problem was that he was only mentioned in passing or written up in niche sources that were deemed unreliable.

If you poke around that deletion discussion for Bechly’s page, you’ll find it links to a rolling list of such discussions for “Academics and educators,” many of whom seem to be using Wikipedia as a LinkedIn substitute. It’s a mundane occurrence for the project.

And another thing about the Haaretz article. It mentions sockpuppets arriving to speak up in support of keeping Bechly’s page:

These one-time editors’ lack of experience became clear when they began voting in favor of keeping the article on Wikipedia – a practice not employed in the English version of Wikipedia since 2016, when editors voted to exchange the way articles are deleted for a process of consensus-based decision through discussion.

Uh, that’s been the rule since 2005 at least. Not the most impressive example of Journalisming.

To Thems That Have

Occasionally, I think of burning my chances of advancing in the physics profession — or, more likely, just burning my bridges with Geek Culture(TM) — by writing a paper entitled “Richard Feynman’s Greatest Mistake”.

I did start drafting an essay I call “To Thems That Have, Shall Be Given More”. There are a sizable number of examples where Feynman gets credit for an idea that somebody else discovered first. It’s the rich-get-richer of science.

Greetings from Massachusetts (No, Really)

There are other people named Blake Stacey around the United States. I know this because (a) I came across their records when opting myself out of person-search websites, and (b) sometimes they use my GMail address when signing up for things. (Or, to be fair, perhaps they write their address in a form and someone else types it incorrectly.) I keep getting customer satisfaction surveys and even credit-card receipts from an auto dealership in a state I haven’t visited in years.

A friend of mine once inadvertently got access to the Facebook accounts of two total strangers just because airplane WiFi is strange.

Some people are victims of identity theft. Others are the recipients of identity gifting.

Social Media Update

I finally gave up on Twitter. It had been descending into mediocrity and worse for a long time. The provocation that gave me the nudge I needed was dropping in after a few days away and finding my timeline cluttered into uselessness, because their Algorithm (in its ineffable Algorithmhood) had decided to interpret “likes” as retweets. This is a feature Twitter decided the world needed, one so beneficial that there would be no way to turn it off. What’s more, it comes and goes, so one cannot plan around it or adapt one’s habits to it, and when it is present, it is applied stochastically.

Consequently, the meaning of clicking the “like” icon is not constant over time. If you care at all about what your followers experience, you cannot expect taking the same action to have the same result. The software demands, by definition, insanity.

So, now I fill my subway-riding time with paperback books that I’d bought at the Harvard Bookstore warehouse sale and never gotten around to reading.

I’ve also been making a space for myself on the Mastodon decentralized social platform. My primary home in that ecosystem is @bstacey@icosahedron.website. I’m also the Blake Stacey at mastodon.mit.edu (all that tuition had to buy me something), and at the suggestion of Evelyn Lamb, for good measure I claimed Blake Stacey at mathstodon.xyz.

Bogho-A-Lago

The big scandal this weekend: Peter Boghossian and James Lindsay pulled a hoax on a social-science journal by getting a deliberately nonsensical paper published there, and then crowed that this demonstrates the field of gender studies to be “crippled academically.” However, when people with a measure of sense examined B&L’s stunt, they found it to be instead evidence that you can get any crap published if you lower your standards far enough, particularly if you’re willing to pay for the privilege and you find a journal whose raison d’être is to rip people off.

Indeed, B&L’s paper (“The conceptual penis as a social construct”) was rejected from the first journal they sent it to, and it got bounced down the line to a new and essentially obscure venue of dubious ethical standing. Specifically, I can’t find anybody who had even heard of Cogent Social Sciences apart from spam emails inviting them to publish there.

This kind of bottom-feeding practice has proliferated in the years since Open Access publishing became a thing, to unclear effect. It hasn’t seemed in practice to tarnish the reputation of serious Open Access journals (the PLOS family, Scientific Reports, Physical Review X, Discrete Analysis, etc.). Arguably, once the infrastructure of the Web existed, some variety of pay-to-publish scam was inevitable, since there will always be academics angling for the appearance of success—as long as there are tenure committees.

Boghossian and Lindsay made the triumphant announcement of their hoax in Skeptic, a magazine edited by Michael Shermer. And if you think that I’ll use this as an occasion to voice my grievances at Capital-S Skepticism being a garbage fire of a movement, you’re absolutely correct. I agree with the thesis of Ketan Joshi here:

The article in Skeptic Magazine highlights how regularly people will vastly lower their standards of skepticism and rationality if a piece of information is seen as confirmation of a pre-existing belief – in this instance, the belief that gender studies is fatally compromised by seething man-hate. The standard machinery of rationality would have triggered a moment of doubt – ‘perhaps we’ve not put in enough work to separate the signal from the noise’, or ‘perhaps we need to tease apart the factors more carefully’.

That slow, deliberative mechanism of self-assessment is non-existent in the authorship and sharing of this piece. It seems quite likely that this is due largely to a pre-existing hostility towards gender studies, ‘identity politics’ and the general focus of contemporary progressive America.

Boghossian and Lindsay see themselves as the second coming of Alan Sokal, who successfully fooled Social Text into publishing a parody of postmodern theory-babble back in 1996. But after the fact, Sokal said the publication of his hoax itself didn’t prove much at all, just that a few people happened to be asleep at the wheel. (His words: “From the mere fact of publication of my parody I think that not much can be deduced.”) Then he wrote two books of footnotes and caveats to show that he had lampooned some views he himself held in more moderate form.

Meanwhile, Steven Pinker—who happily boosted the B&L hoax to his 310,000 Twitter followers—strips all the technical content out of physics, mixes the jargon up with trite and folksy “wisdom,” and uses the result to support pompous bloviation.

… Which, funny story, is one of the main things that Alan Sokal was criticizing.

I gotta quote this part of B&L’s boast:

Simple Equations are No Good When the Variables are Meaningless

A few weeks back, I reflected on why mathematical biology can be so hard to learn—much harder, indeed, than the mathematics itself would warrant.

The application of mathematics to biological evolution is rooted, historically, in statistics rather than in dynamics. Consequently, a lot of model-building starts with tools that belong, essentially, to descriptive statistics (e.g., linear regression). This is fine, but then people turn around and discuss those models in language that implies they have constructed a dynamical system. This makes life quite difficult for the student trying to learn the subject by reading papers! The problem is not the algebra, but the assumptions; not the derivations, but the discourse.

Recently, a colleague of mine, Ben Allen, coauthored a paper that clears up one of the more confusing points.

Hamilton’s rule asserts that a trait is favored by natural selection if the benefit to others, $B$, multiplied by relatedness, $R$, exceeds the cost to self, $C$. Specifically, Hamilton’s rule states that the change in average trait value in a population is proportional to $BR - C$. This rule is commonly believed to be a natural law making important predictions in biology, and its influence has spread from evolutionary biology to other fields including the social sciences. Whereas many feel that Hamilton’s rule provides valuable intuition, there is disagreement even among experts as to how the quantities $B$, $R$, and $C$ should be defined for a given system. Here, we investigate a widely endorsed formulation of Hamilton’s rule, which is said to be as general as natural selection itself. We show that, in this formulation, Hamilton’s rule does not make predictions and cannot be tested empirically. It turns out that the parameters $B$ and $C$ depend on the change in average trait value and therefore cannot predict that change. In this formulation, which has been called “exact and general” by its proponents, Hamilton’s rule can “predict” only the data that have already been given.

(PDF)
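
To spell out where that circularity comes from, here is my capsule version of the “exact and general” formulation (a paraphrase, with notation that may differ from the paper’s). Across a population, let $z_i$ be an individual’s trait value, $z'_i$ the average trait value of its partners, and $w_i$ its realized fitness. Define $-C$ and $B$ as the partial regression coefficients of $w$ on $z$ and on $z'$ respectively, and let $R = \mathrm{Cov}(z, z') / \mathrm{Var}(z)$. The Price equation (assuming faithful transmission, so the transmission term drops out) then gives

$\bar{w}\, \Delta\bar{z} = \mathrm{Cov}(w, z) = \mathrm{Var}(z) \left( BR - C \right),$

so the trait increases exactly when $BR - C > 0$. The rule is true by construction, but $B$ and $C$ are regression coefficients computed from the very data that determine $\Delta\bar{z}$: an identity about a dataset you already have, not a prediction about one you don’t.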

Multiscale Structure of More-than-Binary Variables

When I face a writing task, my two big failure modes are either not starting at all and dragging my feet indefinitely, or writing far too much and having to cut it down to size later. In the latter case, my problem isn’t just that I go off on tangents. I try to answer every conceivable objection, including those that only I would think of. As a result, I end up fighting a rhetorical battle that only I know about, and the prose that emerges is not just overlong, but arcane and obscure. Furthermore, if the existing literature on a subject is confusing to me, I write a lot in the course of figuring it out, and so I end up with great big expository globs that I feel obligated to include with my reporting on what I myself actually did. That’s why my PhD thesis set the length record for my department by a factor of about three.

I have been experimenting with writing scientific pieces that are deliberately bite-sized to begin with. The first such experiment that I presented to the world, “Sporadic SICs and the Normed Division Algebras,” was exactly two pages long in its original form. The version that appeared in a peer-reviewed journal was slightly longer; I added a paragraph of context and a few references.

My latest attempt at a mini-paper (articlet?) is based on a blog post from a few months back. I polished it up, added some mathematical details, and worked in a comparison with other research that was published since I posted that blog item. The result is still fairly short:

Social Media Experiment

I decided to give Mastodon a whirl, so a while back I created an account for myself at the icosahedron.website instance. (After all, a big part of my research is to generalize regular icosahedra to higher dimensions and complex coordinates.) There I am: Blake C. Stacey (@bstacey@icosahedron.website). It’s been fun so far.

It seems the best way to explain Mastodon to an old person (like me) is that it’s halfway between social networking, the way big companies do it, and email. You create an account on one server (or “instance”), and from there, you can interact with people who have accounts, even if those accounts are on other servers. Different instances can have different policies about what kinds of content they allow, depending for example on what type of community the administrators of the instance want to cater to.

If I ever administrate a Mastodon instance, I think I’ll make “content warnings” mandatory, but I’ll change the interface so that they’re called “subject lines.”

New Paper Dance Macabre

C. A. Fuchs, M. C. Hoang and B. C. Stacey, “The SIC Question: History and State of Play,” arXiv:1703.07901 [quant-ph] (2017).

Recent years have seen significant advances in the study of symmetric informationally complete (SIC) quantum measurements, also known as maximal sets of complex equiangular lines. Previously, the published record contained solutions up to dimension 67, and was with high confidence complete up through dimension 50. Computer calculations have now furnished solutions in all dimensions up to 151, and in several cases beyond that, as large as dimension 323. These new solutions exhibit an additional type of symmetry beyond the basic definition of a SIC, and so verify a conjecture of Zauner in many new cases. The solutions in dimensions 68 through 121 were obtained by Andrew Scott, and his catalogue of distinct solutions is, with high confidence, complete up to dimension 90. Additional results in dimensions 122 through 151 were calculated by the authors using Scott’s code. We recap the history of the problem, outline how the numerical searches were done, and pose some conjectures on how the search technique could be improved. In order to facilitate communication across disciplinary boundaries, we also present a comprehensive bibliography of SIC research.

Also available via SciRate.
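
A capsule version of the setting, for those arriving fresh (my summary, not text from the paper): almost all known SICs, including the new ones, are orbits of the Weyl–Heisenberg group. One picks a single “fiducial” vector $|\psi\rangle \in \mathbb{C}^d$ and acts on it with the operators generated by the shift and clock matrices,

$X|k\rangle = |k+1 \bmod d\rangle, \qquad Z|k\rangle = e^{2\pi i k/d}\, |k\rangle,$

so that the $d^2$ vectors of the SIC are $\{ X^a Z^b |\psi\rangle : a, b = 0, \ldots, d-1 \}$. The numerical search thus boils down to finding a fiducial, and Zauner’s conjecture, in the form relevant here, is that a fiducial can always be chosen invariant under a certain order-3 unitary, which shrinks the search space further.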

"no matter how gifted, you alone cannot change the world"