We answer several questions that have been Frequently Asked about QBism. These remarks (many of them lighthearted) should be considered supplements to more systematic treatments by the authors and others.
It’s on the department website’s list of staff and faculty, so I guess it’s officially official now: I am a Research Assistant Professor at the University of Massachusetts Boston. Opinions expressed here or on my social media are, of course, my own. Do you know how long it takes to process the paperwork for the University to have an opinion?
Today, in politics….
Continue reading Quick, To The Bat-Fainting-Couch!
This time, it’s another solo-author outing.
Informationally complete measurements are a dramatic discovery of quantum information science, and the symmetric IC measurements, known as SICs, are in many ways optimal among them. Close study of three of the “sporadic SICs” reveals an illuminating relation between different ways of quantifying the extent to which quantum theory deviates from classical expectations.
In spite of the “everything, etc.” that is life these days, I’ve managed to do a bit of science here and there, which has manifested as two papers. First, there’s the one about quantum physics, written with the QBism group at UMass Boston:
J. B. DeBrota, C. A. Fuchs and B. C. Stacey, “Symmetric Informationally Complete Measurements Identify the Essential Difference between Classical and Quantum” [arXiv:1805.08721].
We describe a general procedure for associating a minimal informationally-complete quantum measurement (or MIC) and a set of linearly independent post-measurement quantum states with a purely probabilistic representation of the Born Rule. Such representations are motivated by QBism, where the Born Rule is understood as a consistency condition between probabilities assigned to the outcomes of one experiment in terms of the probabilities assigned to the outcomes of other experiments. In this setting, the difference between quantum and classical physics is the way their physical assumptions augment bare probability theory: Classical physics corresponds to a trivial augmentation — one just applies the Law of Total Probability (LTP) between the scenarios — while quantum theory makes use of the Born Rule expressed in one or another of the forms of our general procedure. To mark the essential difference between quantum and classical, one should seek the representations that minimize the disparity between the expressions. We prove that the representation of the Born Rule obtained from a symmetric informationally-complete measurement (or SIC) minimizes this distinction in at least two senses—the first to do with unitarily invariant distance measures between the rules, and the second to do with available volume in a reference probability simplex (roughly speaking a new kind of uncertainty principle). Both of these arise from a significant majorization result. This work complements recent studies in quantum computation where the deviation of the Born Rule from the LTP is measured in terms of negativity of Wigner functions.
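To make the comparison concrete, here is the SIC version of the Born Rule in the form QBist papers usually quote (this is the standard expression from the literature, not a verbatim excerpt from the paper; $d$ is the Hilbert-space dimension, $p(i)$ the probabilities for the $d^2$ outcomes of the reference SIC measurement, and $r(j \mid i)$ the conditionals for the second experiment):

```latex
% Classical: the Law of Total Probability tying the two experiments together
q(j) \;=\; \sum_{i=1}^{d^2} p(i)\, r(j \mid i)

% Quantum: the Born Rule written with respect to a SIC reference measurement
q(j) \;=\; \sum_{i=1}^{d^2} \left[ (d+1)\, p(i) - \frac{1}{d} \right] r(j \mid i)
```

The whole game is that the quantum expression is the classical one with a small, dimension-dependent deformation of the coefficients, and the theorems quantify the sense in which a SIC makes that deformation as small as possible.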
To get an overall picture of our results without diving into the theorem-proving, you can watch John DeBrota give a lecture about our work.
Second, there’s the more classical (in the physicist’s sense, if not the economist’s):
B. C. Stacey and Y. Bar-Yam, “The Stock Market Has Grown Unstable Since February 2018” [arXiv:1806.00529].
On the fifth of February, 2018, the Dow Jones Industrial Average dropped 1,175.21 points, the largest single-day fall in history in raw point terms. This followed a 666-point loss on the second, and another drop of over a thousand points occurred three days later. It is natural to ask whether these events indicate a transition to a new regime of market behavior, particularly given the dramatic fluctuations — both gains and losses — in the weeks since. To illuminate this matter, we can apply a model grounded in the science of complex systems, a model that demonstrated considerable success at unraveling the stock-market dynamics from the 1980s through the 2000s. By using large-scale comovement of stock prices as an early indicator of unhealthy market dynamics, this work found that abrupt drops in a certain parameter U provide an early warning of single-day panics and economic crises. Decreases in U indicate regimes of “high co-movement”, a market behavior that is not the same as volatility, though market volatility can be a component of co-movement. Applying the same analysis to stock-price data from the beginning of 2016 until now, we find that the U value for the period since 5 February is significantly lower than for the period before. This decrease entered the “danger zone” in the last week of May, 2018.
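The co-movement idea can be illustrated with a toy calculation. To be clear, the function and thresholds below are my own illustration of what “high co-movement” means, not the definition of the parameter $U$ in the paper, which comes from fitting a collective-behavior model to return data:

```python
import numpy as np

rng = np.random.default_rng(0)

def comovement_fraction(returns):
    """Average, over days, of the fraction of stocks whose daily return
    shares the sign of that day's majority. Values near 0.5 mean stocks
    move independently; values near 1 mean the market moves in lockstep."""
    signs = np.sign(returns)           # shape: (days, stocks)
    up = (signs > 0).mean(axis=1)      # fraction of stocks up each day
    return float(np.maximum(up, 1.0 - up).mean())

# 250 trading days, 100 stocks
independent = rng.normal(size=(250, 100))   # no shared market factor
factor = rng.normal(size=(250, 1))          # a common factor all stocks feel
correlated = 0.8 * factor + 0.2 * rng.normal(size=(250, 100))

print(comovement_fraction(independent))  # near the coin-flip baseline
print(comovement_fraction(correlated))   # much higher: lockstep motion
```

Even this crude statistic separates a market of independent stocks from one dominated by a common factor, which is the qualitative signature the early-warning analysis looks for.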
The weekend before last, I overcame my reluctance to travel and went to a mathematics conference, the American Mathematical Society’s Spring Central Sectional Meeting. I gave a talk in the “Recent Advances in Packing” session, spreading the word about SICs. My talk followed those by Steve Flammia and Marcus Appleby, who spoke about the main family of known SIC solutions while I covered the rest (the sporadic SICs). The co-organizer of that session, Dustin Mixon, has posted an overall summary and the speakers’ slides over at his blog.
This is what I get for skimming an entertainment website for a momentary diversion.
So, everybody’s seen the cool new video, “‘Cantina Theme’ played by a pencil and a girl with too much time on her hands,” right?
It’s a joke. The “proof” is words thrown into a box and filled with numbers so that nobody reads it too carefully. The algebra isn’t even right — hell, it does FOIL wrong — but that’s just a detail. I tried to think of a way to use it as a hook to explain some real science, as I’ve done before on occasion, but there just wasn’t any there there. The whole thing is goofing off.
Obvious goofing off, I would have thought. Somewhere south of a Star Trek: Voyager technobabble speech. But no, never underestimate the ability of numbers to make a brain shut down.
A few years ago, I found a sentence in a Wikipedia page that irritated me so much, I wrote a 25-page article about it. Eventually, I got that article published in the Philosophical Transactions of the Royal Society. On account of all this, friends and colleagues sometimes send me news about Wikipedia, or point me to strange things they’ve found there. A couple such items have recently led me to Have Thoughts, which I share below.
This op-ed on the incomprehensibility of Wikipedia science articles puts a finger on a real problem, but its attempt at explanation assumes malice rather than incompetence. Yes, Virginia, the science and mathematics articles are often baffling and opaque. The Vice essay argues that the writers of Wikipedia’s science articles use the incomprehensibility of their prose as a shield to keep out the riffraff and maintain the “elite” status of their subject. I don’t buy it. In my opinion, this hypothesis does not account for the intrinsic difficulty of explaining science, nor for the incentive structures at work. Wikipedia pages grow by bricolage, small pieces of cruft accumulating over time. “Oh, this thing says [citation needed]. I’ll go find a citation to fill it in, while my coffee is brewing.” This is not conducive to clean pedagogy, or to a smooth transition from general-audience to specialist interest.
Have no doubt that a great many scientists are terrible at communication, but we can also imagine a world in which Wikipedia would attract the scientists that actually are good at communication.
There’s communication, and then there’s communication. (We scientists usually get formal training in neither.) I know quite a few scientists who are good at outreach. They work hard at it, because they believe it matters and they know that’s what it takes. Almost none of them have ever mentioned editing Wikipedia (even the one who used his science blog in his tenure portfolio). Thanks to the pressures of academia, the calculation always favors a mode of outreach where it’s easier to point to what you did, so you can get appropriate credit for it.
Thus, there might be a momentary impulse to make small-scale improvements, but there’s almost no incentive to effect changes that are structured on a larger scale — paragraphs, sections, organization among articles. This is a good incentive system for filling articles with technical minutiae, like jelly babies into a bag, but it’s not a way to plan a curriculum.
The piece in Vice says of a certain physics article,
I have no idea who the article exists for because I’m not sure that person actually exists: someone with enough knowledge to comprehend dense physics formulations that doesn’t also already understand the electroweak interaction or that doesn’t already have, like, access to a textbook about it.
You’d be surprised. It’s fairly common to remember the broad strokes of a subject but need a reference for the fiddly little details.
Writers don’t just dip in, produce some Wikipedia copy, and bounce.
I’m pretty sure this is … actually not borne out by the data? Like, many contributors just add little bits when they are strongly motivated, while the smaller active core of persistent editors clean up the content, get involved in article-improvement drives, wrangle behind the scenes, etc.
[EDIT TO ADD (24 November): To say it another way, both the distribution of edits per article and edits per editor are “fat tailed, which implies that even editors and articles with small numbers of edits should not be neglected.” Furthermore, most edits do not change an article’s length, or change it by only a small amount. The seeming tendency for “fewer editors gaining an ever more dominant role” is a real concern, but I doubt the opacity of technical articles is itself a tool of oligarchy. Indeed, I suspect that other factors contribute to the “core editor” group becoming more insular, one being the ease with which policies originally devised for good reasons can be weaponized.]
If you want “elitism,” you shouldn’t look in the technical prose on the project’s front end. Instead, you should go into the backroom. From what I’ve seen and heard, it’s very easy to run afoul of an editor who wants to lord over their tiny domain, and who will sling around policies and abbreviations and local jargon to get their way. Any transgression, or perceived transgression, is an excuse to revert.
Just take a look at “WP:PROF” — the “notability guideline” for evaluating whether a scholar merits a Wikipedia page. It’s almost 3500 words, laying out criteria and then expounding upon their curlicues. And if you create an article and someone else decides it should be deleted, you had better be familiar with the Guide to deletion (roughly 6700 words), which overlaps with the Deletion process documentation (another 4700 words). More than enough regulations for anyone to petulantly sling around until they get their way!
And on the subject of deletion, over on Mastodon the other day I got into a chat about the story of Günter Bechly, a paleontologist who went creationist and whose Wikipedia page was recently toasted. The incident was described by Haaretz thusly:
If Bechly’s article was originally introduced due to his scientific work, it was deleted due to his having become a poster child for the creationist movement.
I strongly suspect that it would have been deleted if it had been brought to anyone’s attention for any other reason, even if Bechly hadn’t gone creationist. His scientific work just doesn’t add up to what Wikipedia considers “notability,” the standard codified by the WP:PROF rulebook mentioned above. Nor were there adequate sources to write about his career in Wikipedia’s regulation flat, footnoted way. The project is clearly willing to have articles on creationists, if the claims in them can be sourced to their standards of propriety: Just look at their category of creationists! Bechly’s problem was that he was only mentioned in passing or written up in niche sources that were deemed unreliable.
If you poke around that deletion discussion for Bechly’s page, you’ll find it links to a rolling list of such discussions for “Academics and educators,” many of whom seem to be using Wikipedia as a LinkedIn substitute. It’s a mundane occurrence for the project.
And another thing about the Haaretz article. It mentions sockpuppets arriving to speak up in support of keeping Bechly’s page:
These one-time editors’ lack of experience became clear when they began voting in favor of keeping the article on Wikipedia – a practice not employed in the English version of Wikipedia since 2016, when editors voted to exchange the way articles are deleted for a process of consensus-based decision through discussion.
Uh, that’s been the rule since 2005 at least. Not the most impressive example of Journalisming.
Occasionally, I think of burning my opportunities of advancing in the physics profession — or, more likely, just burning my bridges with Geek Culture(TM) — by writing a paper entitled, “Richard Feynman’s Greatest Mistake”.
I did start drafting an essay I call “To Thems That Have, Shall Be Given More”. There are a sizable number of examples where Feynman gets credit for an idea that somebody else discovered first. It’s the rich-get-richer of science.
Continue reading To Thems That Have
There are other people named Blake Stacey around the United States. I know this because (a) I came across their records when opting myself out of person-search websites, and (b) sometimes they use my GMail address when signing up for things. (Or, to be fair, perhaps they write their address in a form and someone else types it incorrectly.) I keep getting customer satisfaction surveys and even credit-card receipts from an auto dealership in a state I haven’t even visited in years.
A friend of mine once inadvertently got access to the Facebook accounts of two total strangers just because airplane WiFi is strange.
Some people are victims of identity theft. Others are the recipients of identity gifting.
I finally gave up on Twitter. It had been descending into mediocrity and worse for a long time. The provocation that gave me the nudge I needed was dropping in after a few days away and finding my timeline cluttered into uselessness, because their Algorithm (in its ineffable Algorithmhood) had decided to interpret “likes” as retweets. This is a feature they decided the world needed, and they decided that it was so beneficial that there would be no way to turn it off. What’s more, it comes and goes, so one cannot plan around it or adapt one’s habits to it, and when it is present, it is applied stochastically.
Consequently, the meaning of clicking the “like” icon is not constant over time. If you care at all about what your followers experience, you cannot expect taking the same action to have the same result. The software demands, by definition, insanity.
So, now I fill my subway-riding time with paperback books that I’d bought at the Harvard Bookstore warehouse sale and never gotten around to reading.
I’ve also been making a space for myself on the Mastodon decentralized social platform. My primary home in that ecosystem is @email@example.com. I’m also the Blake Stacey at mastodon.mit.edu (all that tuition had to buy me something), and at the suggestion of Evelyn Lamb, for good measure I claimed Blake Stacey at mathstodon.xyz.
I work at a university. I don’t worry about students protesting. I worry when they’re apathetic.
And yes, I’ve seen apathy. I had to fill in for an intro physics lecture at 9 am, fer chrissake.
Continue reading On Student Protest
The big scandal this weekend: Peter Boghossian and James Lindsay pulled a hoax on a social-science journal by getting a deliberately nonsensical paper published there, and then crowed that this demonstrates the field of gender studies to be “crippled academically.” However, when people with a measure of sense examined B&L’s stunt, they found it to be instead evidence that you can get any crap published if you lower your standards far enough, particularly if you’re willing to pay for the privilege and you find a journal whose raison d’être is to rip people off. Indeed, B&L’s paper (“The conceptual penis as a social construct”) was rejected from the first journal they sent it to, and it got bounced down the line to a new and essentially obscure venue of dubious ethical standing. Specifically, I can’t find anybody who had even heard of Cogent Social Sciences apart from spam emails inviting them to publish there. This kind of bottom-feeding practice has proliferated in the years since Open Access publishing became a thing, to unclear effect. It hasn’t seemed in practice to tarnish the reputation of serious Open Access journals (the PLOS family, Scientific Reports, Physical Review X, Discrete Analysis, etc.). Arguably, once the infrastructure of the Web existed, some variety of pay-to-publish scam was inevitable, since there will always be academics angling for the appearance of success—as long as there are tenure committees.
Boghossian and Lindsay made the triumphant announcement of their hoax in Skeptic, a magazine edited by Michael Shermer. And if you think that I’ll use this as an occasion to voice my grievances at Capital-S Skepticism being a garbage fire of a movement, you’re absolutely correct. I agree with the thesis of Ketan Joshi here:
The article in Skeptic Magazine highlights how regularly people will vastly lower their standards of skepticism and rationality if a piece of information is seen as confirmation of a pre-existing belief – in this instance, the belief that gender studies is fatally compromised by seething man-hate. The standard machinery of rationality would have triggered a moment of doubt – ‘perhaps we’ve not put in enough work to separate the signal from the noise’, or ‘perhaps we need to tease apart the factors more carefully’.
That slow, deliberative mechanism of self-assessment is non-existent in the authorship and sharing of this piece. It seems quite likely that this is due largely to a pre-existing hostility towards gender studies, ‘identity politics’ and the general focus of contemporary progressive America.
Boghossian and Lindsay see themselves as the second coming of Alan Sokal, who successfully fooled Social Text into publishing a parody of postmodern theory-babble back in 1996. But after the fact, Sokal said the publication of his hoax itself didn’t prove much at all, just that a few people happened to be asleep at the wheel. (His words: “From the mere fact of publication of my parody I think that not much can be deduced.”) Then he wrote two books of footnotes and caveats to show that he had lampooned some views he himself held in more moderate form.
Meanwhile, Steven Pinker—who happily boosted the B&L hoax to his 310,000 Twitter followers—strips all the technical content out of physics, mixes the jargon up with trite and folksy “wisdom,” and uses the result to support pompous bloviation.
… Which, funny story, is one of the main things that Alan Sokal was criticizing.
I gotta quote this part of B&L’s boast:
Continue reading Bogho-A-Lago
A few weeks back, I reflected on why mathematical biology can be so hard to learn—much harder, indeed, than the mathematics itself would warrant.
The application of mathematics to biological evolution is rooted, historically, in statistics rather than in dynamics. Consequently, a lot of model-building starts with tools that belong, essentially, to descriptive statistics (e.g., linear regression). This is fine, but then people turn around and discuss those models in language that implies they have constructed a dynamical system. This makes life quite difficult for the student trying to learn the subject by reading papers! The problem is not the algebra, but the assumptions; not the derivations, but the discourse.
Hamilton’s rule asserts that a trait is favored by natural selection if the benefit to others, $B$, multiplied by relatedness, $R$, exceeds the cost to self, $C$. Specifically, Hamilton’s rule states that the change in average trait value in a population is proportional to $BR - C$. This rule is commonly believed to be a natural law making important predictions in biology, and its influence has spread from evolutionary biology to other fields including the social sciences. Whereas many feel that Hamilton’s rule provides valuable intuition, there is disagreement even among experts as to how the quantities $B$, $R$, and $C$ should be defined for a given system. Here, we investigate a widely endorsed formulation of Hamilton’s rule, which is said to be as general as natural selection itself. We show that, in this formulation, Hamilton’s rule does not make predictions and cannot be tested empirically. It turns out that the parameters $B$ and $C$ depend on the change in average trait value and therefore cannot predict that change. In this formulation, which has been called “exact and general” by its proponents, Hamilton’s rule can “predict” only the data that have already been given.
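For readers who have not seen it, the “exact and general” formulation at issue is typically built from the Price equation by least-squares regression. Here is a sketch of the standard construction, in generic notation that may differ from the paper’s:

```latex
% Regress fitness w_i on an individual's own trait z_i and its partners' trait z'_i:
w_i = \alpha + \beta_1 z_i + \beta_2 z'_i + \varepsilon_i,
\qquad
C := -\beta_1, \quad B := \beta_2, \quad R := \frac{\mathrm{Cov}(z, z')}{\mathrm{Var}(z)}.

% Substituting into the Price equation (neglecting transmission bias) gives
\bar{w}\, \Delta\bar{z} = \mathrm{Cov}(w, z) = (BR - C)\, \mathrm{Var}(z),

% so the trait increases exactly when BR > C.
```

The catch the abstract describes is visible here: $\beta_1$ and $\beta_2$ are fitted to the realized fitnesses, so $B$ and $C$ are functions of the very change in trait value they are supposed to predict.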