Category Archives: Software

Your Password

Your password must contain a pound of flesh. No blood, nor less nor more, but just a pound of flesh.

Your password must contain all passwords which do not contain themselves.

Your password must contain any letter of the alphabet save the second. NOT THE BEES! NOT THE BEES!

Your password must contain a reminder not to read the comments. Really. You’ll thank us.

Your password must have contained the potential within itself all along.

Google Scholar Irregularities

Google Scholar is definitely missing citations to my papers.

The cited-by results for “Some Negative Remarks on Operational Approaches to Quantum Theory” [arXiv:1401.7254] on Google Scholar and on INSPIRE are completely nonoverlapping. Google Scholar can tell that “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708] cites “Eco-Evolutionary Feedback in Host–Pathogen Spatial Dynamics” [arXiv:1110.3845] but not that it cites “My Struggles with the Block Universe” [arXiv:1405.2390]. Meanwhile, the SAO/NASA Astrophysics Data System catches both.

This would be a really petty thing to complain about, if people didn’t seemingly rely on such metrics.

EDIT TO ADD (17 November 2014): Google Scholar also misses that David Mermin cites MSwtBU in his “Why QBism is not the Copenhagen interpretation and what John Bell might have thought of it” [arXiv:1409.2454]. This may have something to do with Google Scholar being worse at detecting citations in footnotes than in endnotes.

Links

Contraception? Evil. Kids getting their hands on Daddy’s guns and blowing holes in each other? The price of Freedom. U.S.A.! U.S.A.!

Can Stephen Wolfram Catch Carmen Sandiego?

Via Chris Granade, I learned we now have an actual implementation of Wolfram Language to play around with. Wolfram lauds the Wolfram Programming Cloud, the first product based on the Wolfram Language:

My goal with the Wolfram Language in general—and Wolfram Programming Cloud in particular—is to redefine the process of programming, and to automate as much as possible, so that once a human can express what they want to do with sufficient clarity, all the details of how it is done should be handled automatically. [my emphasis]

Ah. You mean, like programming?

Wolfram’s example of the Wolfram Programming Cloud is “a piece of code that takes text, figures out what language it’s in, then shows an image based on the flag of the largest country where it’s spoken.” The demo shows how the WPC maps the string “good afternoon” to the English language, then to the United States, and thence to the modern US flag.

English is an official language of India, which exceeds the US in population size, and of Canada, which exceeds the US in total enclosed area.

The Wolfram Language documentation indicates that “LargestCountry” means “place with most speakers”; by this standard, the US comes out on top (roughly 300 million speakers, versus 125 million for India and 28 million for Canada). But that’s not the problem we were supposed to solve: “place with most speakers” is not the same as “largest country where the language is spoken.”
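The distinction fits in a few lines of code. Here is a sketch in Python rather than Wolfram Language; the figures are rough illustrative values, not authoritative data:

# Three readings of "largest country where English is spoken".
# Rough illustrative figures: (area in km^2, population, English speakers).
countries = {
    "United States": (9.8e6, 3.2e8, 3.0e8),
    "India":         (3.3e6, 1.3e9, 1.25e8),
    "Canada":        (1.0e7, 3.5e7, 2.8e7),
}

print(max(countries, key=lambda c: countries[c][0]))  # largest by area: Canada
print(max(countries, key=lambda c: countries[c][1]))  # largest by population: India
print(max(countries, key=lambda c: countries[c][2]))  # most speakers: United States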

Even the programming languages which are sold as doing what you mean still just do what you say.

Wolfram Language…

…because nothing says “stable platform for mission-critical applications” like “from the makers of Mathematica!”

Carl Zimmer linked to this VentureBeat piece on Wolfram Language with the remark, “Always interesting to hear what Stephen Wolfram is up to. But this single-source style of tech reporting? Ugh.” I’d go further: the software may well eventually provide an advance in some respect, but the reporting is so bad, we’d never know.

We’re told “a developer can use some natural language.” What, like the GOTO command? That’s English. Shakespearean, even. (“Go to, I’ll no more on’t; it hath made me mad.” Hamlet, act 3, scene 1.) We’re told that “literally anything” will be “usable and malleable as a symbolic expression”—wasn’t that the idea behind LISP? We’re told, awkwardly, that “Questions in a search engine have many answers,” with the implication that this is a bad thing (and that Wolfram Alpha solved that problem). We are informed that “instead of programs being tens of thousands of lines of code, they’re 20 or 200.” Visual Basic could claim much the same. We don’t push functionality “out to libraries and modules”; we use the Wolfram Cloud. It’s very different!

(Mark Chu-Carroll points out, “What’s scary is that he thinks that not pushing things to libraries is good!”)

The “wink, wink, we’re not not comparing Wolfram to Einstein” got old within a sentence, too.

I have actual footage of Wolfram from the Q&A session of that presentation:

“I am my own reality check.” (Stephen Wolfram, 1997)

Citing Tweets in LaTeX

Need to cite Twitter posts in your LaTeX documents? Of course you do! Want someone else to modify the utphys BibTeX style to add a “@TWEET” option so you don’t have to do it yourself? Of course you do!

Style file:

Example document:

\documentclass[aps,amsmath,amssymb]{revtex4}
\usepackage{amsmath,amssymb,hyperref}

\begin{document}
\bibliographystyle{utphystw}

\title{Test}
\author{Blake C. Stacey}
\date{\today}

\begin{abstract}
Only a test!
\end{abstract}

\maketitle

As indicated, this is only 
a test.\cite{stacey2011,sfi2011}

\bibliography{twtest} % base name only; BibTeX supplies the .bib extension

\end{document}

And the example bibliography file:

@TWEET{stacey2011,
       author={Blake Stacey},
       authorid={blakestacey},
       year={2011},
       month={July},
       day={25},
       tweetid={95521600597786624},
       tweetcontent={I find it hard to tell, in some 
                     areas of science, whether I am 
                     a radical or a curmudgeon.}}

@TWEET{sfi2011,
       author={anon},
       authorid={OverheardAtSFI},
       year={2011},
       month={June},
       day={23},
       tweetid={84018131441422336},
       tweetcontent={The brilliance of the word 
                     ``Complexity'' is that it 
                     means just about anything 
                     to anybody.}}

PDF output:

Interactive Learning

A few complaints about the place of computers in physics classrooms.

Every once in a while, I see an enthusiastic discussion somewhere on the Intertubes about bringing new technological toys into physics classrooms. Instead of having one professor lecture at a room of unengaged, unresponsive bodies, why not put tools into the students’ hands and create a new environment full of interactivity and feedback? Put generically like that, it does sound intriguing, and new digital toys are always shiny, aren’t they?

Prototypical among these schemes is MIT’s “Technology Enabled Active Learning” (traditionally and henceforth TEAL), which, again, you’d think I’d love for the whole alma mater patriotism thing. (“Bright college days, O carefree days that fly…”) I went through introductory physics at MIT a few years too early to get the TEAL deal (I didn’t have Walter Lewin as a professor, either, as it happens). For myself, I couldn’t see the point of buying all those computers and then using them in ways which did not reflect the ways working physicists actually use computers. Watching animations? Answering multiple-choice questions? Where was the model-building, the hypothesis-testing through numerical investigation? In 1963, Feynman was able to explain to Caltech undergraduates how one used a numerical simulation to get predictions out of a hypothesis when one didn’t know the advanced mathematics necessary to do so by hand, or if nobody had yet developed the mathematics in question. Surely, forty years and umpteen revolutions in computer technology later, we wouldn’t be moving backward, would we?
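For reference, the sort of computation Feynman walked his students through fits in a dozen lines of modern Python. This is a sketch of his half-step method applied to a mass on a spring, with units chosen so that m = k = 1 and the exact answer is cos(t):

import math

# Mass on a spring: acceleration a = -x. Velocities live at half-integer
# time steps (the half-step trick of the Feynman Lectures, vol. I, ch. 9).
dt = 0.1
x, t = 1.0, 0.0        # x(0) = 1, v(0) = 0
v = -0.5 * dt * x      # v(dt/2) = v(0) + (dt/2) * a(0)

while t < 2 * math.pi:
    x += dt * v        # x(t + dt)
    v -= dt * x        # v(t + 3*dt/2), since a = -x
    t += dt
    print(f"t = {t:5.2f}   x = {x:8.5f}   cos t = {math.cos(t):8.5f}")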

Everything I heard about TEAL from the students younger than I — every statement without exception, mind — was that it was a dreadful experience, technological glitz with no substance. Now, I’ll freely admit there was probably a heckuva sampling bias involved here: the people I had a chance to speak with about TEAL were, by and large, other physics majors. That is, they were the ones who survived the first-year classes and dove on in to the rest of the programme. So, (a) one would expect they had a more solid grasp of the essential concepts covered in the first year, all else being equal, and (b) they may have had more prior interest and experience with physics than students who declared other majors. But, if the students who liked physics the most and were the best at it couldn’t find a single good thing to say about TEAL, then TEAL needed work.

If your wonderful new education scheme makes things somewhat better for an “average” student but also makes them significantly worse for a sizeable fraction of students, you’re doing something wrong. The map is not the territory, and the average is not the population.

It’s easy to dismiss such complaints. Here, let me give you a running start: “Those kids are just too accustomed to lectures. They find lecture classes fun, so fun they’re fooled into thinking they’re learning.” (We knew dull lecturers when we had them.) “Look at the improvement in attendance rates!” (Not the most controlled of experiments. At a university where everyone has far too many demands made of their time and absolutely no one can fit everything they ought to do into a day, you learn to slack where you can. If attendance is mandated in one spot, it’ll suffer elsewhere.)

Or, perhaps, one could take the fact that physics majors at MIT loathed the entire TEAL experience as a sign that what TEAL did was not the best for every student involved. If interactivity within the classroom is such a wonderful thing, is it so strange to think that interactivity at a larger scale, at the curricular level, might be advisable too?

It’s not just a matter of doing one thing for the serious physics enthusiasts and another for the non-majors (to use a scandalously pejorative term).

What I had expected the Technological Enabling of Active Learning to look like is actually closer to another MIT project, StarLogo. Unfortunately, the efforts to build science curricula with StarLogo have gone on mostly at the middle- and high-school level. Their accomplishments and philosophy have not been applied to filling the gaps or shoring up the weak spots in MIT’s own curricula. For example, statistical techniques for data analysis aren’t taught to physics majors until junior year, and then they’re stuffed into Junior Lab, one of the most demanding courses offered at the Institute. To recycle part of an earlier rant:

Now, there’s a great deal to be said for stress-testing your students (putting them through Degree Absolute, as it were). The real problem was that it was hard for all the wrong reasons. Not only were the experiments tricky and the concepts on which they were based abstruse, but also we students had to pick up a variety of skills we’d never needed before, none of them connected to any particular experiment but all of them necessary to get the overall job done. What’s more, all these skills required becoming competent and comfortable with one or more technological tools, mostly of the software persuasion. For example: we had to pick up statistical data analysis, curve fitting and all that pretty much by osmosis: “Here’s a MATLAB script, kids — have at it!” This is the sort of poor training which leads to sinful behaviour on log-log plots in later life. Likewise, we’d never had to write up an experiment in formal journal style, or give a technical presentation. (The few experiences with laboratory work provided in freshman and sophomore years were, to put it simply, a joke.) All this on top of the scientific theory and experimental methods we were ostensibly learning!

Sure, it’s great to throw the kids in the pool to force them to swim, but the water is deep enough already! To my way of thinking, it would make more sense to offload those accessory skills like data description, simulation-building, technical writing and oral presentation to an earlier class, where the scientific content being presented is easier. Own up to the fact that you’re the most intimidating major at an elite technical university: make the sophomore-year classes a little tougher, and junior year can remain just as rough, but be so in a more useful way. We might as well go insane and start hallucinating for the right reason.

Better yet, we might end up teaching these skills to a larger fraction of the students who need them. Why should education from which all scientists could benefit be the exclusive province of experimental physicists? I haven’t the foggiest idea. We have all these topics which ought to go into first- or second-year classes — everyone needs them, they don’t require advanced knowledge in physics itself — but the ways we’ve chosen to rework those introductory classes aren’t helping.

To put it another way: if you’re taking “freshman physics for non-majors,” which will you use more often in life: Lenz’s Law or the concept of an error bar?
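Both of those accessory skills fit in a few lines of Python, for what it’s worth. A sketch with made-up numbers, assuming NumPy and SciPy are available:

import numpy as np
from scipy.optimize import curve_fit

# An error bar: the standard error of the mean of repeated measurements.
data = np.array([9.8, 10.1, 9.9, 10.4, 9.7, 10.0])   # made-up measurements
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(len(data))
print(f"{mean:.2f} +/- {sem:.2f}")

# Fitting a power law y = A * x**b directly, instead of eyeballing a
# straight line on a log-log plot.
def power_law(x, A, b):
    return A * x**b

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.9, 18.2, 31.8, 50.3])           # made-up data, roughly 2 * x**2
(A, b), cov = curve_fit(power_law, x, y, p0=(1.0, 1.0))
print(f"A = {A:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
print(f"b = {b:.2f} +/- {np.sqrt(cov[1, 1]):.2f}")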

Updates

In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy have largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model.
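The model itself is simple enough to sketch. What follows is not the Azimuth code, just a minimal Euler–Maruyama integration of the Hopf normal form with additive noise; parameter names and values are mine:

import numpy as np

# Stochastic Hopf normal form, integrated by Euler-Maruyama:
#   dx = [(mu - (x^2 + y^2)) x - omega y] dt + sigma dW1
#   dy = [(mu - (x^2 + y^2)) y + omega x] dt + sigma dW2
# For mu > 0, the deterministic system settles onto a limit cycle of
# radius sqrt(mu); the noise jitters the trajectory around that cycle.
mu, omega, sigma = 1.0, 1.0, 0.1
dt, steps = 0.01, 100000
rng = np.random.default_rng(42)

x, y = 0.1, 0.0
for _ in range(steps):
    r2 = x * x + y * y
    dx = ((mu - r2) * x - omega * y) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    dy = ((mu - r2) * y + omega * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x, y = x + dx, y + dy

print(f"final radius {np.hypot(x, y):.3f}; deterministic limit cycle {np.sqrt(mu):.3f}")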

I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books.

Gogo Proxy

Modern air travel! The worst trouble I had with the in-flight WiFi service (on my return from Skepticon 3) was that it didn’t work, or, rather, that it worked for less than the time necessary to load a page. A friend of mine travelling on the same day had a more interesting issue: the Internet connection gave him someone else’s identity. He went through the procedure to sign up for the Gogo Inflight Wifi, logged into Facebook and realized he was seeing someone else’s news feed. With someone else’s picture on the page. Using a total stranger’s account. Upon reloading, the same thing happened, but with a second stranger taking the place of the first.

HTTP proxies are strange and mysterious things.

Python Exercise: The Logistic Map

Nostalgi-O-Vision, activate!

A month or so after I was born, my parents bought an Atari 400 home computer. It plugged into the television set, and it had a keyboard with no moving keys, intended to be child- and spill-proof. Thanks to the box of cartridges we had beside it, Asteroids and Centipede were burnt into my brain at a fundamental level. The hours I lost blowing up all my own bases in Star Raiders — for which accomplishment the game awarded you the new rank of “garbage scow captain” — I hesitate to reckon. We also had a BASIC XL cartridge and an SIO cassette deck, so you could punch in a few TV screens’ worth of code to make, say, the light-cycle game from TRON, and then save your work to an audio cassette tape.

From my vantage point in the twenty-first century, it seems so strange: you could push in a cartridge, close the little door, turn on your TV set and be able to program.
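The exercise turns on the logistic map, x → r x (1 − x). As a taste of what follows, here is a minimal Python version, with growth rates chosen to show a fixed point, a two-cycle, a four-cycle and chaos:

# Iterate the logistic map x -> r * x * (1 - x); print the long-run behaviour.
def logistic_orbit(r, x0=0.5, transient=1000, keep=8):
    x = x0
    for _ in range(transient):   # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))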


Continue reading Python Exercise: The Logistic Map

Right Skill, Right Time

OK, first of all, let me say that there exist few better ways to procrastinate than reading an essay on time management. Terry Tao has lots of suggestions; following a fraction of them would probably make me a better human being. One item, though, is worth special attention:

It also makes good sense to invest a serious amount of time and effort into learning any skill that you are likely to use repeatedly in the future. A good example in mathematics is LaTeX: if you plan to write a lot of papers, it makes sense to go beyond the bare minimum of skill needed to jerry-rig whatever you need to write your paper, and go out and seriously learn how to make tables, figures, arrays, etc. Recently I’ve been playing with using prerecorded macros to type out a standard block of LaTeX code (e.g. \begin{theorem} … \end{theorem} \begin{proof} … \end{proof}) in a few keystrokes; the actual time saved per instance is probably minimal, but it presumably adds up over time, and in any event feels like you’re being efficient, which is good for morale (which becomes important when writing a long paper).

The risk is that you might end up a freak like me: after you’ve defined a few macros for moments and cumulants and partial derivatives, you get bitten by a radioactive backslash key and start typing all your class notes in LaTeX while the professor is lecturing. That aside, thinking about the proper time to learn these “accessory skills” puts me in the mood for a rant. (Well, what doesn’t?)
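(For the curious, “a few macros for moments and cumulants and partial derivatives” means things like the following. The names here are just the obvious choices, not any standard package.)

% Illustrative LaTeX macros of the sort described above.
\newcommand{\expt}[1]{\left\langle #1 \right\rangle}      % moments
\newcommand{\cumu}[1]{\left\langle #1 \right\rangle_{c}}  % cumulants
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}      % partial derivatives

% Usage: \expt{X^2}, \cumu{X^3}, \pd{f}{x}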

MIT did an exasperating thing with its undergraduate physics programme shortly before my time. The way I heard the story, they’d been afraid of losing students to other majors, so they dumbed down the sophomore-year classes (virtually excising Lagrangian mechanics, for example). We were left with a “waves and vibrations” class which was rather a junk drawer of different examples; a quantum-mechanics course which lacked guts and thus forsook glory; a decent introduction to statistical mechanics; and a relativity class which, hamstrung by fear of sophistication, also suffered because it lacked a singing Max Tegmark.
Continue reading Right Skill, Right Time

A Survey for Curmudgeons

I have a simulation happily grinding away in the background, using one core of my spiffy new dual-core system, doing my work for me, so not only do I have a moment to procrastinate, but also I should be happy about new technology. However, the headphones which came with the iPod nano I got for Christmas picked today to fall apart. The earbud doodad is beside itself with the joy it feels at being part of a cultural icon, I suppose. Given that the iPod itself had to be reformatted twice and connected to three different computers before it was able to receive music, that the interface packs more absurdity into its purported simplicity than I would have imagined possible, and that consequently it has relegated itself to the status of “device which plays ‘Mandelbrot Set’ on demand,” having the headphones cheap out on me is rather like salting the fields after Steve Jobs has burnt the city.

All this to say that today I’m in a mood for appreciating old things which work.

Geoffrey Pullum wrote, four years ago,

Shall I tell you how The Cambridge Grammar of English was prepared? (I am not changing the subject; trust me.) The book is huge: 1,859 printed pages. The double-spaced manuscript was about 3,500 pages (yes, it actually had to be printed out and written on by a copy editor the old-fashioned way). It took over ten years to write. And it was done using WordPerfect 6 for DOS. Rodney Huddleston chose to upgrade to that around 1989, wrote a couple of hundred complex macros, and stuck with it. I learned the WP DOS macro language in order to collaborate on the project.

WordPerfect was basically in its final, completed form before Clinton first ran for office. It works. The file format is fine for authors, and records everything we need to record. Rodney and I are still using WP6 file format today to write our planned student’s introduction to English grammar. In all the years since the late 1970s, WordPerfect has not altered the file format: all the largely pointless upgrades in the program have been backward compatible. The format really does the job. But things are different with the WordPerfect program itself. The progress has largely been backward.

The things we have noticed about version differences are minor, but they all tell in the same direction: every upgrade is a downgrade.

Forget the Clinton administration: TeX basically solved the problem of representing mathematical equations as text, during Reagan’s first term. The LaTeX macro language, which handles document-scale organization, is almost as old. Perhaps we’re stuck at a local maximum, and with luck and pluck we could find a better way, and on some days, that seems almost mandatory. Still, we’re at a pretty darn good local maximum, as local extrema go.

(Something deep within me finds a resonance with PyTeX, an attempt to have Python sit on top of TeX the way LaTeX does, but the project seems to be moribund.)

The question for today, then, is the following:

What are your favorite Old Things That Work, and which changeless relics really do need a shake-up?

Previous surveys:

Comments on all the above remain open.

In Which Blake Fails

Not noticing the tiny, unobtrusive switch on the side of your laptop which is labeled “WIRELESS ON / OFF” and wondering why you are no longer detecting any WiFi networks: FAIL.

Trying to troubleshoot your switched-off WiFi by digging through kernel module configurations: FAIL.

Attempting to connect using the Ethernet card and a hub which turns out to be non-functional: FAIL.

Finally switching the WiFi to the ON position, connecting to the Internet and realizing, “Hooray, now I can get back to work on that proceedings book for the conference which happened four years ago” — EPIC FAIL.

Friday Geek Update

My aged and broken laptop is still broken and has not grown any younger. Moreover, the USB key on which I had a decently recent backup of my work appears to have died as well. Furthermoreover, the server on which I also had my work backed up is suffering from a bum RAID array. Mission for today is to extract the drive from the old laptop and wire it directly into the dilithium recrystallization coils — er, I mean, connect it to my new Sony VAIO C420.

I note that Micro Center sold me a laptop with Windows Vista on it, but I forgive them, since Ubuntu Gutsy (the installation disc I had on hand) installed without any trouble. Audio, wireless and all those goodies worked without extra effort; I haven’t yet had much success with the Bluetooth support it automagically detected, but the only device I’ve had to test it with has been a cell phone which doesn’t play well with anything else, either.

I found a status-bar tool which displays the current weather conditions as reported on the Intertubes, and unlike the previous version I’d used, this one can display temperature in kelvins. A year in Lyon followed by a change to my laptop settings went a long way to making me “internally metric”; this may be the logical next step.

(By the way, I booted into Vista just once — so I could say I knew the enemy, and all — and it sucked. It took the duration of an entire Pinky and the Brain episode just to decide how best to phone in to the mothership and report the music library I hadn’t yet put on the blasted thing because I’d just taken it out of the box. Neil Gaiman was right to consider XP an upgrade.)

All that aside, it is now Friday afternoon in Cambridge, Mass. (which is across and down the river from Newton, Mass. — there’s gotta be a physics joke in that). Outside, it’s a partly cloudy 302 kelvins. Inside, it’s time for the Dandy Warhols, with “I am a Scientist.”

Incidentally, we like to have music playing while we cook dinner here at Château Sunclipse, and this was the song we had going when we discovered that enchilada sauce with a dash of hoisin made an excellent base for beef soup.