Category Archives: University education


A few complaints about the place of computers in physics classrooms.

Every once in a while, I see an enthusiastic discussion somewhere on the Intertubes about bringing new technological toys into physics classrooms. Instead of having one professor lecture at a room of unengaged, unresponsive bodies, why not put tools into the students’ hands and create a new environment full of interactivity and feedback? Put generically like that, it does sound intriguing, and new digital toys are always shiny, aren’t they?

Prototypical among these schemes is MIT’s “Technology Enabled Active Learning” (traditionally and henceforth TEAL), which, again, you’d think I’d love for the whole alma mater patriotism thing. (“Bright college days, O carefree days that fly…”) I went through introductory physics at MIT a few years too early to get the TEAL deal (I didn’t have Walter Lewin as a professor, either, as it happens). For myself, I couldn’t see the point of buying all those computers and then using them in ways which did not reflect the ways working physicists actually use computers. Watching animations? Answering multiple-choice questions? Where was the model-building, the hypothesis-testing through numerical investigation? In 1963, Feynman was able to explain to Caltech undergraduates how one used a numerical simulation to get predictions out of a hypothesis when one didn’t know the advanced mathematics necessary to do so by hand, or if nobody had yet developed the mathematics in question. Surely, forty years and umpteen revolutions in computer technology later, we wouldn’t be moving backward, would we?

Everything I heard about TEAL from the students younger than I — every statement without exception, mind — was that it was a dreadful experience, technological glitz with no substance. Now, I’ll freely admit there was probably a heckuva sampling bias involved here: the people I had a chance to speak with about TEAL were, by and large, other physics majors. That is, they were the ones who survived the first-year classes and dove on in to the rest of the programme. So, (a) one would expect they had a more solid grasp of the essential concepts covered in the first year, all else being equal, and (b) they may have had more prior interest and experience with physics than students who declared other majors. But, if the students who liked physics the most and were the best at it couldn’t find a single good thing to say about TEAL, then TEAL needed work.

If your wonderful new education scheme makes things somewhat better for an “average” student but also makes them significantly worse for a sizeable fraction of students, you’re doing something wrong. The map is not the territory, and the average is not the population.

It’s easy to dismiss such complaints. Here, let me give you a running start: “Those kids are just too accustomed to lectures. They find lecture classes fun, so fun they’re fooled into thinking they’re learning.” (We knew dull lecturers when we had them.) “Look at the improvement in attendance rates!” (Not the most controlled of experiments. At a university where everyone has far too many demands made of their time and absolutely no one can fit everything they ought to do into a day, you learn to slack where you can. If attendance is mandated in one spot, it’ll suffer elsewhere.)

Or, perhaps, one could take the fact that physics majors at MIT loathed the entire TEAL experience as a sign that what TEAL did was not the best for every student involved. If interactivity within the classroom is such a wonderful thing, then is it so hard to imagine that interactivity at a larger scale, at the curricular level, might be advisable, too?

It’s not just a matter of doing one thing for the serious physics enthusiasts and another for the non-majors (to use a scandalously pejorative term).

What I had expected the Technological Enabling of Active Learning to look like actually resembles another project from MIT, StarLogo. Unfortunately, the efforts to build science curricula with StarLogo have been going on mostly at the middle- and high-school level. Their accomplishments and philosophy have not been applied to filling the gaps or shoring up the weak spots in MIT’s own curricula. For example, statistical techniques for data analysis aren’t taught to physics majors until junior year, and then they’re stuffed into Junior Lab, one of the most demanding courses offered at the Institute. To recycle part of an earlier rant:

Now, there’s a great deal to be said for stress-testing your students (putting them through Degree Absolute, as it were). The real problem was that it was hard for all the wrong reasons. Not only were the experiments tricky and the concepts on which they were based abstruse, but also we students had to pick up a variety of skills we’d never needed before, none of them connected to any particular experiment but all of them necessary to get the overall job done. What’s more, all these skills required becoming competent and comfortable with one or more technological tools, mostly of the software persuasion. For example: we had to pick up statistical data analysis, curve fitting and all that pretty much by osmosis: “Here’s a MATLAB script, kids — have at it!” This is the sort of poor training which leads to sinful behaviour on log-log plots in later life. Likewise, we’d never had to write up an experiment in formal journal style, or give a technical presentation. (The few experiences with laboratory work provided in freshman and sophomore years were, to put it simply, a joke.) All this on top of the scientific theory and experimental methods we were ostensibly learning!

Sure, it’s great to throw the kids in the pool to force them to swim, but the water is deep enough already! To my way of thinking, it would make more sense to offload those accessory skills like data description, simulation-building, technical writing and oral presentation to an earlier class, where the scientific content being presented is easier. Own up to the fact that you’re the most intimidating major at an elite technical university: make the sophomore-year classes a little tougher, and junior year can remain just as rough, but be so in a more useful way. We might as well go insane and start hallucinating for the right reason.
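To make the accessory-skill gap concrete: the core of a weighted straight-line fit, the kind of data analysis Junior Lab expected us to absorb by osmosis, fits in a dozen lines once somebody actually writes out the formulas. Here is a minimal sketch in Python rather than the MATLAB we were handed; the function name and the data are invented, purely for illustration.

```python
import numpy as np

def weighted_line_fit(x, y, sigma):
    """Fit y = a + b*x by chi-square minimization, with uncertainties sigma on y.

    Returns the best-fit intercept a, slope b, and their standard errors,
    using the standard closed-form solution for weighted linear least squares.
    """
    w = 1.0 / np.asarray(sigma) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    sigma_a = np.sqrt(Sxx / delta)
    sigma_b = np.sqrt(S / delta)
    return a, b, sigma_a, sigma_b

# Invented example data: roughly y = 1 + 2x, each point with an error bar of 0.1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.05, 2.93, 5.10, 6.95, 9.02])
sigma = np.full(5, 0.1)
a, b, sa, sb = weighted_line_fit(x, y, sigma)
print(f"intercept = {a:.2f} +/- {sa:.2f}, slope = {b:.2f} +/- {sb:.2f}")
```

Nothing in there is beyond a sophomore who has seen a little calculus and probability; it just has to be taught somewhere.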

Better yet, we might end up teaching these skills to a larger fraction of the students who need them. Why should education from which all scientists could benefit be the exclusive province of experimental physicists? I haven’t the foggiest idea. We have all these topics which ought to go into first- or second-year classes — everyone needs them, they don’t require advanced knowledge in physics itself — but the ways we’ve chosen to rework those introductory classes aren’t helping.

To put it another way: if you’re taking “freshman physics for non-majors,” which will you use more often in life: Lenz’s Law or the concept of an error bar?
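For the record, the concept in question is not exotic: repeat a measurement, take the mean, and quote the standard error of the mean as your error bar. A few lines of Python, with made-up numbers, suffice to show it.

```python
import math

# Invented repeated measurements of one quantity (say, a pendulum period in seconds)
measurements = [2.04, 1.98, 2.01, 1.97, 2.03, 2.00, 1.99, 2.02]

n = len(measurements)
mean = sum(measurements) / n

# Sample standard deviation (n - 1 in the denominator: Bessel's correction)
variance = sum((m - mean) ** 2 for m in measurements) / (n - 1)
std_dev = math.sqrt(variance)

# The error bar: the standard error of the mean, which shrinks like 1/sqrt(n)
std_err = std_dev / math.sqrt(n)

print(f"result: {mean:.3f} +/- {std_err:.3f}")
```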


In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy has largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model.
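I won’t reproduce the Azimuth code here, but the flavour of such a model is easy to convey: take the Hopf normal form, add noise, and step it forward with the Euler-Maruyama method. What follows is a sketch of my own in that spirit, not the actual Azimuth implementation; the parameter values are arbitrary.

```python
import numpy as np

def stochastic_hopf(beta, omega=1.0, noise=0.05, dt=0.01, steps=20000, seed=42):
    """Euler-Maruyama integration of the Hopf normal form with additive noise:

        dx = (beta*x - omega*y - (x^2 + y^2)*x) dt + noise dW1
        dy = (omega*x + beta*y - (x^2 + y^2)*y) dt + noise dW2

    For beta > 0 the deterministic system has a stable limit cycle of radius
    sqrt(beta); the noise jitters trajectories around that cycle.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.1, 0.0
    radii = np.empty(steps)
    sqdt = np.sqrt(dt)
    for i in range(steps):
        r2 = x * x + y * y
        dx = (beta * x - omega * y - r2 * x) * dt + noise * sqdt * rng.standard_normal()
        dy = (omega * x + beta * y - r2 * y) * dt + noise * sqdt * rng.standard_normal()
        x, y = x + dx, y + dy
        radii[i] = np.hypot(x, y)
    return radii

# Above the bifurcation (beta > 0), a small initial wiggle spirals out and
# settles into noisy oscillation near radius sqrt(beta) = 1
radii = stochastic_hopf(beta=1.0)
late_mean = radii[-5000:].mean()
```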

I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books.

“This Room Smells of Mathematics!”

I reposted the previous entry from the depths of the Sunclipse archives because I found the whole “giggling over stuff you don’t understand” theme to be of a piece with this self-indulgently moronic article from New York magazine. The article in question appears to be written by those who, as Greg Egan sez, “have convinced themselves that the particular set of half-digested factoids in their possession perfectly delineates the proper amount of science that can be known by a truly civilised person and discussed in polite company”. Or, as C. P. Snow would have been too quintessentially British to say, we have this Two Cultures nonsense because people are #@!$%ing lazy. I was tempted to rant at some length about it.

But, as it happens, Zeno has done my work for me.

Woo hoo! Now I can get on with more serious matters (and procrastinate in a way which is safer for my blood pressure).

EDIT TO ADD: Stick around for the comments after Zeno’s post. Turns out, the description for the #1 most “ridiculous” mathematics class is a clumsily-concealed quote mine.

Know Thy Audience?

D. W. Logan et al. have an editorial in PLoS Computational Biology giving advice for scientists who want to become active Wikipedia contributors. I was one, for a couple years (cue the “I got better”); judging from my personal experience, most of their advice is pretty good, save for item four:

Wikipedia is not primarily aimed at experts; therefore, the level of technical detail in its articles must be balanced against the ability of non-experts to understand those details. When contributing scientific content, imagine you have been tasked with writing a comprehensive scientific review for a high school audience. It can be surprisingly challenging explaining complex ideas in an accessible, jargon-free manner. But it is worth the perseverance. You will reap the benefits when it comes to writing your next manuscript or teaching an undergraduate class.

Come again?

Whether Wikipedia as a whole is “primarily aimed at experts” or not is irrelevant for the scientist wishing to edit the article on a particular technical subject. Plenty of articles — e.g., Kerr/CFT correspondence or Zamolodchikov c-theorem — have vanishingly little relevance to a “high school audience.” Even advanced-placement high-school physics doesn’t introduce quantum field theory, let alone renormalization-group methods, centrally extended Virasoro algebras and the current frontiers of gauge/gravity duality research. Popularizing these topics may be possible, although even the basic ideas like critical points and universality have been surprisingly poorly served in that department so far. While it’s pretty darn evident for these examples, the same problem holds true more generally. If you do try to set about that task, the sheer amount of new invention necessary — the cooking-up of new analogies and metaphors, the construction of new simplifications and toy examples, etc. — will run you slap-bang into Wikipedia’s No Original Research policy.

Even reducing a topic from the graduate to the undergraduate level can be a highly nontrivial task. (I was a beta-tester for Zwiebach’s First Course in String Theory, so I would know.) And, writing for undergrads who already have Maxwell and Schrödinger Equations under their belts is not at all the same as writing for high-school juniors (or for your poor, long-suffering parents who’ve long since given up asking what you learned in school today). Why not try that sort of thing out on another platform first, like a personal blog, and then port it over to Wikipedia after receiving feedback? Citing your own work in the third person, or better yet recruiting other editors to help you adapt your content, is much more in accord with the letter and with the spirit of Wikipedia policy, than is inventing de novo great globs of pop science.

Popularization is hard. When you make a serious effort at it, let yourself get some credit.

Know Thy Audience, indeed: sometimes, your reader won’t be a high-school sophomore looking for homework help, but will much more likely be a fellow researcher checking to see where the minus signs go in a particular equation, or a graduate student looking to catch up on the historical highlights of their lab group’s research topic. Vulgarized vagueness helps the latter readers not at all, and gives the former only a gentle illusion of learning. Precalculus students would benefit more if we professional science people worked on making articles like Trigonometric functions truly excellent than if we puttered around making up borderline Original Research about our own abstruse pet projects.


  • Logan DW, Sandal M, Gardner PP, Manske M, Bateman A (2010). Ten Simple Rules for Editing Wikipedia. PLoS Comput Biol 6(9): e1000941. doi:10.1371/journal.pcbi.1000941

Colloquium on Complex Networks

I might be going to this, because it’s in the neighbourhood and I suppose I ought to see what colourful examples other people use in these situations, having given similar talks a couple times myself.

MIT Physics Department Colloquium: Jennifer Chayes

“Interdisciplinarity in the Age of Networks”

Everywhere we turn these days, we find that dynamical random networks have become increasingly appropriate descriptions of relevant interactions. In the high tech world, we see mobile networks, the Internet, the World Wide Web, and a variety of online social networks. In economics, we are increasingly experiencing both the positive and negative effects of a global networked economy. In epidemiology, we find disease spreading over our ever growing social networks, complicated by mutation of the disease agents. In problems of world health, distribution of limited resources, such as water, quickly becomes a problem of finding the optimal network for resource allocation. In biomedical research, we are beginning to understand the structure of gene regulatory networks, with the prospect of using this understanding to manage the many diseases caused by gene mis-regulation. In this talk, I look quite generally at some of the models we are using to describe these networks, and at some of the methods we are developing to indirectly infer network structure from measured data. In particular, I will discuss models and techniques which cut across many disciplinary boundaries.

9 September 2010, 16:15, Room 10-250.

Textbook Cardboard and Physicist’s History

By the way, what I have just outlined is what I call a “physicist’s history of physics,” which is never correct. What I am telling you is a sort of conventionalized myth-story that the physicists tell to their students, and those students tell to their students, and is not necessarily related to the actual historical development, which I do not really know!

Richard Feynman

Back when Brian Switek was a college student, he took on the unenviable task of pointing out when his professors were indulging in “scientist’s history of science”: attributing discoveries to the wrong person, oversimplifying the development of an idea, retelling anecdotes which are more amusing than true, and generally chewing on the textbook cardboard. The typical response? “That’s interesting, but I’m still right.”

Now, he’s a palaeontology person, and I’m a physics boffin, so you’d think I could get away with pretending that we don’t have that problem in this Department, but I started this note by quoting Feynman’s QED: The Strange Theory of Light and Matter (1986), so that’s not really a pretence worth keeping up. When it comes to formal education, I only have systematic experience with one field; oh, I took classes in pure mathematics and neuroscience and environmental politics and literature and film studies, but I won’t presume to speak in depth about how those subjects are taught.

So, with all those caveats stated, I can at least sketch what I suspect to be a contributing factor (which other sciences might encounter to a lesser extent or in a different way).

Suppose I want to teach a classful of college sophomores the fundamentals of quantum mechanics. There’s a standard “physicist’s history” which goes along with this, which touches on a familiar litany of famous names: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Werner Heisenberg, Erwin Schrödinger. We like to go back to the early days and follow the development forward, because the science was simpler when it got started, right?

The problem is that all of these men were highly trained, professional physicists who were thoroughly conversant with the knowledge of their time — well, naturally! But this means that any one of them knew more classical physics than a modern college sophomore. They would have known Hamiltonian and Lagrangian mechanics, for example, in addition to techniques of statistical physics (calculating entropy and such). Unless you know what they knew, you can’t really follow their thought processes, and we don’t teach big chunks of what they knew until after we’ve tried to teach what they figured out! For example, if you don’t know thermodynamics and statistical mechanics pretty well, you won’t be able to follow why Max Planck proposed the blackbody radiation law he did, which was a key step in the development of quantum theory.
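To see how little of the punchline survives without that background, note that evaluating the formulas is the easy part. A quick numerical comparison of Planck’s law with the classical Rayleigh-Jeans result shows them agreeing at low frequency and parting ways catastrophically at high frequency; the temperature and frequencies below are arbitrary choices for illustration.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Planck spectral energy density u(nu, T), in J s / m^3."""
    return (8 * math.pi * H * nu ** 3 / C ** 3) / math.expm1(H * nu / (K_B * T))

def rayleigh_jeans(nu, T):
    """Classical Rayleigh-Jeans spectral energy density: grows as nu^2 forever."""
    return 8 * math.pi * nu ** 2 * K_B * T / C ** 3

T = 5000.0            # kelvin, arbitrary illustrative temperature
low, high = 1e9, 1e16  # Hz

# In the classical limit (h*nu << k*T) the two formulas agree...
ratio_low = planck(low, T) / rayleigh_jeans(low, T)    # close to 1
# ...but at high frequency the classical result wildly overshoots:
# Planck's exponential cutoff is what kills the "ultraviolet catastrophe"
ratio_high = planck(high, T) / rayleigh_jeans(high, T)  # vanishingly small
```

But knowing why that particular interpolation between the two regimes was the one to write down is exactly the part that requires the statistical mechanics the sophomores haven’t seen yet.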

Consequently, any “historical” treatment at the introductory level will probably end up “conventionalized.” One has to step extremely carefully! Strip the history down to the point that students just starting to learn the science can follow it, and you might not be portraying the way the people actually did their work. That’s not so bad, as far as learning the facts and formulæ is concerned, but you open yourself up to all sorts of troubles when you get to talking about the process of science. Are we doing physics differently than folks did N or 2N years ago? If we are, or if we aren’t, is that a problem? Well, we sure aren’t doing it like they did in chapter 1 of this textbook here. . . .