Via the Knight Science Journalism Tracker comes Sharon Begley’s story in Newsweek entitled “Sorting Out Good Science From Bad” (7 May issue, strangely enough). It runs under the sub-heading, “Just Say No — To Bad Science.” The content shouldn’t surprise anyone who grew up with Darrell Huff’s fascinating little book, How To Lie With Statistics (1954, reissued 1993). In a chatty two pages, Begley’s piece looks at one particular trick: selection bias.
The technique is simple. If you want to demonstrate that Massachusetts cities are statistically clustered in the eastern part of the United States, you need to look at a map of the whole country. Likewise, if you want to understand what cold weather does to space-shuttle O-rings, you can’t look only at the low-temperature results; the meaning of the data only becomes clear if you look over the entire temperature range.
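The O-ring point is easy to make concrete with a toy simulation. The numbers below are invented for illustration (not the real Challenger data): damage probability rises as launch temperature falls, and the trend only jumps out when you compare across the whole temperature range instead of squinting at one slice of it.

```python
import random

random.seed(0)

# Simulated launches: O-ring damage becomes likelier as temperature drops.
# All numbers here are made up for illustration.
flights = []
for _ in range(200):
    temp = random.uniform(50, 80)  # launch temperature, degrees F
    p_damage = max(0.0, (75 - temp) / 30)  # damage probability falls with temp
    damaged = random.random() < p_damage
    flights.append((temp, damaged))

def damage_rate(data):
    """Fraction of flights in `data` with O-ring damage."""
    return sum(d for _, d in data) / len(data)

cold = [f for f in flights if f[0] < 60]
warm = [f for f in flights if f[0] >= 70]

# Looking at one temperature slice alone tells you little; comparing
# across the full range makes the cold-weather danger obvious.
print(f"damage rate below 60F:     {damage_rate(cold):.2f}")
print(f"damage rate at/above 70F:  {damage_rate(warm):.2f}")
```

The same logic applies to the Massachusetts-cities gag: a pattern is only “clustered” relative to the full range of places it could have appeared.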
After kicking off with the standard anecdote, Begley arrives at the truth-killer’s modus operandi. Here’s what good methodology gives you:
In April, scientists released the most thorough study of abstinence-only programs ever conducted. Ordered up by Congress, it followed 2,000 kids in rural and urban communities, starting in grades 3 through 8, who had been randomly assigned to an abstinence-only program or not. Result: kids in abstinence-only “were no more likely to abstain from sex than their control group counterparts … [both] had similar numbers of sexual partners and had initiated sex” at the same age.
How do you cook up a study whose results can support an abstinence-only agenda? It’s not so difficult:
Many evaluated programs where kids take a virginity pledge. But kids who choose to pledge are arguably different from kids who spurn the very idea. “There’s potentially a huge selection issue,” says Christopher Trenholm of Mathematica Policy Research, which did the abstinence study for the government. “It could lead to an upward bias on effectiveness.”
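Trenholm’s “upward bias” is worth seeing in miniature. Here is a sketch (all numbers invented) in which the pledge has, by construction, zero causal effect: each kid has a latent inclination to abstain, and that inclination drives both whether they pledge and whether they abstain. A naive pledgers-versus-non-pledgers comparison still “shows” the pledge working, purely through self-selection.

```python
import random

random.seed(1)

# Toy model: the pledge has ZERO causal effect here. Kids with a strong
# inclination to abstain are simply likelier to sign up. (Invented numbers.)
kids = []
for _ in range(10_000):
    inclination = random.random()               # latent: 0 = none, 1 = strong
    pledged = random.random() < inclination     # self-selection into pledging
    abstained = random.random() < inclination   # the pledge plays no role
    kids.append((pledged, abstained))

def rate(group):
    """Fraction of the group that abstained."""
    return sum(a for _, a in group) / len(group)

pledgers = [k for k in kids if k[0]]
others   = [k for k in kids if not k[0]]

# Selection alone manufactures a large apparent "effect".
print(f"abstinence rate, pledgers:     {rate(pledgers):.2f}")
print(f"abstinence rate, non-pledgers: {rate(others):.2f}")
```

Randomized assignment, as in the Mathematica study, breaks exactly this link between who signs up and who would have abstained anyway.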
Claims for abstinence-only also rest on measurements not of sexual activity, but attitudes. The Bush administration ditched the former in favor of assessing whether, after an abstinence-only program, kids knew that abstinence can bring “social, psychological, and health gains.” If enough answered yes, the program was deemed effective. Anyone who is or was a teen can decide if knowing the right answer is the same as saying no to sex.
Oh, those savvy teenagers. How do you test whether virginity pledges work? Why, by asking the teenagers who took them, of course!
This is the sort of methodology which makes shuttles explode.
On a related note,
A study of another abstinence program found it did a phenomenal job of getting girls to postpone their first sexual encounter. One problem: it evaluated only girls who stayed in the program, says Maynard. Girls who had sex were thrown out. In a related strategy, some studies of true sex ed, not the just-say-no variety, follow kids for only a few months, says Kirby of ETR Associates, a research contractor. But to see any difference between kids who took the class and those who did not, you have to let enough time go by for kids (in the latter group, one hopes) to have sex and get pregnant. A short time horizon may miss a program’s effectiveness.
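The “girls who had sex were thrown out” design can be cooked up in a few lines. In this sketch (invented numbers) the program does nothing: each girl independently becomes sexually active with some probability per year, and anyone who does is dropped from the evaluated cohort. Measured over the survivors, the program is a guaranteed miracle.

```python
import random

random.seed(2)

# Toy model of the "throw out the failures" design. The program has no
# effect; each enrolled girl becomes sexually active with probability P
# per year and is then dropped from the study. (Invented numbers.)
N, YEARS, P = 1000, 3, 0.3
enrolled = N
active_total = 0
for year in range(YEARS):
    became_active = sum(random.random() < P for _ in range(enrolled))
    active_total += became_active
    enrolled -= became_active  # "thrown out" of the evaluated cohort

# Honest measure: abstinence over the ORIGINAL cohort.
honest = 1 - active_total / N
# Cooked measure: abstinence among those still enrolled, which is 1.0
# by construction, since anyone who had sex is no longer counted.
cooked = enrolled / enrolled
print(f"abstinence rate, full cohort:    {honest:.2f}")
print(f"abstinence rate, survivors only: {cooked:.2f}")
```

The short-time-horizon trick Kirby describes is the same disease in milder form: follow kids for so little time that almost nobody in either group has had the chance to have sex yet, and no real difference can show up.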
Now, this may surprise you, but I’m all in favor of including abstinence lessons in sex education. It’s a very common form of sexual activity (by the same logic which makes atheism a religion and health a disease, of course) so we should teach kids how to do it well.
Anyway, it’s nice to see the who, how, and wherefore of bad statistics addressed (even in a quick two-page piece) in such a prominent outlet as Newsweek. More of the same and we might see real progress. Naturally, though, we can’t understand how journalism works on technical issues by looking only at the high-quality end. That’s like cutting off the tail and wagging an imaginary dog, or like federal funding policies on sex education. Rest assured, more bad science (and math!) journalism will be gracing your screen anon.
ALSO: happy Fuehrerstodestag!