Junkfood Science: Yo-Yo science and the dangers of coincidence

September 20, 2007


Growing numbers of people are finally getting wise to pop science and those studies that claim something is dangerous one day and healthy the next. The story is bigger than flaws in the research itself, however; it involves how our minds work — in ways that can lead even experts to be fooled.

First the news. The Los Angeles Times echoed last weekend’s post examining the reliability of studies and expert opinions. As the subhead to a recent article read: “Coffee is good for you — no, it's bad. Epidemiological studies can come up with some crazy results, causing some critics to wonder if they're really worthwhile.”

Scientists do the numbers

SAGITTARIANS are 38% more likely to break a leg than people of other star signs — and Leos are 15% more likely to suffer from internal bleeding...Leos, Sagittarians: There's no need to worry. Even the study's authors don't believe their results. They're illustrating a point — that a scientific approach used in many human studies often leads to findings that are flat-out wrong.

Such studies make headlines every day, and often, as the public knows too well, they contradict each other.... “It's the cure of the week or the killer of the week, the danger of the week," says Dr. Barry Kramer, associate director for disease prevention at the National Institutes of Health in Bethesda, Md. It's like treating people to an endless regimen of whiplash, he says.... “I've seen so many contradictory studies with coffee that I've come to ignore them all," says Donald Berry, chair of the department of biostatistics at the University of Texas MD Anderson Cancer Center in Houston....

These critics say the reason this keeps happening is simple: Far too many of these epidemiological studies — in which the habits and other factors of large populations of people are tracked, sometimes for years — are wrong and should be ignored. In fact, some of these critics say, more than half of all epidemiological studies are incorrect.

These studies have a surprising amount of influence, though. People believe them and start taking some vitamin, or eating or avoiding certain foods. These studies also influence the practice of medicine, the paper reports, "long before their effects were tested in randomized clinical trials, the gold standard of medical research."

Some of epidemiology's critics are calling for stricter standards before such studies get reported in medical journals or in the popular press. [Stan Young, a statistician at the National Institute of Statistical Sciences in Research Triangle Park, N.C.], one of the foremost critics, argues that epidemiological studies are so often wrong that they are coming close to being worthless. “We spend a lot of money and we could make claims just as valid as a random number generator," he says....

The article goes on to briefly review the types of observational, epidemiological studies and their caveats, including that most are overturned when randomized, controlled clinical trials are performed to test their hypotheses. The research of Dr. Ioannidis (reviewed here) was brought up and Dr. Kramer said Dr. Ioannidis “is voicing what many know to be true.” Even Dr. Young told the LA Times that he sees the same thing in his own monitoring of epidemiological claims:

“When, in multiple papers, 15 out of 16 claims don't replicate, there is a problem," he says. Belief can be costly, Young adds. For example, one part of the large, randomized Women's Health Initiative study tested the widely held belief -- based in large part on epidemiological studies -- that a low-fat diet decreases the risk of colorectal cancer, heart disease, or stroke. The findings suggested that there was no effect. “$415 million later, none of the claims were supported," Young says.

Why does this happen? Young believes there's something fundamentally wrong with the method of observational studies -- something that goes way beyond that thorny little issue of confounding factors. It's about another habit of epidemiology some call data-mining. Most epidemiological studies, according to Young, don't account for the fact that they often check many different things in one study. “They think it is fine to ask many questions of the same data set," Young says. And the more things you check, the more likely it becomes that you'll find something that's statistically significant -- just by chance, luck, nothing more. [The LA Times concluded with epidemiologists’ concerns that setting the bar too high could be dangerous and you might miss something. But, they provided no evidence.]
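The arithmetic behind Young's point is easy to demonstrate. The following sketch (the question count of 100 is a hypothetical, not a figure from the article) simulates a study in which every hypothesis is false, so any "significant" result can only be luck — yet at the conventional 5% threshold, asking enough questions all but guarantees a headline-ready finding:

```python
import random

random.seed(0)

ALPHA = 0.05       # conventional significance threshold
N_QUESTIONS = 100  # hypothetical number of questions asked of one data set
N_RUNS = 1000      # repeat the whole "study" many times

# Under the null hypothesis a p-value is uniform on [0, 1], so each
# question has a 5% chance of looking "significant" by luck alone.
runs_with_a_hit = 0
total_false_positives = 0
for _ in range(N_RUNS):
    hits = sum(1 for _ in range(N_QUESTIONS) if random.random() < ALPHA)
    total_false_positives += hits
    if hits:
        runs_with_a_hit += 1

print(f"Average false positives per study: {total_false_positives / N_RUNS:.1f}")
print(f"Studies with at least one 'finding': {runs_with_a_hit / N_RUNS:.0%}")
# Analytically: 100 * 0.05 = 5 false positives per study on average, and
# 1 - 0.95**100, about 99.4%, of such studies report at least one "finding".
```

This is why uncorrected multiple comparisons produce results "just as valid as a random number generator": with a hundred questions, chance alone delivers roughly five spurious associations per study.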

It’s the classic parlor trick of getting an audience to believe you can read their minds or are able to communicate with their dead relatives. Ask dozens of questions and eventually you’re bound to come up with something positive. Cold reading techniques "are a testament to the wonderful capacity of our species to find meaning in just about any image, word, phrase, or string of such items," said Robert Todd Carroll in The Skeptic's Dictionary. Of course, when the results of a clinical trial don’t agree with beliefs born of epidemiology, believers are quick to spin things and propose reasons why. Selective validation can lead us to see and hear only what we want to.

There is much more to this story. When interpreting epidemiological studies, our minds work in predictable ways. But unless we recognize how, we can be fooled.

People don't realize that coincidences, even remarkable and huge ones, are not that unusual. Nor do they usually mean anything.

"A poor understanding of probability and statistics, common in our society, causes people to be more amazed than they should be when confronted with coincidences," said Robert Novella. There are several very simple reasons why we naturally and easily misinterpret correlations, he said in an educational article, "The Power of Coincidence." We humans have a poor innate grasp of probabilities and believe that all effects must have deliberate causes, he explained. We want to believe in things and have an explanation.

We also don't understand the laws of dealing with numbers, especially big ones. The law of truly large numbers means that when enough data or people are involved (as in epidemiological studies), "unusual" occurrences become highly probable. Our species, however, is hardwired to look for patterns and connections and suggest explanations that often don't exist. The vast majority of our experiences "turn out to be much more probable than they appear, if analyzed critically." Compound that with selective validation, and the true power of coincidence to deceive us can be realized.
