
April 17, 2008

News stories for the bird cage

Does a single drink a day really raise a woman’s risk for breast cancer? That’s what 403 media stories (and counting) have been reporting, based on a new study said to be “the largest of its kind.” But not all studies reported in the news are worth taking seriously, let alone worrying about. Here’s why this one shouldn’t have even registered on our radar.

Since there’s actually no study to review (!), we’ll walk through the news. When would you have changed the television station or tossed the newspaper aside?

According to the news, the researchers reviewed data on 184,418 postmenopausal women and found that women who drank even just one to two drinks a day were 32% more likely to develop breast cancers of a certain type (estrogen-receptor and progesterone-receptor positive, or ER+/PR+). This study was said to provide evidence that alcohol is positively associated with breast cancer.

When hundreds of news outlets around the world report on a single study, out of the hundreds released each day, on exactly the same day and all saying exactly the same thing, you can be sure someone issued a press release. Sure enough, this paper came with a press release. Science doesn’t issue press releases. Marketing departments do that. As Drs. Steven Woloshin and Lisa Schwartz reported in a 2002 issue of the Journal of the American Medical Association, press releases rarely reveal a study’s limitations, and the information is typically presented in ways that can exaggerate the perceived importance of the findings.

Our first clue that this was marketing was that it was released directly to the media. It bypassed the scientific community altogether, as well as any expert critical analysis. We heard only one interpretation of this study — the one on the press release.

Did you toss it?
A press release. Knowing this, our healthiest response would have been to discard it and move on. But if we hadn’t...

More importantly, this study was not even published... anywhere. It was presented at a meeting. It hasn’t been peer-reviewed, nor is the study available for healthcare professionals to evaluate its methodology or findings for themselves. We’re supposed to take someone’s word for what it really found.

A fourth-year medical student at the University of Chicago gave a report on this study at this week’s meeting of the American Association for Cancer Research. Any research presented at a conference should raise our suspicions, because it is well recognized that such papers suffer from the worst distortions and flaws, and oftentimes never make it through the peer-review process to be published at all. That’s why JFS rarely reviews research presented at meetings — it’s impossible to go to the original source and see what the researchers really did and actually found.

Did you toss it?
Presented at a meeting. Knowing this, we could have tossed it and waited for an actual published study. But if we hadn’t...

In the press release, although not consistently reported, we learn that it was a data dredge of the NIH-AARP Diet and Health Study. This is merely a collection of returned surveys sent out by AARP in 1995-96, asking its senior members living in select areas of the country to recall their diets and lifestyle habits through their lives. None of the self-reported retrospective information on these questionnaires had been verified. Already, the quality of the information used was unreliable.

It was not a random sampling of seniors, nor representative of seniors. Only 0.3% of AARP members had even returned the questionnaires. As we already know from a previous study using the NIH-AARP database, the women who returned the AARP survey were very different from most seniors. More than 90% of the women were white, most were of higher incomes, and 55% of them were still taking hormones at the time of the survey, considerably more than most postmenopausal women. According to CDC data, only 22% of postmenopausal women in the U.S. were taking hormones when this survey was done.

Did you toss it?
Based on surveys or polls. Knowing this, we should have tossed it right there and waited for a clinical trial using randomized samples of people with verified clinical information. But if we hadn’t...

For this study, of the 227,021 questionnaires from women in the NIH-AARP study database, we’re told the researchers used data on the 184,418 postmenopausal women who had answered the question about their alcohol consumption. They then attempted to cross-match the AARP questionnaires against cancer registries to identify breast cancer diagnoses. The registries enabled them to identify 5,461 cases of invasive breast cancer, but they had tumor-type information on less than half of those (2,391). Among those, they had data on 1,641 ER+/PR+ breast cancers, which they used to look for correlations with alcohol consumption. So, 184,418 was the Trojan Number reported, and the actual study was conducted on the records of 1,641 women.
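For readers who want the arithmetic laid out, here is a minimal back-of-the-envelope sketch (in Python) of that attrition, using only the counts reported in the press coverage; remember, the study itself was never made available for anyone to check. The step labels are paraphrases, not the researchers’ own wording.

    # Sketch of the sample attrition described above, using only the counts
    # cited in the press coverage of this unpublished study.
    steps = [
        ("Women's questionnaires in the NIH-AARP database", 227_021),
        ("Postmenopausal women who answered the alcohol question", 184_418),
        ("Invasive breast cancers found by registry cross-matching", 5_461),
        ("Cases with tumor-type information", 2_391),
        ("ER+/PR+ cases actually analyzed against alcohol use", 1_641),
    ]

    headline = steps[1][1]  # 184,418 -- the "Trojan Number" in the headlines
    for label, count in steps:
        share = count / headline
        print(f"{label}: {count:,} ({share:.2%} of the headline figure)")

Run as written, the last line shows that the women whose records were actually analyzed amount to well under one percent of the figure the headlines repeated.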

But we know nothing about these women and all the ways they differed from the women excluded from the study, so it’s impossible to make any sort of credible comparisons or conclusions. We also don’t know all the ways the subset of women used in this report differed from most women their age. The press release provided one indication of a seemingly unusual sampling, reporting that 70% of the women in their study drank — remarkably different from most elderly women. According to CDC data, when this survey was conducted, only 36% of all U.S. women 65 years of age and older consumed alcohol.

Next, since we know it was a data dredge looking for correlations, the associations found must be tenable (above random chance or computer modeling error), meaning relative risks at least several times over the null. In this case, they were unable to find a single genuine correlation. According to the press release, one drink a day was associated with a 7% increase in relative risk for developing this type of breast cancer over 7 years. Three drinks a day was associated with a 51% higher relative risk compared with nondrinkers. Nowhere close to viable links. This study was a nonfinding.

These relative risks also tell us nothing about the actual risks, or how many women got cancer and how many didn’t, so it’s impossible to see just how insignificant these untenable associations really are. Relative risks always give an overstated perception of risk, so anytime we’re not given the actual numbers, we know we’re likely being manipulated. As JFS readers remember from a similar study of hormone-receptor breast cancers and red meat consumption by postmenopausal women, its reported 97% increase in relative risk equated to a mere 0.15% difference in actual risk over 12 years — a number so small as to be unreplicable, not clinically meaningful, and just as likely a math error.
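To make the relative-versus-absolute distinction concrete, here is a minimal sketch built around that red meat example. The baseline and exposed risks are assumed values, chosen only so the output reproduces that example’s reported figures (a roughly 97% relative increase amounting to about a 0.15% absolute difference over 12 years); the alcohol study released no absolute numbers at all.

    # Minimal sketch: the same data can sound alarming as a relative risk and
    # negligible as an absolute difference. The two risk values below are
    # assumptions picked to reproduce the red meat example's figures.
    baseline_risk = 0.00155   # assumed 12-year risk in the comparison group
    exposed_risk = 0.00305    # assumed 12-year risk in the exposed group

    relative_increase = (exposed_risk - baseline_risk) / baseline_risk
    absolute_difference = exposed_risk - baseline_risk

    print(f"Relative increase: {relative_increase:.0%}")      # ~97%, the headline-style number
    print(f"Absolute difference: {absolute_difference:.2%}")  # ~0.15%, the number readers never see

Without the absolute figures, a “7%” or “51%” higher relative risk is simply uninterpretable, which is exactly the point.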

Did you toss it?
A data dredge finding untenable associations. Knowing this, we should have tossed it right there. But if we hadn’t...

Clearly, the media, with headlines like “Alcohol May Boost Breast Cancer Risk,” hopes we’ll jump to the conclusion that this correlation is not only significant, but that it indicates a cause. But we know that correlations do not prove causation.

Not only were the authors unable to find a viable correlation, even using poor-quality data, but we have no information on whether they adjusted for other factors that might explain this correlation. Did the researchers consider hormone use, an important factor in hormone-receptor cancers? Other studies of senior women, such as one from Mount Carmel College of Nursing in Columbus, Ohio, have found that those who drink alcohol are 44% more likely to use over-the-counter drugs, 67% more likely to smoke, and tend to be older. We have no idea if the researchers controlled for any of these things.

There was simply no credible evidence presented on which to base any fears or draw any reasonable conclusions.

The strongest evidence we do have is of marketing. Epidemiological correlations are most readily misused for marketing because statistical associations can be, and have been, drummed up to implicate virtually any and every aspect of our diets and lifestyles in some deadly disease. Epidemiological correlations have been twisted into causations and used to convince us that health is a matter of personal responsibility under our control, and to blame people’s “bad” behavior for any health problems they develop. As Paul R. Marantz of Albert Einstein College of Medicine, New York, said: “The misleading message that an individual will prevent a particular disease by altering a particular behavior or exposure (and its converse, that an individual will develop a particular disease if such behavior is not changed) has unfortunately been widely conveyed.”

Already, this study has been widely used in attempts to scare us about alcohol. One of the most notable illustrations appeared in the Globe and Mail: “Alcohol is quite a toxic drug,” said the director of the Centre for Addictions Research in British Columbia. “The sad fact is there are approximately 60 ways in which alcohol can kill a person or cause them to be very ill.”

Is there a body of research giving us reason to fear that moderate alcohol consumption increases our risks for breast cancer or premature death? Far from it. Even another data dredge of the same NIH-AARP database, published in the December issue of the Archives of Internal Medicine, found the lowest all-cause mortality rates among the women drinking in the higher percentiles of alcohol consumption. The American Heart Association’s latest preventive health guidelines for women included 54 alcohol studies, and an examination of the mortality data also supported the healthfulness of responsible drinking. Even the Second Expert Report, issued in November by the World Cancer Research Fund and American Institute for Cancer Research, found no meaningful association between alcohol (or any food) and the incidence of 17 cancers.

Perhaps, if this study had reported that wearing a bra is associated with 12,500 times the relative risk for breast cancer (true), we might have taken the finding more seriously.

Or not.

© 2008 Sandy Szwarc
