
January 13, 2009

What the news should have reported: No link between fat and risks for ovarian cancer

Science by press release is increasingly becoming the news of the day. A press release is sent out six weeks before a study is actually published in a medical journal, guaranteeing reporters will jump on a juicy story, but medical professionals won’t have had an opportunity to read it or comment with critical analyses.

When we see this marketing tactic employed, it’s our heads up — our baloney alert, if you will — that the science wasn’t credible in the first place. Someone is trying to sell us something and compromise the integrity of medical research and the peer review process.

An unpardonable example of brazen misrepresentation of a medical “study” came out this past week when the media, in lockstep, reported from a press release. The press release was issued six weeks before the study’s scheduled publication on February 15th in Cancer, the journal of the American Cancer Society, under the headline: “Study links obesity to elevated risk of ovarian cancer.”

The press release and the study abstract were carefully drafted to give the impression that this was major medical research that had followed nearly 100,000 women for seven years and found that fat women had more ovarian cancers. The full study was available to every medical editor and health writer who cared to request a copy. Yet not one, not a single media outlet, went beyond the press release and told the public that this “study” wasn’t a clinical study at all.

Millions of women have been cruelly and needlessly frightened into thinking that major research, purportedly conducted by the National Institutes of Health, had found that they are at an 80% higher risk of getting cancer if they are fat.

Ladies, this was not science. Not a single woman was ever examined in this study, not a single cancer was diagnosed. I suspect you will feel outrage when you learn what I found when I got a copy of the actual study.

It turned out to be another computer data dredge of those AARP member mail-in questionnaires from 1995!

AARP survey says. As readers will remember, in 1995-6, the AARP had sent out questionnaires to its senior members (50-71 years of age) in select parts of the country, asking for their height and weight and asking them to recall what they had eaten and what their lifestyle habits had been over their lives. Only 0.3% of AARP members even returned the questionnaires. The women who mailed back the surveys were not at all representative of women of similar age. It wasn’t a randomized sampling of women. The self-reported survey answers were never confirmed, and not one woman was ever examined.

Nevertheless, these returned questionnaires became the NIH-AARP Diet and Health Study database. It has been dredged through to find all sorts of correlations to frighten men and women, such as when WebMD told women that it had supposedly found that a single drink raised their risks for breast cancer by a third. Or, remember the study reported as finding that the natural weight gain women experience with aging could raise their risks for breast cancer (when that wasn’t what the data had actually shown at all)? And, while one study of the AARP database dredged up that being fatter was associated with a lower risk for prostate cancer in men (that wasn’t reported), another study from the very same AARP database was reported as finding that obesity raised men’s risks for advanced and fatal prostate cancers.

We are continually reminded that data dredges can, and do, pull out all sorts of meaningless correlations — that can even contradict each other — depending on the data the researchers select to use, the variables they plug in and their computer models. These studies are the “Rorschach tests” of epidemiology because computer models can pull out patterns in almost unlimited combinations and conclude just about anything the researcher sets out to find… or report.
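
To see how easily that happens, here is a tiny illustrative simulation, entirely hypothetical and unrelated to the AARP data: test enough purely random “exposures” against a purely random “outcome” and some of them will look statistically “significant” by chance alone.

```python
import math
import random

# Illustration only: purely random "exposures" tested against a purely random
# "outcome" still turn up some nominally significant correlations when enough
# comparisons are run. None of this is the study's data.
random.seed(1)

N_WOMEN = 5000        # hypothetical cohort size
N_COMPARISONS = 100   # hypothetical number of exposures/subgroups dredged through

# Random "cases": each hypothetical woman has a 5% chance of the outcome.
outcome = [random.random() < 0.05 for _ in range(N_WOMEN)]

def crude_p_value(exposed, cases):
    """Rough two-sided z-test for a difference in case rates (illustration only)."""
    a = sum(1 for e, c in zip(exposed, cases) if e and c)        # exposed cases
    b = sum(1 for e, c in zip(exposed, cases) if e and not c)    # exposed non-cases
    c2 = sum(1 for e, c in zip(exposed, cases) if not e and c)   # unexposed cases
    d = sum(1 for e, c in zip(exposed, cases) if not e and not c)
    p1, p2 = a / (a + b), c2 / (c2 + d)
    pooled = (a + c2) / (a + b + c2 + d)
    se = math.sqrt(pooled * (1 - pooled) * (1 / (a + b) + 1 / (c2 + d)))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))   # two-sided p-value, normal approximation

false_hits = 0
for _ in range(N_COMPARISONS):
    exposure = [random.random() < 0.5 for _ in range(N_WOMEN)]   # coin-flip "exposure"
    if crude_p_value(exposure, outcome) < 0.05:
        false_hits += 1

print(f"{false_hits} of {N_COMPARISONS} random comparisons look 'significant' at p < 0.05")
# Roughly 5 of 100 will, purely by chance, even though nothing real is there.
```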

When we hear about a “study,” remember: all studies are not created equal. The way this study was reported, as having been conducted by researchers at the National Cancer Institute, left consumers with the clear impression that it was more credible than it was. From the news coverage, there was no way for the public to have imagined that the “study” they were hearing about was not clinical research, but came from specious 14-year-old mail-in member surveys.

Hundreds of news outlets — from nationally syndicated mainstream to alternative, and their blogs — reported verbatim from the press release. Not one revealed that these findings had come from mail-in questionnaires.

Even the Abstract gave a false impression. Anyone reading it could easily come away thinking that the women had been clinically “followed” and that their cancers had been clinically diagnosed. Most unusually, under the Methods section, the Abstract made no mention of the true source of the data, saying only:

The authors prospectively investigated the association between BMI and ovarian cancer among 94,525 US women who were followed between 1996 through 1997 to December 31, 2003. During 7 years of follow-up, 303 epithelial ovarian cancer cases were documented.

Going to the actual study reveals that the authors had used 138,057 of the returned follow-up AARP questionnaires, then excluded data from thousands of women, including those who’d had oophorectomies, all the thin women (more than 1,500 with BMIs of 18.5 or lower), 46 women with BMIs over 65, and thousands with missing data. Of the remaining 94,525 surveys, they located social security numbers for 85% of the women and then searched state cancer registries for reported cases of epithelial ovarian cancers. Epithelial cancers were identified using registry diagnosis codes (International Classification of Diseases for Oncology, second and third editions; ICD-O code C56.9). A validation analysis estimated they’d identified about 90% of the cancer cases. Their computer then looked for correlations between cases of epithelial ovarian cancer and numerous combinations of selected variables.
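
For readers who want a concrete picture of that process, here is a rough sketch of the kind of exclusion-and-linkage steps described above. The file names and column names are invented for illustration; this is not the authors’ code or data.

```python
import pandas as pd

# Hypothetical sketch of the exclusion and registry-linkage steps described
# above. File and column names are invented; this is not the authors' code.
surveys = pd.read_csv("aarp_followup_questionnaires.csv")   # 138,057 returned surveys
registry = pd.read_csv("state_cancer_registry_cases.csv")   # reported cancer cases

cohort = surveys[
    (~surveys["had_oophorectomy"])        # drop women who'd had oophorectomies
    & (surveys["bmi"] > 18.5)             # drop BMIs of 18.5 or lower
    & (surveys["bmi"] <= 65)              # drop BMIs over 65
    & surveys["bmi"].notna()              # drop missing data
]

# Match the remaining self-reported surveys to registry records by social
# security number; per the paper, only about 85% could be located.
linked = cohort.merge(registry, on="ssn", how="left")

# Flag epithelial ovarian cancers by their ICD-O code, C56.9.
linked["ovarian_case"] = linked["icdo_code"].eq("C56.9")

print(linked["ovarian_case"].sum(), "registry-identified cases")
```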

They were unable to find a single tenable link between obesity and higher relative risks for ovarian cancer. It was a null study. We didn’t hear that fact in the news.

As we know, the more data that’s dredged through, the more likely we are to find statistical correlations that are meaningless. Of the multiple permutations their model examined, none was able to derive a tenable statistical link; only among the 43 women who had never taken hormone replacement and had BMIs of 30 or higher was there an 83% higher relative risk associated with ovarian cancer, compared with the 39 women with BMIs under 25. But if the fat women reported ever having taken even a single hormone pill, illogically, their relative risk dropped to 4% below that of the “normal” weight women, and the ‘overweight’ women dropped to 32% lower. As we know, none of these relative risks are tenable, meaning beyond random chance or statistical error in an epidemiological study. Untenable correlations that are no better than random chance or a statistical fluke are no findings at all, no matter how scary or important they might sound.
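
To get a feel for how fragile a relative risk built on a few dozen cases is, here is a back-of-the-envelope calculation. The person-year denominators are made up and this is not a reconstruction of the study’s analysis; the point is only how wide the uncertainty is when each comparison group holds roughly 40 cases.

```python
import math

# Illustration only: how imprecise a relative risk is when each comparison
# group contains only a few dozen cases. The person-years are hypothetical;
# this is not a reconstruction of the study's analysis.
def rate_ratio_ci(cases_a, pyears_a, cases_b, pyears_b, z=1.96):
    """Crude 95% confidence interval for a ratio of two incidence rates."""
    rr = (cases_a / pyears_a) / (cases_b / pyears_b)
    se_log_rr = math.sqrt(1 / cases_a + 1 / cases_b)   # Poisson approximation
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Roughly 40 cases in each group, with made-up, equal person-year denominators.
rr, lo, hi = rate_ratio_ci(cases_a=43, pyears_a=100_000,
                           cases_b=39, pyears_b=100_000)
print(f"RR = {rr:.2f}, 95% CI roughly {lo:.2f} to {hi:.2f}")
# With ~40 cases per group the interval spans a factor of about 1.5 in each
# direction; shifting a handful of cases would swing the estimate substantially.
```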

Among women with a family history of ovarian cancer, being fat might even appear protective: the ‘overweight’ women were associated with a 71% lower relative risk for cancer compared to the normal weight women, and the ‘obese’ women with a 26% lower risk. We really didn’t hear about those correlations. But, of course, these relative risks weren’t significant, either.

The authors concluded: “The observed relations between obesity and ovarian cancer risk have relevance for public health programs aimed at reducing obesity in the population.” The study, however, hadn’t shown that naturally fat women who lose weight can reduce their risks to those of naturally thinner women. In fact, the women who reported BMIs over 25 at age 18 and had lost even modest amounts of weight (to a BMI under 25) during adulthood had a 26% higher relative risk associated with ovarian cancer compared to women who had been naturally slim their whole lives. These relative risks aren’t tenable, either, but they clearly provide no support for a claim that dieting lowers cancer risks.

There is still no justification for blaming fat women for getting cancer or for using cancer to try and scare them thin. The thousands of studies done on millions of women, which have all failed to find an association between body fatness and cancers or cancer deaths, continue to provide a reassuring body of evidence for women. And these were all considerably stronger studies, with better data and methodology than an AARP member survey.

This study failed to find a valid link between obesity and cancer, and it most certainly found no credible evidence to support the flurry of news stories scaring women or to justify government anti-obesity programs.

Once again, when you see the same news story reported everywhere all at the same time, you can be pretty certain someone sent out a press release. Press releases come from marketing departments. They’re marketing. Science by press release is almost never good science.

© 2009 Sandy Szwarc
