
September 15, 2007

Are you sure about that?

According to the gospel of evidence-based medicine, “randomized controlled clinical trials are objective, free of bias and produce robust conclusions about the benefits and risks of treatment and clinicians should be trained to rely on them,” wrote Canadian and British researchers in the British Medical Journal a few years ago.

Trouble is, that’s often not true.

Professor James McCormack and Trisha Greenhalgh found that “the interpretations and dissemination of results [are] open to several biases that can seriously distort the conclusions.”

They reached these conclusions after examining the long-awaited UKPDS (United Kingdom Prospective Diabetes Study) clinical trial. As they explained, incredibly, only one well-designed, prospective clinical trial had ever been done on the medications used for a quarter of a century to treat type 2 diabetes, and that study had never been replicated — in 25 years! When the first randomized controlled clinical trial conducted after all that time, the UKPDS, was finally done, it involved more than 5,000 patients, took 20 years, and spanned 23 medical centres in the UK. Closely examining this important clinical trial, McCormack and Greenhalgh discovered that the evidence about the benefits and risks of glucose control contradicted the positive spin being given by editorialists and the medical community.

This observation has been confirmed by others, including this writer. One systematic examination of published articles reviewing the UKPDS trial was led by Dr. Allen F. Shaughnessy, associate director of the Harrisburg Family Practice Residency in Harrisburg, Pennsylvania. His team found, for instance, that only six of 35 reviews in the medical literature included the patient-oriented outcomes of the study, evidence showing that tight blood glucose control had no effect on diabetes-related or overall mortality. And "no review pointed out that treatment of overweight patients with type 2 diabetes with insulin or sulphonylurea drugs had no effect on microvascular or macrovascular outcomes," they said. They concluded that articles have not accurately reported to doctors the valid patient evidence found in the UKPDS, and that clinicians relying on them “may be misled.”

Biases are powerful influences not only in the selective or distorted reporting of results written up for medical journals, but also in the very conduct of trials (even at academic centers), in how studies are interpreted after they’re published, and in how clinical guidelines are developed. Researchers, authors and editors are highly susceptible to interpretive biases, McCormack and Greenhalgh found. Among the biases they identified in interpreting and reporting research were:

· "We've shown something here" bias — the researchers' enthusiasm for a positive result. [To suggest that after 20 years, several classes of drugs being used to treat diabetes had little or no effect would have “been a distinct anticlimax,” they wrote.]

· "The result we've all been waiting for" bias — the clinical and scientific communities' prior expectations. [The imperative of strict control of glucose has been widely believed since the 1980s and is “the raison d'être of the diabetologist and should be the principal objective of every well behaved patient,” they wrote.]

· "Just keep taking the tablets" bias — the tendency of clinicians to overestimate the benefits and underestimate the harms of drug treatment. [Low emphasis is given to side effects and their effects on patients, they wrote.]

· "What the hell can we tell the public?" bias — the political need for regular, high impact medical breakthroughs. [“Pressure from the press and patient support groups arguably drew staff from the British Diabetic Association, and perhaps even the trials' authors, into producing soundbites with a positive spin,” they said.]

· "If enough people say it, it becomes true" bias — the subconscious tendency of reviewers and editorial committees to "back a winner." [“The writing — that the study was about to cause a sensation — was probably already on the wall, so it would have taken a brave and rebellious individual to be the first to jump off the bandwagon,” they said.]

This post isn’t about the debate over tight control of HbA1c levels in diabetic patients, as we covered some of that in May, looking at the editorial by Dr. Rodney A. Hayward describing the politics behind clinical guideline performance measures for diabetic patients. We’ll talk more about diabetes in upcoming articles, though, as for the past decade diabetes has taken on the very same confluence of vested financial and political interests, distortions and abuses of the science that plagues the obesity issue. The debate was rekindled this week at MedRants when Dr. Robert M. Centor, a general internist at the University of Alabama School of Medicine in Birmingham, encouraged his readers to examine the use of randomized clinical trials in the development of clinical practice guidelines.

Coincidentally, he made arguments very similar to those Dr. Hayward had made, calling into question the efficacy of performance measures based on getting all patients’ HbA1c levels below 7. Arbitrary number management — which can (and typically does) encourage the prescribing of multiple drugs “to reach the magic number” — can have the unintended consequence of removing incentives for doctors to help the patients with extremely high blood sugars who could benefit most from lowering them, even if they’re never able to achieve 7. Meanwhile, doctors are rewarded for focusing on the more attainable modest reductions (say, from 8 to 7) that have minor impact on preventing complications while putting those patients at risk for side effects. “Treating numbers, for the sake of guidelines, may conflict with treating patients,” he wrote.

But Friday, Dr. Centor also identified another important bias within the medical profession that is relevant to this discussion and one that’s frequently overlooked: specialty bias. As he explained: “[O]ne of the many biases in guideline development comes when those making the guideline are experts in that particular disease. Diabetologists see the world through a sugary lens. Their passion in life is treating blood sugar. I would submit that they have a bias which stems from their specialty.”

We’ll look at groupthink bias among specialists more in a moment, but there’s also a specialty bias held among the public and busy medical practitioners who believe that if specialists have developed guidelines and reached a consensus on them, and if those guidelines are backed by reputable institutions, then they must be sound. As British doctors discovered when investigating the factors that influence doctors’ decisions about new drugs, most doctors do not evaluate research data themselves; they rely on expert assessment. “The decision to initiate a new drug is heavily influenced by 'who says what,' in particular the pharmaceutical industry, hospital consultants, and patients,” they wrote. “Prescribing of new drugs is not simply related to biomedical evaluation and critical appraisal but, more importantly, to the mode of exposure to pharmacological information and social influences on decision making.”

It isn't that doctors don't care, as most are focused on caring for their patients and making a difference in their lives. It takes time-consuming effort for medical professionals to critically look at the research behind clinical practice guidelines and question the status quo, and admirable bravery to speak out, especially when the soundest evidence goes against the experts, their peers, funding sources, and the official consensus of reputed organizations. Dr. Centor is commendable in his continued efforts.

Undeniably, the trade organizations that have developed the guidelines for diabetes management, like those for many other health indices, are supported by the pharmaceutical industry and vested interests. One only has to look at the corporate sponsors of the American Diabetes Association here, here and here to recognize financial conflicts. The same goes for the international initiative to promote A1Cs under 7% for everyone. It’s become popular to focus on financial conflicts of interest and to go after all researchers who collaborate with drug companies. Some argue, however, that this focus potentially discourages beneficial funding and slows the advancement of medicine. What’s often overlooked is that the belief that money is the only bias that could influence otherwise objective researchers and experts is misguided.

Believing, not only in the consensus of expert opinion, but in the purity of research itself, can be harmful.

In a 2005 analysis, “Why most published research findings are false,” Dr. John P. Ioannidis of the University of Ioannina School of Medicine in Ioannina, Greece, and the Institute for Clinical Research and Health Policy Studies at Tufts-New England Medical Center, Tufts University School of Medicine in Boston, found that the greater the financial and other interests and prejudices in a scientific field, the less likely the published study findings were to be true. As he wrote:

Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable.

The notoriety, career opportunities, promotions, speaking engagements, publishing opportunities and countless other nonfinancial rewards that come with reporting certain research findings can be among the most compelling biases. In fact, some of the more egregious scientific misconduct lately has been among academic researchers with no company affiliations.


Troubling evidence

Which, coincidentally, brings us to an article in yesterday’s Wall Street Journal titled “Most science studies appear to be tainted by sloppy analysis.” [For the access-challenged, a pdf of this article has been made available courtesy of Junkscience.com here. Thanks Steve and Barry!]

Journalist Robert Hotz discusses Dr. Ioannidis’ controversial 2005 report documenting how, among the thousands of peer-reviewed research papers published each year, most published findings are wrong. It seems almost inconceivable when one first realizes it: research findings are, in fact, more likely to be false than true.

Dr. Ioannidis told the WSJ: “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual.”

Dr. Ioannidis’ paper is worth a read to capture additional points not covered in the WSJ. For instance, another key marker of research that is less likely to be true is a hotter field with more scientific teams involved. It’s the power of groupthink and competition among researchers trying to one-up each other. As Dr. Ioannidis wrote:

This seemingly paradoxical corollary follows because [the positive predictive value] of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations.
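A brief aside on the arithmetic behind that corollary may help. In the paper, the positive predictive value (PPV) is the post-study probability that a claimed relationship is actually true, and in its simplest form (before the paper's further corrections for bias and for multiple competing teams) it depends only on the pre-study odds R that the relationship is real, the significance threshold α and the Type II error rate β:

PPV = (1 − β)R / (R − βR + α)

As an illustrative example (these particular numbers are mine, not the paper's): with 80% power (β = 0.2), the conventional α = 0.05 and pre-study odds of one in ten (R = 0.1), PPV works out to 0.08/0.13, or roughly 0.6, and the paper's corrections for bias and for many teams chasing the same question push most of its worked scenarios well below one half.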

Granted, trial and error is part of scientific discovery and conflicts will normally arise out of incomplete data, but science relies on peer review to catch problems — a process that increasingly falls short because of biases. As the WSJ reported:

[F]indings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors....

But Dr. Ioannidis’ most astounding finding, not mentioned in the WSJ, was that findings claimed to have been found in studies may simply be measures of the prevailing bias.

“The majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings,” he said. And science has a long history of wasting effort on research in fields where there was never any true scientific information to be discovered, he said. But what scientist or doctor would ever admit they’re working in a “null field”? As Dr. Ioannidis poignantly explains:

For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then, the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.

This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.

Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” [They're highly unlikely to just say, "Oh, never mind."]

This explains entire fields like obesity and preventive nutrition.
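To see how a “null field” can still generate an impressive-looking literature, here is a small, purely hypothetical simulation (my illustration, not Ioannidis’ or the WSJ’s). It assumes 60 nutrients with no real effect on tumor risk, and research teams who try several analysis choices per nutrient and write up the most impressive relative risk they find; every number in it is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

BASE_RISK = 0.05      # true tumor risk, identical in every intake tertile: a null field
N_PER_TERTILE = 1000  # subjects in the upper and in the lower intake tertile
N_NUTRIENTS = 60      # nutrients the hypothetical literature has examined
N_ANALYSES = 10       # analysis choices tried per nutrient (adjustments, subgroups, cut-points)

published_rrs = []
for _ in range(N_NUTRIENTS):
    best_rr = 0.0
    for _ in range(N_ANALYSES):
        # Each "analysis" is modeled crudely as a fresh draw from the same null model.
        cases_upper = rng.binomial(N_PER_TERTILE, BASE_RISK)
        cases_lower = rng.binomial(N_PER_TERTILE, BASE_RISK)
        if cases_lower == 0:
            continue
        rr = cases_upper / cases_lower  # relative risk, upper vs. lower intake tertile
        best_rr = max(best_rr, rr)      # only the most impressive result gets written up
    published_rrs.append(best_rr)

published_rrs = np.array(published_rrs)
print(f"median 'published' relative risk: {np.median(published_rrs):.2f}")
print(f"nutrients 'linked' to the tumor (RR >= 1.2): {np.mean(published_rrs >= 1.2):.0%}")
```

Run as written, the reported relative risks tend to land in the 1.2 to 1.4 range Ioannidis describes, and most of the 60 nutrients end up “linked” to the tumor, even though the true relative risk for every one of them is exactly 1. The sizes of the claimed effects are measuring nothing but the selection built into the reporting.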

What can be done? Dr. Ioannidis said that instead of chasing statistical significance, we should improve our understanding of the pre-study odds that the relationships being tested are true, and make sure researchers are testing plausible relationships, not ones built on bias. He suspected that several large, established “classic” studies would fail the test. Even with statistically significant associations found in a multitude of studies performed around the world, he said, the probability that they are true is only one in five, hardly better than chance and scarcely better than the probability known before the extensive research was ever undertaken.

“Diminishing bias through enhanced research standards and curtailing of prejudices may help,” he wrote. “However, this may require a change in scientific mentality that might be difficult to achieve.”

But it is an imperative for us to try, for all our sakes.

© 2007 Sandy Szwarc
