Junkfood Science: Beware the false RCT

June 04, 2009

Beware the false RCT

When we hear about a study from a randomized, controlled clinical trial, it’s easy to give the findings more importance than we would correlations derived from an observational study. But a study drawn from a randomized controlled clinical trial isn’t always itself a randomized controlled clinical trial. Increasingly, it’s an epidemiological study in disguise.

Even medical professionals get taken in by this growing technique. It’s most common when secondary studies use the database of participants from a randomized controlled trial to look for correlations — not to scientifically test a hypothesis, let alone the one the original trial had been designed to fairly test. Carefully controlled clinical trials are concerned with causes and effective treatments. In contrast, multivariate analyses of large databases, with their statistical manipulations and regression computer modeling, are statistics, and statistics is about correlations. It’s not biological research.

When we fail to look closely at a study’s methodology, it can be easy to miss when a randomized controlled trial has morphed into an observational study. This was seen last week when authors of a meta-analysis on tight blood sugar control for type 2 diabetics wrote in The Lancet:

More recently, extension of the initial randomised groups in the UKPDS study has shown a reduction in myocardial infarction and all-cause mortality with both metformin and sulphonylurea-insulin regimens.

By calling it an “extension” of a randomized intervention trial, did you think that this secondary study was a randomized clinical trial? As we examined, the original UK Prospective Diabetes Study (UKPDS) was designed to see if improving blood sugar control could help prevent the complications of type 2 diabetes. It began in 1977, was completed in 1994, and its primary results were published in 1998. The UKPDS found that intensive medical management did not reduce any adverse clinical endpoint or all-cause mortality, and was possibly associated with higher risks for some patients.

The study that The Lancet authors cited as an extension of the original trial, and that was used to reverse the trial’s original null findings to suggest that the interventions were effective after all, was published last fall in the New England Journal of Medicine.*

Briefly, of the original 4,209 newly-diagnosed type 2 diabetics who had been randomized in the UKPDS trial, the authors used data compiled from questionnaires returned for 1,525 of the participants six and ten years after the trial had been completed, a period during which no clinical follow-up was done. Information was also obtained from the Office of National Statistics. The authors noted that during the decade since the trial was completed, no efforts had been made to maintain the interventions the participants had received during the UKPDS trial. They then performed statistical modeling to calculate serial hazard ratios for seven outcomes, according to the intervention categories the participants had been assigned to during the original trial.
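The response rate behind those figures is easy to check. A quick illustrative calculation, using only the two numbers reported above:

```python
# Figures from the post: 4,209 participants were originally randomized,
# and questionnaires were received for 1,525 of them.
randomized = 4209
questionnaires = 1525

response_rate = questionnaires / randomized
print(f"Response rate: {response_rate:.1%}")        # about 36%
print(f"Unaccounted for: {1 - response_rate:.1%}")  # about 64%
```

In other words, roughly two out of every three randomized participants are simply missing from the follow-up analysis.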

Did you catch when it stopped being a randomized controlled clinical trial?

When the original study ended.

It was nothing more than an observational study looking for correlations. Just because the questionnaires came from some people who once participated in a clinical trial doesn’t change that. The controlled interventions ended a long time ago.

If a dentist you hadn’t seen in ages attributed the beautiful smile you have today to a cleaning he once gave you fifteen years ago, would you find the evidence compelling? Or, would you think it might have more to do with the multiple dental procedures, sealants, teeth whitening and fluoride treatments you’d had since then?

If a randomized, double-blind, placebo-controlled clinical trial was completed 15 years ago, and you sent the participants a questionnaire today asking them about their health, would you find the correlations compelling evidence that the clinical trial intervention was the cause for their health status? Or, would you think the correlations might have more to do with countless other medical interventions, life situations and life-changing events, and doctors they’d seen in the interim?

Would you want the FDA to approve a drug based on questionnaires returned from only one-third (36%) of study participants, or would you want them to know what happened to the other two-thirds? Worse, would you want the randomized clinical trial evidence ignored and treatment guidelines based on this weaker evidence — guidelines that become the pay-for-performance (P4P) measures your doctor must follow to be paid and that you must follow to retain health insurance coverage?

By mistaking this observational study for a randomized, controlled clinical trial, peer reviewers may not have looked as closely at the numerous weaknesses (“biases”) that made it fail as a fair test of anything. For example, the participants were not representative of the original cohort of type 2 diabetics (those who returned the questionnaires were two years older and included significantly more minorities), no information about the diabetics’ treatment over the intervening decade was known or considered, and the computer modeling didn’t control for even the most significant factors in mortality, such as socioeconomic status. Even setting those flaws aside, the results wouldn’t be considered tenable.
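To see how a low, uneven response rate alone can manufacture a treatment “effect” out of nothing, here’s a toy simulation. Every number in it is invented for illustration; none come from UKPDS. Both arms are given the identical true mortality risk, but living patients in one arm are assumed to return their questionnaires more often:

```python
import random

random.seed(42)

N = 10000         # hypothetical cohort, split evenly between two arms
TRUE_RISK = 0.30  # identical true 10-year mortality in BOTH arms

responders = {"intensive": [], "conventional": []}
for i in range(N):
    arm = "intensive" if i % 2 == 0 else "conventional"
    died = random.random() < TRUE_RISK
    # Invented response probabilities: deaths are reported at the same
    # low rate in both arms, but living "intensive" patients are assumed
    # to answer more often (say, from closer clinic contact).
    if died:
        p_respond = 0.20
    else:
        p_respond = 0.45 if arm == "intensive" else 0.30
    if random.random() < p_respond:
        responders[arm].append(died)

for arm, outcomes in responders.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{arm:>12}: mortality among responders = {rate:.1%}")
```

Despite identical true risks, the responders-only analysis makes the intensive arm look protective (roughly 16% versus 22% observed mortality), purely because of who answered the questionnaire. That is selection bias, and no amount of hazard-ratio modeling on the responders can undo it.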

This is an example of why it’s important to understand that all studies are not created equal. Studies that are not designed to be fair tests of an intervention can lead to conclusions about treatment effects that are systematically different from the truth. The importance of sound science and randomized clinical trials properly designed to be fair tests of an intervention, with the findings objectively interpreted, really does matter.

But it may be up to you to know the difference.

© 2009 Sandy Szwarc

* The study’s published disclosure statements:

Dr. Holman reports receiving grant support from Asahi Kasei Pharma, Bayer Healthcare, Bayer Schering Pharma, Bristol-Myers Squibb, GlaxoSmithKline, Merck, Merck Serono, Novartis, Novo Nordisk, Pfizer, and Sanofi-Aventis, consulting fees from Amylin, Eli Lilly, GlaxoSmithKline, Merck, and Novartis, and lecture fees from Astellas, Bayer, GlaxoSmithKline, King Pharmaceuticals, Eli Lilly, Merck, Merck Serono, Novo Nordisk, Takeda, and Sanofi-Aventis, and owning shares in Glyme Valley Technology, Glyox, and Oxtech;

Dr. Paul, receiving consulting fees from Amylin;

Dr. Bethel, receiving grant support from Novartis and Sanofi-Aventis and lecture fees from Merck and Sanofi-Aventis;

Dr. Matthews, receiving lecture and advisory fees from Novo Nordisk, GlaxoSmithKline, Servier, Merck, Novartis, Eli Lilly, Takeda, and Roche and owning shares in OSI Pharmaceuticals and Particle Therapeutics; and

Dr. Neil, receiving consulting fees from Merck, Pfizer, Schering-Plough, and Solvay Healthcare. The Oxford Centre for Diabetes, Endocrinology and Metabolism (OCDEM) has a Partnership for the Foundation of OCDEM, with Novo Nordisk, Takeda and Servier. No other potential conflict of interest relevant to this article was reported.
