
December 04, 2006

Meta-Who?


Last week’s report on the meta-analyses done by researchers at the Wolfson Institute of Preventive Medicine in London led many to ask: “Just what is a meta-analysis?”

Since there’s another example coming right up, let’s take a moment to understand this newer type of study. You may end up not thinking of these studies as studies at all, though many believe they are.

Meta-analysis is a statistical method first proposed in 1976 by an educational psychology statistician, Gene V. Glass, as a way to analyze findings from a bunch of individual studies.

A meta-analysis is an analysis of other analyses to create a new study.

This technique is frequently used when there are no large, high-quality, randomized, double-blind, placebo-controlled clinical trials — the gold standard — to prove the validity of a treatment or theory. So a meta-analysis lumps together whatever evidence is available: the good, the bad and the indifferent. Some studies may show a weak positive statistical association, others report none, and still others may even report a negative correlation. It can end up giving well-designed studies the same weight as poor ones, and create mud. By pooling together what are oftentimes weak studies, the hope is to arrive at a statistically stronger estimate of an effect. And therein lies the rub.
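
To make the pooling step concrete: most meta-analyses combine studies by weighting each one’s effect estimate by the inverse of its variance, so bigger, more precise studies count for more. Here is a minimal sketch in Python; the three “studies” and their numbers are entirely hypothetical and not drawn from the Wolfson report or any real trial.

```python
# A minimal sketch of fixed-effect, inverse-variance pooling,
# the arithmetic at the heart of most meta-analyses.
# All numbers below are made up for illustration.

studies = [
    # (effect estimate, standard error)
    (0.30, 0.15),   # small study, weak positive association
    (0.00, 0.20),   # small study, no effect
    (-0.10, 0.25),  # small study, slight negative association
]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5               # SE of the pooled estimate

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

Note what happens here: three small, conflicting studies are averaged into a single number that looks more precise than any of them alone, which is exactly the appeal — and the rub.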

A favorite definition among critics is that of professor John Brignell, PhD, author of The Epidemiologists: Have they got scares for you!

Meta Analysis is making a strong chain by combining weak links.

When you’re reading about a study and see the word “meta-analysis,” it’s a warning sign to proceed with extreme caution. There are several caveats to this technique.

First, it depends upon what studies the authors choose to include. Oftentimes, only published studies are used, and those suffer from “publication bias.” This is the well-known phenomenon whereby studies showing positive results are much more likely to be published than “boring” ones showing no effect. Most studies disproving things never get published. Some studies are also updated and re-released multiple times, “stacking the deck.” A systematic review of the problems with meta-analyses by H.J. Eysenck in the British Medical Journal found that meta-analyses often contradicted each other, mainly because of the arbitrary nature of deciding which studies to include, and that “these criteria had often been applied so as to favour a favorite hypothesis or vested ideological interest.”
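
To see how publication bias can manufacture an effect out of nothing, consider a toy simulation — the numbers are hypothetical, not drawn from any real literature. Imagine a hundred small studies of a treatment with no true effect, where only the “positive-looking” results ever reach print:

```python
import random

# A hypothetical simulation of publication bias: 100 small studies of a
# treatment with NO true effect. Journals "publish" only the studies whose
# results look positive, and a meta-analysis pools just those.
random.seed(1)

TRUE_EFFECT = 0.0   # the treatment genuinely does nothing
N_STUDIES = 100

all_results = [random.gauss(TRUE_EFFECT, 0.2) for _ in range(N_STUDIES)]
published = [r for r in all_results if r > 0.1]  # "boring" results stay in the file drawer

print(f"mean of all {N_STUDIES} studies:     {sum(all_results) / len(all_results):+.3f}")
print(f"mean of {len(published)} 'published' studies: {sum(published) / len(published):+.3f}")
# Pooling only the published studies turns pure noise into a "positive effect."
```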

Second, the studies lumped together in a meta-analysis can vary considerably in quality, measures, populations, methodologies and statistical analyses. Sort of like apples and oranges.
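
Statisticians do have yardsticks for this apples-and-oranges problem — heterogeneity measures such as Cochran’s Q and Higgins’ I², which estimate how much the pooled studies disagree beyond what chance would explain. A rough, self-contained sketch, again using hypothetical study numbers:

```python
# A rough sketch of Cochran's Q and Higgins' I^2, common measures of
# how much pooled studies disagree beyond chance. Data are hypothetical.

studies = [(0.30, 0.15), (0.00, 0.20), (-0.10, 0.25)]  # (effect, std. error)

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Q: weighted squared deviations of each study from the pooled estimate
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
# A high I^2 signals the studies may be too different to pool meaningfully.
```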

As John Bailar III wrote in a recent discussion in the New England Journal of Medicine of the discrepancies between meta-analyses and later large, randomized, controlled trials: Meta-analysis “does not work nearly as well as we might want it to work. The problems are so deep and so numerous that the results are simply not reliable.”

This is not to say that everyone believes all meta-analyses are worthless. But at least two-thirds are of such exceedingly poor quality that they cannot even be used to guide clinical practice, according to an evaluation of 139 such studies recently published in the journal Critical Care. Its authors cautioned doctors to think carefully before even considering applying the results of meta-analyses in their practices.
