“The next time a headline declares something is bad for you, read the small print.”
She adds insights on the statistical problems and bias in research, citing work by Dr. John Ioannidis, and notes that researchers looking for risk factors for diseases are often unaware that they must adjust their statistics in certain situations. Her helpful post explores the clash between observational studies and clinical trials, and she concludes with comments on what it means for consumers and journalists. She writes:
....Dr. Peter Austin examined hospital admission records and discovered that astrological birth signs are associated with particular conditions. For example, people born under Leo are 15% more likely than those born under other signs to be admitted to hospital with gastric bleeding; similarly, Sagittarians are 38% more likely to be admitted for a broken arm.
Dr. Austin does not endorse the above findings, nor does he claim that they are meaningful. He used them to illustrate the inadequacy of commonly used statistical analyses that "run the risk of identifying relationships when, in fact, there are none". Austin's analysis demonstrates why so many health claims look important at first blush but cannot be substantiated in later studies....
As her post noted, a recent article from The Economist had especially thought-provoking comments from Dr. Ioannidis:
Unfortunately, many researchers looking for risk factors for diseases are not aware that they need to modify their statistics when they test multiple hypotheses. The consequence of that mistake, as John Ioannidis of the University of Ioannina School of Medicine, in Greece, explained to the meeting, is that a lot of observational health studies—those that go trawling through databases, rather than relying on controlled experiments—cannot be reproduced by other researchers. Previous work by Dr Ioannidis, on six highly cited observational studies, showed that conclusions from five of them were later refuted. In the new work he presented to the meeting, he looked systematically at the causes of bias in such research and confirmed that the results of observational studies are likely to be completely correct only 20% of the time. If such a study tests many hypotheses, the likelihood its conclusions are correct may drop as low as one in 1,000—and studies that appear to find larger effects are likely, in fact, simply to have more bias.
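The arithmetic behind that warning is easy to sketch. The snippet below is a minimal illustration of the multiple-comparisons problem, assuming 24 hypotheses tested at the conventional 5% level (the 24-test figure is my own illustrative assumption, not a number from the studies discussed):

```python
# Illustrative sketch of the multiple-comparisons problem: testing many
# hypotheses at the usual 5% level, without adjustment, makes at least one
# spurious "finding" very likely even when every null hypothesis is true.

ALPHA = 0.05    # conventional per-test significance level
N_TESTS = 24    # assumed number of hypotheses tested (illustrative)

# Probability of at least one false positive across all tests,
# assuming the tests are independent and every null is true.
family_wise_error = 1 - (1 - ALPHA) ** N_TESTS

# Bonferroni correction: test each hypothesis at ALPHA / N_TESTS instead,
# which pulls the family-wise error rate back below ALPHA.
corrected_error = 1 - (1 - ALPHA / N_TESTS) ** N_TESTS

print(f"Chance of >=1 spurious finding, uncorrected: {family_wise_error:.1%}")
print(f"Chance of >=1 spurious finding, Bonferroni:  {corrected_error:.1%}")
```

With these assumptions, an uncorrected search over 24 hypotheses has roughly a 70% chance of turning up at least one "significant" association purely by chance, which is exactly the trap Dr. Austin's birth-sign exercise was designed to expose.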
Thank you, Shinga, for the most kind comments. :)