
October 24, 2008

JFS Special: Food and heart attacks — is a link for real?

According to food headlines this past week, a new study has shown that a Western diet causes 30% of all the heart attacks … throughout the whole world. A Western diet, defined as one based on fried and salty foods, eggs and meat, was said to exemplify a bad diet. Taxes on harmful greasy foods are one possible solution to the Western diet crisis, said lead investigator, Salim Yusuf, DPhil, FRCPC, FRSC, Professor of Medicine, McMaster University, and director of the Population Health Research Institute in Ontario. “Just like with tobacco, we could have safety warrants for foods with high salt,” Professor Yusuf said. In contrast, eating a “prudent” healthy diet of fruits and vegetables was said to lower the risk of a heart attack by a third.

Can food really do all that? Can one-third of all heart attacks really be blamed on bad foods? And can virtuously eating a low-fat, vegetable-based diet — what’s been called a heart-healthy diet — protect us? And did this new study actually support these sweeping conclusions?

This study’s far-reaching assertions of a link between what we eat and heart disease, backed by the world’s largest and most prestigious heart and health organizations, make it even more important to really understand the sort of evidence behind these claims.


Overview

This report was published in the online edition of Circulation, a journal of the American Heart Association. This study design epitomizes what Dr. John Brignell, Ph.D., calls a data dredge.*

The authors used the database from the INTERHEART study** for this secondary analysis. That project had been initiated by Dr. Yusuf with the objective of identifying risk factors associated with cardiovascular disease among populations throughout the world. Its findings had been published earlier, in the September 2004 issue of the Lancet, where the objectives of the database were described in more detail:

Although age-adjusted cardiovascular death rates have declined in several developed countries in past decades, rates of cardiovascular disease have risen greatly in low-income and middle-income countries, with about 80% of the burden now occurring in these countries. Effective prevention needs a global strategy based on knowledge of the importance of risk factors for cardiovascular disease in different geographic regions and among various ethnic groups.***

Between February 1999 and March 2003, 12,461 patients who were hospitalized for acute heart attacks had been enrolled into the INTERHEART study from 262 centers in 52 countries. Three out of four of the patients enrolled were men. Only 2% of the original INTERHEART study cohort (296 people) were from North America. Controls without heart disease were recruited, matched to the cardiac patients’ age and gender. According to the earlier 2004 INTERHEART report, 58% of the controls were hospital patients, 36% were relatives or caretakers of a patient, and 6% were unaccounted for.

While the heart attack patients were in the hospital, they were asked to complete questionnaires on their demographics; socioeconomic factors and lifestyles; and health histories, such as for diabetes and hypertension. To assess their diets, they were asked to fill out simple food frequency questionnaires covering 19 food categories, asking how many times a day, week or month they had eaten those foods in the past 12 months (no portion sizes were assessed and the diet questionnaires were not verified). The patients also had labwork drawn and their body measurements taken. All of this data became the INTERHEART database.

For this new study, the authors said that to minimize confounding of diet-disease relationships, they confined their analysis to 5,761 cases of first heart attacks and used only 10,646 of the control group. In other words, this study was about the role of diet for the primary prevention of first-time heart attacks. No information was reported on how the more than half (54%) of heart attack patients they eliminated from the original cohort differed from those they used in this analysis.

The authors used a statistical method called factor analysis to combine the reported foods into categories and create three dietary patterns: “Western” (which combined the risks associated with fried foods, salty snacks and meats), “Prudent” (fruits and vegetables), and “Oriental” (high in soy foods). In creating their model and generating a dietary risk score, the authors said they considered the foods in the Western group to be predictive of a heart attack and those in the Prudent group to be preventive. The authors then divided each dietary pattern into quartiles, with the first quartile the lowest consumption and the fourth quartile the highest consumption of each dietary pattern. Finally, they ran a logistic regression analysis, a form of computer modeling, to look for associations between each dietary pattern and acute heart attacks.
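To make the mechanics concrete, here is a minimal sketch in Python of what a quartile-based logistic regression of this general kind looks like. The data, and the names western_score and case, are entirely synthetic and invented for illustration; this is not the authors’ actual code or data.

```python
# Minimal sketch of a quartile-based logistic regression (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical "Western" dietary pattern score, e.g., from a factor analysis
western_score = rng.normal(size=n)
# Case/control status (1 = first heart attack); pure noise here by design
case = rng.integers(0, 2, size=n)

df = pd.DataFrame({"score": western_score, "case": case})
# Divide the pattern score into quartiles (Q1 = lowest consumption)
df["quartile"] = pd.qcut(df["score"], 4, labels=[1, 2, 3, 4])

# Dummy-code quartiles with Q1 as the reference group and fit the model
X = pd.get_dummies(df["quartile"], prefix="Q", drop_first=True).astype(float)
X = sm.add_constant(X)
result = sm.Logit(df["case"].astype(float), X).fit(disp=False)

# Exponentiated coefficients = odds ratios for Q2-Q4 versus Q1
print(np.exp(result.params))
```

Because the disease labels here are random, the estimated odds ratios hover around 1.0. Any pattern a real analysis reports has to come from the data, including all of its biases, not from the machinery itself.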

Let’s pause here for a moment. Did you catch some of the weaknesses in this study so far and why it fails as a fair test of whether diet causes heart attacks?

● The health histories and dietary information were self-reported, unverified and restricted to things the authors chose to examine. [information bias]

● The study population was derived from hospitalized heart attack patients and therefore, as the authors themselves noted, “unlikely to reflect the population prevalence of risk factors in an entire country or region.” [selection bias]

● The blood pressure readings and lab tests measured on the heart attack patients were likely most reflective of the medications and treatments they were receiving while in the hospital after their heart attack and not indicative of previous levels, as the authors acknowledged, reducing the reliability of the relationships reported. [observation bias]

● The data used was retrospective, gathered while patients were in the hospital after their first acute heart attack. In this situation, recall bias and reverse causation are well-established epidemiological problems, commented Dr. Majid Ezzati of the Harvard School of Public Health in Boston, in the Lancet. If you’ve just had a heart attack, you’ll be more likely to recall and report eating foods, or doing things, you believe to be bad for you, as you search for an answer to “why me?”

● When individual food groups are associated with insignificant relative risks for a disease, adding them together into arbitrary categories may make the number larger, but the correlations are no more valid. Factor analysis is notorious for the subjectivity involved in deciding which factors to include or exclude and how to categorize the items. It’s more than just whether all relevant variables are included: “if one deletes variables arbitrarily in order to have a cleaner factorial solution, erroneous conclusions” will result. How many Westerners really eat like their stereotypical definition, with fatty, salty foods and meats as staples? And do other cultures, such as Asian cuisines, really not fry and salt foods? In other words, these are computer-contrived models.

● Finally, and most importantly, this study was looking for correlations (risk factors) which can never provide evidence of causation. More about that in a minute.

The study participants differed in significant ways between those placed in the lowest and highest quartile of each dietary category, indicating the presence of notable confounding factors. In the Prudent diet category, for instance, the highest quartile differed from the lowest by including more women; healthier people; more than twice as many physically active people and people with higher educational status; people with higher BMIs (average BMI = 26); one-third fewer current smokers; and nearly twice as many higher-income households.

The authors compared the odds ratios (already a method of comparing groups that exaggerates a link) for heart attacks between the lowest and highest quartiles. For the Prudent diet, the highest quartile had 33% lower odds of heart attack; for the Western diet, the highest quartile had 35% higher odds. These were the correlations, according to the authors, after adjusting for selected risk factors (age, gender, geographic region, education, smoking, physical activity and body mass index). They found no relationship between heart attacks and the Oriental diet of soy foods.
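For readers unfamiliar with the arithmetic, here is a worked example with entirely made-up counts showing how a “35% higher odds” figure falls out of a simple 2×2 case-control table, and how modest such a difference really is:

```python
# Hypothetical 2x2 case-control table (all counts made up for illustration)
exposed_cases, exposed_controls = 540, 1000      # highest "Western" quartile
unexposed_cases, unexposed_controls = 400, 1000  # lowest quartile

odds_exposed = exposed_cases / exposed_controls        # 0.54
odds_unexposed = unexposed_cases / unexposed_controls  # 0.40
odds_ratio = odds_exposed / odds_unexposed
print(f"odds ratio = {odds_ratio:.2f}")  # 1.35, i.e. "35% higher odds"
```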

Did you catch their key omission? Some of the most important confounding factors for health inequities, especially when examining underdeveloped and developing countries, were not controlled for, even though the information was included in the INTERHEART database: socioeconomic/poverty status, household income and rural-urban location. The specific foods people eat are typically markers for socioeconomic status, a fact evident in nearly all dietary studies and just one of a multitude of confounding factors when examining populations. In developed countries, as we’ve seen, the precise food choices of low-income people may differ from those of wealthier people, but population studies spanning more than 50 years have shown that the actual nutrients eaten aren’t appreciably different. The prestige of a food is more a measure of class status than nutrition.

Bottom line, even after all of this, none of the reported odds ratios (30%, 35%) were tenable.**** It was actually a null finding. The relationships could be explained by random chance, mathematical or modeling error, or various study biases and confounding factors. To fall outside a null finding, relative risks generally need to be greater than 200% to be considered tenable (odds ratios even more so). In fact, a growing number of researchers acknowledge that the relative risks which later prove genuine in real life and in subsequent clinical trial research are extraordinarily larger than that.

Dr. John P. Ioannidis, M.D., at the University of Ioannina School of Medicine in Ioannina, Greece, and with the Institute for Clinical Research and Health Policy Studies at Tufts-New England Medical Center, Tufts University School of Medicine in Boston, had a term for this method of comparing upper to lower quartiles when the relative risks are small and untenable like this: working in a null field. As he explained in an investigative report, “Why Most Published Research Findings Are False,” the claimed effect sizes are simply measuring the net biases [discussed here].

Finally, the INTERHEART researchers used a computer program to estimate the population attributable risk (PAR) — the source of their conclusion that 30% of heart attacks worldwide are related to Western diets. Roughly, PAR combines the relative risk with the prevalence of the risk factor in an entire population (a sketch of the arithmetic follows). In essence, it takes an untenable correlation (one that says nothing about a causal link) found in a specific subset of a population and inappropriately extrapolates it to other populations — or in this case, the entire planet — making the resulting figure sound all the more impressive, when it’s really no more defensible.
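For illustration, the standard textbook version of this arithmetic is Levin’s formula; whether the INTERHEART authors used exactly this form isn’t stated here, and the numbers below are made up:

```python
# Levin's formula for population attributable risk (PAR). Illustrative only;
# it assumes the relative risk reflects a causal effect, which is exactly
# the assumption the article disputes.
def par(prevalence: float, relative_risk: float) -> float:
    """Fraction of cases attributable to the exposure, IF it were causal."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# A weak relative risk applied to a very common "exposure" still yields
# an impressive-sounding attributable fraction:
print(f"{par(prevalence=0.9, relative_risk=1.35):.0%}")  # about 24%
```

Note how a modest relative risk, multiplied across an exposure defined so broadly that most of a population has it, produces a headline-sized percentage.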

The quick and easy shortcut would have been to say that this study was a data dredge and found no tenable correlation between diet and first-time heart attacks. It was a null finding. But many might have been tempted to dismiss such remarks, not realizing that they weren’t flippant, but backed by a great deal of scientific reasoning.


Putting research into perspective — the body of evidence

Scientific knowledge advances as sound research builds a body of credible and replicated evidence. The INTERHEART study didn’t do a review of the medical literature and examine how its findings fit into the scheme of things.

The most glaring departure from sound usage of epidemiological correlations is when correlations are treated as causal and then used to make recommendations for health interventions, with no evidence that those interventions are actually beneficial and that their benefits outweigh the risks for harm.

Consumers were incorrectly told that this study provided evidence that junkfood and bad diets (“as assessed by a simple dietary risk score”) cause one-third of all heart attacks and that this study supports the benefits of the American Heart Association’s heart-healthy eating. Yet, even the evidence used by the American Heart Association in its guidelines on healthy eating for the primary prevention of cardiovascular disease and premature death doesn't hold up to scrutiny. It relies on observational (epidemiological) studies, confusing correlation with causation. As we saw in its “Evidence-based Guidelines for Cardiovascular Disease Prevention in Women,” for example, not one of the observational studies it presented had actually found a tenable link between its healthy eating recommendations and the prevention of heart disease or premature death. And the only cited clinical, randomized controlled intervention trial concluded that “the diet had no significant effects on incidence of CHD, stroke or CVD.” [see below]

Randomized clinical trials are the strongest form of evidence we have on any health intervention, yet when it comes to dietary beliefs, we rarely hear about the soundest body of evidence. The medical literature is saturated with contradictory and untenable correlations claiming links between diet and health, even though the scientific process long ago moved beyond correlations. Decades of well-designed trials, and even carefully-conducted observational population studies, have not supported most of the popular claims that various specifically-defined “healthy” diets, certain foods or supplements hold special abilities to prevent premature death or the diseases that are the biggest causes of death. And observational studies claiming to find correlations continue to be unsupported when they’re put to fair tests in well-designed randomized clinical trials. So often, foods and diets in population studies are markers for the true influences on health, such as genetics and socioeconomic status.

There is no scientific debate about the benefits of essential vitamins, minerals and nutrients in treating actual deficiencies, or of a varied diet and having enough to eat. But there is no credible evidence to support fears that people in developed countries suffer widespread nutritional deficiencies or are poorly nourished. And eating and getting the nutrients our bodies need isn’t nearly as precarious as many want us to fear. Claims that bad foods cause today’s most costly chronic diseases (popularly called “The Big Three”: diabetes, cancers and heart disease) and premature deaths are not supported by any sound research to date, nor do they have biological plausibility grounded in nutritional science. Similarly, claims that special “healthy eating” or nutritional supplements can prevent those chronic diseases, promote optimal “wellness,” slow aging, or add years to our lives go beyond the science, too.

JFS recently examined the largest and most careful systematic review of the clinical trial evidence examining if antioxidants can avert the damage of free radicals and prevent chronic disease of aging — the primary causes of death, such as heart disease and cancer — and enable us to live longer. The free-radical theory of aging serves as the basis for much of today’s preventive health movement and lifestyle medicine, and beliefs of ‘lifestyle diseases.’

This major undertaking was by the Cochrane Collaboration. Its scientists examined every clinical trial conducted since 1945, along with the protocols and a multitude of published papers for each trial, and conducted detailed analyses of each trial looking for biases to ensure each was a quality study. That’s how we know whether research has been conducted as a fair test of an hypothesis. Their analysis included clinical trials on 232,550 people from all over the world. Not one sound clinical trial was able to find a tenable effect for vitamins in reducing mortality or for the primary or secondary prevention of chronic diseases (they all hugged either side of a null effect). This 191-page paper was discussed extensively here.

One of the largest, longest and most expensive randomized, controlled, primary dietary intervention clinical trials in the history of our country was launched in 1993. It was carefully designed and conducted, and meant to be THE study to end all studies, the one that would finally prove the benefits of “healthy eating” precisely as defined by the American Heart Association and the government’s Dietary Guidelines: low-fat, high-fiber diets with lots of fresh fruits and vegetables and wholegrains. It was a major undertaking, costing $415 million of taxpayers’ money and conducted at 40 medical centers across the country. This clinical trial, called the Women’s Health Initiative Dietary Modification Trial, was covered in-depth here.

The 48,835 women were closely followed for more than eight years, and the incidences of clinically confirmed cancers, heart disease, heart attacks and strokes were carefully monitored. And what did it show? After more than eight years, there were no differences in the incidences of breast cancer, colon cancer and dozens of other cancers, heart attacks, or strokes between those who ate the “healthy” diet and those who ate whatever they wanted. “Healthy eating” proved to have no effect on cardiovascular disease. The researchers concluded: “a dietary intervention that reduced total fat intake and increased intakes of vegetables, fruits, and grains did not significantly reduce the risk of CHD, stroke, or CVD in postmenopausal women.” And among the women who had heart disease at the beginning of the study, the low-fat diet slightly increased their risks for heart attacks.

And the women who followed the healthy eating diet and carefully watched what they ate for eight years ended up weighing no less than the control group of women who ate anything they darn well pleased. The two groups differed by about one pound. The authors concluded: “A low-fat eating pattern does not result in weight gain in postmenopausal women.”

In contrast, this INTERHEART study epitomized the weakest type of study. Not only was it a statistical undertaking done in a computer but, in the scientific sense, it was highly biased. It was never designed to answer whether diets actually contribute to heart attacks or to provide credible evidence of a safe and effective health intervention. The scientific literature is littered with epidemiological data dredges that divert precious medical research resources away from credible research to find effective treatments and cures for diseases. This study was largely funded by pharmaceutical companies, but that doesn’t change the fact that the prolific misuse of epidemiology to claim correlations between diet, lifestyles and health is what Dr. Michael Fitzpatrick calls “the subordination of science to propaganda.”

The wider values that have acquired a pervasive influence in modern society… include a pessimistic outlook towards the prospects for nature and society, reflected in the popularity of apocalyptic and doomsday scenarios of all kinds, and notably in a willingness to embrace the likelihood of catastrophe from epidemic disease... They also include a misanthropic outlook towards humanity, expressed in contemptuous attitudes towards the masses, notably...those who smoke or are overweight. A third theme is a growing sympathy for authoritarian interventions to deal with social problems... A combination of these attitudes — among scientists and politicians as much as in the general public — leads to an inclination to turn a blind eye towards pseudoscience if it furthers the wider social agenda that follows from them. — Dr. Michael Fitzpatrick, a British doctor and author of The Tyranny of Health: Doctors and the Regulation of Lifestyle (October 24, 2008).




* Data Dredges. Dr. Brignell is a British scientist and engineer who knows numbers. He publishes Number Watch and is the author of The Epidemiologists — Have they got scares for you! As Dr. Brignell explains, when an epidemiologist wants to see if some exposure X causes a disease, he begins by comparing the number of exposures to X among a group of people with the disease to those among a group without it. The researcher then calculates the relative risk (1:1 = null) and the probability that this relationship occurred by chance or statistical accident … and the probability that it did not: the chance that the correlation is statistically significant. But if there are dozens of factors to compare, a researcher now has hundreds of possible combinations, and hundreds of times the chances that some “significant” correlation will occur by chance or fluke.

When there are no real relationships between the factors and the diseases, it all comes down to random numbers: the more comparisons made, the more certain that mining for links will randomly yield a certain number of hits. Rather than adjust for the greater chance of random correlations, said Dr. Brignell, “more often than not, the five successes are published as ‘scientific facts,’ and the other 95 ignored.”

Data dredges, however, go even further. They retrospectively mine large databases that include “hundreds of putative causes and effects,” looking for correlations. The correlations investigated are only those the study author chooses to examine, leaving out countless others that may have been far more important. The data in these large databases is also problematic, typifying the phrase “garbage in, garbage out.” Databases are often compiled all or in part from self-reported, unverified anecdotes drawn from questionnaires. And the questions can be vague and susceptible to recall bias: “Can you remember what, and in what quantity, all the foods you ate last week?” The data is also often subject to reverse causality, and the subjects are often not representative of the general population.

Dr. Stan Young, Ph.D., a statistician and the Assistant Director of Bioinformatics at the National Institute of Statistical Sciences in Research Triangle Park, NC, isn’t a fan of data dredges, either. As discussed in more detail in a previous post, using the same data set to ask many questions is fundamentally wrong, he says. “The more things you check, the more likely it becomes that you’ll find something that’s statistically significant — just by chance, luck, nothing more.” Remember the law of truly large numbers, which means that ‘unusual’ links become highly probable when enough data or people are involved. “Epidemiological studies are so often wrong that they’re close to being worthless,” said Dr. Young.
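A minimal simulation makes the point concrete. Below, 100 made-up “food” variables are tested against a purely random disease label, so no real relationships exist at all, yet about five of them cross the p < 0.05 threshold by luck alone (all data and names are invented for illustration):

```python
# Simulate a data dredge: no real food-disease relationships anywhere,
# yet "statistically significant" links appear by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_foods = 1000, 100

disease = rng.integers(0, 2, size=n_subjects)   # random case/control labels
foods = rng.normal(size=(n_subjects, n_foods))  # random "food intakes"

false_hits = 0
for j in range(n_foods):
    # Compare mean intake of food j between cases and controls
    _, p = stats.ttest_ind(foods[disease == 1, j], foods[disease == 0, j])
    if p < 0.05:
        false_hits += 1

print(f"{false_hits} of {n_foods} foods 'linked' to disease by chance alone")
# Expect roughly five: the "five successes" published as scientific facts
```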


** INTERHEART Study Sponsors:

The World Heart Federation
The World Health Organization
The International Clinical Epidemiology Network

Funding Sources:
International Coordination:
Canadian Institutes of Health Research
Heart and Stroke Foundation of Ontario
The International Clinical Epidemiology Network

and generous donations from pharmaceutical companies:
AstraZeneca
Aventis
Bristol-Myers Squibb
Abbott
Novartis
Sanofi-Synthelabo


*** Null findings are sometimes the most important of all. The original 2004 INTERHEART study found one enriching point that seems especially worth remembering today: the continued finding that humankind is more alike than not. Regardless of where in the world we live, our race/ethnicity or what we look like, people have similar health problems as we age. The notion that we can target a specific group of people based on the color of their skin or an outward physical characteristic as being inherently more diseased wasn’t supported in this study. History is filled with efforts to do so, though, identifying “inferior” groups using junk science like phrenology, with tragic consequences.


**** Tenable correlations. Newer readers who’ve missed earlier posts have written in, confusing tenable associations with statistically significant ones. A statistically significant relative risk is not necessarily tenable. Tenable relative risks and epidemiological research were explained here and here.

Observational studies — the various epidemiological studies that dredge through data on a group of people and use computer models to find correlations with a health outcome — are the most rife with misinterpreted statistics, errors and biases, and are the most easily manipulated to arrive at whatever conclusions researchers set out to find. These are the studies most popularly reported as the scare or magical cure of the week — coffee linked to increased risk for heart disease one week and linked to lower risk for heart disease the next!

Investigators can manipulate their study design, analyses and reporting in countless ways so that more relationships cross statistical significance (the p = 0.05 threshold), even though they wouldn’t have otherwise, Dr. Ioannidis explained, adding:

Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results.

Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10⁻⁴ — hardly any higher than the probability we had before any of this extensive research was undertaken!
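The Bayesian logic behind Ioannidis’s arithmetic can be sketched as follows. His exact 1.5 × 10⁻⁴ figure depends on the specific assumptions in his paper (ten independent teams working in a near-null field); the function below applies his general formula for the post-study probability that a lone “significant” finding is true, with illustrative inputs:

```python
# Positive predictive value (PPV) of a statistically significant finding,
# following the general formula in Ioannidis (2005). Inputs are illustrative.
def ppv(prior_odds: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability a significant finding is true, given prior odds R."""
    true_positives = power * prior_odds  # (1 - beta) * R
    return true_positives / (true_positives + alpha)

# The fewer true hypotheses a field contains, the less a p < 0.05 means:
for prior in (1.0, 0.1, 0.001):
    print(f"prior odds {prior:>5}: PPV = {ppv(prior):.1%}")
# prior odds 1.0 -> ~94%; 0.1 -> ~62%; 0.001 -> ~1.6%
```

In a null field, the prior odds that any tested hypothesis is true approach zero, so a statistically significant result tells us almost nothing; the claimed effect sizes measure the net biases instead.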

“Statistical significance should not be mistaken for evidence of a substantial association,” explained authors from the University of Texas School of Public Health in Houston, Texas, in an analytical paper on epidemiology and Sir Austin Bradford Hill’s overlooked lessons. Researchers still frequently present results as if statistical significance and p-values were useful decision criteria, they said, but emphasis on the p-value has been soundly denounced for decades. No matter how low the p-value (how statistically significant), an association could still be explained by a methodological error. The inadequacies of epidemiology mean that methodological errors occurring in the process of conducting an epidemiological study are often even greater than random errors. “No statistical test of random sampling error informs us about the possible impacts of measurement error, confounding, and selection bias,” they said. And “precision should not be mistaken for validity.”

Dr. Ernst Wynder, M.D., founder and director of the American Health Foundation and editor of Preventive Medicine prior to his death, said any relative risk less than 200% is suspect. And Dr. Marcia Angell, M.D., former editor-in-chief of the New England Journal of Medicine, said they looked for 200% or more before even accepting a study for publication. Yet we frequently hear about studies finding risks of 30% or 80% reported as if those numbers mean something, when they wouldn’t be taken as worthy of note by a prudent scientist.

The biggest misconception, beyond not knowing when a correlation is big enough to suggest a true link, is mistaking correlation for causation. Epidemiological studies can never show causation, no matter how strong the correlation. Wearing a bra has been associated with a 12,500 times greater risk for breast cancer, but bras do not cause breast cancer, of course (regardless of how hard some have tried to make up a biologically plausible explanation). It’s a real correlation, but a meaningless one, explained by co-factors.

Correlation does not prove causation. This basic principle has been forgotten as causal inferences and health decisions are often made based on beliefs of a causal link. Epidemiological studies, looking for links, were meant to be used as the first step in narrowing down potential factors in infectious diseases, and that is still their primary value. If a strong link is found, then an hypothesis would be tested in a series of clinical intervention studies to see if there is a true causal role in the disease, and to learn if the intervention is effective, with benefits that outweigh the harms. If a well-designed population study can’t even find a tenable correlation, then good scientists move on and look somewhere else for the cause. That’s why those null findings, showing no evidence of a tenable correlation, are especially valuable nowadays, as we all suffer from daily epidemiological whiplash.

So, whenever we hear of a new study about something linked to higher or lower risks for a disease, it means the scientific process has barely begun to investigate a potential cause, let alone an intervention. With very rare exceptions, there is nothing yet that warrants changing our behavior or basing any health decision on. Wait until a clinical trial actually tests it.

© 2008 Sandy Szwarc. All rights reserved.
