Junkfood Science: Traffic tickets for sugar — Does healthy eating mean low-sugar?

July 01, 2008


Sugar makes food taste good and fun to eat. Kids especially love sweets. Therefore, sugar must be bad. To allow ourselves to love food means we might eat too much and get fat.

As incredible as that may sound, it’s the basic logic behind beliefs that everyone — from children to adults — should limit sugar to eat ‘healthy’ and avoid getting fat. All sorts of elaborate theories, of course, have worked backwards from this belief, trying to propose explanations for why sugar is fattening, but that doesn’t make them true. When the right question isn’t asked first, the answers aren’t likely to be very helpful.

Question: Is there even a link between sugars and body weight?

In the last of what has inadvertently become a series on the Red Lights being given to fats, salt and sugars in foods, we’ll look at two large sugar studies published this past month that examined this key question. Their findings came as no surprise, though, as they concurred with more than half a century of evidence that has continued to show no consistent link between added sugars and body weight, or that there is anything to fear from the sweet things in life.

It might seem that health concerns surrounding sugar are new or just being realized, when, in reality, the very same scary claims about sugar have been raised for generations. Sugar is another food ingredient where the science has had a hard time breaking through mythologies. Sugar has become so feared that even little kids, who have sweeter tastes than grown-ups, are being taught that it’s bad for them and put on low-sugar diets. Even a lot of adults believe that cutting sugar will prevent obesity, cavities, and age-related chronic diseases like diabetes, cancers and heart disease.


From New Zealand

The first new study examined information from the National Nutrition Survey of 1997 and the Children’s Nutrition Survey in 2002. These national health surveys, commissioned by the New Zealand Ministry of Health, gathered 24-hour food recall data and health information from representative samples of New Zealand adults (4,636) and children (3,275). These surveys are like our NHANES (National Health and Nutrition Examination Surveys), conducted through the Department of Health and Human Services, which are viewed by medical professionals as the most accurate information available on our diets, lifestyles and health. The New Zealand research was led by Dr. Winsome Parnell, associate professor at the Department of Human Nutrition at the University of Otago, specializing in poverty and nutrition.

What made this national study interesting and unique was that the researchers were able to evaluate the correlations among distinct ethnic population groups — indigenous Maori, immigrants from Pacific countries and their descendants, and New Zealanders of European/other origin — and among children at different ages, to see if sugars played a role in the development of obesity. They were also able to see if sugar could explain the dramatic differences in obesity rates between ethnicities. For example, by age 14, 35% of Pacific island, 18% of Maori, and 6% of European girls fall into the “obese” category.

The researchers examined the links between sugars from foods and beverages, both intrinsic and added. High fructose corn syrup is not used anywhere in the country, but kids there love powdered drinks similar to the sweetened Kool-Aid* American kids used to drink literally by the gallon. The packets used to cost only 5-10 cents apiece and were mixed with a cup of sugar and water. Many summers were spent with lips the color of the day. :)

The researchers found, not surprisingly, that young people through age 24 consume a lot more sugar than older adults. Children everywhere, and throughout time, have had stronger preferences for sweets, which they partly outgrow in adulthood.

Among the kids, there were no significant differences by gender or age group in the amount of sugar they ate, but the heaviest kids had the lowest sugar intakes compared to their peers.

Not only was total sugar intake — in total grams or as a percentage of calories — lowest among the Pacific kids at every age compared to other ethnic groups, but sugar intake among the biggest kids was slightly lower at every age group within every ethnic group.

When the researchers separated out just the sucrose from beverages, which was the largest source (26%) of sugars among the kids, they found no statistical correlations between sugary drinks and weight among any of the children or the adults.

In other words, sweet drinks were unrelated to body weight. Nor were the researchers able to find any evidence that sugar was related to the development of obesity. The differences in body size between ethnicities were not explained by their diets or by how much sugar or fat they ate.

Examining various diets, and the fats and sugars consumed, the researchers found:

In both adults and children, there were no significant differences between overweight/obese and normal weight individuals with respect to choice of diet type.

As they noted in their conclusions, their findings agree with other studies, such as the Dietary and Nutrition Survey of British Adults, which also found no evidence that fat people are more likely to have diets higher in fat and sugars.


Sugar-sweet drinks and BMI

Sweetened drinks are the form of sugar currently feared to contribute most to weight gain. This second study may not have been widely reported because it couldn’t be clearly explained in a sound bite. The aspects that set it apart from other studies like it were probably missed by reporters, so its value to consumers was lost, too.

This study, published in the American Journal of Clinical Nutrition, was a meta-analysis of original research on humans published between 1966 and October 2006 examining the association between sugar-sweetened drinks and weight gain among children and teens. But this meta-analysis wasn’t like many such papers published today and was an educational tool in itself. These authors, from the University of Maryland’s Center for Food, Nutrition and Agriculture Policy, went to unusual lengths to attempt to overcome the potential flaws and biases that can afflict meta-analyses, as well as to openly explain their methodology.

· They did a funnel plot of the studies identified in their analysis, looking for publication bias, and searched six other databases for unpublished studies.

· They analyzed the influence of each study individually on their overall results.

· They also conducted four different sensitivity tests to evaluate the robustness of their findings.

· They had an independent expert, David Allison, Ph.D. (then the incoming president of the Obesity Society), review and critique their draft.

· They examined the disclosure for the studies and noted that “none of the studies included in this meta-analysis received funding from the food industry.”

· They also critically read each available study, both those they used and those they didn’t, and explained the strengths and weaknesses of each.

A total of 12 studies (10 longitudinal and 2 randomized controlled clinical trials) were reviewed. Another two trials were not included in the analysis: one because it hadn’t estimated the independent association between sugary drinks and BMI changes (although it had reported no association between snacks, sweets and sugary drinks and BMI changes among more than 10,000 children 9-14 years of age), and the other because it hadn’t reported BMIs at all.

The authors, led by Richard A. Forshee, reported: “None of the RCT studies found a statistically significant difference between the treatment and control groups.” The estimated differences in BMI ranged from 0.1 to 0.14.

In reviewing possible bias, the authors found that no single study had a notable influence on their results: removing one study at a time and re-running the analysis brought the same conclusions. Their funnel plot showed that the studies with the most precise estimates were tightly and evenly grouped around the average effect size, the symmetric pattern expected when publication bias is minimal. The less precise studies were the ones whose conclusions were more anomalous and whose effects were weaker.
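The leave-one-out influence check described above is simple enough to sketch in a few lines. This is a generic illustration only: the effect sizes and standard errors below are made up, not the paper’s data, and the inverse-variance pooling function is the standard textbook fixed-effects estimator, not necessarily the exact model the authors used.

```python
# Leave-one-out sensitivity sketch: re-pool the effect sizes with each
# study removed in turn, and see whether any single study drives the
# overall result. All numbers are invented for illustration.

def pooled_fixed_effect(effects, ses):
    """Inverse-variance (fixed-effects) pooled estimate and its SE."""
    weights = [1 / se ** 2 for se in ses]           # precision weights
    total = sum(weights)
    estimate = sum(w * e for w, e in zip(weights, effects)) / total
    return estimate, total ** -0.5

effects = [0.01, -0.02, 0.03, 0.00, 0.02]   # per-study BMI changes (illustrative)
ses     = [0.02, 0.03, 0.04, 0.02, 0.05]    # their standard errors (illustrative)

overall, _ = pooled_fixed_effect(effects, ses)
for i in range(len(effects)):
    loo, _ = pooled_fixed_effect(effects[:i] + effects[i + 1:],
                                 ses[:i] + ses[i + 1:])
    print(f"without study {i + 1}: pooled = {loo:+.4f} (overall {overall:+.4f})")
```

If dropping any one study swings the pooled estimate far from the overall value, that study is exerting outsized influence; the Forshee group reported seeing no such swing.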

They conducted four sensitivity tests: evaluating the effect of adjusting for total caloric intake (which changed the results by 0.008 and 0.023 for the fixed- and random-effects models, respectively); seeing whether ten hypothetical additional studies, all showing insignificant associations, would change the overall effect if added (the effects remained “close to zero”); seeing whether sugary drinks had a greater effect on the heaviest kids (the highest BMI tertile) than on their thinner peers (no estimate was distinguishable from zero); and seeing whether a “blockbuster” study, one reporting effects at least twice those of other studies with more precision than any other study, could make their results no longer statistically significant.

After all of this, their meta-analysis found: “The overall estimate of the association was a 0.004 (95% CI: -0.006, 0.014) change in BMI during the time period defined by the study for each serving per day change in sweetened beverage consumption with the fixed-effects model and 0.017 (95% CI: -0.009, 0.044) with the random-effects model.”
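Those pooled numbers are easy to interpret once turned back into intervals: a 95% confidence interval that spans zero, like both intervals quoted above, means the effect is not statistically significant. A minimal sketch of the arithmetic; the standard error here is backed out from the quoted fixed-effects interval for illustration, not taken from the paper.

```python
# A 95% confidence interval is estimate +/- 1.96 standard errors
# (1.96 is the normal distribution's 97.5th percentile).
def ci95(estimate, se):
    return estimate - 1.96 * se, estimate + 1.96 * se

# Fixed-effects result quoted above: 0.004 (95% CI: -0.006, 0.014).
# Implied standard error is roughly half-width / 1.96 = 0.010 / 1.96.
lo, hi = ci95(0.004, 0.010 / 1.96)
significant = not (lo <= 0 <= hi)
print(f"95% CI: ({lo:.3f}, {hi:.3f})  statistically significant: {significant}")
```

Because zero (no effect) sits inside the interval, the data are consistent with sweetened drinks having no effect on BMI at all.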

They concluded:

The results of the meta-analysis show that the current science base finds that the relation between sweetened beverage consumption and BMI among children and adolescents is near zero. The best current scientific evidence shows that the relation... is not statistically significant.

The strongest current evidence is that reducing or eliminating sweetened-beverage consumption would not have a large effect on the BMI distribution of children or adolescents.

Their findings concurred with the body of evidence on sugars, compiled and reviewed since the 1970s [covered here]. Even the 2002 review of 300 studies by the National Academy of Sciences’ Institute of Medicine had concluded: “There is no clear and consistent association between increased intake of added sugars and [body weight].” Reviews of the medical literature on the development of obesity have also shown that fat and thin children eat no differently to explain the diversity of their sizes and shapes.


A test — Seeing how reviews can differ

As the University of Maryland researchers noted, three other recently published reviews have also reached similar conclusions, finding that the scientific literature shows weak, inconsistent relationships between sugary drinks and weight gain or obesity. Randomized, controlled clinical trials carry more weight than observational studies, of course. And, as professor Forshee and colleagues noted, neither of the clinical trials on children or teens has found sweetened drinks to contribute to statistically significant differences in either BMI or weight changes between intervention and control groups.

So, why have some reviews reported finding a relationship? It goes even beyond careful methodology that accounts for bias. As the University of Maryland authors explained, one reason is that stronger reviews examine the studies included and consider the magnitude of the associations reported and actual changes in BMIs, in order to evaluate the statistical significance of the findings. Careful reviews also consider the confounding factors that contradict associations seen. They don’t just tally studies on one side.

In contrast, less reliable meta-analyses may note an association reported by a study without examining whether it’s statistically significant (above chance or random error), and will note changes in the percentages of people in various BMI categories rather than report the actual weight or BMI changes, again making it impossible to determine whether the changes are statistically significant or meaningful.

One recent meta-analysis, led by Vasanti S. Malik, a doctoral student, along with professors at the Harvard School of Public Health in Boston, had been widely reported in the media as showing that sugary drinks cause weight gain and obesity. Comparing it to the Forshee meta-analysis proved an eye-opening illustration of the differences among meta-analyses. This 2006 review had also examined studies published since 1966, but used 30 of them and included cross-sectional observational studies. There were no funnel plots, sensitivity tests or independent reviews, and no effect size diagram or findings reporting the weight of the associations found.

But, the most striking difference between the reviews was seen in the care and accuracy of the evaluations of the studies that had been included.

CHOPPS. For example, the Christchurch obesity prevention programme in schools (CHOPPS), popularly called the “ditch the fizz” project, was a trial of 644 children ages 7-11 [covered here]. After two years, the authors had found no effect of the anti-soda interventions on the numbers of kids classified as ‘overweight’ or ‘obese’, nor on the children’s percentage weight gain.

In their review of the CHOPPS study, the University of Maryland authors observed that the average percentage of kids in the ‘overweight’ and ‘obese’ weight categories had changed by 0.2% in the intervention group compared to 7.5% in the control group, but this was not statistically significant. Examining the actual changes in BMI values and z scores (changes relative to anticipated growth), there was no statistical difference. They also noted that only 36% of the kids had returned both of their surveys (at the beginning and end of the program) and that, because of problems with randomization by school classroom, the gathered information was subject to greater bias. The study, Forshee and colleagues noted, also didn’t control for other dietary intakes, socioeconomic status or activity levels.

The Malik et al. review said this trial “was successful in producing modest reduction, which was associated with a reduction in the prevalence of overweight and obesity.”

Diet soda. The other randomized clinical trial was a small pilot study by Boston researchers who delivered free diet sodas to the homes of 103 teens in an attempt to lower their consumption of sweetened sodas, along with monthly motivational counseling to increase exercise and reduce calories and sedentary activities. After 25 weeks, soda consumption was assessed by telephone 24-hour dietary recalls.

The Forshee et al. analysis noted that this study found no statistically significant change in BMIs overall, with effects that varied considerably depending on starting BMI. Only among the “obese” children was a statistically significant net effect on BMIs of 0.75 reported from the interventions, and further study is needed to determine whether it was related at all to their sugar consumption. As the Forshee group also noted: “The interaction between weight change and baseline BMI was not attributable to baseline consumption of sugary beverages.”

The Harvard reviewers said that the results of this trial had shown: “decreasing sugar-sweetened beverage intake significantly reduced body weight in subjects with baseline BMI>30.” And in the text, they said that this pilot study had shown “decreasing sugar-sweetened beverage consumption had a beneficial effect on body weight that was associated with baseline BMI (the difference in BMI between the treatment group and the control subjects in the uppermost tertile of baseline BMI was 0.75).”

A single pop. Remember that study widely claimed to have found that “a single 12-ounce soft drink with sugar per day raises a child’s risk of obesity by 60 percent!”? As covered here, it had actually shown no difference in the BMIs of children consuming the most and the least amounts of sugar or sugary drinks. Even its authors had noted in their results that “there is no clear evidence that consumption of sugar per se affects food intake in a unique manner or causes obesity.” [This study had also illustrated the importance of actually reading a study and seeing what its data actually found, rather than just skimming the conclusions in the abstract.]

The Malik et al. meta-analysis reported that this study found an “association between sugar-sweetened beverage intake and BMI, and odds of obesity (OR: 1.60).”

You get the idea. According to the Harvard reviewers, sugar-sweetened beverages, especially sodas, are empty calories [a popular concern covered here] and increase weight gain and could lead to serious health problems [another concern covered here]. “Over the past 2 decades, obesity has escalated to epidemic proportions,” they said. “Given the global rates of overweight and obesity are on the rise, it is imperative that public health strategies include education about beverage intake.”

They concluded that “sufficient evidence exists for public health strategies to discourage consumption of sugary drinks as part of a healthy lifestyle.”

Would you have made that conclusion from their review?

How many people, healthcare professionals or journalists, actually read studies and examine meta-analyses to learn whether the actual data jibes with what’s being reported? All studies are not created equal, but evaluating the quality of research requires understanding what makes a “fair test,” not to mention actually reading the original papers. When a study doesn’t agree with what’s popular to believe, it’s easier to simply dismiss it out of hand.

It’s easy to make a scary claim. It’s gobs harder to do science and show there’s little credible evidence for a need to be scared.


© 2008 Sandy Szwarc


* The Hastings Museum in Nebraska is home to the Kool-Aid collection. The History of Kool-Aid explains how this childhood favorite was first invented by Edwin Perkins in 1927. By 1950, more than a million packets were being made a day.


Disclosure test. As we’ve seen time and again, the source of information is no measure of credibility — only the actual science can guide us there. The Forshee et al. review provided an opportunity to come to grips with another common logical fallacy: ad hominem (attacking the source). When this is raised, it’s a clue that we’re being encouraged to dismiss research based on beliefs, rather than a careful analysis of the soundness of its methodology and interpretations. Some have suggested we ignore the findings of the University of Maryland paper. Its findings weren’t popular with those promoting the globesity epidemic and public health interventions.

While the University of Maryland authors “retained complete control of the study design, collection of data, analysis of data and interpretation of results,” the study was “supported by a grant from the American Beverage Association.” The Harvard study was supported by an American Heart Association award and a National Institutes of Health grant. The HHS had declared war on obesity in 2004 and made obesity research a national priority.

Did learning the funding source change the science or cause you to dismiss either study? Or did the veracity of the research continue to be what mattered most?
