Observational Studies Show Similar Results to Randomized Controlled Trials


How legitimate is the common corporate criticism that the credibility of observational studies in the scientific nutrition literature is questionable?


Below is an approximation of this video’s audio content. To see any graphs, charts, graphics, images, and quotes to which Dr. Greger may be referring, watch the above video.

While randomized controlled trials are highly reliable for assessing interventions like drugs, they’re harder to do with diet. Dietary diseases can take decades to develop. It’s not like you can give people placebo food, and it’s hard to get people to stick to assigned diets, especially for the years it would take to observe effects on hard endpoints like heart disease or cancer. That’s why we have to use observational studies, following large numbers of people and their diets over time, to see which foods appear to be linked to which diseases. And interestingly, if you compare data obtained from observational population studies with data from randomized trials, on average there is little evidence of significant differences between the findings: not just the same direction of effect, but the same general magnitude of effect, in about 90 percent of the treatments examined.

But wait, what about the hormone replacement therapy disparity I talked about in the last video? It turns out that when you go back and look at the data, the discrepancy came down to a difference in the timing of when the Premarin was started, and the two types of studies actually showed the same results after all.

But even if observational studies did provide lower-quality evidence, maybe we don’t need the same level of certainty when we’re telling someone to eat more broccoli or drink less soda as we do when deciding whether to prescribe someone a drug. After all, prescription drugs are the third leading cause of death in the United States. It goes heart disease, cancer, then doctors. About 100,000 Americans are wiped out every year by the side effects of prescription drugs taken as directed. So, given the massive risks, you’d better have rock-solid evidence that the benefits outweigh the risks. You are playing with fire; so, darn right I want randomized, double-blind, placebo-controlled trials for drugs. But when you’re just telling people to cut down on doughnuts, you don’t need the same level of proof.

In the end, the industry-funded sugar paper, which concluded that the dietary guidelines telling people to cut down aren’t trustworthy because they’re based on such “low-quality evidence,” is an example of the inappropriate use of the drug trial paradigm in nutrition research. You might say, yeah, but what were the authors supposed to do? If GRADE is the way you judge guidelines, then you can’t blame them. But no, there are other tools––like NutriGrade, a scoring system specifically designed to assess and judge the level of evidence in nutrition research.

One of the things I like about NutriGrade is that it specifically takes funding bias into account, so industry-funded trials are downgraded—no wonder the industry-funded authors chose the inappropriate drug method instead. HEALM, Hierarchies of Evidence Applied to Lifestyle Medicine, is another, designed specifically because existing tools such as GRADE are not viable options for questions that can’t be fully addressed through randomized controlled trials (RCTs). Each research method has its unique contribution: in a lab, you can explore the exact mechanisms; RCTs can prove cause and effect; and huge population studies can follow hundreds of thousands of people at a time for decades.

Take the trans fat story, for example. We had randomized controlled trials showing trans fats increased risk factors for heart disease, and we had population studies showing that the more trans fats people ate, the more heart disease they had. Taken together, these studies forged a strong case for the harmful effects of trans fat consumption on heart disease, and as a consequence, trans fat was largely removed from the U.S. food supply, preventing as many as 200,000 heart attacks every year. Now, it’s true that we never had randomized controlled trials looking at hard endpoints, like heart attacks and death, because that would take years of randomizing people to eat, like, canisters of Crisco every day. You can’t let the perfect be the enemy of the good when there are tens of thousands of lives at stake.

Public health officials have to work with the best balance of evidence available. It’s like when we set tolerable upper limits for lead exposure or PCBs. It’s not like we randomized kids to drink different levels of lead and then saw who grew up with tolerable brain damage. You can’t run those kinds of experiments; so, you have to pull in evidence from as many sources as possible and make your best approximation.

“Even if RCTs are unavailable or impossible to conduct, there is plenty of evidence from observational studies on the nutritional causes of many cancers, such as on red meat increasing the risk of colorectal cancer.” So, if dietary guidelines aiming at cancer prevention were assessed with the drug-designed GRADE approach, they’d reach the same conclusion the sugar paper did: low-quality evidence. And so, it’s no surprise that a meat-industry-funded institution hired the same dude who helped conceive and design the sugar-industry-funded study. And boom, the lead author says we can ignore the dietary guidelines to reduce red and processed meat consumption. They used GRADE methods to rate the certainty of evidence, and though current dietary guidelines recommend limiting meat consumption, their results predictably demonstrated that the evidence was of low quality.

Before I dive deep into the meat papers, there’s one last irony about the sugar paper. The authors used the inconsistency of the exact recommendations across sugar guidelines over a 20-year period to raise concerns about the quality of the guidelines. Now obviously, we would expect guidelines to evolve, but the most recent guidelines show remarkable consistency, with one exception: the 2002 Institute of Medicine guideline that said a quarter of your diet could be straight sugar without running into deficiencies. But that outlier was itself partly funded by the Coke-, PepsiCo-, cookie-, and candy-funded institute that is now saying: see, since recommendations are all over the place (thanks in part to us), they can’t be trusted.


Motion graphics by Avo Media


Doctor's Note

This is the third in an eight-part series on how industries impact dietary and health guidelines. The first two videos introduced what happened when the 2015 Dietary Guidelines committee recommended reducing sugar consumption: How Big Sugar Undermines Dietary Guidelines and How Big Sugar Manipulated the Science for Dietary Guidelines.                         

Next, we will look at the recent articles in the Annals of Internal Medicine on meat consumption.

