BPA and blood pressure: Why the media gets it wrong

By John Rost, Chairman, North American Metal Packaging Alliance

At the end of 2014, some of the most popular stories in the Health and Wellness sections of dozens of media outlets, from the New York Times to USA Today, focused on a recent study published in the journal Hypertension. The study found a rapid increase in blood pressure among participants who each drank a single beverage from a can with traces of BPA in its liner.

On the surface, and judging by the headlines, the report may seem pretty frightening. Knowing that one canned beverage could raise your blood pressure by almost five points (technically, 5 mm Hg) within an hour of consumption would no doubt alarm most people. But as is often the case with complex scientific studies, the details that were not reported cast the actual results in a much different light.

As I’ll explain, the most accurate headline for these stories might have been “Drinkers of phytoestrogen-rich soy milk from a can experience a slightly smaller, but statistically insignificant, decrease in systolic blood pressure, with no difference in diastolic blood pressure or heart rate.” That, of course, was not the headline in any of the media covering this study.

There are numerous objective concerns with the study format, none of which were reported in the media coverage. Researchers engaged human volunteers, which makes sense when trying to determine human health effects. Unfortunately, the volunteers were almost all women over the age of 60 — not a valid representation of the entire population. While this alone does not invalidate the study, it obviously impacts the results, and it should have been reported.

More importantly, but equally ignored by the press in its coverage, was the fact that none of the participants’ blood pressure actually went up. This merits repeating: not a single one of the participants showed any increase in blood pressure.

To a certain degree I can understand the media’s confusion. It stems from the study’s poor choice of soy milk as a delivery vehicle. It is well known that consuming soy drinks causes blood pressure to drop. What the researchers actually measured was a slightly smaller drop in blood pressure for the canned drink than for the glass-packaged drink. This is very different from an increase in blood pressure, and yet most of the stories reported just that.

Soy milk is also a bad choice because of its well-known and well-documented estrogenic properties. The concerns about BPA’s health effects revolve around the hypothesis that it acts like an estrogen in the body. For that reason, controlling the diet is one of the most critical aspects of any study trying to measure these types of effects. Using any food or drink that is high in estrogen makes it difficult to tell whether any resulting effects came from the food itself or from the packaging. Yet for this study, the scientists chose a food type, soy, that has some of the highest levels of natural phytoestrogens of any food available. This is a critical flaw, and it should not be ignored. In fact, over the years numerous studies have been invalidated for using diets with estrogen levels hundreds of times lower than those of soy drinks.

Another key fact, also completely overlooked by the media, is the difference in packaging between the canned and control soy drinks. Again, the study tested soy milk from cans against soy milk in glass bottles. Soy milk requires heat processing after packaging to protect against harmful microorganisms. So by using two distinct packaging lines, the study was using two different, distinct batches of soy milk. More importantly, the heating process for each packaging type is different — heating a product in a glass bottle requires longer heat times than for a metal can. So it’s possible that the study’s two groups of participants consumed measurably different drinks.

Finally, let’s consider the statistical effect reported by the study. When looking for statistical differences, it is important to look at all the data and determine the relative differences among the various samples; this creates a range based on how many data points are being measured and how widely they are spread. If the range of the test group overlaps with that of the control group, you have no statistically meaningful difference, even if the groups’ averages differ. That’s exactly the case in this study: because of the overlap between the test and control groups, the effect claimed by the authors is really not a difference at all. Again, this is a point that went unmentioned in media reports.
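The overlap argument above can be illustrated with a short sketch. The numbers below are purely hypothetical — they are not the study’s data — and are chosen only to show how two groups can have different averages while their 95% confidence intervals still overlap, which is the situation in which claiming a real difference is unwarranted.

```python
import math
import statistics

# Hypothetical systolic readings (mm Hg) for illustration only --
# NOT the actual study data.
control = [114, 118, 121, 116, 119, 122, 115, 120, 117, 123]  # glass-bottle group
test    = [116, 121, 119, 124, 118, 122, 125, 117, 120, 123]  # canned group

def ci95(sample):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return (mean - 1.96 * sem, mean + 1.96 * sem)

lo_c, hi_c = ci95(control)
lo_t, hi_t = ci95(test)

# The averages differ, yet the intervals overlap: the apparent
# difference is within the noise of the measurement.
overlap = lo_t <= hi_c and lo_c <= hi_t
print(f"control mean {statistics.mean(control):.1f}, CI ({lo_c:.1f}, {hi_c:.1f})")
print(f"test mean    {statistics.mean(test):.1f}, CI ({lo_t:.1f}, {hi_t:.1f})")
print("intervals overlap:", overlap)
```

With these made-up numbers, the canned group’s average is two points higher, yet the two confidence intervals overlap — exactly the situation in which a difference in averages does not support a claim of a real effect.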

For all these reasons, serious researchers quickly dismissed this particular study as meaningless, even nonsensical, but you’d never know it by reading the headlines. It’s a shame, if not a surprise, that the media is more interested in a scary headline than in accurate reporting. That puts the burden on readers to remain wary — and to make the effort to see if what they read in the news is founded in actual science.

For more BPA news, read our article on BPA facts.