I have a great friend who was a Pulitzer Prize-nominated Wall Street Journal investigative reporter for over a dozen years. Based on his experience there, he occasionally gives two-day seminars on how to deal with the press. He always starts out on day one by asking the question: What is news?

What is news? Think about it and try to come up with your own answer before reading on.

Participants in these seminars come up with all kinds of answers. Just about all are variants of one theme: News is the reporting of current events that are of great importance or interest to citizens.

Wrong answer.

According to my friend, news is whatever the news reporters determine is news and decide to report.

I agree with him. Which brings me to the recent study fingering the consumption of animal protein as a ticket to an accelerated trip to the graveyard.

Thousands of scientific articles are published each month. I always wonder what it is that drives journalists to reach into this haystack and pluck out one particular straw of an article to write about. Surely there are numerous papers of importance, so why do they all seem to grab the same one? Invariably, whichever one journalists deem newsworthy appears in every news report throughout mainstream media land. The recent protein paper was so honored. It ended up getting reported on everywhere.

What I don't wonder about is what is going to happen to me when journalists decide these kinds of papers are newsworthy. My Twitter account and the comments section on this blog are going to get blown up by people wondering what I think it all means.

In the early days of my blogging career (it's been almost ten years now), I relished tearing into these sorts of papers and presenting them as the rubbish most of them really are. Or at least pointing out their major shortcomings.

Now that I've ripped apart a ton of them, I view them as a major pain in the rear. Why?

Robb Wolf said it best in his response to this anti-protein paper:
I know that part of my job is to act as an interface between the science/medical scene and the folks who do not have a medical background. I take that role seriously but I'm not a fan of Groundhog Day. Well, the movie was kick-ass, but living it is not a party.
Robb is exactly right. Each time a paper like this one comes out, it is like Groundhog Day* for those of us who translate these kinds of papers from medicalese to regular English and explain why they aren't all the media cracks them up to be. It's like Groundhog Day because in a week or a month or two, there will be another paper just like it, and all the pleas for us to opine will start again.

Even with my rapid reading capabilities, it takes me at least three hours** to read one of these papers critically. I have to read, make notes and pull at least a half dozen - often more - articles (and read those) to see if they really confirm whatever point the author is trying to make. Then I have to cogitate on it a bit before I start writing my rebuttal. My written response takes, depending upon the complexity of the article involved, anywhere from three to six hours. So doing a comprehensive review of a paper such as the one in question, and doing it right, involves a serious commitment of time.

I don't mind doing these reviews - and I'm sure Robb doesn't either - if the paper I'm dealing with is halfway decent. Problem is, all the papers the media seems to seize on and publicize are the same kinds. Red meat causes heart disease; red meat causes cancer; saturated fat clogs arteries; dietary fat causes obesity; yada yada yada. Same song, second verse. It's like Hercules fighting the frigging Hydra. Lop off one head, and two more appear. All these kinds of studies have been refuted so many times that it's almost pointless to do it again. Yet they still keep popping up like, well, Groundhog Day.

But those in the media seem to think each time one of these turkeys is published that it's a red-letter day and some brand-new scientific truth has been released to the masses. (Sadly, even the staid Wall Street Journal succumbed to this one. At least they had sense enough to pick up that it was really two studies.)

Anyway, you get the picture.

With all that said, there are a few out there who aren't as jaded as I on this sort of thing, who have a lot more energy, and who have taken the time to appropriately dissect this paper.

I linked above to the assessment of the almost-as-jaded-as-I Robb Wolf.

Denise Minger did her typical thorough job of it as well.

I'm particularly glad she wrote the following about the NHANES data used in this study:
And it gets worse. While it'd be nice to suspend disbelief and pretend the NHANES III recall data still manages to be solid, that's apparently not the case. A 2013 study took NHANES to task and tested how accurate its "caloric intake" data was, as calculated from those 24-hour recall surveys. The results? Across the board, NHANES participants did a remarkable job of being wrong. Nearly everyone under-reported how many calories they were consuming - with obese folks underestimating their intake by an average of 716 calories per day for men and 856 calories for women. That's kind of a lot. The study's researchers concluded that throughout the NHANES' 40-year existence, "energy intake data on the majority of respondents ... was not physiologically plausible." D'oh. If such a thing is possible, the 24-hour recall rests at an even higher tier of suckitude than does its cousin, the loathsome food frequency questionnaire.
Most of us in the nutrition biz have known the government-run and government-funded NHANES data are pretty worthless, but the recent paper Denise links to, Validity of U.S. Nutritional Surveillance: National Health and Nutrition Examination Survey Caloric Energy Intake Data, 1971 - 2010, shows just how worthless.

Weighing in from across the pond, Zoë Harcombe wrote not just one, but two posts about this study:

Animal protein as bad as smoking?!

Headlines based on 6 deaths!

She points out one of the most common tricks in the book used in studies like these. If you can't find an overall correlation between whatever risk factor you're testing and the outcome, start breaking up your data by age or some other factor until you can show a correlation for some subset. It's called torturing the data until it confesses. Then use the confession extracted to get your headlines.
After finding no overall association, the researchers spotted a pattern with age and split the information into participants aged 50-65 and participants over 65. They then found (direct quotation again): "Among those ages 50 - 65, higher protein levels were linked to significantly increased risks of all-cause and cancer mortality. In this age range, subjects in the high protein group had a 74% increase in their relative risk of all-cause mortality (HR: 1.74; 95% CI: 1.02 - 2.97) and were more than four times as likely to die of cancer (HR: 4.33; 95% CI: 1.96 - 9.56) when compared to those in the low protein group."
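The data-torturing trick is easy to quantify. If each subgroup comparison is tested at the usual p < 0.05 threshold, the odds of at least one spurious "finding" climb quickly with the number of slices examined. A minimal sketch (the subgroup counts here are hypothetical, not from the paper):

```python
# Chance of at least one false positive when k independent subgroup
# comparisons are each tested at significance level alpha.
def familywise_error(alpha, k):
    return 1 - (1 - alpha) ** k

for k in (1, 2, 5, 10, 20):
    print(f"{k:>2} subgroups tested -> "
          f"{familywise_error(0.05, k):.0%} chance of a spurious 'finding'")
```

Slice the data by age, sex, smoking status, and a couple of protein cutoffs, and a "significant" subgroup becomes close to a coin flip even when the underlying data are pure noise.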
In her second post, she homes in on the fact that the authors used as a baseline a reference group containing just six deaths over an 18-year period.
Here we find the real headline. What the researchers didn't want us to find out. The "four times more likely to die" global headline grabber was based on a reference group of six deaths. Yes six deaths. And not just six deaths - but six deaths over an 18 year study. And the 'researchers' tried to claim that animal protein is as bad as smoking based on this?
She goes on to discuss the folly of making large claims based on small datasets.
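To see why six baseline deaths matter, look at how a crude ratio behaves when its denominator is that small. The counts below are hypothetical (ignoring person-years and covariate adjustment); only the six-death baseline comes from Zoë's post:

```python
# Hypothetical death count in the comparison ("high protein") group.
exposed_deaths = 26

# With only ~6 deaths in the reference group, shifting the baseline by
# a death or two swings the headline ratio enormously.
for reference_deaths in (4, 6, 8):
    ratio = exposed_deaths / reference_deaths
    print(f"reference deaths = {reference_deaths} -> crude ratio {ratio:.1f}x")
```

A denominator that small also forces a wide confidence interval, which is exactly why the reported 1.96 - 9.56 interval around the cancer hazard ratio spans nearly a five-fold range.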

The folks at, whom I don't know from Adam, did an excellent review of the study.

High protein diets linked to cancer: Should you be concerned?

The author points out that this study is really two studies, not one.
First, it should be mentioned that to fully appreciate this study we must view it as two studies. There is an epidemiological study and there is a mouse intervention study; anytime tumor growth is mentioned, it refers to the mouse study, and causation can only be applied to the mouse study. It cannot be applied to the human study (as it is an epidemiological study).
This, as you might remember, is a technique used by T. Colin Campbell in his book The China Study. Mix and match data about humans and rodents, use the pronouns as if it all applies to humans, and confuse the heck out of your readers. Except the readers don't think they're confused. They think they're reading about human studies.

It's important to note in this study on animal protein that since all the data about humans comes from observational or epidemiological studies, it shows only correlations. Not causality.

And the actual experimental part of the study was done on rodents and applies to rodents, not humans. And the tumor studies were done not with tumors the rodents developed during the course of their little natural lives, but were done on tumors implanted by the researchers. The data gathered is interesting but far from being applicable to humans.

But the casual reader of this and similar studies confuses the rodent data with the human data. Most of the mainstream media certainly did.

Even if this study were done by experimentation on humans (which would be unethical), the results are meaningless unless they can be repeated by other groups of scientists.

Typically, when studies showing highly significant results are repeated, the findings aren't nearly as robust in the follow-up studies and, in many cases, fall off with repeated study, leading to the conclusion that the first study was really an outlier and the findings came in as they did by chance.
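This shrinkage on replication is easy to demonstrate with a toy simulation. Assume a tiny true effect buried in noise, and "publish" only the studies whose observed effect looks striking; every number here is made up purely for illustration:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2    # assumed small real effect
NOISE = 1.0          # assumed study-to-study sampling noise
THRESHOLD = 1.5      # a study makes headlines only above this observed effect

# Each draw is one simulated study's observed effect size.
observed = (random.gauss(TRUE_EFFECT, NOISE) for _ in range(100_000))
published = [effect for effect in observed if effect > THRESHOLD]

print(f"true effect:           {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f}")
```

Because only the lucky draws clear the threshold, the published average wildly overstates the truth; an honest replication then regresses toward the real effect and looks like a failure to reproduce.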

Never, ever rely on just one study to prove anything.

I'm going to keep this post at the ready, so that the next time one of these studies gets plastered all over the mainstream media, and a hundred people email and tweet me about what it all means, I'll send them a link.

As always, if you disagree with my take or if you want to put your own spin on it, please do so through the comments section.