SocraticGadfly: What's healthy, what's not? Fix the p-values and we might, maybe know

May 05, 2014


Long, long ago, we Americans heard that "fat was bad."

Then more research said, maybe just saturated fats.

Now, with more consensus (for now), it's that trans fats are definitely bad.

Meanwhile, contrarians like Gary Taubes and Nina Teicholz are claiming not only that we are eating too many carbs and that more fats and proteins may be good, but specifically that more saturated fats are good.

Teicholz is the latest contrarian to deliberately overreact and not only criticize but demonize more traditional research:
"This shift seemed like a good idea at the time, but it brought many potential health problems in its wake. In those early clinical trials, people on diets high in vegetable oil ... were more likely to die from violent accidents and suicides."
Teicholz then further undercuts her own credibility by claiming the possibility, even the likelihood, of a causal link from vegetable oils changing brain chemistry.

Beyond that, it looks like there's a BUNCH of errors and problems behind the latest research Teicholz cites to support her claims. Outright errors, run through the filter of meta-analysis, are a sure prescription for bad interpretation.

Also, I've generally found that the likes of Taubes and Teicholz fail to distinguish between simple and complex carbohydrates. It's true that Ancel Keys, with all of his research ethics issues in leading the crusade against saturated fats, did the same. But that was 60 years ago. There's no excuse for that today.

The real problem? The p-value threshold of 0.05 in medical research is too loose, just as it is in the social sciences, and is almost surely a leading contributor to the problem of replicability in psychology and sociology.
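The arithmetic behind that claim can be sketched with Bayes' rule. Assuming, purely for illustration, that 10 percent of tested hypotheses are true and that studies have 80 percent power (neither figure comes from any study cited here), the share of "significant" findings that reflect a real effect works out like this:

```python
# Back-of-envelope: what fraction of p < alpha findings are true?
# The prior (10% of hypotheses true) and the power (80%) are
# illustrative assumptions, not figures from this post.

def ppv(alpha, prior=0.10, power=0.80):
    """Positive predictive value: P(effect is real | p < alpha)."""
    true_hits = power * prior          # real effects correctly detected
    false_hits = alpha * (1 - prior)   # null effects crossing the bar
    return true_hits / (true_hits + false_hits)

print(round(ppv(0.05), 2))  # ~0.64: over a third of "findings" are noise
print(round(ppv(0.03), 2))  # ~0.75: a tighter threshold already helps
```

Under these toy numbers, more than a third of findings that clear the 0.05 bar are false positives, which is exactly the kind of churn that produces decades of flip-flopping dietary advice.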


I understand why it was set as high as it is, compared to the 0.0001 used in the modern natural sciences. Researchers wanted a loose value so as not to screen out potentially lifesaving, or health- or mental health-saving, treatments and protocols.

However, the accuracy of our research, plus the number of researchers, especially in health and medicine, plus the compiled history of past research, all tell us that we don't need such loose standards today. If anything, they tell us that such loose standards may even be harmful.

1. They confuse people on what really is best medical practice.

2. They give contrarian diet writers like the above more loopholes for their contrarian books.

3. And, especially in mental health, but also elsewhere, the loose p-value gives Big Pharma more room to make dubious claims about the effectiveness of many prescriptions.

4. And, as a combination of Nos. 2 and 3, they cause the US as a whole to waste a bunch of medical spending.

That said, I'm not going to argue that the current "medical establishment," minus the Big Pharma part, doesn't have some financial vested interests itself. Are they, relative to the overall size of the medical issues involved, of the same degree of financial interest as Big Pharma or contrarians, though? I don't think so.

And, I just mentioned people like Gary Taubes. Folks like Joseph Mercola use the looseness of p-values, and other related issues, to drive Mack trucks laden with gold through them on the way to their personal Fort Knoxes. These fixes alone won't shut up Mercola, but they'll force the likes of him to rely on studies that are not only more clearly fringe, but outdated, too.

So, a simple suggestion?

Tighten the p-value threshold in health/medicine. Say, 0.03 instead of 0.05. That's still loose enough not to screen out "edgy" but true findings.
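A quick simulation shows what that change buys. This is a sketch with arbitrary sample sizes and trial counts, not a model of any real study: simulate thousands of trials of a treatment with no effect at all, so every "significant" result is by definition a false positive, and count how many clear each bar.

```python
import random
from math import erf, sqrt

random.seed(42)  # fixed seed so the run is reproducible

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def null_study_pvalue(n=50):
    """One simulated trial where treatment and control groups are
    drawn from the same distribution; returns an approximate
    two-sided p-value from a z-test on the difference in means."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / sqrt(var_a / n + var_b / n)
    return 2.0 * (1.0 - norm_cdf(abs(z)))

trials = 10_000
pvals = [null_study_pvalue() for _ in range(trials)]
fp_05 = sum(p < 0.05 for p in pvals)  # false positives at the 0.05 bar
fp_03 = sum(p < 0.03 for p in pvals)  # false positives at the 0.03 bar
print(fp_05, fp_03)  # roughly 500 vs. roughly 300
```

Roughly 5 percent of pure-noise studies clear the 0.05 bar, while a 0.03 bar cuts that false-positive haul by about 40 percent, which is the whole point of the tightening.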

The feds, via the National Institutes of Health and the National Institute of Mental Health, have the power to make this happen. Stop funding research that doesn't adopt the tighter standard.

Will tighter p-values alone make a huge difference? Maybe not, but they will make a difference. The feds could also stop funding most research that uses meta-analysis, and tighten up other data-related issues, many of which, such as confidence intervals, are closely tied to p-values. All of that together would make a big difference indeed.

And, changing what we readily can on statistical tightness, starting with p-values, would be a signal to the health and medicine fields on the one hand, and social science on the other, that it's a new day in the research world. The tighter standards would signal a need for tighter research in general. And, the requirement of special justification for the use of meta-analysis would signal that the days of cherry-picking are over.
