P-values at the same loose threshold used in medicine and the social sciences have been used to claim that intercessory prayer actually works on sick people (halfway down the linked page), for example, or here (two-thirds down the linked page):
Targ's paper is not the only questionable study on the efficacy of prayer that has been published by medical journals. The editors and referees of these journals have done a great disservice to both science and society by allowing such highly flawed papers to be published. I have previously commented about the low statistical significance threshold of these journals (p-value of 0.05) and how it is inappropriate for extraordinary claims (Skeptical Briefs, March 2001). This policy has given a false scientific credibility to the assertion that prayer or other spiritual techniques work miracles, and several best selling books have appeared that exploit that theme. Telling people what they want to hear, these authors have made millions.
Also, via a blogger, I came across a good statement on how many people misunderstand p-values in general:
First, the p-value is often misinterpreted as the “probability for the result being due to chance”. In reality, the p-value says nothing about whether a reported observation is real. “It only makes a statement about the expected frequency that the effect would result from chance when the effect is not real”.
In short, as I’ve tried to explain to people over at Kevin Drum’s blog, p-values in medicine are simply too loose.
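To make that distinction concrete, here is a quick simulation sketch in Python (the group sizes and number of simulated studies are arbitrary numbers of my own choosing, not taken from any study mentioned above): when there is no real effect at all, a 0.05 cutoff still stamps roughly 5 percent of studies “significant” purely by chance.

```python
# Minimal sketch: with NO real effect, p < 0.05 still flags about 5% of
# studies as "significant" by chance alone. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 50

false_positives = 0
for _ in range(n_studies):
    # Both groups drawn from the same distribution: the true effect is zero.
    treatment = rng.normal(0.0, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives / n_studies:.1%}")   # roughly 5%, as expected
```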
But, as the study’s authors claim, doesn’t meta-analysis take care of all those p-value problems? No.
Meta-analysis, no matter how much it’s defended, can’t totally cover that up.
I’m not saying that the results of a meta-analysis are no stronger than the weakest study under its umbrella. I am saying that, with p-values as loose as they are in health/medicine (and the social sciences), no number of individual research studies rolled into one meta-analysis will make the meta-analysis’s results anything more than a little bit stronger than the best individual study.
In other words, in medicine and in the social sciences, meta-analysis adds a very modest bump, nothing more. The problem is that most people believe it does much more than that, when it doesn’t.
Or, to put it another way, meta-analysis is no better than the material it’s analyzing.
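One way to see that “no better than the material it’s analyzing” point is a toy fixed-effect (inverse-variance) meta-analysis, a minimal sketch in which every simulated study carries the same small systematic bias; all numbers are invented for illustration. Pooling shrinks the random error, but the shared bias comes through untouched, so the pooled result looks more precise without being any more correct.

```python
# Toy fixed-effect, inverse-variance meta-analysis with hypothetical numbers.
# Pooling averages away random error, but NOT a bias shared by every study.
import numpy as np

rng = np.random.default_rng(1)
true_effect, bias = 0.0, 0.15         # no real effect; small shared bias
n_studies, n_per_group = 30, 100

estimates, variances = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect + bias, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    diff = treatment.mean() - control.mean()
    var = treatment.var(ddof=1) / n_per_group + control.var(ddof=1) / n_per_group
    estimates.append(diff)
    variances.append(var)

weights = 1.0 / np.array(variances)                     # inverse-variance weights
pooled = np.sum(weights * np.array(estimates)) / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# With these settings the CI essentially always excludes zero, even though
# the true effect is zero: the pooled result inherits the bias of its inputs.
```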
So, what’s needed is for medical studies to continue with a p of 0.05, because we don’t want to risk screening out a potentially life-saving study, but to re-crunch the research at a stricter threshold at the same time. I’m not saying we need to do that with a p of 0.0001, or 1/100 of 1 percent, as the natural sciences, especially physics, normally do. But to re-crunch with a p of 0.01, or 1 percent instead of 5 percent? Absolutely.
Research that made the 5 percent cutoff but not the 1 percent cutoff would be categorized as “worthy of further study, but not yet grounds for any immediate conclusions.”
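As a sketch of that two-tier scheme (the category labels below are my paraphrase of the post’s wording; only the 0.01 and 0.05 thresholds come from the proposal itself):

```python
# Sketch of the proposed two-tier re-crunch: 0.05 stays as the screening
# threshold, 0.01 becomes the bar for drawing actual conclusions.
def categorize(p_value: float) -> str:
    """Bucket a study's p-value under the stricter re-crunch proposal."""
    if p_value < 0.01:
        return "meets the stricter 1 percent cutoff"
    if p_value < 0.05:
        return "worthy of further study; no immediate conclusions"
    return "not significant even at the looser 5 percent cutoff"

for p in (0.004, 0.03, 0.2):
    print(f"p = {p}: {categorize(p)}")
```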
A sidebar benefit would be that a lot of alt-medicine research would get a less-than-full imprimatur.
1 comment:
First, I'd like to see more from that Swedish study. How many committed suicide after 1 week on anti-Ds, before they kicked in, vs. how many committed suicide after six months?
Second, many other addicts are using alcohol or illicit drugs to medicate anxiety, not depression. And, since anti-depressants, in distinction from benzos, are a safe, nonaddicting treatment for anxiety disorders, these folks actually benefit from anti-Ds, especially if they haven't been on them before.
That said, given the biochemical changes involved, I believe it's a good idea, unless medication is strongly indicated, for a person to wait at least six months after getting clean/sober before considering psychotropic medications.