If only post hoc analyses always brought out the inner skeptic in us all! Or came with red flashing lights instead of just a little token "caution" sentence buried somewhere.
Post hoc analysis is when researchers go looking for patterns in data. (Post hoc is Latin for "after this.") Testing for statistically significant associations is not by itself a way to sort out the true from the false. (More about that here.) Still, many treat it as though it is - especially when they haven't been able to find a "significant" association, and turn to the bathwater to look for unexpected babies.
Even when researchers know the scientific rules and limitations, funny things happen along the way to a final research report. It's the problem of researchers' degrees of freedom: there's a lot of opportunity for picking and choosing, and changing horses mid-race. Researchers can succumb to the temptation of over-interpreting the value of what they're analyzing, with "convincing self-justification." (See the moving goalposts over time here, for example, as trialists are faced with results that didn't quite match their original expectations.)
And even if the researchers don't read too much into their own data, someone else will. That interpretation can quickly turn a statistical artifact into a "fact" for many people.
Let's look more closely at Significus' pet hate: post hoc analyses. There are dangers inherent in multiple testing when you don't have solid reasons for looking for a specific association. The more often you randomly dip into data without a well-founded target, the higher your chances of pulling out a result that will later prove to be a dud.
It's a little like fishing in a pond where there are random old shoes among the fish. The more often you throw your fishing line into the water, the greater your chances of snagging a shoe instead of a fish.
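To see how quickly the shoes pile up, here's a minimal simulation (my own sketch, with made-up numbers, not anyone's actual analysis): run 20 significance tests on pure noise and count the "catches."

```python
# A minimal simulation of the "fishing" problem: run many significance
# tests on pure noise and count how many come up "significant" anyway.
# The numbers (20 outcomes, 100 people per group) are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_outcomes = 20    # how many times we dip into the data
n_per_group = 100

false_positives = 0
for _ in range(n_outcomes):
    # Both groups drawn from the SAME distribution: any "association" is a dud
    group_a = rng.normal(size=n_per_group)
    group_b = rng.normal(size=n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_outcomes} tests were 'significant' by chance")
# With 20 independent tests at alpha = 0.05, the chance of at least one
# false positive is 1 - 0.95**20 - about 64%.
```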
Here's a study designed to show this risk. The data tossed up "significant" associations such as these: women were more likely to have a cesarean section if they preferred butter over margarine, or blue over black ink.
The problem is huge in areas where there's a lot of data to fish around in. For published genome-wide association studies, for example, over 90% of the "associations" with a disease couldn't consistently be found again. Often, researchers don't report how many tests were run before they found their "significant" results, which makes it impossible for others to know how big a problem multiple testing might be in their work.
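When the number of tests is reported, readers can at least adjust for it. Here's a sketch with invented p-values, using two standard corrections available in the statsmodels library:

```python
# Hypothetical scenario: 5 nominally "significant" results among 50
# exploratory tests. Do they survive multiple-testing adjustment?
from statsmodels.stats.multitest import multipletests

p_values = [0.002, 0.011, 0.03, 0.04, 0.049] + [0.2] * 45

for method in ("bonferroni", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} of 5 raw 'hits' survive adjustment")

# In this sketch, none of the five survive either correction once all
# 50 tests are taken into account - but a reader can only do this
# arithmetic if they know the 50 exists.
```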
The problem extends to subgroup analyses where there is no established foundation for an association. The credibility of claims made about subgroups in trials is low. And this has serious consequences. For example, an early trial suggested that only men with stroke-like symptoms benefit from aspirin - which stopped many doctors from prescribing aspirin to women.
How should you interpret post hoc and subgroup analyses then? If analyses were not pre-specified and based on established, plausible reasons for an association, then one study isn't enough to be sure.
With subgroups that weren't randomized as different arms of a trial, it's not enough that the average for one subgroup is higher than the average for another. Factors other than subgroup membership could be influencing the outcome. An interaction test is done to try to account for that.
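To make that concrete, here's a sketch of an interaction test on simulated trial data (the dataset, variable names, and effect sizes are all invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial where the true treatment effect is IDENTICAL for both
# sexes, so the interaction term should (usually) come out non-significant.
rng = np.random.default_rng(seed=7)
n = 2000
trial = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),  # randomized arm
    "male": rng.integers(0, 2, n),       # subgroup membership
})
log_odds = -1.0 + 0.5 * trial["treatment"]   # no interaction built in
trial["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# 'treatment * male' expands to treatment + male + treatment:male;
# the treatment:male coefficient is the interaction test.
model = smf.logit("outcome ~ treatment * male", data=trial).fit(disp=0)
print(model.pvalues["treatment:male"])
```

The point of the design: the interaction term asks whether the treatment effect genuinely differs between subgroups, rather than simply comparing two subgroup averages side by side.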
It's more complicated when it's a meta-analysis, because there are so many differences between one study and another. The exception here is an individual patient data meta-analysis, which can study differences between patients directly.
In the end, it comes down to being careful not to see a new hypothesis generated by research as a "fact" already proven by the study from which it came.
Post hoc, ergo propter hoc. This description of basic faulty logic - "after this, therefore because of this" - is as ancient as the language that made it famous. We've had millennia to snap out of the dangerous mental shortcut of seeing a cause where there's only coincidence. Yet we still hurtle like lemmings over cliffs into its alluring clutches.
In "The Big Fat Surprise" Nina Teicholz states that such post hoc statistical analysis is disparagingly referred to as "drawing targets around the bullet holes."
great post!
Thanks, Hilda. On this topic, I can thoroughly recommend this paper by De Groot, which was written in Dutch and neglected for over half a century, but then translated into English in 2014.
de Groot, A. D. (2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188-194. doi: http://dx.doi.org/10.1016/j.actpsy.2014.02.001
The purpose and value of post hoc analyses is to prompt new studies that analyse the association as a prespecified objective. But what about when a post hoc analysis identifies a risk or harm versus a potential benefit of an intervention? The ethics can be difficult.