Sunday, June 30, 2013

Goldilocks and the three reviews



Goldilocks is right: that review is FAR too complicated. The methods section alone is 652 pages long! Which wouldn't be too bad if it weren't also a few years out of date. The review took so long to do, and to get through a rigorous enough quality check, that it was already out of date the day it was released. Something that happens often enough to be rather disheartening.

When methodology for systematic reviewing gets overly rococo, it passes the point of diminishing returns. That's a worry for a few reasons. First, it's inefficient: more reviews could be done with the same resources. Second, more complex methodology can be daunting, and hard for researchers to apply consistently. Third, when a review gets very elaborate, reproducing or updating it isn't going to be easy either.

It's unavoidable for some reviews to be massive and complex undertakings, though, if they're going to get to the bottom of massive and complex questions. Goldilocks is right about review number 2, as well: that one is WAY too simple. And that's a serious problem, too.

Reviewing evidence needs to be a well-conducted research exercise. A great way to find out more about what goes wrong when it's not is to read Testing Treatments. And see more on this here at Statistically Funny, too.

You need to check the methods section of every review before you take its conclusions seriously - even when it claims to be "evidence-based" or systematic. People can take far too many shortcuts. Fortunately, it's not often that a review gets as bad as the second one Goldilocks encountered here. The authors of that review decided to include only one trial for each drug "in order to keep the tables and figures to a manageable size." Gulp!

Getting to a good answer also quite simply takes some time and thought. Making real sense of evidence and the complexities of health, illness and disability is often just not suited to a "fast food" approach. As the scientists behind the Slow Science Manifesto point out, science needs time for thinking and digesting.

Still, to cover more ground, people are looking for reasonable ways to cut corners. There are many kinds of rapid review, including relying on previous systematic reviews for new reviews. These can be, but aren't always, rigorous enough for us to be confident about their conclusions.

You can see this process at work in the set of reviews discussed at Statistically Funny a few cartoons ago. Review number 3 there is in part based on review number 2 - without re-analysis. And then review number 4 is based on review number 3.

So if one review gets it wrong, other work may be built on weak foundations. Li and Dickersin suggest this might be a clue to the perpetuation of incorrect techniques in meta-analyses: reviewers who got it wrong in their review were citing other reviews that had gotten it wrong, too. (That statistical technique, by the way, has its own cartoon.)

Luckily for Goldilocks, the bears had found a third review. It had sound methodology you can trust. It had been totally transparent from the start - included in PROSPERO, the international prospective register of systematic reviews. Goldilocks can get at the fully open review, and its data are in the Systematic Review Data Repository, open for others to check and re-use. Ahhh - just right!


PS:

I'm grateful to the Wikipedians who put together the article on Goldilocks and the three bears. That article pointed me to the fascinating discussion of "the rule of three" and the hold this number has on our imaginations.

Sunday, June 23, 2013

Studies of cave paintings have shown....



The mammoth has a good point. Ogg's father is making a classic error of logic. Not having found proof that something really happens is not the same as having definitive proof that it cannot possibly happen.

Ogg's family doesn't have the benefit of Aristotle's explanation of deductive reasoning. But even two thousand years after Aristotle got started, we still often fall into this trap.

In evidence-based medicine, a part of this problem is touched on by the saying, "absence of evidence is not evidence of absence." A study says "there's no evidence" of a positive effect, and people jump to the conclusion - "it doesn't work." Baby Ogg gets thrown out with the bathwater.

The same thing happens when no statistically significant serious adverse effects are reported, and people infer that "it's safe."
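
One way to see why, as a rough sketch of my own rather than anything from the post: the statistical "rule of three". If no serious adverse events turned up among n trial participants, the true rate could still plausibly be as high as roughly 3/n - silence in a small trial is a long way from proof of safety. The sample sizes below are made up for illustration.

```python
# Rough illustration with made-up sample sizes (not from the post):
# the statistical "rule of three". If 0 serious adverse events are seen
# among n trial participants, an approximate 95% upper confidence bound
# on the true event rate is about 3/n - far from proof of "zero risk".
# The exact bound comes from solving (1 - p)**n = 0.05 for p.

def upper_bound_zero_events(n, alpha=0.05):
    """Exact one-sided 95% upper bound on the event rate when 0 events are seen in n people."""
    return 1 - alpha ** (1 / n)

for n in (30, 100, 1000):
    print(f"0 events in {n:4d} people: rule of three ~ {3 / n:.2%}, "
          f"exact upper bound ~ {upper_bound_zero_events(n):.2%}")
```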

This situation is the opposite of the problem of reading too much into a finding of statistical significance (explained here). Only in this case, people are over-interpreting non-significance. Maybe the researchers simply didn't study enough of the right people, or they weren't looking at the outcomes that later turn out to be critical.
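
To make the "not enough of the right people" point concrete, here's a small simulation sketch of my own - the event rates and trial size are invented, not from any study the post discusses. A treatment that genuinely halves an event rate is tested in a trial of 50 people per arm, and most of the time the result doesn't reach statistical significance. "No significant difference" here mostly reflects too little data, not "it doesn't work".

```python
# Sketch with hypothetical numbers (not from the post): a treatment that
# truly cuts an event rate from 20% to 10% is tested in a small trial of
# 50 vs 50. How often does such a trial come out "statistically significant"?
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_per_arm, p_control, p_treat = 50, 0.20, 0.10
n_sims = 2000
significant = 0

for _ in range(n_sims):
    events_control = rng.binomial(n_per_arm, p_control)
    events_treat = rng.binomial(n_per_arm, p_treat)
    table = [[events_treat, n_per_arm - events_treat],
             [events_control, n_per_arm - events_control]]
    _, p_value = fisher_exact(table)
    if p_value < 0.05:
        significant += 1

print(f"Only about {significant / n_sims:.0%} of these small trials reach p < 0.05,")
print("even though the treatment really works in this simulated world.")
```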

Researchers themselves can over-interpret negative results. Or they might phrase their conclusions carelessly. Even if they avoid the language pitfalls here, journalists could miss the nuance (or think the researchers are just being wishy-washy) and spread the wrong message. And even if everyone else phrased it carefully, the reader might jump to that conclusion anyway.

When researchers say "there is no evidence that...", they generally mean they didn't find any, or enough of, a particular type of evidence that they would find convincing. Obviously, no one can ever be sure they have even seen all the evidence. And it doesn't mean everyone would agree with their conclusion, either. To be reasonably sure of a negative, you might need quite a lot of evidence.
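
A back-of-the-envelope sketch of that last point, with numbers I've made up purely for illustration: even when two groups show exactly the same event rate, the confidence interval around that "no difference" estimate stays wide until the study is large - so a small study can't rule out effects that would still matter.

```python
# Sketch with made-up numbers: two arms each observe a 10% event rate,
# so the estimated difference is exactly zero. How big an effect can a
# study of each size actually rule out?
import math

def ci_halfwidth_for_zero_difference(n_per_arm, p=0.10, z=1.96):
    """Approximate 95% CI half-width for a risk difference estimated at zero."""
    se = math.sqrt(2 * p * (1 - p) / n_per_arm)
    return z * se

for n in (50, 500, 5000):
    hw = ci_halfwidth_for_zero_difference(n)
    print(f"n={n:5d} per arm: difference 0%, 95% CI roughly +/- {hw:.1%}")
```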

On the other hand, when quite a lot of knowledge already makes something extremely unlikely to be real - a community of giant blue swans with orange and pink polka dots on the Nile, say - that increases the confidence you can have in even a small study exploring that hypothesis.

In 2020, during the Covid-19 pandemic, we found out how deep another problem goes: taking the absence of particular types of evidence as the rationale for not taking public health action. Early in April I wrote in WIRED about how this was leading us to policies that didn't make sense - especially in not recommending personal masks to help reduce community transmission. At the same time, Trisha Greenhalgh and colleagues pointed out that this ignored the precautionary principle: it's important to avoid the harm that comes from not taking other forms of evidence seriously enough. When it was finally acknowledged that the policy had to change, it was a recipe for chaos.

Which brings us to the other side of this coin. Proving that something doesn't exist, to the satisfaction of people who perhaps need to believe it most earnestly, can be quite impossible. People trying to disprove the claim that vaccination causes autism, for example, are finding that despite the Enlightenment, our rational side can be vulnerable to hijacking. Voltaire hit that nail on the head in the 18th century: "The interest I have to believe a thing is no proof that such a thing exists."


[Cartoon: a man in 18th-century dress, with Voltaire's quote from 1763: "The interest I have to believe a thing is no proof that such a thing exists."]




~~~~

Update 3 July 2020: Covid-19 paragraph added.