Tuesday, May 21, 2013

He said, she said, then they said...



Conflicting studies can make life tough. A good systematic review could sort it out. It might be possible for the studies to be pooled into a meta-analysis. That can show you the spread of individual study results and what they add up to at the same time.
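If you like to see the arithmetic behind that pooling, here's a minimal sketch in Python of fixed-effect, inverse-variance pooling - one common way study results get combined. The study names, effect estimates and standard errors below are invented purely for illustration:

```python
# A minimal sketch of fixed-effect, inverse-variance pooling - the
# arithmetic at the heart of a simple meta-analysis. All numbers are
# made up purely for illustration.

studies = [
    # (name, effect estimate such as a log odds ratio, standard error)
    ("Study A", -0.40, 0.20),
    ("Study B",  0.10, 0.15),
    ("Study C", -0.25, 0.30),
]

# Bigger, more precise studies (smaller standard errors) get more weight.
weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

for (name, est, se), w in zip(studies, weights):
    print(f"{name}: effect {est:+.2f} (SE {se:.2f}), weight {w:.1f}")
print(f"Pooled effect: {pooled:+.2f} (SE {pooled_se:.2f})")
```

The key point: each study's contribution depends on its precision, so the pooled result shows both the spread of findings and where they land together.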

But what about when systematic reviews disagree? When the "he said, she said" of conflicting studies goes meta, it can be even more confusing. New layers of disagreement get piled onto the layers from the original research. Yikes! This post is going to be tough-going...

A group of us defined this discordance among reviews as: the review authors disagree about whether or not there is an effect, or the direction of effect differs between reviews. A difference in direction of effect can mean one review gives a "thumbs up" and another a "thumbs down."

Some people are surprised that this happens. But it's inevitable. Sometimes you need to read several systematic reviews to get your head around a body of evidence. Different groups of people approach even the same question in different but equally legitimate ways. And there are lots of different judgment calls people can make along the way. Those decisions can change the results the systematic review will get.

Differences in when and how they searched for studies - and in what types of studies and subjects they included - mean that it's not at all unusual for groups of reviewers to be looking at different sets of studies for much the same question.

After all that, different groups of people can interpret evidence differently. They often make different judgments about the quality of a study or part of one - and that could dramatically affect its value and meaning to them.

It's a little like watching a game of football where there are several teams on the field at once. Some of the players are on all the teams, but some are playing for only one or two. Each team has its goal posts in slightly different places - and the teams aren't all necessarily playing by the same rules. And there's no umpire.

Here's an example of how you can end up with controversy and people taking different positions even when there's a systematic review. The area of some disagreement in this subset of reviews is about psychological intervention after trauma to prevent post-traumatic stress disorder (PTSD) or other problems:

One was published in 2002, two in 2005, and one each in 2010, 2012, and 2013.

The conclusions range from saying debriefing has a large benefit to saying there is no evidence of benefit and that it seems to cause PTSD in some people. Most of the others, but not all, fall somewhere in between, leaning toward "we can't really be sure". Most are based only on randomized trials, but one includes no randomized trials at all, and one draws on a mixture of study types.

The authors range from big, independent national or international agencies to, in a couple of cases, review teams that include authors of the very studies being reviewed. The definitions of trauma aren't the same - they may or may not include childbirth, for example. The interventions aren't the same either.

The quality of evidence is very low. And the biggest discordance - whether or not there is evidence of harm - hinges mostly on how much weight you put on one trial.

That trial tested debriefing. Its debriefing group is much bigger than its control group because the trial was stopped early - and while the reasons are complicated, stopping a trial early can itself be a source of bias.

The people in the debriefing group were at considerably higher risk of PTSD in the first place. Data for more than 20% of the people randomized are missing - and that biases the results too (it's called attrition bias). You can't be sure the people who didn't return weren't the ones who became depressed, for example. If so, that could change the results.
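To see why that missing 20% matters so much, here's a minimal sketch, with entirely hypothetical numbers, of how attrition bias can conjure up an apparent harm even when none exists - if the people who develop PTSD in one arm are more likely to drop out of follow-up:

```python
# A minimal sketch, with entirely hypothetical numbers, of how attrition
# bias can manufacture an "effect". Suppose the true PTSD rate is identical
# in both arms, but people who develop PTSD in the control arm are more
# likely to be lost to follow-up.

import random
random.seed(0)

def observed_outcomes(n, ptsd_rate, dropout_rate_if_ptsd):
    """Return (PTSD cases seen, people seen) after differential dropout."""
    cases = seen = 0
    for _ in range(n):
        has_ptsd = random.random() < ptsd_rate
        lost = has_ptsd and random.random() < dropout_rate_if_ptsd
        if not lost:
            seen += 1
            cases += has_ptsd
    return cases, seen

# Same true PTSD rate (30%) in both arms - no real difference at all.
debrief = observed_outcomes(n=300, ptsd_rate=0.30, dropout_rate_if_ptsd=0.10)
control = observed_outcomes(n=150, ptsd_rate=0.30, dropout_rate_if_ptsd=0.40)

for label, (cases, seen) in [("debriefing", debrief), ("control", control)]:
    print(f"{label}: {cases}/{seen} followed up had PTSD ({cases/seen:.0%})")
# Debriefing now *looks* harmful, even though the true rates were equal:
# the missing people, not the intervention, created the difference.
```

The observed PTSD rate comes out higher in the debriefing arm purely because of who went missing - which is exactly why you can't just shrug off 20% lost to follow-up.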

It's no wonder there's still a controversy here.


See also my 5 tips for understanding data in meta-analysis.

Links to key papers about this in my comment at PubMed Commons (archived here).


If you want to read more about debriefing, here's my post in Scientific American: Dissecting the controversy about early psychological response to disasters and trauma.


Thursday, May 9, 2013

They just Google THAT?!


I admit I needed Google to quickly find out that the category for bunny-shaped clouds is "zoomorphic". And I think Google is wonderful - and so does Tess. But...

There's just been another study published about the latest generation of doctors and their information and searching habits. Like Tess' friend, they rely pretty heavily on Googling. We could all be over-estimating, though, just how good people are at finding things with Google - including the biomedically trained.

Many of us assume that the "Google generation" or "digital natives" are as good at finding information as they are at using technology. A review in 2008 concluded that this was "a dangerous myth" [PDF]: those things don't go hand in hand. It may not have gotten any better since then, either.

Information literacy is about knowing when you need information, and knowing how to find and evaluate it. Google leads us to information that the crowd is basically endorsing. If the crowd has poor information literacy in health, then that can reinforce the problem.

This is an added complication for health consumers. While there's an increasing expectation that healthcare system decisions and clinical decisions be based on rigorous assessments of evidence, that's not really trickling down very fast. Patient information is generally still pretty old school.

What would it mean for patient information to be really evidence-based? I believe it includes using methods to minimize bias in finding and evaluating the research the information is based on, and using evidence-based communication. Those ideas are gaining ground - for example, in standards in England and Germany, and in this evaluation by WHO Europe of one group of us putting these concepts into practice.

Missing critical information that can shift the picture is one of the most common ways that reviews of research can get it wrong. For systematic reviews of evidence, searching for information well is a critical and complex task.

This brings us to why Tess' talents, passions and chosen career are so important. We need health information specialists and librarians to link us with good information in many ways.

This week at the excellent annual meeting of the Medical Library Association in Boston (think lots of wonderful Tesses!), there was a poster by Whitney Townsend and her colleagues at the Taubman Health Sciences Library (University of Michigan). Their assessment of 368 systematic reviews suggests that even systematic reviewers need help with searching.

Google's great, but it doesn't mean we don't still need to "go to the library."


(Disclosure: I work in a library these days - the world's largest medical one, at the National Institutes of Health (NIH). If this has put you in the mood for honing your searching skills, there are some tips for searching PubMed Health here.)