Wednesday, September 30, 2015

AGHAST! The Day the Trial Terminator Arrived

Clinical trials are complicated enough when everything goes pretty much as expected. When it doesn't, the dilemma of continuing or stopping can be excruciatingly difficult. Some of the greatest dramas in clinical research are going on behind the scenes around this. Even who gets to call the shot can be bitterly disputed.

A trial starts with a plan for how many people have to be recruited to get an answer to the study's questions. This is calculated based on what's known about the chances of benefits and harms, and how to measure them.

Often a lot is known about all of this. Take a trial of antibiotics, for example. How many people will end up with gastrointestinal upsets is fairly predictable. But often the picture is so sketchy it's not much more than a stab in the dark.

Not being sure of the answers to the study's questions is an ethical prerequisite for doing clinical trials. That's called equipoise. The term was coined by lawyer Charles Fried in his 1974 book, Medical Experimentation. He argued that if people were going to be randomized, the investigator should be genuinely uncertain about which option is better. In 1987, Benjamin Freedman argued the case for clinical equipoise: that we need professional uncertainty, not necessarily individual uncertainty.

It's hard enough to agree if there's uncertainty at any time! But the ground can shift gradually, or even dramatically, while a trial is chugging along.

I think it's helpful to think of this in 2 ways: internal reasons (a shift in knowledge driven by experience inside the trial itself) and external reasons.

Internal issues that can put the continuation of the trial in question include:
  • Not being able to recruit enough people to participate (by far the most common reason);
  • More serious and/or frequent harm than expected tips the balance;
  • Benefits much greater than expected;
  • The trial turns out to be futile: the difference in outcomes between groups is so small that even if the trial runs its course, we'll be none the wiser (PDF).
External developments that throw things up in the air or put the cat among the pigeons include:
  • A new study or other data about benefits or safety - especially if it's from another similar trial;
  • Pressure from groups who don't believe the trial is justified or ethical;
  • Commercial reasons - a manufacturer is pulling the plug on developing the product it's trialing, or just can't afford the trial's upkeep;
  • Opportunity costs for public research sponsors have also been argued as a reason to pull the plug for possible futility.
Sometimes several of those things happen at once. Stories about several examples are in a companion post to this one over at Absolutely Maybe. They show just how difficult these decisions are - and the mess that stopping a trial can leave behind.

Trials that involve the risk of harm to participants should have a plan for monitoring the progress of the trial without jeopardizing the trial's integrity. Blinding or masking the people assessing outcomes and running the trial is a key part of trial methodology (more about that here). Messing with that, or dipping into the data often, could end up leading everyone astray. Establishing stopping rules before the trial begins is the safeguard used against that - along with a committee of people other than the trial's investigators monitoring interim results.

Although they're called stopping "rules", they're actually more guideline than rule. And other than having it done independently of the investigators, there is no one widely agreed way to do it - including the role of the sponsors and their access to interim data.

Some methods focus on choosing a one-size-fits-all threshold for the data in the study, while others are more Bayesian - taking external data into account. There is a detailed look at this in a 2005 systematic review of trial data monitoring processes by Adrian Grant and colleagues for the UK's National Institute for Health Research (NIHR). They concluded there is no strong evidence that the data should stay blinded for the data monitoring committee.
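Just to make the idea of a pre-specified threshold concrete, here's a minimal Python sketch. It's illustrative only: the Haybittle-Peto-style cut-off of 0.001 and the three interim "looks" are assumptions I've made for the example, not a recommendation or anyone's actual rule.

```python
# A minimal, hypothetical sketch of a pre-specified "stopping guideline":
# an interim look only gets flagged if the p-value crosses a very strict,
# pre-agreed threshold (0.001 here is illustrative, not a standard).

def interim_advice(interim_p_value: float, threshold: float = 0.001) -> str:
    """Advice a data monitoring committee might weigh up - not a binding rule."""
    if interim_p_value < threshold:
        return "flag for discussion: consider recommending early stopping"
    return "continue: interim evidence not strong enough to override the plan"

# Example: three pre-planned interim looks at the accumulating data
for look, p in enumerate([0.04, 0.008, 0.0004], start=1):
    print(f"Look {look}: p = {p} -> {interim_advice(p)}")
```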

A 2006 analysis of HIV/AIDS trials stopped early because of harm found that only 1 out of 10 had established a rule for this before the trial began, but it's more common these days. A 2010 review of trials stopped early because the benefits were greater than expected found that 70% mentioned a data monitoring committee (DMC). (These can also be called data and safety monitoring boards (DSMBs) or data monitoring and ethics committees (DMECs).)

Despite my cartoon of data monitoring police, DMCs are only advisors to the people running the trial. They're not responsible for the interpretation of a trial's results, and what they do generally remains confidential. Who other than the DMC gets to see interim data, and when, is a debate that can get very heated.

Clinical trials only started to become common in the 1970s. Richard Stephens writes that it was only in the 1980s, though, that keeping trial results confidential while the trial is underway became the expected practice. In some circumstances, Stephens and his colleagues argue, publicly releasing interim results while the trial is still going on can be a good idea. They talk about examples where the release of interim results saved trials that would have foundered because of lack of recruitment from clinicians who didn't believe the trial was necessary.

One approach when there's not enough knowledge to make reliable trial design decisions is a type of trial called an adaptive trial. It's designed to run in steps, based on what's learned. About 1 in 4 might adapt the trial in some way (PDF). It's relatively early days for those.

In the end, no matter which processes are used, weighing up the interests of the people in the trial, with the interests of everyone else in the future who could benefit from more data, will be irreducibly tough. Steven Goodman writes that we need more people with enough understanding and experience of the statistics and dilemmas involved in data monitoring committees.

We also need to know more about when and how to bring people participating in the trial into the loop - including having community representation on DMCs. Informing participants more at key points would mean some will leave. But most might stay, as they did in the Women's Health Initiative hormone therapy trials (PDF) and one of the AZT trials in the earlier years of the HIV epidemic.

There is one clearcut issue here. And that's the need to release the results of any trial when it's over, regardless of how or why it ended. That's a clear ethical obligation to the people who participated in the trial - the desire to advance knowledge and help others is one of the reasons many people agree to participate. (More on this at the All Trials campaign.)

More at Absolutely Maybe: The Mess That Trials Stopped Early Can Leave Behind


Trial acronyms: If someone really did try to make an artificial gallbladder - not to mention actually start a trial on it! - I think lots of us would be pretty aghast! But a lot of us are pretty aghast about the mania for trial acronyms too. More on that here at Statistically Funny.

Sunday, July 19, 2015

ARR OR NNT? What's Your Number Needed To Confuse?

I used to think numbers are completely objective. Words, on the other hand, can clearly stretch out, or squeeze, people's perceptions of size. "OMG that spider is HUGE!" "Where? What - that little thing?"

Yes, numbers can be more objective than words. Take adverse effects of health care: if you use the word "common" or "rare", people won't get as accurate an impression as if you use numbers.

But that doesn't mean numbers are completely objective. Or even that numbers are always better than words. Numbers get a bit elastic in our minds, too.

We're mostly good at sizing up the kinds of quantities that we encounter in real life. For example, it's pretty easy to imagine a group of 20 people going to the movies. We can conceive pretty clearly what it means if 18 say they were on the edge of the seats the whole time.

There's an evolutionary theory about this, called ecological rationality. The idea is, our ability to reason with quantities developed in response to the quantities around us that we frequently need to mentally process. (More on this in Brase [PDF] and Gigerenzer and Hoffman [PDF].)

Whatever the reason, we're just not as good at calibrating risks that are lower frequency (Yamagishi [PDF]). We're going to get our heads around 18 out of 20 well. But 18000 out of 200000? Not so much. We'll do pretty well at 1 out of 10, or 1 out of 100 though.
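To make that concrete, here's a tiny Python sketch (my illustration, nothing more) putting that unwieldy pair of numbers onto one of the friendlier scales:

```python
# Converting awkward frequencies onto scales we handle well (per 10, per 100):
# 18000 out of 200000 is the same thing as 9 out of 100.

from fractions import Fraction

def per_100(numerator: int, denominator: int) -> float:
    """Express a frequency on the familiar 'out of 100' scale."""
    return 100 * numerator / denominator

print(per_100(18, 20))          # 90.0 - easy to picture: 18 of 20 movie-goers
print(per_100(18000, 200000))   # 9.0  - much easier to grasp as 9 in 100
print(Fraction(18000, 200000))  # 9/100 - the same fraction in lowest terms
```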

And big time trouble starts if we're reading something where the denominators are jumping around - either toggling from percent to per thousand and back, or saying "7 out of 13 thought the movie was great, while 4 out of 19 thought it was too scary, and 9 out of 17 wished they had gone to another movie". We'll come back to this in a minute. But first, let's talk about some key statistics used to communicate the effects of health care.

Statistics - where words and numbers combine to create a fresh sort of hell!

First there's the problem of the elasticity in the way our minds process the statistics. That means that whether they realize it or not, communicators' choice of statistic can be manipulative. Then there's the confusion created when people communicate statistics with words that get the statistics wrong.

Let's look at some common measures of effect sizes: absolute risk (AR), relative risk (RR), odds ratio (OR), and number needed to treat (NNT). (The evidence I draw on is summarized in my long post here.)

Natural frequencies are the easiest thing for people generally to understand. And getting more practice with natural frequencies might help us to get better at reasoning with numbers, too (Gigerenzer again [PDF]).

Take our movie-goers again. Say that 6 of the 20 were hyped-up before the movie even started. And 18 were hyped-up afterwards. Those are natural frequencies. If I give you those "before and after" numbers in percentages, that's "absolute risk" (AR). Lots of people (but not everybody) can manage the standardization of percentages well.

But if I use relative risks (RR) - people were 3 times as likely to be hyped-up after seeing that movie - then the all-important context of proportion is lost. That's going to sound like a lot, whether it's a tiny difference or a huge difference. People will often react to that without stopping to check, "yes, but from what to what?" From 6 to 18 out of 20 is a big difference. But going from 1 out of a gazillion to 3 out of a gazillion just ain't much worth crowing or worrying about.

RRs are critically important: they're needed for calculating a personalized risk if you're not at the same risk as the people in a study, for example. But if it's the only number you look at, you can get an exaggerated idea.

So sticking with absolute risks or natural frequencies, and making sure the baseline is clear (the "before" number), is better at helping people understand an effect. Then they can put their own values on it.

The number needed to treat takes the absolute change and turns it upside down. (Instead of expressing the difference out of 100, it's 100 divided by the difference.) So instead of the constant denominator of 100, you now have denominators that change: instead of an extra 60 out of 100 people being hyped-up because of the movie, it becomes NNT 1.7 (1.7 people have to see the movie for 1 extra person to get hyped-up).
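If it helps, here's the movie-goer arithmetic as a minimal Python sketch - just the calculations described above, nothing fancier:

```python
# The same movie-goer data expressed as absolute risk, relative risk, and NNT.
# A minimal sketch of the arithmetic in the text - an illustration only.

hyped_before, hyped_after, total = 6, 18, 20

ar_before = 100 * hyped_before / total   # 30% "hyped-up" before the movie
ar_after = 100 * hyped_after / total     # 90% afterwards

relative_risk = ar_after / ar_before            # 3.0 - "3 times as likely"
absolute_difference = ar_after - ar_before      # 60 percentage points
nnt = 100 / absolute_difference                 # about 1.7 movie-goers per extra hyped-up person

print(f"AR: {ar_before:.0f}% -> {ar_after:.0f}%")
print(f"RR: {relative_risk:.1f}, absolute difference: {absolute_difference:.0f} points, NNT: {nnt:.1f}")
```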

This can be great in some circumstances, and many people are really used to NNTs. But on average, this is one of the hardest effect measures to understand. Which means that it's easier to be manipulated by it.

NNT is the anti-RR if you like: RRs exaggerate, NNTs minimize. Both can mislead - and that can be unintentional or deliberate.

When it comes to communicating with people who need to use results, I think relying only on statistics that will frequently mislead, just because the communicator prefers them, is paternalistic: it denies people the right to form an impression based on their own values. Like all forms of paternalism, that's sometimes justified. But there's a problem when it becomes the norm.

The NNT was developed in the 1990s [PDF]. It was meant to do a few things - including counteracting the exaggeration of the RR. Turns out it overshot the mark there! It was also intended to be easier to understand than the odds ratio (OR).

The OR brings us to the crux of the language problems. People use words like odds, risks, and chances interchangeably. Aaarrrggghhh!

A risk in statistics is what we think of as our chances of being in the group: a 60% absolute risk means a 60 in 100 (or 6 in 10) "chance".

Odds in statistics are like odds in horse-racing and other gambling: they weigh the chances of "winning" against the chances of "losing". An odds ratio compares the odds in one group with the odds in another. (If you want to really get your head around this, check out Know Your Chances by Woloshin, Schwartz, and Welch. It's a book that's been shown in trials to work!)

The odds ratio is a complicated thing to understand, especially if it's embedded in confusing language. It's a very sound way to deal with data from some types of studies, though. So you see odds ratios a lot in meta-analyses. (If you're stumped about getting a sense of proportion in a meta-analysis, look at the number of events and the number of participants - they are the natural frequencies.)
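To see how different the odds ratio can look from the relative risk for the very same data, here's a small sketch using the movie-goers again (my illustration - not from the book or any study mentioned here):

```python
# A minimal sketch of "risk" versus "odds", using the movie-goer numbers.

def risk(events: int, total: int) -> float:
    return events / total             # chance of being in the group

def odds(events: int, total: int) -> float:
    return events / (total - events)  # "winning" versus "losing"

before_events, after_events, total = 6, 18, 20

risk_ratio = risk(after_events, total) / risk(before_events, total)  # 3.0
odds_ratio = odds(after_events, total) / odds(before_events, total)  # 21.0

print(f"Relative risk: {risk_ratio:.0f}")  # 3
print(f"Odds ratio:    {odds_ratio:.0f}")  # 21 - same data, much bigger-sounding number
```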

There's one problem that all of these ways of portraying risks/chances have in common: when people start putting them in sentences, they frequently get the language wrong. So they can end up communicating something entirely other than what was intended. You really need to double-check exactly what the number is, if you want to protect yourself from getting the wrong impression.

OK, then, so what about "pictures" to portray numbers? Can that get us past the problems of words and numbers? Graphs, smile-y versus frown-y faces, and the like? Many think this is "the" answer. But...

This is going to be useful in some circumstances, misleading in others. Gerd Gigerenzer and Adrian Edwards: "Pictorial representations of risk are not immune to manipulation either". (A topic for another time, although I deal with it a little in the "5 shortcuts" post listed below.)

Where does all this leave us? Few researchers reporting data have the time to invest in keeping up with the literature on communicating numbers - so while we can plug away at improving the quality of reporting of statistics, there's no overnight solution there.

Getting the hang of the common statistics yourself is one way. But the two most useful all-purpose strategies could involve detecting bias.

One is to sharpen your skills at detecting people's ideological biases and use of spin. Be on full alert when you can see someone is utterly convinced and trying to persuade you with all their chips on a particular way of looking at data - especially if it's data on a single outcome. If the question matters to you, beware of the too-simple answer.

The second? Be on full alert when you see something you really want, or don't want, to believe. The biggest bias we have to deal with is our own.

More of my posts relevant to this theme:

Does It Work? Beware of the Too-Simple Answer

At Absolutely Maybe (PLOS Blogs):
5 Shortcuts to Keep Data on Risks in Perspective
Mind your "p"s, RRs, and NNTs: On Good Statistics Behavior

At Third Opinion (MedPage Today):
The Trouble With Evidence-Based Medicine, the 'Brand'
The NNT: An Overhyped and Confusing Statistic

Check out for a running summary of what I'm writing about.

Sunday, February 8, 2015

Let's Play Outcome Mash-up - A Clinical Trial Shortcut Classic!

Deciphering trial outcomes can be a tricky business. As if many measures aren't hard enough to make sense of on their own, they are often combined in a complex maneuver called a composite endpoint (CEP) or composite outcome. The composite is treated as a single outcome. And journalists often phrase these outcomes in ways that give the impression that each of the separate components has improved.

Here's an example from the New York Times, reporting on the results of a major trial from the last American Heart Association conference:
"There were 6.4% fewer cardiac events - heart disease deaths, heart attacks, strokes, bypass surgeries, stent insertions and hospitalization for severe chest pain..."
That individual statement sounds like the drug reduced deaths, bypasses, stents, and hospitalization for unstable angina, doesn't it? But it didn't. The modest effect was on non-fatal heart attacks and stroke only.*

CEPs are increasingly common: by 2007, well over a third of cardiovascular trials were using them. CEPs are a clinical trial shortcut because you need fewer people and less time to hit a jackpot. A trial's main pile of chips is riding on its pre-specified primary outcome: the one that answers the trial's central, most important question.

The primary outcome determines the size and length of the trial, too. For example, if the most important outcome for a chronic disease treatment is to increase the length of people's lives, you would need a lot of people to get enough events to count (the event in this case would be death). And it would take years to get enough of those events to see if there's anything other than a dramatic, sudden difference.

But if you combine it with one or more other outcomes - like non-fatal heart attacks and strokes - you'll get enough events much more quickly. Put in lots, and you're really hedging your bets.

It's a very valuable statistical technique - but it can go haywire. Say you have 3 very serious outcomes that happen about as often as each other - but then you add another component that is less serious and much more common. The number of less serious events can swamp the others. Everything could even be riding on only one less serious component. But the CEP has a very impressive name - like "serious cardiac events." Appearances can be deceptive.
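Here's a hypothetical sketch of that swamping effect. The component names and counts are invented purely for illustration - they're not from any trial mentioned here:

```python
# How one common, less serious component can swamp a composite endpoint.
# All numbers below are made up to show the pattern.

components = {
    "cardiovascular death":            12,
    "non-fatal heart attack":          14,
    "non-fatal stroke":                13,
    "hospitalization for chest pain": 161,  # less serious, far more common
}

composite_total = sum(components.values())  # 200 "serious cardiac events"

for name, events in components.items():
    share = 100 * events / composite_total
    print(f"{name}: {events} events ({share:.0f}% of the composite)")

print(f"Composite total: {composite_total}")
# Most of the impressive-sounding composite is its least serious component.
```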

Enough data on the nature of the events in a CEP should be clearly reported so that this is obvious, but it often isn't. And even if the component events are reported deep in the study's detail, don't be surprised if it's not pointed out in the abstract, press release, and publicity!

There are several different ways a composite can be constructed, including use of techniques like weighting that need to be transparent. Because it's combining events, there has to be a way of dealing with what happens when more than one event happens to one person - and that's not always done the same way. The definitions might make it obvious, the most serious event might count first according to a hierarchy, or the one that happened to a person first might be counted. But exactly what's happening often won't be clear - maybe even most of the time.

There's agreement on some things you should look out for (see for example Montori, Hilden, and Rauch). Are each of the components as serious as each other and/or likely to increase (or decrease) together in much the same way? If one's getting worse and one's getting better, this isn't really measuring one impact.

The biggest worry, though, is when researchers play the slot machine in my cartoon (what we call the pokies, "Downunder"). I've stressed the dangers of hunting over and over for a statistical association (here and here). The analysis by Lim and colleagues found some suggestion that component outcomes are sometimes selected to rig the outcome. If it wasn't the pre-specified primary outcome, and it wasn't specified in the original entry for it in a trials register, that's a worry. Then it wasn't really a tested hypothesis - it's a new hypothesis.

Composite endpoints, properly constructed, reported, and interpreted are essential to getting us decent answers to many questions about treatments. Combining death with serious non-fatal events makes it clear when there's a drop in an outcome largely because people died before that could happen, for example. But you have to be very careful once so much is compacted into one little data blob.

 (Check out slide 14 to see the forest plot of results for the individual components the journalist was reporting on. Forest plots are explained here at Statistically Funny.)

More on understanding clinical trial outcomes:

New this week: I'm delighted to now have a third blog, one for physicians with the wonderful team at MedPage Today. It's called Third Opinion.

Sunday, November 30, 2014

Biomarkers Unlimited: Accept Only OUR Substitutes!

Sounds great, doesn't it? Getting clinical trial results quickly has so much going for it. Information sooner! More affordable trials!

Substituting outcomes that can take years, or even decades, to emerge, with ones you can measure much earlier, makes clinical research much simpler. This kind of substitute outcome is called a surrogate (or intermediate) endpoint or outcome.

Surrogates are often biomarkers - biological signs of disease or a risk factor of disease, like cholesterol in the blood. They are used in clinical care to test for, or keep track of, signs of emerging or progressing disease. Sometimes, like cholesterol, they're the target of treatment.

The problem is, these kinds of substitute measures aren't always reliable. And sometimes we find that out in the hardest possible way.

The risk was recognized as soon as the current methodology of clinical trials was being developed in the 1950s. A famous statistician who was key to that process, Austin Bradford Hill, put it bluntly: if the "rate falls, the pulse is steady, and the blood pressure impeccable, we are still not much better off if unfortunately the patient dies."

That famously happened with some drugs that controlled cardiac arrhythmia - irregular heartbeat that increases the chances of having a heart attack. On the basis of ECG tests that showed the heartbeat was regular, these drugs were prescribed for years before a trial showed that they were causing tens of thousands of premature deaths, not preventing them. That kind of problem has happened too often for comfort.

It happened again this week - although at least before the drug was ever approved. A drug company canceled all its trials of a new drug for advanced gastric (stomach) cancer. The drug is called rilotumumab. Back in January, it was a "promising" treatment, billed as bringing "new hope in gastric cancer." It got through the early testing phases and was in Phase III trials - the kind needed to get FDA approval.

But one phase III trial, RILOMET-1, quickly showed an increase in the number of deaths in people using the drug. We don't know how many yet - but it was enough for the company to decide to end all trials of the substance.

This drug targets a biomarker associated with worse disease outcomes, an area seen by some as transforming gastric cancer research and treatment. Others see considerable challenges, though - and what happened to the participants in the RILOMET-1 trial underscores why.

There is a lot of controversy about surrogate outcomes - and debates about what's needed to show that an outcome or measure is a valid surrogate we can rely on. They can lead us to think that a treatment is more effective than it really is.

Yet a recent investigative report found that cancer drugs are being increasingly approved based only on surrogate outcomes, like "progression-free survival." That measures biomarker activity rather than overall survival (when people died).

It can be hard to recognize at first what's a surrogate and what's an actual health outcome. One rule of thumb is: if you need a laboratory test of some kind, it's more likely to be a surrogate. Symptoms of the disease you're concerned about, or harm caused by the disease, are the direct outcomes of interest. Sometimes those are specified as "patient-relevant outcomes."

Many surrogate outcomes are incredibly important, of course - viral load for HIV treatment and trials for example. But in general, when clinical research results are based only on surrogates, the evidence just isn't as strong and reliable as it is for the outcomes we are really concerned about.


See also, Statistically Funny on "promising" treatments.

Sunday, October 12, 2014

Sheesh - what are those humans thinking?

I can neither confirm nor deny that Cecil is now a participant in one of the there-is-no-limit-to-the-human-lifespan resveratrol studies at Harvard's "strictly guarded mouse lab." If he is, I'm sure he's even more baffled by the humans' hype over there.

Resveratrol is the antioxidant in grapes that many believe makes drinking red wine healthy. And it's a good example of how research on animals is often terribly misleading and misinterpreted. I've written about it over at my Scientific American blog if you're interested in more detail about resveratrol.

But this week, it's media hype about a study using human stem cells in mice in another lab at Harvard that's made me ratty. You could get the idea that a human trial of a "cure" for type 1 diabetes is just a matter of time now - and not a lot of time at that. According to the leader of the team, Doug Melton, "We are now just one preclinical step away from the finish line."

An effective treatment that ends the need for insulin injections would be incredibly exciting. But we see this kind of claim from laboratory research all the time, don't we? How often does it work out - even for the studies that are at "the finishing line" for animal studies?

Not all that often: maybe about a third of the time.

Bart van der Worp and colleagues wrote an excellent paper explaining why. It's not just that other animals are so different from humans. We're far less likely to hear of the failed animal results than we are of human trials that don't work out as hoped. That bias towards positive published results draws an over-optimistic picture.

As well as fundamental differences between species, van der Worp points to other common issues that reduce the applicability for humans of typical studies in other animals:

  • The animals tend to be younger and healthier than the humans who have the health problem;
  • They tend to be a small group of animals that are very similar to each other, while the humans with the problem are a large very varied group;
  • Only male or only female animals are often used; and
  • Doses higher than humans will be able to tolerate are generally used.
Limited genetic diversity could be an issue, too.

So how does the Harvard study fare on that score? They used stem cells to develop insulin-producing cells that appeared to function normally when transplanted into mice. But this was at a very early stage. In the test they reported in mice with diabetes, only 6 (young) mice got the transplants (and 1 died), plus a comparison group. Gender was not reported - and, as is common in laboratory animal studies, there wasn't lengthy follow-up. This was an important milestone, but there's a very long way to go here. Transplants in humans face a lot of obstacles.

Van der Worp points to another set of problems: inadequacies in research methods - ones we've learned to guard against over time in human research - that bias the results too much, including problems with statistical analyses. Jennifer Hirst and colleagues have studied this too. They concluded that so many studies were bedeviled by issues such as lack of randomization and blinding of those assessing outcomes that they should never have been regarded as being at "the finishing line" before human experimentation at all.

There's good news though! CAMARADES is working to improve this - with the same approach for chipping away at these problems as in human trials: by slogging away at biased methodologies and publication bias. And pushing for good quality systematic reviews of animal studies before human trials are undertaken.

Laboratory animal research may be called "preclinical," but even that jargon is a bit of over-optimistic marketing. Most of what's tried in the lab will never get near human trials. And when it does, it will mostly be disappointing. Laboratory research is needed, and encouraging progress is great. But we definitely shouldn't be getting our hopes up too much about it.


The National Institutes of Health (NIH) addressed the issue of gender in animal experiments earlier in 2014. After I wrote this post, the NIH also released proposed guidelines for reporting preclinical research.

Thanks to Jonathan Eisen for adding a link for the full text of the paper to PubMed Commons, as well as to a blog post by Paul Knoepfler discussing the context of the stem cell work by Felicia Pagliuca, Doug Melton and colleagues. NHS Behind the Headlines have also analyzed and explained this study.

Thanks to Jim Johnson for pointing out an oversight: that animal studies - this one included - can also suffer from having too little follow-up.

Interest declaration: I'm an academic editor at one of the journals whose papers on animal research I commended (PLOS Medicine) and on the human ethics advisory group of another (PLOS One), but I had no involvement in either paper.

Sunday, March 16, 2014

If at first you don't succeed...

If only post hoc analyses always brought out the inner skeptic in us all! Or came with red flashing lights instead of just a little token "caution" sentence buried somewhere. 

Post hoc analysis is when researchers go looking for patterns in data. (Post hoc is Latin for "after this.") Testing for statistically significant associations is not by itself a way to sort out the true from the false. (More about that here.) Still, many treat it as though it is - especially when they haven't been able to find a "significant" association, and turn to the bathwater to look for unexpected babies.

Even when researchers know the scientific rules and limitations, funny things happen along the way to a final research report. It's the problem of researchers' degrees of freedom: there's a lot of opportunity for picking and choosing, and changing horses mid-race. Researchers can succumb to the temptation of over-interpreting the value of what they're analyzing, with "convincing self-justification." (See the moving goalposts over time here, for example, as trialists are faced with results that didn't quite match their original expectations.)

And even if the researchers don't read too much into their own data, someone else will. That interpretation can quickly turn a statistical artifact into a "fact" for many people.

Let's look more closely at Significus' pet hate: post hoc analyses. There are dangers inherent in multiple testing when you don't have solid reasons for looking for a specific association. The more often you randomly dip into data without a well-founded target, the higher your chances of pulling out a result that will later prove to be a dud.

It's a little like fishing in a pond where there are random old shoes among the fish. The more often you throw your fishing line into the water, the greater your chances of snagging a shoe instead of a fish.
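If you want to see how quickly the shoes pile up, here's a minimal sketch of the arithmetic, under the simplifying assumption of independent tests and no true associations at all:

```python
# The chance of at least one spurious p < 0.05 grows quickly with the number
# of "dips" into the data. Assumes independent tests and no real associations -
# a simplification, purely for illustration.

alpha = 0.05
for number_of_tests in [1, 5, 10, 20, 50]:
    chance_of_a_dud = 1 - (1 - alpha) ** number_of_tests
    print(f"{number_of_tests:>2} tests: {chance_of_a_dud:.0%} chance of snagging at least one 'significant' shoe")
```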

Here's a study designed to show this risk. The data tossed up significant associations such as: women were more likely to have a cesarean section if they preferred butter over margarine, or blue over black ink.

The problem is huge in areas where there's a lot of data to fish around in. For published genome-wide association studies, for example, over 90% of the "associations" with a disease couldn't consistently be found again. Often, researchers don't report how many tests were run before they found their "significant" results, which makes it impossible for others to know how big a problem multiple testing might be in their work.

The problem extends to subgroup analyses where there is not an established foundation for an association. The credibility of claims made on subgroups in trials is low. And it has serious consequences. For example, an early trial suggested only men with stroke-like symptoms benefit from aspirin - which stopped many doctors from prescribing aspirin to women.

How should you interpret post hoc and subgroup analyses then? If analyses were not pre-specified and based on established, plausible reasons for an association, then one study isn't enough to be sure.

With subgroups that weren't randomized as different arms of a trial, it's not enough that the average for one subgroup is higher than the average for another. Factors other than membership of that subgroup could be influencing the outcome. An interaction test is done to try to account for that.

It's more complicated when it's a meta-analysis, because there are so many differences between one study and another. The exception here is an individual patient data meta-analysis, which can study differences between patients directly.

In the end, it comes down to being careful not to see a new hypothesis generated by research as a "fact" already proven by the study from which it came.

Post hoc, ergo propter hoc. This description of basic faulty logic - "after this, therefore because of this" - is as ancient as the language that made it famous. We've had millennia to snap out of the dangerous mental shortcut of seeing a cause where there's only coincidence. Yet we still hurtle like lemmings over cliffs into its alluring clutches.

Sunday, December 29, 2013

What's so good about "early," anyway?

"Early." It's one of those words like "new" and "fast," isn't it? As though they are inherently good, and their opposites - "late," "old" and "slow" - are somehow bad.

Believing in the value and virtue of being an early bird has deep roots in our cultural consciousness. It goes back at least as far as ancient Athens. Aristotle's treatise on household economics said that early rising was both virtuous and beneficial: "It is likewise well to rise before daybreak; for this contributes to health, wealth and wisdom."

But just as Gertrud came to suspect the benefits for her of being early weren't all they were cracked up to be, earliness isn't always better in other areas either. The "get in early!" assumption has an in-built tendency to lead us astray when it comes to detection of diseases and conditions. And even most physicians - just the people we often rely on to inform us - don't understand enough about the pitfalls that lead us to jump to conclusions about early detection too, well…early.

Pitfall number 1: Those who need it least get the most early detection

This one is a double-edged sword. Firstly, whether it's a screening program or research studying early detection, there tends to be a "worried well" or "healthy volunteer" effect (selection bias). It's easy to have higher than average rates of good health outcomes in people who are at low risk of bad ones anyway. This can lead to inflated perceptions of how much benefit is possible.

The other problem is an over-supply of fatalism among many people who may be able to materially benefit from early detection. Constant bombardment about all the things they could possibly be worrying about might even make it more likely that they shut out vital information - which could make it even more likely that they ignore symptoms, for example.

Pitfall number 2: Over-diagnosis from detecting people who would never have become ill from the condition detected

This one is called length bias. For many conditions, like cancers, there are dangerous ones that develop too quickly for a screening program to catch them. Early detection is actually better at picking up the ones that may never threaten a person's health. More people die with cancer than of it.

So early detection means many people are fighting heroic battles that were never necessary. And some will actually be harmed by parts of some screening processes that carry serious risks of their own (like colonoscopies), or adverse effects of the treatments they got which they didn't need.

Add to those the number of people who are diagnosed as being "at risk" of conditions they will never have or which would have resolved without treatment, and the number harmed is depressingly huge.

This massive swelling of the numbers of people who have survived phantoms is spreading the shadow of angst ever wider (a subject I've written about in relation to cancer at Scientific American). Spend 10 minutes or so listening to Iona Heath on this subject - starting just past 2 minutes on this video. [And read @Deevybee's important comment and links about developmental conditions in early childhood below.]

Pitfall number 3: The statistical effect that means survival rates "improve" even if no one's life expectancy increases

This is lead-time bias. And it's why you should always be careful when you see survival rates in connection with early detection and treatment. Screening programs, by definition, are for people who have no symptoms (pre-clinical). So they cut short the part of your life where you don't know you have the disease. Even if the earlier diagnosis made no difference to the length of your life, the amount of time you lived with knowledge of the disease (disease "survival") is longer.

What we want is to move the needle on length and/or quality of life. For that to happen, there has to be safe and effective treatment, safe and effective screening procedures, and more people found at a time they can be helped than would have been found by diagnosing the condition when there are symptoms.

Here's an example. This person's disease began when they were 40 years old. They lived without any problem from it until 76 years old - then they died when they were 80. Their disease survival was less than 5 years. The proportion of their life that they "had" the disease was short.

Now here's the same person, with early detection that made no difference to when they died - but the needle on how long they have "had" the disease has shifted. So they now survive longer than 5 years with the disease. The "lead time" has changed, but survival in the way we mean it hasn't changed at all.
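Here's the same example as a tiny sketch. The age at which screening would have picked the disease up (72) is invented for illustration - the point is only that the age at death doesn't move:

```python
# A minimal sketch of lead-time bias, using the example above.
# The screening-detection age of 72 is a made-up illustration.

disease_starts, symptoms_appear, death = 40, 76, 80
screen_detects = 72  # hypothetical earlier diagnosis - death age unchanged

survival_without_screening = death - symptoms_appear  # 4 years: "less than 5-year survival"
survival_with_screening = death - screen_detects      # 8 years: "more than 5-year survival"

print(f"Diagnosed at symptoms ({symptoms_appear}): lived {survival_without_screening} more years")
print(f"Diagnosed by screening ({screen_detects}): lived {survival_with_screening} more years")
# The "5-year survival" statistic improves, but the person still dies at 80.
```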

Randomized trials are needed to establish that in fact early detection and intervention programs do more good than harm - some do, some don't.

More Statistically Funny on screening - "You have the right to remain anxious" and on over-diagnosis: here and here.

Here's a fact sheet about what you need to know about screening tests. And here's a little more technical primer of the 3 biases explained here.

Sunday, November 17, 2013

Does it work? Beware of the too-simple answer

Leonard is so lucky! He's just asked a very complicated question and he's not getting an over-confident and misleading answer. Granted, he was likely hoping for an easier one! But let's dive into it.

"Does": that auxiliary verb packs a punch. How do we know whether something does or doesn't work?   It would be great if that were simple, but unfortunately it's not.

I talk a lot here at Statistically Funny about the need for trials and systematic reviews of them to help us find the answers to these questions. But whether we're talking about trials or other forms of research, statistical techniques are needed to help make sense of what emerges from a study.

Too often, this aspect of research is going to lead us down a garden path. It's common for people to take the approach of relying only, or largely, on a statistical significance test of the null hypothesis: the assumption that there is no difference. So if a result is within the range that could occur by chance alone, the assumption of the null hypothesis stands. But if it's not within that range, it's "statistically significant."

However, a statistically significant result - especially from a single study - is often misunderstood and contributes to over-confidence about what we know. It's not a magical wand that finds out the truth. I wrote about testing for statistical significance in some detail at Absolutely Maybe. Leonard's statistician is a Bayesian: you can find out some more about that, too, in my post.

As chance would have it, there was also a lot of discussion this week in response to a paper published while I was writing that post. It called for a tightening of the threshold for significance, which isn't really the answer either. Thomas Lumley puts that into great perspective over at his wonderful blog, Biased and Inefficient: a very valuable read.

"It": now this part should be easy, right? Actually, this can be particularly tricky. The treatment you could be using may not be very much like the one that was studied. Even if it's a prescription drug, the dose or regimen you're facing might not be the same as the one used in studies. Or it might be used in conjunction with another intervention that could affect how it works.

Then there's the question of whether "it" is even what it says it is. Unlike prescription drugs, the contents of herbal remedies and dietary supplements aren't closely regulated to ensure that what it says on the label is what's inside. That was also recently in the news, and covered in detail here by Emily Willingham.

If it's a non-drug intervention, it's actually highly likely that the articles and other reports of the research don't ever make clear exactly what "it" is. Paul Glasziou had a brainwave about this: he's started HANDI, the Handbook of Non-Drug Interventions. When a systematic review shows that something works, the HANDI team wants to dig out all the details and make sure we all know exactly what "it" is.

For example, if you heard that drinking water before meals can help you lose weight, and you want to try it, HANDI helpfully points out what that actually means is drinking half a liter of water before every meal AND having a low-calorie diet. HANDI is new, so there aren't many "it"s explained. But you can see them here.

"Work": this one really needs to get specific. As I point out in the slides from a talk I gave this month, you really need to be thinking about each possible outcome separately - and thinking about the possible adverse effects too. There can be complicated trade-offs between effects, and the quality of the evidence is going to vary for each of them.

Think of it this way: if you do a survey with 150 questions in it, there are going to be more answers to some of the questions than others. For example, if you had 400 survey respondents, they might all have answered the first easy question and there could be virtually no answers to a hard question near the end. So thinking "a survey of 400 people found…" an answer to that later question is going to be seriously misleading.

Then there's the question of how much it works for that particular outcome. Does a sliver of a benefit count to you as "working"? That might be enough for the person answering your question, but it might not be enough for it to count for you - especially if there are risks, costs or inconvenience involved.

And there's the question of who it worked for in the research. Whether or not research results apply to a person in your situation can be straightforward, but it might not be.

And how high did researchers set the bar? Did the treatment effect have to be superior to doing nothing, or doing something else - or is the information coming from comparing it to something else that itself may not be all that effective? You might think that can't possibly happen, but it does more often than you might think. You can find out about this here at Statistically Funny, where I tackle the issue of drugs that are "no worse (more or less)." 

Finally, one of the most common trip-ups of all: did they really measure the outcome, or a proxy for it? If it's a proxy for the real thing, how good is it? The use of surrogate measures or biomarkers is increasing fast: you can learn more about why this can lead to an unreliable answer here.

So while there are many who might have told Leonard, "Yes, it's been proven to work in clinical trials" in a few seconds flat, I wonder how long it would take his statistician to answer the question? There are no stupid questions, but beware of the too-simple answer.

Monday, September 16, 2013

More than one kind of self-control

If you like reading randomized trials about skin and oral health treatments - and who doesn't? - you come across a few split-face and split-mouth ones. Instead of randomizing groups of people to different interventions so that a group of people can be a control group (parallel trials), sections of a person are randomized.

It's not only done with faces and teeth. Pairs of body parts can be randomized too, like arms or legs. These studies are sometimes called "within-person" trials. This kind of randomization means that you need fewer people in the trial, because you don't have to account for all the variations between human beings.

It has to be a treatment that affects only the specific area of the body treated, though. Anything that could have an influence on the "control" part is called a spill-over effect. There are still inevitably things that happen that affect the whole person, and those have to be accounted for with this kind of trial. Body part randomization is one of several ways a person can be their own control: the n of 1 trial is another way.
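As a toy illustration of the design (the names are borrowed from the cartoons; everything else is invented), randomizing within a person can be as simple as flipping a coin for each pair of body parts:

```python
# A tiny, hypothetical sketch of split-face ("within-person") randomization:
# each participant's left/right side is randomly assigned to treatment or control.

import random

random.seed(42)  # fixed seed so the illustration is reproducible

participants = ["Gertrud", "Leonard", "Cecil"]  # names borrowed from the cartoons
for person in participants:
    treated_side = random.choice(["left", "right"])
    control_side = "right" if treated_side == "left" else "left"
    print(f"{person}: treat the {treated_side} side, the {control_side} side is the control")
```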

Randomizing sections didn't start in trials with people: it began with split-plot experiments in agricultural research. The idea was developed by the pioneer statistician, Sir Ronald Aylmer Fisher, who had done breeding experiments. He explained the technique in his classic 1925 text, "Statistical Methods for Research Workers."

It's great to see that neither blackheads nor treatment effects are hampering the Twilling sisters' style! They do seem to be at risk of susceptibility to the skincare industry's hard sells, though. Those issues are the subject of my post Blemish: The Truth About Blackheads.

Sunday, July 28, 2013

Alleged effects include howling

When dogs howl at night, it's not the full moon that sets them off. Dogs are communicating for all sorts of reasons. We're just not all that good at understanding what they're saying.

We make so many mistakes in attributing cause and effect, for so many reasons, that it's almost surprising we get it right as often as we do. But all those mistaken beliefs we realize we have don't seem to teach us a lesson. Pretty soon after catching ourselves out, we're at it again, taking mental shortcuts, being cognitive misers.

It's so pervasive, you would think we would know this about ourselves, at least, even if we don't understand dogs. Yet we commonly under-estimate how much bias is affecting our beliefs. That's been dubbed the bias blind spot that we (allegedly) tend to live in.

Even taking all that into account, "effect" is an astonishingly over-used word, especially in research and science communication where you would hope people would be more careful. The maxim that correlation (happening at the same time) does not necessarily mean causation has spread far and wide, becoming something of a cliche along the way.

But does that mean that people are as careful with the use of the word "effect" as they are with the use of the "cause" word? Unfortunately not.

Take this common one: "Side effects include...." Well, actually, don't be so fast to swallow that one. Sometimes, genuine adverse effects will follow that phrase. But more often, the catalogue that follows is not adverse effects, but a list of adverse events - things that happened (or were reported). Some of them may be causally related, some might not be.

You have to look carefully at claims of benefits and harms. Even researchers who aren't particularly biased will word it carelessly. You will often hear that 14% experienced nausea, say - without it being pointed out that 13% of people on placebos also experienced nausea, and the difference wasn't statistically significant. Some adverse effects are well known, and it doesn't matter (diarrhea and antibiotics, say). That's not always so, though - a complex subject I'll get to on a future Statistically Funny, so watch this space.
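Here's a tiny sketch of why that matters, using the nausea percentages from the paragraph above. It's the arithmetic only - it says nothing about statistical significance:

```python
# "Adverse events" reported on a drug aren't all "adverse effects" of it.
# The percentages come from the example above; everything else is illustrative.

drug_nausea_rate = 0.14     # 14% of people on the drug reported nausea
placebo_nausea_rate = 0.13  # 13% of people on placebo reported it too

excess_risk = drug_nausea_rate - placebo_nausea_rate
print(f"Nausea attributable to the drug (at most): {excess_risk:.0%}")
print(f"Nausea likely to have been reported anyway: {placebo_nausea_rate:.0%}")
# Listing "nausea: 14%" as a side effect blurs events with effects.
```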

If the word "effect" is over-used, the word "hypothesis" is under-used. Although generating hypotheses is a critical part of science, hypotheses aren't really marketed as what they are: ideas in need of testing. Often the language is that of attribution throughout, with a little fig-leaf of a sentence tacked on about the need for confirmatory studies. In fact, we cannot take replication and confirmation for granted at all.

Occasionally, the word "effect" is used to name a literal "hypothesis." That happened with "the Hawthorne effect." You can read more about that in my post, The Hawthorne effect: An old scientists' tale lingering "in the gunsmoke of academic snipers".