Clinical trials are complicated enough when everything goes pretty much as expected. When it doesn't, the dilemma of continuing or stopping can be excruciatingly difficult. Some of the greatest dramas in clinical research play out behind the scenes over exactly this. Even who gets to call the shots can be bitterly disputed.
A trial starts with a plan for how many people have to be recruited to get an answer to the study's questions. This is calculated based on what's known about the chances of benefits and harms, and how to measure them.
Often a lot is known about all of this. Take a trial of antibiotics, for example. How many people will end up with gastrointestinal upsets is fairly predictable. But often the picture is so sketchy it's not much more than a stab in the dark.
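The arithmetic behind that recruitment plan is usually a standard power calculation. Here's a minimal sketch of the textbook formula for comparing two means, assuming two-sided alpha of 0.05 and 80% power - the difference and standard deviation plugged in are invented for illustration:

```python
from math import ceil

def n_per_group(delta, sd, alpha_z=1.96, power_z=0.84):
    """Rough sample size per group for comparing two means.

    delta: smallest difference worth detecting
    sd: standard deviation of the outcome
    1.96 and 0.84 are the z-values for two-sided alpha 0.05 and 80% power.
    """
    return ceil(2 * (alpha_z + power_z) ** 2 * sd ** 2 / delta ** 2)

# Hypothetical example: detect a 5-point difference when the outcome's
# standard deviation is 10
print(n_per_group(delta=5, sd=10))  # -> 63 per group
```

The catch the post describes is right there in the inputs: if the guesses for `delta` and `sd` are a stab in the dark, the recruitment target is too.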
Not being sure of the answers to the study's questions is an ethical prerequisite for doing clinical trials. That's called equipoise. The term was coined by lawyer Charles Fried in his 1974 book, Medical Experimentation. He argued that the investigator should be genuinely uncertain about which option is better before randomizing people. In 1987, Benjamin Freedman argued the case for clinical equipoise: that we need professional uncertainty, not necessarily individual uncertainty.
It's hard enough to agree if there's uncertainty at any time! But the ground can shift gradually, or even dramatically, while a trial is chugging along.
I think it's helpful to think of this in two ways: a shift in knowledge caused by experience within the trial itself, and shifts driven by external developments.
Internal issues that can put the continuation of the trial in question include:
- Not being able to recruit enough people to participate (by far the most common reason);
- Harm that is more serious and/or frequent than expected tips the balance;
- Benefits much greater than expected;
- The trial turns out to be futile: the difference in outcomes between groups is so small that even if the trial runs its course, we'll be none the wiser (PDF).
External developments that throw things up in the air or put the cat among the pigeons include:
- A new study or other data about benefits or safety - especially if it's from another similar trial;
- Pressure from groups who don't believe the trial is justified or ethical;
- Commercial reasons - a manufacturer is pulling the plug on developing the product it's trialing, or just can't afford the trial's upkeep;
- Opportunity costs for public research sponsors have been argued as a reason to pull the plug for possible futility, too.
Trials that involve the risk of harm to participants should have a plan for monitoring the progress of the trial without jeopardizing the trial's integrity. Blinding or masking the people assessing outcomes and running the trial is a key part of trial methodology (more about that here). Messing with that, or dipping into the data often, could end up leading everyone astray. Establishing stopping rules before the trial begins is the safeguard used against that - along with a committee of people other than the trial's investigators monitoring interim results.
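Why does "dipping into the data often" lead everyone astray? Each extra look is another chance for random noise to cross the significance threshold. A toy simulation can show the effect - this is my own illustrative sketch, with made-up numbers, not any committee's actual stopping rule:

```python
import random

random.seed(42)

def z_at_looks(n_per_look, looks):
    """Cumulative z-statistics for a two-group comparison with NO true effect."""
    diff_sum = 0.0
    n = 0
    zs = []
    for _ in range(looks):
        for _ in range(n_per_look):
            # difference between one person per group, outcome ~ N(0, 1)
            diff_sum += random.gauss(0, 1) - random.gauss(0, 1)
            n += 1
        # the summed differences have variance 2*n, so normalize to a z-score
        zs.append(diff_sum / (2 * n) ** 0.5)
    return zs

def false_stop_rate(threshold, sims=2000, looks=5, n_per_look=20):
    """How often at least one interim look (wrongly) crosses the threshold."""
    stops = 0
    for _ in range(sims):
        if any(abs(z) > threshold for z in z_at_looks(n_per_look, looks)):
            stops += 1
    return stops / sims

naive = false_stop_rate(1.96)   # test at |z| > 1.96 (p < 0.05) at every look
strict = false_stop_rate(2.80)  # a stricter boundary at each interim look
print(f"naive repeated testing: {naive:.2f}")  # well above the nominal 0.05
print(f"stricter boundary:      {strict:.2f}")
```

Testing at the usual 0.05 level at five looks pushes the false-alarm rate far above 5%, which is why pre-specified stopping boundaries demand much more extreme interim results before calling a halt.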
Although they're called stopping "rules", they're actually more guideline than rule. And other than having it done independently of the investigators, there is no one widely agreed way to do it - including the role of the sponsors and their access to interim data.
Some methods focus on choosing a one-size-fits-all threshold for the data in the study, while others are more Bayesian - taking external data into account. There is a detailed look at this in a 2005 systematic review of trial data monitoring processes by Adrian Grant and colleagues for the UK's National Institute for Health Research (NIHR). They concluded there is no strong evidence that the data should stay blinded for the data monitoring committee.
A 2006 analysis of HIV/AIDS trials stopped early because of harm found that only 1 out of 10 had established a rule for this before the trial began, but it's more common these days. A 2010 review of trials stopped early because the benefits were greater than expected found that 70% mentioned a data monitoring committee (DMC). (These can also be called data and safety monitoring boards (DSMBs) or data monitoring and ethics committees (DMECs).)
Despite my cartoon of data monitoring police, DMCs are only advisors to the people running the trial. They're not responsible for the interpretation of a trial's results, and what they do generally remains confidential. Who other than the DMC gets to see interim data, and when, is a debate that can get very heated.
Clinical trials only started to become common in the 1970s. Richard Stephens writes that it was only in the 1980s, though, that keeping trial results confidential while the trial is underway became the expected practice. In some circumstances, Stephens and his colleagues argue, publicly releasing interim results while the trial is still going on can be a good idea. They point to examples where the release of interim results saved trials that would otherwise have foundered for lack of recruitment, because clinicians didn't believe the trial was necessary.
One approach when there's not enough knowledge to make reliable trial design decisions is a type of trial called an adaptive trial. It's designed to run in steps, based on what's learned. About 1 in 4 might adapt the trial in some way (PDF). It's relatively early days for those.
In the end, no matter which processes are used, weighing up the interests of the people in the trial, with the interests of everyone else in the future who could benefit from more data, will be irreducibly tough. Steven Goodman writes that we need more people with enough understanding and experience of the statistics and dilemmas involved in data monitoring committees.
We also need to know more about when and how to bring people participating in the trial into the loop - including having community representation on DMCs. Informing participants more fully at key points would mean some will leave. But most might stay, as they did in the Women's Health Initiative hormone therapy trials (PDF) and one of the AZT trials in the earlier years of the HIV epidemic.
There is one clearcut issue here. And that's the need to release the results of any trial when it's over, regardless of how or why it ended. That's a clear ethical obligation to the people who participated in the trial - the desire to advance knowledge and help others is one of the reasons many people agree to participate. (More on this at the All Trials campaign.)
More at Absolutely Maybe: The Mess That Trials Stopped Early Can Leave Behind
Trial acronyms: If someone really did try to make an artificial gallbladder - not to mention actually start a trial on it! - I think lots of us would be pretty aghast! But a lot of us are pretty aghast about the mania for trial acronyms too. More on that here at Statistically Funny.