Saturday, January 26, 2013

Newsflash: Honking causes cancer



In The Emperor of All Maladies, author Siddhartha Mukherjee describes a type of cancer as "terrifying to experience, terrifying to observe and terrifying to treat."

Somehow, though, in our efforts to stem the tide of the disease and our dread of it, we can end up making things worse for many people. The shadow of cancer angst is spreading much further than it needs to go.

We're struggling, as a culture, with the consequences of the over- and mis-use of associations from epidemiological data about cancer risks. The imposition of risk awareness has been called a form of cultural imperialism. Cancer awareness-raising continues relentlessly, though - even in cases where a community's problem has become over-estimation of risk, not a lack of awareness.

This week, Jeff Niederdeppe and I will be co-moderating a discussion for science writers and researchers on these issues in the Covering cancer causes, prevention and screening session at Science Online. Come along, or follow/share thoughts and resources at the Scio13 wiki or via Twitter: #Scio13  #SciCancer

Want to increase your skills at picking out the important signals from all the noise? There's a collection of (free) important books and articles at PubMed Health that could help.

Saturday, January 19, 2013

Fright night in the doctors' lounge


It doesn't come as a terrible shock to hear that a lot of patients struggle with statistics. It's a little more scary, though, to be reminded that doctors' understanding of health statistics and data on screening isn't all that fabulous either. And now this month we hear that "a considerable proportion of researchers" don't understand routinely used statistical terms in systematic reviews. Gulp.

We've probably only been scratching the surface of what can be done to improve this. A recent small trial found that hyperlinking explanations to statistical and methodological terms in journal articles could improve physicians' understanding.

Statistical literacy needs a combination of literacy, mathematical, and critical skills (PDF). In communication, numbers will always be tangled up with words (and sometimes words are better, as I discuss here).

Journalists are key to helping turn this problem around. They probably aren't getting the training they need, according to this study from 2010 - but that might be improving...slowly. Thankfully, Frank Swain from the Royal Statistical Society reports encouragingly on journalists' desire to learn more about statistics in the era of data journalism.

Want to learn more about basic statistics in health studies? Trials show that reading this book, Know Your Chances, could help.

And if you're wondering about how your own mathematics competency is faring since you left school, here's an online test. Mind you, it would help a lot if we had a clearer way of communicating numbers. The confusion over what means mean is a good case in point, covered here at Statistically Funny.


Another study on doctors' understanding and communication of data on the potential benefits and harms of treatment - published in August 2016.

This post was updated on 30 January 2016: the original shorter post was written when Evelyn Lamb and I were co-moderating a session at Science Online.

Additional study added on 3 September 2016.

Saturday, December 8, 2012

The diagnosing disorders epidemic



It all started with a single category in the 1840 US Census: "idiocy/lunacy". The first DSM (Diagnostic and Statistical Manual of Mental Disorders) appeared in 1952. Now there are hundreds of ways for us not to be 'normal'. Allen Frances, the psychiatrist who led the DSM-IV, has written a scathing critique of the DSM-5, which has even more diagnoses: meet Disruptive Mood Dysregulation Disorder, folks (formerly known as temper tantrums).

Read more about psychiatric over-diagnosis in my guest blog at Scientific American: "Is anybody sane here?" said the psychiatrist to the journalist

Find out more about tackling this problem in medicine generally at Preventing Overdiagnosis: Winding back the harms of too much medicine



[Update] In 2016, another look at the history of the DSM, with another call for the next one to be based on an objective assessment of reliable evidence.

Friday, November 30, 2012

The one about the ship-wrecked epidemiologists



Just what the world needs.... another inadequately discoverable journal! The number of medical journals is doubling every 20 years - and trials are scattered across so many that it is becoming ever harder to track them down. Just how many journals do you need to read? blogs Paul Glasziou. Richard Smith and Ian Roberts argue that trials shouldn't even be published in journals any more.

And in case you were wondering what an n-of-1 trial is: it's a trial with one person in it (n = 1). The patient is their own control in a structured experiment. For example, an n-of-1 trial of a particular drug would mean taking it for a pre-specified time, stopping for a pre-specified time, and so on. You can read more about this kind of trial here. (Or you could ponder how a trial of "n of 1"s had to be terminated because of lack of enrollment - and I thought I had a tough week!)
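
If it helps to see the structure as code, here's a minimal Python sketch of what the data from one hypothetical n-of-1 trial might look like. The on/off schedule and every symptom score are invented purely to show the shape of the experiment:

```python
# A minimal sketch of a hypothetical n-of-1 trial: one patient, alternating
# pre-specified periods ON and OFF a drug. All numbers are invented.
import statistics

# One symptom score (0 = none, 10 = worst) recorded per period.
periods = [
    ("on", 3), ("off", 6),
    ("on", 2), ("off", 7),
    ("on", 4), ("off", 5),
]

on_scores = [score for phase, score in periods if phase == "on"]
off_scores = [score for phase, score in periods if phase == "off"]

print("Mean symptom score on the drug: ", statistics.mean(on_scores))
print("Mean symptom score off the drug:", statistics.mean(off_scores))
print("Difference (off - on):", statistics.mean(off_scores) - statistics.mean(on_scores))
```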


[Update 12 April 2016] Salima Punja studied meta-analysis of n-of-1 trials for her doctoral dissertation (here). Together with colleagues, she's incorporated n-of-1 trials alongside RCTs in a meta-analysis and concludes that doing so improved the result. That study is here.

[Update 19 August 2016] Are n-of-1 trials going to boom in the age of "personalized medicine"? These authors address whether n-of-1 trials are research that needs ethics approval. They lean towards yes, they're research - but no, they don't need to go to an ethics committee.

[Update 24 April 2018] Chalachew Alemayehu and colleagues hunted for n-of-1 trials reported in journals and found 131 of them, 6 of which were in developing countries. Their systematic review is here.


Tuesday, October 23, 2012

A dip in the data pool



Sometimes, people combine data that really don't belong together - conflict all over the place!

The statistical test shown by the I² tries to pin down how much conflict there is in a meta-analysis. (A meta-analysis pools multiple data sets. Quick intro about meta-analysis here.)

I² is one way to measure "combinability": another is the chi-squared test (χ² or Chi²).

You will often see the I² in the forest plot. It is one way of measuring how much inconsistency there is in the results of different sets of data. That's called heterogeneity. The test is gauging if there is more difference between the results of the studies than you would expect just because of chance.

Here's a (very!) rough guide to interpreting the I² result: 0 - 40% might be ok, 75% or more is "considerable" (that is, an awful lot!). (That's from section 9.5.2 here.)
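
If you're curious where that percentage comes from, here's a rough Python sketch of the usual calculation: Cochran's Q from inverse-variance weights, then I² = (Q - df)/Q, floored at zero. All the study results below are invented for illustration:

```python
# A rough sketch of calculating I² from a handful of study results.
# Effect estimates and standard errors are invented for illustration.
effects = [0.20, 0.35, -0.05, 0.50]   # e.g. log odds ratios from 4 studies
std_errors = [0.10, 0.15, 0.12, 0.20]

weights = [1 / se**2 for se in std_errors]                    # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled result
Q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1                                         # degrees of freedom

I_squared = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"Q = {Q:.2f}, I² = {I_squared:.0f}%")
```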


Differences might be responsible for contradictory results - including differences in the people in the trials, the way they were treated, or the way the trials were done. Too much heterogeneity, and the trials really shouldn't be together. But heterogeneity isn't always a deal breaker. Sometimes it can be explained.

Want some in-depth reading about heterogeneity in systematic reviews? Here's an article by Paul Glasziou and Sharon Sanders from Statistics in Medicine [PDF].

Or would you rather see another cartoon about heterogeneity? Then check out the secret life of trials.

See also my post at Absolutely Maybe: 5 tips to understanding data in meta-analysis.

(Some of these characters also appear here.)

[Updated 4 July 2017.]


Thursday, October 18, 2012

You have the right to remain anxious....


"It's extremely hard not to have a diagnosis," according to Steve Woloshin, this week at the 2012 NIH Medicine in the Media course for journalists. Allen Frances talked about over-diagnosis of mental disorders (read more about that in my blog at Scientific American online).

The National Cancer Institute's Barry Kramer tackled the issue of over-diagnosis from cancer screening. He explained lead-time bias using an image of Snidely Whiplash tying someone to train tracks. Ineffective screening, he said, is like a pair of binoculars for the person tied to the tracks: you can see the train coming at you sooner, but it doesn't change the moment of impact.

Survival rates after a screening diagnosis increase even when no one lives a day longer: people simply live with the diagnosis for longer when it comes long before any symptoms. Screening is effective, on the other hand, when earlier detection means more people do well than would have done if they'd only gone to the doctor once there were symptoms.
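
Here's a toy example of that lead-time effect, with numbers made up purely to show the arithmetic:

```python
# A toy illustration of lead-time bias (all numbers invented).
# Someone's cancer would be found from symptoms at age 63, and they die at 66.
# Screening finds the same cancer at age 59 - but death still comes at 66.
age_at_symptomatic_diagnosis = 63
age_at_screening_diagnosis = 59
age_at_death = 66

survival_without_screening = age_at_death - age_at_symptomatic_diagnosis  # 3 years
survival_with_screening = age_at_death - age_at_screening_diagnosis       # 7 years

# "Survival after diagnosis" more than doubles, yet the person doesn't live
# a single day longer: the extra 4 years are lead time, not benefit.
print(survival_without_screening, survival_with_screening)
```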


Read more in The Disease Prevention Illusion: A Tragedy in Five Parts




Monday, October 15, 2012

Breaking news: space-jumping safety study



Making a good impression with headlines based on tiny preliminary studies? Too easy!

Other ways to fall into the trap of exaggerated research findings: reports of laboratory or animal studies that don't mention their limitations, studies with no comparison group, and conference presentations with inadequate data reports. These were some of the key points made by Steve Woloshin at the first full day of NIH's Medicine in the Media course, happening now in Potomac near Washington DC.

Read more here about the pitfalls of small study size and how to know whether a study was big enough to be meaningful.
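
For a feel of why "big enough" is rarely a handful of people, here's a rough Python sketch of one standard sample-size formula for comparing two proportions (the normal approximation). Every assumed rate here is invented:

```python
# A rough sketch of a standard sample-size formula for comparing two
# proportions (normal approximation); all assumed rates are invented.
from math import sqrt, ceil

alpha_z = 1.96   # two-sided 5% significance
power_z = 0.84   # 80% power

p_control = 0.30       # assumed event rate without treatment
p_treatment = 0.20     # event rate the trial should be able to detect
p_average = (p_control + p_treatment) / 2

n_per_group = ((alpha_z * sqrt(2 * p_average * (1 - p_average)) +
                power_z * sqrt(p_control * (1 - p_control) +
                               p_treatment * (1 - p_treatment)))**2
               / (p_control - p_treatment)**2)

print(ceil(n_per_group), "people per group")  # roughly 290-300 per group
```

Even a fairly chunky difference like that needs hundreds of people per group - which is why a headline built on a study of 20 people deserves a sceptical eye.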

Update 31 July 2016: And now there's jumping from a plane without a parachute.

Friday, October 12, 2012

The Forest Plot Trilogy - a gripping thriller concludes



Forest plots, funnel plots - and what's with the mysterious diamond symbol, lurking like a secret sign, in meta-analyses? Meta-analysis is a statistical technique for combining the results of studies. It is often used in systematic reviews (and in non-systematic reviews, too).

A forest plot is a graphical way of presenting the results of each individual study and the combined result. The diamond is one way of showing that combined result. Here's a representation of a forest plot, with 4 trials (a line for each). The 4th trial finds the treatment better than what it's compared to; the other 3 had equivocal results because their lines cross the vertical line of no effect.
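
If you'd like to tinker with one yourself, here's a rough matplotlib sketch of a forest plot like that - 4 invented trials, with only the 4th one's confidence interval clear of the line of no effect:

```python
# A minimal matplotlib sketch of a forest plot with 4 invented trials.
# Each trial gets a point estimate with a horizontal confidence interval;
# a vertical line marks "no effect" (a risk ratio of 1).
import matplotlib.pyplot as plt

trials = ["Trial 1", "Trial 2", "Trial 3", "Trial 4"]
estimates = [0.95, 1.05, 0.90, 0.70]        # invented risk ratios
lower =     [0.70, 0.80, 0.65, 0.55]        # invented 95% CI lower bounds
upper =     [1.30, 1.40, 1.25, 0.90]        # invented 95% CI upper bounds

fig, ax = plt.subplots()
y_positions = range(len(trials), 0, -1)     # first trial at the top

for y, est, lo, hi in zip(y_positions, estimates, lower, upper):
    ax.plot([lo, hi], [y, y], color="black")        # confidence interval
    ax.plot(est, y, "s", color="black")             # point estimate

ax.axvline(1.0, linestyle="--", color="grey")       # line of no effect
ax.set_yticks(list(y_positions))
ax.set_yticklabels(trials)
ax.set_xlabel("Risk ratio (real forest plots often use a log scale)")
plt.show()
```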



A funnel plot is one way of exploring for publication bias: whether or not there may be unpublished studies. Funnel plots can look kind of like the sketches below. The first shows a pretty normal distribution of studies - each blob is a study. It's roughly symmetrical: small under-powered studies spread around, with both positive and negative results.



This second one is asymmetrical or lopsided, suggesting there might be some studies that didn't show the treatment works - but they weren't published:


        Gaping hole where negative studies should be
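
And here's a similarly rough matplotlib sketch of that lopsided picture - each invented dot is a study, and the small studies all sit to one side:

```python
# A quick matplotlib sketch of an asymmetrical funnel plot (invented data):
# each dot is a study; small studies with unflattering results are missing.
import matplotlib.pyplot as plt

effects =   [0.10, 0.25, 0.40, 0.30, 0.55, 0.70, 0.85]   # invented effect sizes
precision = [10.0,  9.0,  8.0,  5.0,  4.0,  2.5,  1.5]   # e.g. 1 / standard error

fig, ax = plt.subplots()
ax.scatter(effects, precision)
ax.axvline(0.3, linestyle="--", color="grey")   # roughly where the pooled result sits
ax.set_xlabel("Effect size")
ax.set_ylabel("Precision (larger studies higher up)")
ax.set_title("Small studies cluster on one side: a possible sign of publication bias")
plt.show()
```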



(This post uses snapshots from slides I'll be using to explain systematic reviews at the 2012 NIH Medicine in the Media course that's starting this weekend. It's several days of in-depth training in evidence and statistics for journalists. This year it's being held at Potomac, just near Washington. And here's a post on the start of the course that I wrote for Scientific American online.)



Saturday, August 11, 2012

The non-statistical significance of the anecdote


























Compelling anecdotes - "It saved my life!" - can drive us so wildly astray. Rigorous research is the antidote, but it often doesn't feel like it has an even chance! Especially when it comes to screening and "preventive" medicine (conventional and complementary).

A wonderful book by Margaret McCartney is a great example of what we need so much: a combination of beautiful storytelling with reliable research. It charts the paths that lead to health care that does more harm than good - over-treating the (well-off) worried well while the (less well-to-do) sick wait. This is The Patient Paradox, where "clinics and waiting rooms are jammed with healthy people" but there's not enough care for the sick.

Margaret blogs here and tweets here.

Friday, August 3, 2012

Drugs go head-to-head at the Pharma Olympics


At the Olympics, humans try to go "faster, higher, stronger" - and achieve their personal best. The bar is constantly raised. Drugs don't have to be better to cross the line, though: they can get by on what's called non-inferiority or equivalence trials. "No worse" (more or less) can be good enough.

Some drugs are now only loosely, possibly, non-inferior to other non-inferior drugs - several degrees removed from having been proven superior to doing nothing. Add the increasing reliance on shortcut measures of what works, and there's a real worry that the performance bar for drugs is being lowered.
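
The basic logic is a pre-specified margin of "how much worse is still acceptable". Here's a rough Python sketch of that check with invented numbers - not from any real trial:

```python
# A rough sketch of the logic of a non-inferiority comparison (invented numbers).
# The new drug is "non-inferior" if the confidence interval for the difference
# in success rates stays above a pre-specified non-inferiority margin.
from math import sqrt

margin = -0.10            # pre-specified: up to 10 percentage points worse is tolerated
n_new, success_new = 400, 280        # invented results for the new drug
n_old, success_old = 400, 288        # invented results for the comparator

p_new, p_old = success_new / n_new, success_old / n_old
diff = p_new - p_old
se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Difference: {diff:.3f}  (95% CI {ci_low:.3f} to {ci_high:.3f})")
print("Non-inferior" if ci_low > margin else "Non-inferiority not shown")
```

Note that the margin itself is a judgement call made before the trial starts - and a generous margin is one way that performance bar gets lowered.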

If you want to read about the differences between traditional randomized controlled trials that can show superiority and their non-inferiority and equivalence cousins, click on the PDF here at the CONSORT website.