Wednesday, January 30, 2013

Data Bingo! Oh no!



Oh boy - look what a data hunter has dragged in this time! Why is this problem so common? And who on earth is Bonferroni?

Our friend here found one "statistically significant" result when he looked at goodness knows how many differences between groups of people. He's fallen totally for a statistical illusion that's a hazard of 'multiple testing'. And a lot of headline writers and readers will fall for it, too.

Then he's made it worse by taking his unproven hypothesis (that a particular drink on a particular day in a particular group of people prevented stroke) and whacking on another unproven hypothesis (that if everyone else drinks lots of it, benefits will ensue). But it's the problem of multiple testing (also called multiplicity) where Bonferroni comes in.

It's pretty much inevitable that multiple testing will churn out some totally random, unreliable answers - something the Italian mathematician Carlo Bonferroni (1892-1960) figured out how to account for.

 A "statistically significant" difference between groups of people means that more than 95 times out of a 100, roughly the same difference is likely to be found in similar sets of data. That's a high probability. Or put another way, it's less than a 5/100 or 5% probability of being a data anomaly (a "p" value of less than 0.05). 

If you test for multiple possibilities, you need to expect some of your "statistically significant" findings to be nothing but chance: when there's really no effect at all, about 1 test in 20 will still cross that threshold. If you test only a few things, your chances of this kind of random error are fairly low.

But especially if you have a big dataset, the more things you look at, the higher the chance is that you'll drag total nonsense out. With high-powered computers crunching big data, this becomes a big problem - large numbers of spurious findings that can't be replicated.
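To put a hedged number on that: assuming the tests are independent and nothing real is going on, the chance of getting at least one spurious "significant" result is 1 - 0.95^m for m tests - and it climbs quickly. A quick sketch of the arithmetic:

```python
# Sketch: the chance of at least one false positive grows with the number of tests,
# assuming independent tests and a 0.05 significance threshold.
for m in (1, 5, 20, 100):
    p_at_least_one = 1 - 0.95 ** m
    print(f"{m:>3} tests -> {p_at_least_one:.0%} chance of at least one fluke")
# 1 test -> 5%, 5 tests -> 23%, 20 tests -> 64%, 100 tests -> 99%
```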

Bonferroni's name graces some statistical tests used to interpret results when doing multiple tests. There are others. Some are concerned that techniques based on Bonferroni are too conservative - too likely to throw the baby out with the bathwater, if you like. So they use tests that have a different basis, such as the False Discovery Rate (FDR).
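Here's a minimal sketch of the difference between the two ideas, using made-up p-values (none of this comes from a real study): the Bonferroni approach shrinks the threshold to control the chance of *any* false positive, while the Benjamini-Hochberg (FDR) approach controls the expected *proportion* of false discoveries, so it usually lets a few more results through.

```python
# Sketch: Bonferroni vs Benjamini-Hochberg (FDR) on some made-up p-values.
import numpy as np

p_values = np.array([0.001, 0.008, 0.020, 0.041, 0.150, 0.320])  # illustrative only
alpha = 0.05
m = len(p_values)

# Bonferroni: compare each p-value to alpha / m (here 0.05 / 6).
bonferroni_keep = p_values < alpha / m

# Benjamini-Hochberg: sort the p-values, find the largest rank k with
# p_(k) <= (k/m) * alpha, and keep everything up to that rank.
order = np.argsort(p_values)
ranked = p_values[order]
thresholds = (np.arange(1, m + 1) / m) * alpha
below = np.nonzero(ranked <= thresholds)[0]
bh_keep = np.zeros(m, dtype=bool)
if below.size:
    bh_keep[order[: below[-1] + 1]] = True

print("Bonferroni keeps:", p_values[bonferroni_keep])  # only the very smallest p-values
print("BH (FDR) keeps:  ", p_values[bh_keep])          # usually a few more
```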

Statistical tests can't totally eliminate the chance of random error, though. So you usually need more than just a single possibly random test result to be sure about something.

If you're interested in how to communicate statistics accurately and well, check out Session 2G at Science Online this week: Evelyn Lamb and I are co-moderating. Follow on Twitter with #PublicStats (#Scio13).

Getting more technical...

What about multiplicity issues in systematic reviews? As the Cochrane Handbook (section 16.7.2) points out, systematic reviews concentrate on estimating pre-specified effects - not searching for possible effects. Safeguards still matter, though. Even pre-specified analyses need to be kept to a minimum. And how many analyses were done needs to be kept in mind when interpreting results.

If you would like to read more technical information about multiple testing, here are some free slides from the University of Washington. And if you want to read more about the controversies and issues, here's a primer in Nature and an article in the Journal of Clinical Epidemiology (behind paywalls).


Saturday, January 26, 2013

Newsflash: Honking causes cancer



In The Emperor of all Maladies, author Siddhartha Mukherjee describes a type of cancer as "terrifying to experience, terrifying to observe and terrifying to treat."

Somehow, though, in our efforts to stem the tide of the disease and our dread of it, we can end up making things worse for many people. The shadow of cancer angst is spreading much further than it needs to go.

We're struggling, as a culture, with the consequences of the over- and mis-use of associations from epidemiological data about cancer risks. The imposition of risk awareness has been called a form of cultural imperialism. Cancer awareness-raising continues relentlessly, though - even in cases where a community's problem has become over-estimation of risk, not a lack of awareness.

This week, Jeff Niederdeppe and I will be co-moderating a discussion for science writers and researchers on these issues in the Covering cancer causes, prevention and screening session at Science Online. Come along, or follow/share thoughts and resources at the Scio13 wiki or via Twitter: #Scio13  #SciCancer

Want to increase your skills at picking out the important signals from all the noise? There's a collection of (free) important books and articles at PubMed Health that could help.

Saturday, January 19, 2013

Fright night in the doctors' lounge


It doesn't come as a terrible shock to hear that a lot of patients struggle with statistics. It's a little more scary, though, to be reminded that doctors' understanding of health statistics and data on screening isn't all that fabulous either. And now this month we hear that "a considerable proportion of researchers" don't understand routinely used statistical terms in systematic reviews. Gulp.

We definitely need bloggers and journalists to help turn this around. Improving the use of numbers - and critiquing misuse of statistics - is the focus of a session co-moderated by Evelyn Lamb and me coming up this month at Science Online. Evelyn gets the ball rolling on further discussion in her blog at Scientific American. And Frank Swain from the Royal Statistical Society also weighs in on journalists' desire to learn more about statistics in the era of data journalism. (#scio13 #PublicStats)

Trials show that reading this book, Know Your Chances, could help.