Sometimes, people combine data that really don't belong together - conflict all over the place!
The I2 statistic reported in a meta-analysis tries to pin down how much conflict there is among the pooled results. (A meta-analysis pools multiple data sets. Quick intro about meta-analysis here.)
I2 is one way to measure "combinability": another is the chi-squared test (χ2 or Chi2).
You will often see the I2 in a forest plot. It is one way of measuring how much inconsistency there is in the results of different sets of data. That's called heterogeneity. The test gauges whether there is more difference between the results of the studies than you would expect from chance alone.
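For the arithmetically curious, here's a minimal sketch in Python of how I2 falls out of the chi-squared statistic (Cochran's Q). The i_squared function and the three example trials are my own inventions for illustration, assuming inverse-variance (fixed-effect) weights - not anyone's official code:

```python
# Minimal sketch: Cochran's Q and I-squared from study effect estimates.
# The function name and the example numbers are made up for illustration.

def i_squared(effects, std_errs):
    """Cochran's Q and I-squared for a set of study results,
    using inverse-variance (fixed-effect) weights."""
    weights = [1 / se**2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Q: weighted squared distance of each study from the pooled estimate
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
    df = len(effects) - 1  # degrees of freedom: number of studies minus 1

    # If Q is no bigger than chance predicts (Q <= df), I-squared is 0%
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Three hypothetical trials: log odds ratios and their standard errors
q, i2 = i_squared([0.10, 0.35, -0.05], [0.12, 0.15, 0.20])
print(f"Q = {q:.2f}, I2 = {i2:.0f}%")  # Q = 2.95, I2 = 32%
```

In words: Q measures how far the studies scatter around the pooled estimate, and I2 is the share of that scatter beyond what chance alone would produce.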
Here's a (very!) rough guide to interpreting the I2 result: 0-40% might not be important, 75% or more is "considerable" (that is, an awful lot!). (That's from section 9.5.2 here.)
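If you want that rough guide as code, here's a sketch. The middle band is my own simplification - the Handbook's real ranges overlap, so don't take the labels between 40% and 75% too literally:

```python
# Very rough interpretation bands for I-squared (as a percentage).
# The middle band is a simplification of the Handbook's overlapping ranges.

def rough_i2_guide(i2_percent):
    if i2_percent < 40:
        return "might not be important"
    elif i2_percent < 75:
        return "somewhere between moderate and substantial"
    else:
        return "considerable (an awful lot!)"

print(rough_i2_guide(32))  # "might not be important"
```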
Differences between the trials might be responsible for contradictory results - including differences in the people in the trials, the way they were treated, or the way the trials were done. Too much heterogeneity, and the trials really shouldn't be pooled together. But heterogeneity isn't always a deal breaker. Sometimes it can be explained.
Want some in-depth reading about heterogeneity in systematic reviews? Here's an article by Paul Glasziou and Sharon Sanders from Statistics in Medicine [PDF].
Or would you rather see another cartoon about heterogeneity? Then check out the secret life of trials.
See also my post at Absolutely Maybe: 5 tips to understanding data in meta-analysis.
(Some of these characters also appear here.)
[Updated 4 July 2017.]