Precise numbers and claims - as though there is no margin for error - are all around us. When someone tells you that 67.5% of people with some disease will have a particular outcome, they're basically predicting the future of whole groups of people based on what happened to another group of people in the past. Well, what are the chances of that, eh?
If our fortune teller was quoting the result of a study here, it could be written like this: 67.5% (95% CI: 62%-73%). The CI stands for "confidence interval" and it's an indication of how certain we can be. It's showing us that 95 times out of 100, the result for similar groups of people in similar circumstances would land somewhere between 62% and 73%.
The chances of the result landing precisely on 67.5% every time are pretty slim, and how close it's likely to come depends on lots of things. If there is enough data to be really sure, the confidence interval will be narrow: the best-case scenario and the worst-case scenario will be close together (say, 66% to 69%).
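If you're curious about where those numbers come from, here's a rough sketch in Python of one common textbook way to calculate a 95% confidence interval for a percentage (the normal approximation - a real study would likely use a fancier method, and the counts here are made up for illustration):

```python
import math

def ci_95(successes, n):
    """Approximate 95% confidence interval for a proportion,
    using the normal (Wald) approximation."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error shrinks as n grows
    return p - 1.96 * se, p + 1.96 * se

# Same observed rate (67.5%), different amounts of data:
print(ci_95(135, 200))    # 200 people  -> a wider interval
print(ci_95(1350, 2000))  # 2,000 people -> a narrower interval
```

Notice that the second interval is much narrower than the first, even though the percentage itself is identical: more data, more certainty.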
We do this all the time. If someone asks, "How long does it take to get to your house?", we don't say "39.35 minutes". We say, "Usually about half an hour to 45 minutes, depending on the traffic."
In a systematic review, you will often see the outcome of an individual study shown as a line. The length of that line shows you the width of the confidence interval around the result. It looks something like this:
This is called a forest plot. Find more from Statistically Funny on this in The Forest Plot Trilogy.