Comments on Statistically Funny: "Nervously approaching significance"

Hilda Bastian · 2013-11-06 05:52

OK, Bruce, I think I found a way to go about it: be great if you could let me know what you think now. I haven't gone into pretest probability because it would overload this post: I will do that in another post and then cross-link. But I've tried to be less "frequentist"/simplistic without getting too complex about it. I'd be grateful for feedback.

Hilda Bastian · 2013-04-29 08:16

Still haven't thought of a good way to resolve this - other than to do a Bayesian cartoon/explanation and then cross-link. I don't think there's really such a thing as "no bias", and statistical significance is used in such a variety of ways. Pretest probability is just too big an issue to tackle in a sentence. So watch this space...

Bruce Scott · 2013-04-11 18:38

I still don't think this is true. The chance of it being a fluke could be much greater than 5% or considerably less than 5%, depending on the pretest probability.
In my orange juice example, it simply isn't the case that you SHOULD think that the study of orange juice as a cure for pancreatic cancer that was reported out as p<0.05 is a fluke less than 5% of the time. You would be on firm ground to believe that it was a fluke very nearly 100% of the time.

My formulation was neutral with regard to the question of the importance of considering pretest probability. I happen to think it is very important to consider pretest probability. (So does Ioannidis, who you link to on multiple occasions.) It seems you feel it is less important than I do. So be it. It's an active debate, with reasonable arguments about why one approach is superior to the other in practice. However, it IS possible to give a definition of the p-value that doesn't require you to take a position on the importance of pretest probability.

You do this by explaining it in terms of the expected false positive rate given the study design (and the assumption of no bias):

"Given a study of this design, with an assumption of no bias, a p-value threshold of <0.05 would result in a false positive result 5% of the time."

Doesn't this capture what we need to say about the p-value?

You can't take a position on whether or not the result is a "fluke" less than 5% of the time unless you take a position on whether or not it is important to consider pretest probability.

Hilda Bastian · 2013-04-10 22:06

Thanks, Bruce: that nails more precisely what's grating. Very helpful.
How about this?

"...because it means the chance that this is a fluke is less than 5% (0.05, or 5 out of 100)."

Bruce Scott · 2013-04-10 19:49

How about, for p<0.05:

"If the null hypothesis is true, then we would expect a study of this design to produce a false-positive result 5% of the time."

It isn't as catchy or concise, but it is more true, isn't it? It might be that the attempt to make the explanation more understandable has meant that you've lost too much of the meaning. Although you might need to further define "null hypothesis" and "false positive", so the explanation gets longer and longer (and potentially off-putting to those trying to read the blog). Brevity isn't my strength.

I don't think this explanation of the p-value would bother a Bayesian. (Even though they'd probably rather talk in likelihood ratios.)

It's the "because it means that the relationship is highly unlikely to be a coincidence. The probability that it is a fluke is less than 5%" part that rankles, obviously.

If a study conducted on 100C homeopathic treatment for influenza were reported out as positive because p<0.05, then I'd say that the relationship is extremely likely to be coincidence. (Or, if we are less kind, bias or fraud.) The probability of it being a fluke is much, much greater than 5%. It approaches 100%, since the pretest probability is so vanishingly small.

Hilda Bastian · 2013-04-10 16:57

G'day, John! Yes, it's arbitrary in some ways, although the choice was a genuine attempt to find a meaningful cut-off point. See the link included in this post: "Don't worry...it's just a standard deviation" (http://statistically-funny.blogspot.com/2013/04/dont-worry-its-just-standard-deviation.html)

Actually, for the reporting of trial results, the p value is discouraged by several journals.
Confidence intervals are more valuable. See the CONSORT Statement (http://www.consort-statement.org/consort-statement/13-19---results/item17a_outcomes-and-estimation/).

John (100dialysis, http://100dialysis.wordpress.com/) · 2013-04-10 16:27

Hello Hilda. "Statistical significance is reached when a 'p' value is less than 5%": 5% is merely convention at best, or habit at worst. It is entirely arbitrary. While 5% seems to be used in medicine, in other fields a much lower value is used (e.g. in the hunt for fundamental particles like the Higgs boson). I would encourage the use of such a cut-off only in the design of trials: in the methodology, report 5% (or whatever) as the alpha, but don't use the term "statistical significance" for a result with p<0.05; rather, just report the p value.

John

Hilda Bastian · 2013-04-09 21:45

Thanks for the comment - and glad you liked the Bonferroni one. Well, I'm not a Bayesian, it's true, but not entirely a frequentist either. I tried to make the explanation of the mathematical conception understandable, but I think I avoided saying it meant it must be true: I only spoke of probability, and linked back to another post explaining that a certain proportion will always be wrong.

The frequentist heuristic is going to blow up errors, for sure. But so will the Bayesian one, if the priors are based on a paradigm that comes unraveled.
Thanks for adding the orange juice example!

Bruce Scott · 2013-04-09 19:28

Yikes.
When you say: "The probability that it is a fluke is less than 5% (0.05 or 5 out of a 100)", you've taken the view that the frequentist position is right.

If you ask a Bayesian, they'd say that you can't make that statement without having an a priori estimate of the likelihood.

You can argue that the frequentist position is the most useful to adopt. You can't argue that it is straightforwardly true, or that it actually bears up to even the tiniest degree of scrutiny.

Take this thought experiment:

1) If someone tells me that their study shows that drinking a big glass of orange juice raises blood sugar (p<0.05), I'm happy to agree that the probability that this is just noise is less than 5%. (Much less, actually, and I'll wonder why the study needed to be done.)

2) If someone tells me that their study shows that drinking a big glass of orange juice cures metastatic pancreatic cancer (p<0.05), it would be perverse to agree that the chance the result is just noise is only 5%.

Sorry for only commenting on the thing that bugged me. I found this site based on someone posting a link to a nice cartoon about multiple comparisons. Your batting average seems to be pretty good so far.
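[Editor's note] The orange juice thought experiment above can be made concrete with a small Bayes'-rule sketch (mine, not from the thread). The function name and the assumed power of 0.8 are illustrative choices, not anything the commenters specified: among studies that come out "significant" at alpha = 0.05, the share that are flukes depends almost entirely on the pretest probability.

```python
def prob_fluke_given_significant(prior_true, alpha=0.05, power=0.8):
    """P(null is true | result is significant), by Bayes' rule.

    prior_true: pretest probability that the effect is real.
    Significant results arise two ways: real effects detected
    (rate = power) and true nulls crossing the threshold (rate = alpha).
    """
    p_significant = prior_true * power + (1 - prior_true) * alpha
    return (1 - prior_true) * alpha / p_significant

# Orange juice raises blood sugar: very plausible beforehand.
print(round(prob_fluke_given_significant(0.95), 3))   # -> 0.003
# Orange juice cures pancreatic cancer: vanishingly small prior.
print(round(prob_fluke_given_significant(0.001), 3))  # -> 0.984
```

This is the same arithmetic behind Ioannidis's argument linked in the post: the frequentist statement ("a true-null study of this design comes out significant 5% of the time") is correct as designed, yet the chance a given significant result is a fluke can sit anywhere between near 0% and near 100%.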