Keywords: Statistical significance, Positive predictive value, Clinical significance, Biostatistics
A statistically significant research finding should not be defined as a P-value of 0.05 or less, because this definition does not take study power into account. Statistical significance was originally defined by R.A. Fisher as a P-value of 0.05 or less. According to Fisher, any finding that is likely to occur by random variation no more than 1 time in 20 is considered significant. J. Neyman and E.S. Pearson subsequently argued that Fisher's definition was incomplete. They proposed that statistical significance could only be determined by analyzing the chance of incorrectly considering a study finding significant (a Type I error) or incorrectly considering a study finding insignificant (a Type II error). Their definition of statistical significance is also incomplete, because the two error rates are considered separately rather than together. A better definition of statistical significance is the positive predictive value of a P-value, which is equal to the power divided by the sum of the power and the P-value. This definition is more complete and relevant than Fisher's or Neyman and Pearson's, because it takes both concepts of statistical significance into account. Using this definition, a statistically significant finding requires a P-value of 0.05 or less when the power is at least 95%, and a P-value of 0.032 or less when the power is 60%. To achieve statistical significance, P-values must be adjusted downward as study power decreases.
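The formula in the abstract can be checked numerically. A minimal sketch, assuming the definition stated above (PPV = power / (power + P-value)); the function name `ppv` is illustrative, not from the paper:

```python
def ppv(power: float, p_value: float) -> float:
    """Positive predictive value of a P-value, per the abstract's
    definition: power divided by the sum of power and the P-value."""
    return power / (power + p_value)

# At 95% power, a P-value of 0.05 yields a PPV of 0.95
print(round(ppv(0.95, 0.05), 3))   # 0.95

# At 60% power, the P-value must fall to about 0.032
# to keep the PPV at the same 0.95 level
print(round(ppv(0.60, 0.032), 3))  # 0.949
```

This reproduces the abstract's worked thresholds: both (power = 95%, P = 0.05) and (power = 60%, P = 0.032) give a PPV of roughly 0.95, which is why the P-value cutoff must shrink as power drops.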
Details
Title
Predictive power of statistical significance
Creators
Thomas F Heston - Washington State University, Elson S. Floyd College of Medicine
Jackson M King - Creative Research Enterprises
Publication Details
World Journal of Methodology, Vol. 7(4), pp. 112-116