The last several years have been unkind to the p-value. Formerly the star quarterback of the statistics team, the p-value is being pushed off the field and is spending more time hanging out on the sidelines.
Some researchers suggest that we abandon p-values entirely. Most statisticians would probably suggest a more moderate approach: p-values remain a valuable way to assess how compatible a data set is with a specific hypothesis. However, all of us will need to tune up our p-values. Expect peer reviewers in 2020 to be skeptical of p-values and to rigorously question the conclusions we draw from them.
These five tips will help you tune up your p-values to make them 2020 compliant.
Can you make your point without p-values? The days when p-values were the only way to assess a statistical question have passed, and reviewers in 2020 will be far more receptive to other methods. Are your data compelling enough that a plot alone makes your point clear? Try methods that combine plots with control boundaries - such as control charts and analysis of means (ANOM). If your work is a pilot study, it may suffice to simply describe the data using graphical methods.
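To make the control-boundary idea concrete, here is a minimal sketch of a mean chart with 3-sigma control limits. The data are made up for illustration; they do not come from any real study.

```python
from statistics import mean, stdev

# Hypothetical measurements -- illustrative data, not from any real study.
samples = [9.8, 10.1, 10.3, 9.9, 10.0, 10.4, 9.7, 10.2]

center = mean(samples)   # center line of the chart
sigma = stdev(samples)   # sample standard deviation
upper, lower = center + 3 * sigma, center - 3 * sigma  # 3-sigma limits

# Points outside the limits would be flagged visually -- no p-value needed.
out_of_control = [x for x in samples if not lower <= x <= upper]
print(f"center={center:.2f}, limits=({lower:.2f}, {upper:.2f})")
print("points outside limits:", out_of_control)
```

Plotting the points against the center line and limits lets readers see for themselves whether anything unusual is happening.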
For 2020, try writing a paper without p-values. It will probably take several years for all of us to adapt to the new culture of hypothesis testing, so expect to encounter resistance. However, reviewers will increasingly recognize that "p<0.05" is neither necessary nor sufficient for publication. Both journals and reviewers will be under pressure to adapt to the new culture.
If you do use p-values, use the minimum number needed to make your point. Adding more p-values to your paper does not make it more compelling - only more suspicious. Try using at most two p-values per project: one for the primary objective and one for the secondary objective. If your project involves more than two hypothesis tests, split the work into several more concise manuscripts.
In all cases, p-values should only test hypotheses developed before inspecting the data. It's fine to develop a hypothesis by looking at your data - that is often how we generate new ideas - but never use that same data to test it. If you find something interesting by inspecting your data, tell your readers that the hypothesis came from the data. You can suggest in the discussion that it deserves further study, but don't attach a p-value to it.
Many of us were taught to specify a cut-off at which we would consider a p-value significant - usually 0.05 or 0.01 - and to write it into the methods before calculating anything. Under the cutoff, your study was a success. Over it, your study was an unpublishable failure.
2020 shapes up to be the year when this binary decision is finally laid to rest.
For 2020, don't specify an arbitrary cutoff in your methods. Report the exact value of any calculated p-value in the results section and let readers judge its merit for themselves, taking the context of the problem into account. Most of us intuitively understand this. We may be willing to try a new educational method when a study shows a difference between intervention and control groups with a p-value of 0.07. Conversely, we would probably be fearful of taking a new chemotherapy drug on the strength of a p-value of 0.001 without waiting for further evidence. Give readers permission to inject their own judgment.
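To show what reporting an exact value (rather than a verdict against a cutoff) looks like, here is a sketch of a two-sided large-sample z-test on the difference in group means. The group scores are invented for illustration, and the normal approximation is used only to keep the example to the standard library; a t-test would be preferred for samples this small.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical scores for intervention and control groups (illustrative only).
intervention = [74, 81, 79, 85, 77, 83, 80, 78, 82, 76]
control      = [72, 75, 78, 74, 70, 77, 73, 76, 71, 75]

# Two-sided z-test on the difference in means (normal approximation).
diff = mean(intervention) - mean(control)
se = sqrt(stdev(intervention) ** 2 / len(intervention)
          + stdev(control) ** 2 / len(control))
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

# Report the exact value, not a pass/fail verdict against an arbitrary cutoff.
print(f"difference = {diff:.2f}, p = {p:.4f}")
```

The results section would then quote the difference and the exact p-value, leaving the judgment about its importance to the reader.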
Interpreting p-values in a Bayesian sense - relating the pre-test probability to the scientific finding of the study - is also likely to become more important in 2020. Without resorting to complicated math, explain in the introduction of your paper what previous research has shown and describe what you expect to find. Then, in the discussion, relate your p-values to that expectation. Treat small or even modest p-values as supportive evidence when they are compatible with previous research and have a plausible mechanism. Conversely, treat even very small p-values as suspect when they contradict previous research or lack a plausible mechanism.
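One informal way to quantify this intuition is to ask: of all hypotheses that reach "significance," what fraction reflect real effects? The sketch below uses a standard textbook calculation; the prior probabilities, power, and alpha are assumptions chosen purely for illustration.

```python
# A rough illustration of why pre-test probability matters.
# All numbers here are assumptions chosen for illustration.

def prob_real_given_significant(prior, power=0.8, alpha=0.05):
    """Fraction of 'significant' results that reflect a true effect,
    given the prior probability that the hypothesis is true."""
    true_positives = power * prior       # real effects that get detected
    false_positives = alpha * (1 - prior)  # null effects that slip through
    return true_positives / (true_positives + false_positives)

# A hypothesis well supported by previous research vs. a long shot.
print(f"prior 50%: {prob_real_given_significant(0.5):.2f}")   # most hits are real
print(f"prior  5%: {prob_real_given_significant(0.05):.2f}")  # many hits are false alarms
```

Even with identical p-values, a finding that fits prior evidence deserves more confidence than a surprising one - which is exactly the point of relating results to pre-test expectations.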
Unfortunately, the word significant has been overworked and overused in the scientific literature. The distinction between statistically significant, practically significant, and clinically significant is often confusing. Eliminate the word entirely from your paper.
Instead, for 2020, try being much more modest about scientific claims. Instead of saying "there was a statistically significant difference between the two groups," try phrases such as "this suggests that there may be a difference" or "the study suggests that the treatment may be effective." Instead of saying "clinically significant," try "the difference may be important" or "further study may be worthwhile."
Prior to 2020, scientific papers followed a familiar routine: calculate a point estimate (usually the mean) and then write the conclusions of the paper as if this point estimate were a well-established truth. In reality, a point estimate is just one guess at the value of a parameter. Often it is the best estimate, but it is still an estimate; the data are usually compatible with a range of values. Confidence intervals are a simple and effective way to get an idea of this range.
When reporting p-values, report the confidence interval for the point estimate as well. For 2020, it will be more appropriate to consider the full range of values in the confidence interval. Look at one bound: what would it mean if that were the true value? Write about it in your results. Then look at the other bound and ask the same question.
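As a minimal sketch of computing such an interval, here is a 95% confidence interval for a mean using the normal approximation. The measurements are invented for illustration; with real small samples you would typically use a t-based interval instead.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical outcome measurements (illustrative data only).
data = [4.1, 5.3, 4.8, 6.0, 5.5, 4.9, 5.1, 5.7]

m = mean(data)
se = stdev(data) / sqrt(len(data))     # standard error of the mean
z = NormalDist().inv_cdf(0.975)        # ~1.96 for a 95% interval
lower, upper = m - z * se, m + z * se  # normal-approximation CI

# Discuss both ends: what would it mean if the true value were `lower`?
# What if it were `upper`? Both are compatible with the data.
print(f"point estimate = {m:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

Writing a sentence about each bound forces the paper to acknowledge the whole range the data support, not just the single best guess.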
Many scientists are speaking out - and rightly so - about the crisis of repeatability in science. Daily we hear of previously accepted scientific dogma found on further study to be unsubstantiated. Years or even decades of productivity wasted.
Statistics is here to help us make sense of these discoveries. We as statisticians are here to help you develop and promote your discoveries. Proper use of p-values will allow you to make your best contributions to science. Durable scientific discoveries. Reproducible findings.
2020 is shaping up to be a pivotal year in statistics and a breakthrough point for the p-value. Follow these five simple tips for your own breakthrough and tune up your p-values for 2020.
Proper use of p-values is just one of many ways that we at STAT59 are fighting to keep statistics honest and ethical. Download our Ethical Analysis Checklist and join our fight to keep statistics ethical.
Sign up for our mailing list and you will get monthly email updates and special offers.