EBM 2.0
“EBM 1.0 was riddled with bugs, does not function and is no longer supported.
Continued use of EBM requires an update to EBM 2.0” … Dr Anand Senthi
My friend Dr Anand Senthi [ @DrSenthi ] has spent countless hours delving into the statistical science that underpins our profession. Evidence-based medicine is the science upon which we base our practice. It is the mechanism by which medicine has evolved through the latter half of the twentieth century and up until today. On his journey from “early adopter” to “skeptic”, Anand has posed an elementary question: “does it work?”
I was lucky enough to catch a talk by Anand at an online “conference” entitled “Evidence Based Fraud & the End of Statistical Significance”. It was a cracking talk that really got to the heart of our science and gave a vision for the future of medical research and practice.
EBM 2.0 is Anand’s proposed update to EBM as we know it. There are many functional and useful features of EBM, and these still form the cornerstone of EBM 2.0. However, some components of EBM, specifically p-values and the concept of ‘statistical significance’, have been corrupted and no longer provide us with a pathway towards scientific “truth”.
Together with my usual sparring partner, Justin Morgenstern, we spent a few hours discussing EBM 2.0 and how we might move ahead in the ‘post-p-values era’. This conversation has been edited down to two podcasts that cover all the concepts that Anand has written about. The first episode is below. The references are also listed below, so you can stay skeptical and read the base literature for yourself.
If you want to hear a short version of Anand’s treatise, check out the video below.
Please go to the EBM 2.0 website for more data, details and debate.
REFERENCES:
- The problem with EBM
- Ioannidis, J. P. A. (2005). “Why Most Published Research Findings Are False.” PLoS Medicine 2(8): e124.
- Prasad, V., et al. (2013). “A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices.” Mayo Clin Proc 88(8): 790-798.
- Herrera-Perez, D., et al. (2019). “A comprehensive review of randomized clinical trials in three medical journals reveals 396 medical reversals.” eLife 8: e45183.
- Bias
- Pannucci, C. J. and E. G. Wilkins (2010). “Identifying and avoiding bias in research.” Plast Reconstr Surg 126(2): 619-625.
- Jones, C. W., et al. (2013). “Non-publication of large randomized clinical trials: cross sectional analysis.” BMJ 347: f6104.
- Ioannidis, J. P. A. (2019). “What Have We (Not) Learnt from Millions of Scientific Papers with P Values?” The American Statistician 73(sup1): 20-25.
- Chance: p values, statistical significance and Bayesian Analysis
- Nuzzo, R. (2014). “Scientific Method: Statistical Errors.” Nature 506(7487): 150-152.
- Greenland, S., et al. (2016). “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.” European Journal of Epidemiology 31(4): 337-350.
- Wasserstein, R. L. and N. A. Lazar (2016). “The ASA’s Statement on p-Values: Context, Process, and Purpose.” The American Statistician 70(2): 129-133.
- Wasserstein, R. L., et al. (2019). “Moving to a World Beyond ‘p < 0.05’.” The American Statistician 73(1): 1-19.
- Amrhein, V., et al. (2019). “Scientists rise up against statistical significance.” Nature 567(7748): 305.
- Benjamin, D. J. and J. O. Berger (2019). “Three Recommendations for Improving the Use of p-Values.” The American Statistician 73(sup1): 186-191.
Hi Casey & Justin & Anand!
Thank you for a really interesting episode! I was convinced by the Bayesian statisticians a while ago and do think their approach makes more sense than frequentist strategies, especially since we use the same kind of thinking in our everyday clinical work. I also think that p-values do more harm than good and that statistical significance should probably be banned from the medical literature. However, I have a few points or criticisms about your EBM 2.0 approach:
– Even the most convinced Bayesian statisticians would agree that Bayes is definitely no cure-all. It makes more sense and is easier to interpret than frequentist statistics, but it also has many pitfalls (defining prior probability distributions in a consensual and understandable way, for example) and is gameable in other ways. Most importantly, it cannot increase the information content of a study’s findings, i.e. it does nothing to rescue bad hypotheses or study designs.
– In my opinion, a Bayesian approach might actually be most useful when looking at “negative” trials. A good example is the Bayesian re-analysis of the ANDROMEDA-SHOCK trial (a sketch of that kind of re-analysis follows after these points). The problem with frequentist stats is that when p > 0.05 the only conclusion you can draw is that you failed to reject the null hypothesis, which is not very informative. You talk a lot about how many “positive” trials are in fact false positives, but not so much about how frequentist stats make it hard to interpret and learn anything useful from “negative” trials.
– I think you mention confidence intervals only once in the podcast and are a bit dismissive of them, but in reality they are among the more useful parts of current frequentist publications. Of course the Bayesian equivalents are easier to understand, but in practice the numbers of a 95% CI are much more informative than the p-value.
– You don’t mention what I think is the original reason for many of the difficulties with current medical research, i.e. that the real effect sizes are likely very small and the lower-hanging fruit has already been picked. This means that it is almost exclusively through large collaborative trials and prospective registries that we will be able to move forward and fine-tune our therapeutic arsenal. This, however, is deeply incompatible with the current model of building scientific careers, in which everyone has an incentive to maximise the number of publications and thus fragment the effort as much as possible.
– It would also be nice if EBM 2.0 reflected upon some of the other criticisms that EBM has received over the last 20-30 years, for example that it is unrealistic to expect all our medical questions to be answered by RCTs and meta-analyses, and that other kinds of evidence can also valuably inform patient care.
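To make the point about “negative” trials concrete, here is a minimal sketch of the normal-approximation style of Bayesian re-analysis that the ANDROMEDA-SHOCK re-analysis popularised: the trial result is summarised on the log hazard ratio scale and combined with a sceptical prior to give a posterior probability of benefit. The hazard ratio, confidence interval and prior width in the code are illustrative placeholders, not the trial’s exact published figures or the authors’ actual analysis.

```python
# Illustrative normal-approximation Bayesian re-analysis of a "negative" trial.
# Numbers below are placeholders chosen to resemble a borderline (p ~ 0.05) result.
import numpy as np
from scipy import stats

# Trial result expressed on the log hazard ratio scale (illustrative numbers).
hr, ci_low, ci_high = 0.75, 0.55, 1.02
log_hr = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # standard error from the 95% CI width

# A sceptical prior centred on "no effect" (log HR = 0); the spread encodes how
# large an effect we consider plausible before seeing the trial.
prior_mean, prior_sd = 0.0, 0.35

# Conjugate normal-normal update: precision-weighted average of prior and data.
post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + log_hr / se**2)

# Posterior probability that the intervention reduces mortality (HR < 1).
p_benefit = stats.norm.cdf(0, loc=post_mean, scale=np.sqrt(post_var))
print(f"Posterior probability of benefit: {p_benefit:.0%}")
```

With these illustrative numbers the posterior probability of benefit comes out around 95%, which is the point of the exercise: a trial that “failed to reject the null” at p > 0.05 can still leave a high probability that the intervention helps, and how high depends explicitly on the prior you are willing to defend.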
All in all, I really applaud your efforts to bring a more critical understanding of the limitations of the current publishing paradigm to front-line physicians; I guess I am just an even more savage skeptic 😅
Keep fighting the good fight!
Aron
Great work! Very excited to see this change. It’s about time.