by Marissa J. Carter, PhD, MA
Strategic Solutions, Inc.
Bozeman, MT
September 2024
The concept of evidence-based medicine (EBM) is, at its core, about whether a product (drug, biologic, or device) or any other kind of intervention works without simultaneously causing undue harm, and it should always be framed in terms of a clinical question about a patient. The weight of evidence reflects both its level and its strength.
Over the years, EBM has become synonymous with a hierarchy in which randomized controlled trials (and their associated systematic reviews and meta-analyses) sit at the summit of a pyramid, with observational studies and non-comparative studies, all the way down to expert opinion, comprising its lower layers. Although EBM itself is a good idea, its execution in practice can be puzzling to clinicians because of the complexity of the processes used to derive useful information, as well as the limitations of those methods.
Let's take an example: Wagner grade 2 diabetic foot ulcers (DFUs). The Wagner scale is obsolescent and covers a multitude of sins because it was devised at a time when we didn't fully understand the nature of these kinds of wounds (and I'll also say we still have some way to go to fully understand them). For example, if I'm a diabetic patient and one day I sustain an avulsion injury on the foot that exposes subcutaneous tissue or even bone, that by itself doesn't make it a Wagner grade 2 DFU; we still have to establish that it is a chronic wound and that it anatomically meets the definition. But let's say that's done, my physician is struggling to heal it 6 months after first seeing it, and the wound itself has had reasonably good basic wound care. What to do? Oh, let's look at some systematic reviews. But wait a minute, aren't we skipping a few steps?
Is the wound bed properly prepared? Have you established there are no ischemic issues? Could there be some biofilm? How sure are you of its etiology? Does the patient have any comorbidities or other issues that could be part of the problem? Because if you haven’t done your homework, systematic reviews are going to confuse you further.
But let's say you were well trained and start looking at a variety of interventions on the assumption that systematic reviews can tell you the answer. The first one you find (it concerns cellular and/or tissue-based products, or CAMPs if you prefer that terminology) shows some brilliant meta-analyses with really low p values and high odds of healing. You tick the boxes and try it out, but it fails. What went wrong? Unfortunately, the devil is in the details.
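For readers curious about the arithmetic behind those pooled results, the sketch below uses entirely hypothetical study numbers to show a standard inverse-variance, fixed-effect pooling of odds ratios together with Cochran's Q and I². It is only an illustration of how a pooled odds ratio can look impressive while the heterogeneity statistics hint that the underlying studies are not all telling the same story.

```python
# Illustrative sketch with hypothetical data: fixed-effect (inverse-variance)
# pooling of study odds ratios, plus Cochran's Q and I^2 for heterogeneity.
import math

# Hypothetical studies: (healed_treated, n_treated, healed_control, n_control)
studies = [
    (30, 50, 18, 50),
    (42, 60, 25, 60),
    (12, 40, 11, 40),   # a study with a much weaker effect
]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                 # non-healers in each arm
    log_or = math.log((a * d) / (b * c))  # log odds ratio of healing
    var = 1/a + 1/b + 1/c + 1/d           # Woolf variance estimate
    log_ors.append(log_or)
    weights.append(1 / var)               # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
z = pooled / se                            # large z -> very small p value

# Cochran's Q and I^2: how much of the between-study spread exceeds chance?
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled OR = {math.exp(pooled):.2f}, z = {z:.2f}")
print(f"Cochran's Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

The point of the sketch is not the method itself but the reading of it: a confident-looking pooled estimate says nothing about whether your patient resembles the populations that generated it.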
There are no requirements for serious training or examinations to conduct a systematic review. So the review you're looking at could be well done or poorly executed, and as a peer reviewer of systematic reviews I can tell you that the authors of many of them really don't know what they're doing, despite the best of intentions. Like surgery, it takes a lot of practice and guidance to become competent and master the nuances. I've seen recitations of individual studies that tell you nothing about their real strengths and weaknesses, meta-analyses that frankly should never have been done, and a complete lack of understanding of trial designs and the nature of the populations under study. If it takes a village to raise a child, as the old proverb goes, then it takes an experienced cross-functional team to properly conduct a systematic review, and that team includes clinicians, statisticians, and methodological experts.
So, caveat emptor. And that includes the fine print. For example, if our hypothetical patient does not meet the inclusion and exclusion criteria of the study, stop right there and move on. If reimbursement criteria prevent you from fully emulating the study treatment, consider the ramifications. There is a reason we do clinical research: extrapolation of known results to new clinical situations often fails, sometimes for non-obvious reasons. What are the limitations of the review? Do your patient and the associated chronic wound really fit? Have a knowledgeable colleague take a look at a systematic review if you have concerns; ask questions. Consult a network. Translating clinical trial results to an n of one (your patient) is not an exact science, so keep good medical notes, like a researcher. Learn from failures and share your successes so the wound care community can benefit.
About the author
Marissa Carter, PhD, is President of Strategic Solutions, Inc. She is a biostatistician, epidemiologist, EBM practitioner, and health economist who spends much of her time designing, monitoring, and analyzing clinical studies in the fields of wound care, orthopedics, epidemiology, ophthalmology, and quality of life. Dr Carter's strength lies in the many scientific disciplines in which she has worked, which she brings to bear on medical problems. She is a peer reviewer for many medical journals and, most recently in 2019, served as a member of the Contractor Advisory Committee for Topical Oxygen Treatment (TOT). Dr Carter also chairs the Clinical Trial Reporting Group for the Wound Care Collaborative Community (WCCC) and has chaired other work groups; she continues to participate in several other working groups with the WCCC and has worked with CMS, the FDA, and AHRQ on a number of wound care projects and issues. In addition, she managed the Global Indicators Field Testing Project to pilot the WHO Global Indicators in Latin America (eye care). She holds an MA in biochemistry from Oxford University and a PhD in chemistry from Brandeis University. She is the author or coauthor of more than 150 peer-reviewed articles and book chapters in medicine, and her studies have won several awards.
The views and opinions expressed here are those of the author and do not necessarily reflect official policy or position of any other agency, organization, employer or company.