Physicians are encouraged, in ways large and small, to practice “evidence-based” medicine, or EBM. The Centre for Evidence Based Medicine at the University of Oxford defines EBM as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.”
EBM is the basis for developing practice guidelines, and efforts to assess and improve the quality of care are now framed almost exclusively in terms of adherence to evidence-based practice.
In many ways, this is a very good thing. It certainly seems like progress to me to base decisions on the results of well-performed clinical research instead of a hunch (or worse). It would be hard to count the number of patients who have benefited from being treated with aspirin as part of the initial management of an acute MI or who were spared the side effects of now discredited treatments such as the routine use of lidocaine.
The model of advancing clinical research leading to ever-better clinical care is, however, at best incomplete and, at worst, dangerously naive.
It is incomplete for at least two important reasons.
First, clinical research, no matter how good, will never be able to address every clinical question of importance to a particular patient. There are just too many scenarios, too many potential combinations of treatments, and way too much variability among patients. It is literally impossible to answer every clinically relevant question. Doctors and patients will always need to fill in the blanks, extrapolate, reason and hope.
Second, evidence, by its nature, can’t account for differences in patients’ values and preferences. Even for well-studied conditions, the “best” treatment may well differ for patients who have different tolerances for risk, assign different importance to particular outcomes, or weigh longevity against quality of life differently.
Why is the EBM model naive?
Simply put, it is becoming increasingly apparent that the “evidence” — peer-reviewed, published reports of clinical research studies — is deeply flawed. Publication bias means that studies reporting positive treatment effects appear far more often than equally well-designed “negative” studies; pharmaceutical and device companies limit publication of data, skewing the information in the public domain; poorly designed studies abound; fraud exists. There is even a school of thought that most published findings are false.
So where does this leave us? Here are a few suggestions:
- Remain skeptical of clinical evidence.
- Recognize that clinical evidence should inform decision-making, not become a substitute for clinical decision-making.
- Ask yourself how the available evidence speaks to the details of the patient in front of you. Would your patient have qualified for the study you are considering?
- Remember that “evidence-based” is not the same as “right” – knowing what is right requires engaging the patient in the decision-making.
What do you think?