Wednesday, May 16, 2012

4 ways that doctors make decisions


I hope to write regularly on doctors' decision-making. This first installment concerns types of evidence-based studies. Later, I will talk about how and whether doctors actually use such evidence to make decisions.

There are a number of kinds of evidence-based advice that doctors can use to make decisions. The differences among them are instructive.

1. Individual journal articles. These present findings from a single trial or study. The potential biases are multiple. Not all findings get published (publication bias: we never find out about the results that do not support the hypothesis). Many results are never reproduced. To quote one widely read study, "studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported." Some studies even have to be retracted later by the journal that published them. (A toy simulation of how publication bias skews the published record appears after this list.)

2. Reviews. These come in many kinds. A narrative review summarizes the findings of a number of articles. The summary is susceptible to bias from many sources: which articles are chosen, what emphasis each is given, and who is doing the reviewing. For that reason, some people prefer a ...

3. Systematic review. This assembles research articles on a given topic according to strict criteria defined a priori. The results of the individual articles are then combined only if they are comparable, i.e., if they apply to the same population or disease. Sometimes the individual results can be merged statistically, in what is called a meta-analysis (a minimal sketch of such pooling appears after this list). I work on a number of these; they have their own problems (the link is from an orthodontics journal, but the issues are generally applicable).

4. Guidelines. Using the above reviews, a group of experts, often physicians, drafts official recommendations for a given disease, group of diseases, or use of a given medication or procedure. But guidelines are only as good as the evidence they are based on and as their applicability to the particular question being considered. Is the patient from the population that was studied? Is the clinical condition the same? Was there any bias that might have affected the results?
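To make publication bias concrete, here is a toy simulation (my own illustration, not taken from any of the studies mentioned above). We run many small trials of a drug with no true effect, then "publish" only the results that look clearly positive; the threshold and sample sizes are arbitrary choices for the sketch.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.0   # the drug actually does nothing
N_TRIALS = 1000     # hypothetical trials run worldwide
N_PATIENTS = 30     # patients per trial (deliberately small studies)

def run_trial():
    """Observed mean 'benefit' in one simulated trial of a useless drug."""
    return statistics.mean(random.gauss(TRUE_EFFECT, 1.0)
                           for _ in range(N_PATIENTS))

effects = [run_trial() for _ in range(N_TRIALS)]

# Crude publication filter: only clearly "positive" findings see print.
published = [e for e in effects if e > 0.3]

print(f"mean effect, all trials:       {statistics.mean(effects):+.3f}")
print(f"mean effect, published trials: {statistics.mean(published):+.3f}")
```

Averaged over all the trials, the effect is essentially zero; averaged over only the published ones, the useless drug looks beneficial. A reader of the literature sees only the second number.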
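As for meta-analysis, here is a minimal sketch of fixed-effect, inverse-variance pooling, one standard way comparable results are merged. The effect estimates and standard errors below are invented for illustration; a real meta-analysis would also have to address heterogeneity, random-effects models, and the publication bias simulated above.

```python
import math

# Hypothetical inputs: (effect estimate, standard error) per study,
# e.g. log odds ratios from three comparable trials. Invented numbers.
studies = [
    (0.30, 0.15),
    (0.25, 0.10),
    (0.40, 0.20),
]

# Fixed-effect inverse-variance pooling: each study is weighted by
# 1/SE^2, so larger, more precise studies count for more.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96*pooled_se:.3f}, {pooled + 1.96*pooled_se:.3f}]")
```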

To take one example of a well-done review, the American Academy of Neurology and the American Headache Society recently published "Evidence-based guideline update: Pharmacologic treatment for episodic migraine prevention in adults." Even such a guideline, however, comes with weaknesses. For some of these we can quote the authors themselves: while good-quality evidence does support the effectiveness of certain medications for preventing episodic migraine, "evidence is unavailable to help the practitioner choose one therapy over another."

Another sort of weakness was also reported by these authors, though in a different way: in an often-ignored paragraph at the end of the article. For those who can't read the small type, these are the disclosures of possible conflicts of interest. Nearly all the authors have received honoraria from a number of pharmaceutical companies. I don't think this completely invalidates the recommendations, but it might if we look closely enough. This is why, to my way of thinking, any such guidelines need either to be written free of conflicts of interest or to ensure that the writers' conflicts are balanced.

Zackary Berger, MD, ACP Member, is a primary care doctor and general internist in the Division of General Internal Medicine at Johns Hopkins. His research interests include doctor-patient communication, bioethics, and systematic reviews. He is also a poet, journalist and translator in Yiddish and English. This post originally appeared at his blog.