Tuesday, July 29, 2014

Practice guidelines and quality care


As I have noted previously, I have a “love-hate” relationship with practice guidelines: love, because it is often helpful to refer to a set of evidence-based recommendations as part of clinical decision-making; hate, because of the shortcomings of the guidelines themselves, as well as of the evidence upon which they are based.

A recent piece in JAMA and the editorial that accompanied it reinforced my ambivalence.

The research report addressed a straightforward question: how often do “Class I” recommendations change in successive editions of guidelines on the same subject from the same organization? Recall that Class I recommendations are things that physicians “should do” for eligible patients. They are particularly important because these recommendations often form the basis for quality metrics against which physician performance is measured, increasingly with financial consequences. It is not hard to understand why.

First, the recommendations are, by nature, definitive. If a patient meets certain criteria (e.g., has evidence of ischemic vascular disease and no allergy to aspirin), then she should get the indicated therapy or intervention (aspirin), making the quality assessment fairly straightforward. It is also generally easy to detect whether the intervention was made. Finally, it is easier to engage clinicians using quality metrics that detect “underuse” (the patient did not get something he should have) than “overuse” (the patient got a treatment or service he should not have).

The authors limited their study to guidelines published jointly by the American College of Cardiology and the American Heart Association. These are generally well-respected documents and are often held up as models for how guidelines should be developed and promulgated. (Disclosure: I am a card-carrying fellow of both organizations.) They categorized the status of each original Class I recommendation in the subsequent guideline as retained, downgraded or reversed, or omitted.

So what did the study find? Overall, about 9% of the recommendations were downgraded or reversed in the follow-up guideline.

I don’t know about you, but that seems like a lot to me, especially since the median interval between the paired guidelines was 6 years. It is even more disturbing when you consider how many years it takes to develop quality metrics based on these guidelines, making it inevitable that some metrics will be based on discredited recommendations. The discordance between the newest cholesterol management guidelines and the widely adopted HEDIS measure for LDL management is just one example where this is already the case.

I think this is just one more reason why quality measures built around “process” (did you do this or that in the care of a patient?) have to give way to measuring outcomes (how well did the patient do under your care?).

What do you think?

Ira S. Nash, MD, FACP, is the senior vice president and executive director of the North Shore-LIJ Medical Group, and a professor of Cardiology and Population Health at Hofstra North Shore-LIJ School of Medicine. He is Board Certified in Internal Medicine and Cardiovascular Diseases and was in the private practice of cardiology before joining the full-time faculty of Massachusetts General Hospital. He then held a number of senior positions at Mount Sinai Medical Center prior to joining North Shore-LIJ. He is married with two daughters and enjoys cars, reading biographies and histories, and following his favorite baseball team, the New York Yankees, when not practicing medicine. This post originally appeared at his blog, Auscultation.