Over the past few months, my Chief Medical Officer and I met with department chairs to discuss incentive-based quality and safety metrics. In these meetings we agreed upon metrics tied to financial incentives for achieving specified targets. The goal is a win-win: we improve patient care and the department benefits. We typically include at least 1 metric based on a health care-associated infection.
With 1 of our surgical chairs, we discussed surgical site infection (SSI) rates. We all agreed that these are excellent metrics on which to focus. The more contentious issue was the target rate to be achieved. We suggested a 10% reduction from the current rate. His counter-argument was that his department's SSI rates were already very low and further reduction might not be achievable. He suggested that national benchmarks be used instead. That's a great idea, except that we no longer have national benchmarks.
The National Healthcare Safety Network (NHSN) used to provide mean pooled infection rates with percentile scoring, using data that were updated at regular intervals. This is no longer the case. Now NHSN provides standardized infection ratios (SIRs), calculated by comparing the observed number of infections to the expected number. SIRs could be very helpful in this case, but in reality they aren't, since the expected numbers of infections are derived from data collected from 2006 to 2008. So using the SIR, all I could tell the surgical chair was how his current SSI rates compare to other programs nationally a decade ago. If the expected number of infections is derived from data that are a decade old, it defeats the entire purpose of the SIR. For this purpose, the SIRs that NHSN produces are worthless.
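In essence, the SIR is a simple ratio: infections actually observed divided by the number expected from a baseline data set. A minimal sketch, with hypothetical numbers (not real NHSN data):

```python
# Sketch of the standardized infection ratio (SIR): observed infections
# divided by the number expected from a baseline data set. An SIR below
# 1.0 means fewer infections than the baseline predicts; the problem
# described above is that the baseline itself is a decade old.

def sir(observed: int, expected: float) -> float:
    """Return the standardized infection ratio (observed / expected)."""
    return observed / expected

# A hospital that observed 8 SSIs where the baseline predicts 10
# would report an SIR of 0.8 (20% fewer infections than expected).
print(sir(8, 10.0))  # 0.8
```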
Hospitals have 2 needs with regard to quality metrics: internal trending (are we getting better or worse over time?) and benchmarking (how do we compare to other hospitals?). The SIRs that are currently being produced can be used for internal trending, but not for benchmarking. This leaves hospitals completely in the dark about their comparative performance. CDC is in the process of establishing new baselines for expected infection rates. This will be helpful for a year or so, but then the expected data will become old and benchmarking will again be flawed.
There seems to me to be an easy fix: establish 2 SIRs. A static SIR would use a fixed data set to derive the expected number of infections, allowing hospitals to trend their performance internally over time. A dynamic SIR would use data from the previous year, updated annually, to allow for comparative performance. This could be easily accomplished.
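The two-SIR proposal above can be sketched in a few lines. The baseline rates and counts here are invented for illustration only; the point is that the same observed count is divided by two different expected values, one from a fixed historical baseline and one refreshed annually:

```python
# Hypothetical sketch of the two-SIR proposal: a static SIR against a
# fixed historical baseline (for internal trending) and a dynamic SIR
# against the previous year's national data (for benchmarking).
# All rates and counts below are invented for illustration.

def expected_infections(procedures: int, baseline_rate: float) -> float:
    """Expected infections = procedure volume x baseline infection rate."""
    return procedures * baseline_rate

observed = 6
procedures = 1000
static_baseline_rate = 0.010   # fixed baseline, never updated
dynamic_baseline_rate = 0.007  # refreshed annually from last year's data

static_sir = observed / expected_infections(procedures, static_baseline_rate)
dynamic_sir = observed / expected_infections(procedures, dynamic_baseline_rate)

print(f"static SIR:  {static_sir:.2f}")   # trend vs. own history: 0.60
print(f"dynamic SIR: {dynamic_sir:.2f}")  # benchmark vs. current peers: 0.86
```

The static ratio stays comparable across years because its denominator never changes, while the dynamic ratio answers the surgical chair's question: how do we compare to everyone else right now?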
While we have seen some improvement in NHSN metrics, the overall trend, in my opinion, is that NHSN is moving towards metrics of lesser value (e.g., lab-based automated metrics), and I get the sense that they're not particularly interested in the viewpoint of hospitals. In the value-based reimbursement era, hospitals need valid comparative performance data more than ever, yet CDC appears out of touch and moving in a completely different direction.
Michael B. Edmond, MD, FACP, is a hospital epidemiologist in Iowa City, IA, with a focus on improving the quality and safety of health care, and sees patients in the inpatient and outpatient settings. This post originally appeared at the blog Controversies in Hospital Infection Prevention.