Blog | Monday, June 4, 2018

Upon further review, reexamining the Illinois MRSA Active Surveillance mandate

“Thinking without the positing of categories and concepts in general would be as impossible as breathing in a vacuum”

—Albert Einstein (1949)

A couple weeks ago, another blogger highlighted a study in Clinical Infectious Diseases by Lin et al. that sought to estimate the benefits of the 2007 Illinois state-wide mandate of methicillin-resistant Staphylococcus aureus (MRSA) active surveillance cultures in ICU settings. The post was titled “Good Intentions Does not Always Mean Good Policy” and concluded “There may be many reasons the hospitals in Illinois overall are seeing an estimated 30% decrease in their hospital-onset MRSA blood stream infection (as most states are) since the 2010 National Healthcare Safety Network baseline, but admission screening isn't one of them.”

I would like to list several reasons why I think we should reconsider the study authors' conclusions. And if you skip to the end, you will read why I think this study might make more valid conclusions about the lack of benefits of chlorhexidine gluconate bathing.

Let's start with validity, as defined by Shadish et al. (2001): “We use the term validity to refer to the approximate truth of an inference. When we say something is valid, we make a judgment about the extent to which relevant evidence supports that inference as being true or correct.” Cook and Campbell (1979) outlined four components of validity: statistical conclusion validity, internal validity, construct validity, and external validity. I've written about their validity typology here, if you're interested.

Now let's review the Illinois study methods. They included data from 25 ICUs and completed eight one-day point prevalence surveys after the mandate was initiated (twice annually 2008-2011 and annually in 2012 and 2013). There was no concurrent control group.

Thus, this quasi-experimental study design has very low internal validity. It has no pre-intervention measurements (sometimes called historical controls) and no concurrent controls. Shadish labeled this design a “one-group post-test only design” and summarized its limitations with “this design is rarely useful.”

The Illinois study also lacks statistical conclusion validity, since it is underpowered to detect a benefit of active surveillance. Shadish lists low statistical power as the first threat to statistical conclusion validity, since “the experiment may incorrectly conclude that the relationship between treatment and outcome is not significant.” Because the Illinois study is a negative one—claiming active surveillance for MRSA didn't work—power is particularly important. If you jump ahead to my ICAAC abstract, you will begin to see why the study is likely very underpowered. Point prevalence is very insensitive to changes in acquisition or transmission, so you would need very large studies to detect a benefit.
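To get a feel for the sample sizes involved, here is a back-of-the-envelope two-proportion power calculation using the standard normal-approximation formula. The 10% and 7% prevalence figures are illustrative assumptions for the sake of the sketch, not numbers from Lin et al.:

```python
import math

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per group to detect a drop in proportion from p1 to p2
    with a two-sided two-proportion z-test (normal approximation)."""
    z_a = 1.9600   # z for two-sided alpha = 0.05
    z_b = 0.8416   # z for power = 0.80
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. detecting a 30% relative drop in point prevalence, 10% -> 7%
print(n_per_group(0.10, 0.07))   # on the order of 1,300+ patients per group
```

Even a fairly large relative reduction requires well over a thousand patients per survey wave before a simple comparison of proportions reaches 80% power, and smaller effects push the requirement far higher than a handful of one-day snapshots can supply.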

I'll admit that a statewide study has strong external validity, that is, generalizability.

But let's focus on construct validity. Construct validity, what Einstein was hinting at in the quote above, describes whether a study measures what it claims to be measuring. For example, if a study claims that active surveillance for MRSA and isolation doesn't prevent MRSA transmission, that study better measure MRSA transmission.

Let's pause here. Now some might say, we don't care about transmission, we just care about MRSA prevalence or MRSA infections or MRSA central line-associated bloodstream infections (or even deaths in Avengers movies). Yet, active surveillance for MRSA doesn't work like that; it prevents transmission between patients.

But what if measuring MRSA point prevalence were good enough at detecting MRSA transmission? Thought experiment: what if we found penicillin allergy alerts in the emergency room annoying and so we eliminated them? To see if this was safe, we then checked to see if anyone was having a penicillin allergic reaction every St. Patrick's Day for the next five years. Good enough, right? Probably not. So why would we be comfortable saying yearly point prevalence is an adequate way of measuring reductions in MRSA transmission? I don't think we should be, and here's why:

Back in 2002 I presented an abstract at ICAAC titled “Point Prevalence and Clinical Culture Positivity of Vancomycin Resistant Enterococci are Poor Estimates of Infection Control Intervention Impact.” This study was based on the VRE model that we eventually published in Clinical Infectious Diseases (2004). Anyway, we modeled VRE transmission in the ICU under a condition where active surveillance compliance on admission increased from 60% to 100%, and I assumed that isolation prevented 71% of transmissions. This is a math model, so we know that the intervention worked, but could we detect it? The answer was yes, but only if we used admission/discharge screening cultures. If we used point prevalence, like the Illinois study, we would falsely claim that the intervention didn't work 54% of the time. However, if we used admission/discharge cultures, we would correctly determine that the intervention worked 96% of the time. Here is our conclusion:

“Point prevalence or clinical culture positivity often failed to detect a benefit due to stochastic fluctuations in prevalence and high prevalence of VRE in patients entering the ICU. Studies to assess the benefits of active surveillance for VRE should measure new incident cases. Relying on point prevalence or clinical culture positivity to assess the benefits of infection control interventions may underestimate the magnitude of their benefit and may be responsible for a persistent bias against the broader institution of active surveillance. The benefits of active surveillance and other infection control interventions are probably underestimated.”

Not bad for 16 years ago – replace vancomycin-resistant Enterococcus with MRSA and you can see why the study by Lin et al. cannot be used to evaluate the benefits of active surveillance. Our conclusion was partially driven, as we said, by high rates of VRE colonization on admission. What did Lin et al. say about their study? “we assessed MRSA prevalence in a region where MRSA is widely endemic both in the community and within healthcare facilities.” Thus, the MRSA situation in Illinois fits closely with what we modeled.
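The intuition behind those detection rates can be sketched with a toy stochastic transmission model. This is a minimal illustration, not the published VRE model: the bed count, admission prevalence, discharge probability, and transmission rate below are all assumed values.

```python
import random

def run_icu(days, beds, p_adm, p_discharge, beta, rng):
    """One ICU run: returns (colonized count on the final day, total in-unit acquisitions)."""
    colonized = [rng.random() < p_adm for _ in range(beds)]
    acquisitions = 0
    for _ in range(days):
        # Discharged patients are immediately replaced by new admissions,
        # who arrive already colonized with probability p_adm.
        for i in range(beds):
            if rng.random() < p_discharge:
                colonized[i] = rng.random() < p_adm
        # Cross-transmission: each susceptible patient's daily acquisition
        # risk scales with the colonized fraction of the unit.
        frac = sum(colonized) / beds
        for i in range(beds):
            if not colonized[i] and rng.random() < beta * frac:
                colonized[i] = True
                acquisitions += 1
    return sum(colonized), acquisitions

rng = random.Random(42)
BETA, EFFECT = 0.08, 0.29      # isolation assumed to block 71% of transmissions
pairs = 300
detect_prev = detect_inc = 0
for _ in range(pairs):
    prev_base, inc_base = run_icu(180, 10, 0.15, 0.2, BETA, rng)
    prev_intv, inc_intv = run_icu(180, 10, 0.15, 0.2, BETA * EFFECT, rng)
    detect_prev += prev_intv < prev_base   # one-day point prevalence snapshot
    detect_inc += inc_intv < inc_base      # incident acquisitions (admission/discharge screening)

print(f"benefit detected via point prevalence: {detect_prev / pairs:.0%}")
print(f"benefit detected via incident acquisitions: {detect_inc / pairs:.0%}")
```

Because admissions keep reseeding the unit, the single-day census looks similar with or without the intervention and frequently fails to show a benefit, while the count of in-unit acquisitions, which is what admission/discharge screening captures, separates cleanly.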

In conclusion – the study by Lin et al. should not be used to claim active surveillance was ineffective in Illinois or elsewhere. When drawing inferences, it is important to remember all four components of validity described by Cook and Campbell 40 years ago. We still need to figure out why MRSA has declined by 30% in Illinois and elsewhere. I do wonder why we are so quick to credit CLABSI bundles, CHG bathing (see below), or other interventions for these MRSA reductions and not active surveillance. If I had to guess, it's something yellow and not the data.


Side note: In this same study, the number of hospitals using CHG bathing in their ICUs went from 5 (20%) to 17 (68%). It appears the authors could use these same data and methods to show that CHG doesn't prevent MRSA in ICU settings. Interestingly, since CHG bathing works both at the level of transmission and by decolonizing individual patients, point prevalence data would have higher construct validity for evaluating CHG. The study might even be better powered to detect a benefit of CHG.

Eli N. Perencevich, MD, ACP Member, is an infectious disease physician and epidemiologist in Iowa City, Iowa, who studies methods to halt the spread of resistant bacteria in our hospitals (including novel ways to get everyone to wash their hands). This post originally appeared at the blog Controversies in Hospital Infection Prevention.