Thursday, February 7, 2013
How should we calculate influenza vaccine effectiveness?
You know, I probably should've just been happy with the reports that 2012-2013 influenza vaccine was 62% effective and called it a day. But this morning I read this nice report by Helen Branswell of the Canadian Press describing why vaccine-virus match isn't the only factor that impacts vaccine effectiveness and then I made the mistake of looking at the early-release CDC MMWR report more closely.
Now, I'm an infectious disease physician-epidemiologist and led the influenza response at the University of Maryland, Baltimore and UMMS during the 2009 H1N1 pandemic. Thus, I'm no stranger to reading these reports. But for some reason, today, they just didn't make sense. It occurred to me that perhaps if I'm perplexed, others might be, so I've decided to post my questions and concerns and, hopefully, as I get answers, I'll post them here.
Some Background: Read the CDC MMWR report from January 11th and focus on Table 2. This table summarizes the vaccine effectiveness data for the vaccine vs. influenza A, influenza B, and both.
Initial Observations: One, it appears that CDC is using a prospective cohort of 1,155 sick patients who presented as outpatients for acute respiratory illness, and not a cohort of all patients eligible for vaccination. Ideally, you'd want to determine the likelihood that influenza vaccine prevents clinical illness, visits to the doctor, hospital admission and mortality.
The vaccine effectiveness in the MMWR report can't tell us that and I'll explain why in #1, below. Additionally, the MMWR report determines vaccine-effectiveness using a case-control method and not a cohort method. This might seem to be an esoteric point, but it could have a big influence on how effective we think a vaccine is. I will try to explain this in #2 and #3, below. Finally, I'm not sure how they decided which patients should be included in the uninfected group for their calculations. I'll explain this a bit more in #4 and show how it could bias the estimates of vaccine effectiveness.
1) Why does CDC utilize outpatient, sick controls in their estimates of vaccine efficacy? I suspect this is an issue of expediency and cost-effectiveness. It would be more expensive to enroll 1,000 patients in September and track them weekly to see if they get the vaccine and then if they develop symptoms and test them. Of course, they can't easily do randomized studies in the US or elsewhere since the vaccine is recommended for just about everybody, so randomizing to no vaccine would be unethical.
Whatever the reason, selecting an entire cohort of patients, already sick enough to visit their doctor, does not tell us how effective the vaccine is in preventing illness, preventing visits to the doctor, preventing hospitalization or preventing death. The MMWR report can only tell us how effective vaccine is in preventing an influenza infection vs. another infection conditional on already being sick enough to go to the doctor's office. What does that mean for people trying to decide if they should get vaccinated?
Also, could it be that selecting this cohort biases the findings in other ways? What if vaccinated patients would be more likely to seek medical care for their symptoms? What if those that develop acute respiratory illness are different or sicker than healthy controls in a systematic way? These could impact the measure of vaccine effectiveness.
Additionally, using outpatient, sick controls leaves out two very important groups: hospitalized patients and healthy populations that never developed an illness in the first place. I suspect that declining funding for CDC and other groups is behind this. You get what you pay for. However, none of the reports I've read explain this limitation when reporting vaccine effectiveness. They should.
[Author's Note: I've added a second post describing a bit more why CDC selected this cohort of patients.]
2) Why does the CDC measure vaccine effectiveness using odds ratios even when they have a cohort of patients? To explain further, a case-control study would be one where they find 1,000 (or any number) influenza-positive patients, look back to see if they were vaccinated, and then find another set of 1,000 influenza-negative controls (healthy, sick, whatever) and see if they were vaccinated. Here, they identified a cohort of patients with acute respiratory illness first and then determined their influenza status and vaccine status retrospectively. Thus, this is a "retrospective" cohort study. Just because the cohort was established conditional on an outpatient visit for a respiratory complaint does not invalidate that this is a cohort. This matters since they report odds ratios and not relative risks. And as a reminder, when baseline or initial risk is high, the odds ratio can exaggerate the effect captured by the relative risk. To find out how and why this is important, read this BMJ article.
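You can see the OR-vs-RR divergence with a quick sketch in Python. The numbers here are purely illustrative (none come from the MMWR report): we fix a true relative risk of 0.6 and watch what the odds ratio does as the baseline attack rate climbs.

```python
# Why (1 - OR) can overstate vaccine effectiveness when attack rates are high.
# For a protective exposure, the odds ratio sits further from 1 than the
# relative risk, and the gap widens as baseline risk grows.
# All numbers are illustrative; none come from the MMWR report.

def odds(p):
    return p / (1 - p)

rr = 0.6  # assume vaccination cuts the attack rate to 60% of baseline

for p0 in (0.05, 0.20, 0.40):    # attack rate in the unvaccinated
    p1 = rr * p0                 # attack rate in the vaccinated
    or_ = odds(p1) / odds(p0)
    print(f"baseline {p0:.0%}: RR = {rr:.2f}, OR = {or_:.2f}, "
          f"VE_RR = {1 - rr:.0%}, VE_OR = {1 - or_:.0%}")
```

At a 5% attack rate the OR is close to the RR, but at a 40% attack rate the same true RR of 0.6 yields an OR near 0.47 — so the (1 − OR) formula reports roughly 53% effectiveness when the attack-rate-based figure is 40%.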
3) Did measuring vaccine effectiveness using an odds ratio (OR) method (as it appears the CDC did) vs. the relative-risk (RR) method, as normally used in cohort studies, matter? The question here is not a theoretical one, as above, but rather I'm asking if we used the exact numbers in the MMWR report but used a cohort or relative-risk method, would we get a different estimate of effectiveness? Short answer: Yes
If we take the table showing attack rates in vaccinated vs unvaccinated for influenza A only (from Table 2 here), we get very different results based on the method used to calculate efficacy.
Using the CDC or OR method, vaccine effectiveness (VE) = (1-OR)*100 or (1-ad/bc)*100, where a = vaccinated and influenza-positive, b = vaccinated and influenza-negative, c = unvaccinated and influenza-positive, and d = unvaccinated and influenza-negative. Using that method, the calculated VE = 53.4%. (CDC reports 55% in their table, since they adjusted for site.)
Using the RR (cohort) method, VE = (1-RR)*100, where RR = (a/(a+b)) / (c/(c+d)). Using that method, the calculated VE = 44.1%.
This is a very big difference, with a 9.3% absolute reduction in effectiveness by method alone! It seems that since site-level variation is not a big driver of the effectiveness, the RR approach might be more accurate. Of note, when you do the above analysis for the vaccine vs. influenza A or B, the VE falls from 62% using the OR approach to 47% using the RR approach.
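The two calculations can be sketched in a few lines of Python. The 2x2 counts below are hypothetical, chosen only to show how the two formulas diverge on identical data; they are not the Table 2 counts.

```python
# VE from a 2x2 table of vaccination status vs. influenza test result:
#   a = vaccinated, flu-positive     b = vaccinated, flu-negative
#   c = unvaccinated, flu-positive   d = unvaccinated, flu-negative
# Counts are hypothetical, not the MMWR Table 2 numbers.

def ve_odds_ratio(a, b, c, d):
    """VE = (1 - OR) * 100, the case-control method used in the report."""
    return (1 - (a * d) / (b * c)) * 100

def ve_relative_risk(a, b, c, d):
    """VE = (1 - RR) * 100, the cohort (attack-rate) method."""
    rr = (a / (a + b)) / (c / (c + d))
    return (1 - rr) * 100

a, b, c, d = 50, 200, 100, 200
print(ve_odds_ratio(a, b, c, d))     # 50.0
print(ve_relative_risk(a, b, c, d))  # ≈ 40.0
```

Same data, two answers: the OR method reports a ten-point-higher effectiveness, the same direction of gap as the 53.4% vs. 44.1% figures above.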
I hope someone can explain why they are analyzing cohort data using case-control methods. For more information on how I calculated these estimates, see this paper by Walter Orenstein, et al. from 1985. It appears this case-control method is standard in the influenza vaccine literature.
4) Why did CDC leave the influenza B positive patients out of their calculation of the effectiveness of the vaccine versus influenza A and vice-versa? When looking at Table 2 above, one thing struck me as odd. When doing the three effectiveness calculations, they used the same control group. To me, if you don't have influenza A, you should be included in the "uninfected group" for testing the effectiveness of the vaccine against influenza A. To see if this matters, I added in the 180 patients who were influenza B positive AND influenza A negative that the CDC left out of their calculation.
Here, if I use the CDC (case-control or odds-ratio) method I find a VE = 41%, and if I use the cohort method, I find a VE = 34.4%. These results are so different from those reported in MMWR that I'd be very interested to know why they chose to leave influenza B patients out.
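The sensitivity of the estimate to the choice of control group can be sketched the same way. All counts below are hypothetical, and the vaccination split of the added influenza-B-positive patients is an assumption (the report doesn't present the data in this form); the point is only the direction of the shift.

```python
# Effect of folding influenza-B-positive patients into the "uninfected"
# control group when estimating VE against influenza A.
# All counts are hypothetical; the vaccination split of the added
# flu-B-positive patients is assumed.

def ve_or(a, b, c, d):
    """VE = (1 - OR) * 100 from a 2x2 table (vaccination vs. flu A status)."""
    return (1 - (a * d) / (b * c)) * 100

a, c = 50, 100             # flu-A-positive: vaccinated / unvaccinated
b, d = 200, 200            # flu-negative controls: vaccinated / unvaccinated
extra_vax, extra_unvax = 30, 100   # flu-B-positive, flu-A-negative patients

print(ve_or(a, b, c, d))                            # 50.0 with original controls
print(ve_or(a, b + extra_vax, c, d + extra_unvax))  # ≈ 34.8 with flu B added
```

Because an effective vaccine leaves the influenza-B-positive group disproportionately unvaccinated, adding those patients to the controls pulls the estimated VE against influenza A downward — the same direction as the 55%-to-41% shift described above.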
OK. For influenza A, the vaccine effectiveness was reported as 55% in the MMWR report. Depending on how I calculated the vaccine effectiveness, I found that it ranged from 53.4% to 34.4%, with the more accurate estimate likely closer to 34%. A pretty huge range, don't you think?
Perhaps these reports should calculate effectiveness in a number of different ways and provide them in a sensitivity analysis. Better yet, we should fund prospective cohort studies that include healthy patients and measure the true effectiveness of the vaccine. Even better, a universal influenza vaccine would render this all moot, but that's in the future.
Eli N. Perencevich, MD, ACP Member, is an infectious disease physician and epidemiologist in Iowa City, Iowa, who studies methods to halt the spread of resistant bacteria in our hospitals (including novel ways to get everyone to wash their hands). This post originally appeared at the blog Controversies in Hospital Infection Prevention.