Blog | Tuesday, March 22, 2011

Radiologists' experience matters in mammography outcomes

There's a new study out on mammography with important implications for breast cancer screening. The main result is that when radiologists review more mammograms per year, the rate of false positives declines.

[Image from the National Cancer Institute]

The stated purpose of the research,* published in the journal Radiology, was to see how radiologists' interpretive volume, essentially the number of mammograms read per year, affects their performance in breast cancer screening. The investigators collected data from six registries participating in the NCI's Breast Cancer Surveillance Consortium, involving 120 radiologists who interpreted 783,965 screening mammograms from 2002 to 2006. So it was a big study, at least in terms of the number of images and outcomes assessed.

First, and before reaching any conclusions, the variance among seasoned radiologists' everyday experience reading mammograms is striking. From the paper: "We studied 120 radiologists with a median age of 54 years (range, 37 to 74 years); most worked full time (75%), had 20 or more years of experience (53%), and had no fellowship training in breast imaging (92%). Time spent in breast imaging varied, with 26% of radiologists working less than 20% and 33% working 80% to 100% of their time in breast imaging. Most (61%) interpreted 1,000 to 2,999 mammograms annually, with 9% interpreting 5,000 or more mammograms."

So they're looking at a diverse bunch of radiologists reading mammograms, as young as 37 and as old as 74, most with no extra training in the subspecialty. The fraction of work effort spent on breast imaging, presumably mammography, sonograms and MRIs, varied widely: a quarter of the group (26%) spent less than a fifth of their time on it, while a third (33%) spent almost all of their time on breast imaging studies.

The investigators summarize their findings in the abstract: "The mean false-positive rate was 9.1% (95% CI: 8.1% to 10.1%), with rates significantly higher for radiologists who had the lowest total (P=.008) and screening (P=.015) volumes. Radiologists with low diagnostic volume (P=.004 and P=.008) and a greater screening focus (P=.003 and P=.002) had significantly lower false-positive and cancer detection rates, respectively. Median invasive tumor size and proportion of cancers detected at early stages did not vary by volume."

This means that radiologists who review more mammograms are better at reading them correctly. The main difference is that they are less likely to call a false positive. Their work is otherwise comparable, mainly in terms of cancers identified.**

This matters because the costs of false positives, emotional (which I have argued shouldn't matter so much), physical (surgery, complications of surgery, scars) and financial (the expense of biopsies and surgery), are said to be the main problem with breast cancer screening by mammography. If we can reduce the false-positive rate, breast cancer screening becomes more efficient and safer.

TIME provides the only major press coverage I found on this study, and suggests the findings may be counter-intuitive. I guess the notion is that radiologists might tire of reading so many films, or that a higher volume of work is inherently detrimental.

But I wasn't at all surprised, nor do I find the results counter-intuitive. The more time a medical specialist spends routinely doing the same sort of work, say examining blood cells under the microscope, as I used to do, the more likely that doctor is to know the difference between a benign variant and a likely sign of malignancy.

Finally, the authors point to the potential problem of inaccessibility of specialized radiologists--an argument against raising the requirements for the number of mammograms a radiologist needs to read per year to be deemed qualified under the FDA's Mammography Quality Standards Act and Program. The point is that in some rural areas, women wouldn't have access to mammography if the volume requirements were made more stringent. But I don't see this accessibility problem as a valid issue. If the images were all digital, the doctor's location shouldn't matter at all.

*The work, put forth by the Group Health Research Institute and involving a broad range of investigators, including biostatisticians, public health specialists and radiologists from institutions across the U.S., received significant funding from the American Cancer Society, the Longaberger Company's Horizon of Hope Campaign, the Breast Cancer Stamp Fund, the Agency for Healthcare Research and Quality and the NCI.

**I recommend a read of the full paper and in particular the discussion section, if you can access it through a library or elsewhere. It's fairly long, and includes some nuanced findings I could not fully cover here.

This post originally appeared at Medical Lessons, written by Elaine Schattner, ACP Member, a nonpracticing hematologist and oncologist who teaches at Weill Cornell Medical College, where she is a Clinical Associate Professor of Medicine. She shares her ideas on education, ethics in medicine, health care news and culture. Her views on medicine are informed by her past experiences in caring for patients, as a researcher in cancer immunology and as a patient who's had breast cancer.