Thursday, February 28, 2013

Cultivating creativity in medical training, FedEx style

Over the holidays, I took full advantage of the opportunity to read a book from start to finish. I chose Daniel Pink's Drive. It was recommended by @Medrants, and I read it partly to understand why pay-for-performance often fails to accomplish its goals for complex tasks such as patient care. What I found most interesting about the book, however, was the way in which creativity is deliberately inspired and cultivated by industry.

I could not help but think about why we don't deliberately nurture creativity in medical trainees. Why am I so interested in creativity? Perhaps it is the countless trainees I have come across who are recruited to medical school and residency because of their commitment to service, and who also happen to have an exceptionally creative spirit. Unfortunately, I worry that too many of them have that spirit squashed during traditional medical training.

I am not alone. I have seen experts argue the need to move away from traditional medical education, which they describe as fundamentally oppressive, inhibiting critical thinking, and rewarding conformity. Criticism aside, it is of course understandable why medical training does not cultivate creativity. Traditional medical practice does not value it. Patients don't equate "creative doctors" with the "best doctors." In fact, doctors who are overly creative are accused of quackery.

So, why bother with cultivating creativity in medical training? Well, for one thing, creativity is tightly linked to innovation, something we can all benefit from in medical education and healthcare delivery. While patients may not want a "creative approach" to their medical care, creativity is the key spice in generating groundbreaking medical research, developing a new community or global health outreach program, or testing an innovative approach to improving the system of care that we work in.

Lastly, one key reason to cultivate creativity in medical trainees is to keep all those hopeful and motivated trainees engaged, so that they can find joy in work and realize their value and potential as future physicians. In short, the healthcare system stands to benefit from the changes that are likely to emanate from creative, inspired practicing physicians.

So what can we do to cultivate and promote creativity among medical trainees? While there are many possibilities, including the trend toward scholarly concentration programs like the one I direct, one idea I was intrigued by was the use of a "FedEx Day."

FedEx Days originated at an Australian software company but were popularized by Daniel Pink and others in industry. For a 24-hour period, employees are instructed to work on anything they want, provided it is not part of their regular job. The name "FedEx" stuck because of the "overnight delivery" of exceptionally creative ideas to the team, although efforts are underway to give the idea a new name.

Some of the best ideas have come from FedEx Days or similar approaches, like 3M's Post-its or Google's Gmail. I haven't fully figured out how duty hours play into this yet, so before you report me or write this off, consider the following. Borrowing from the theories of Daniel Pink, we would conclude that trainees would gladly volunteer their time to do this because of intrinsic motivation to work on something that they could control and create. And to all the medical educators who can't possibly imagine how we would do this during a jam-packed training program: let's brainstorm a creative solution together!

Vineet Arora, MD, is a Fellow of the American College of Physicians. She is Associate Program Director for the Internal Medicine Residency and Assistant Dean of Scholarship & Discovery at the Pritzker School of Medicine of the University of Chicago. Her education and research focus is on resident duty hours, patient handoffs, medical professionalism, and quality of hospital care. She is also an academic hospitalist, supervising internal medicine residents and students caring for general medicine patients. She serves as a career advisor and mentor for several medical students and residents, and directs the NIH-sponsored Training Early Achievers for Careers in Health (TEACH) Research program, which prepares and inspires talented, diverse Chicago high school students to enter medical research careers. This post originally appeared on her blog, FutureDocs.

All the wrong questions

Should marijuana be legal, for either medical or recreational use? I think the best initial answer is: It's a crummy question! And we are good at asking those.

It's a crummy question, because it calls for answers based on unsubstantiated opinion. Answering it does not invoke or even encourage any relevant evidence, or precedent. So what would a better question be? How about: On what basis should any particular substance be legal for either medical or recreational use?

There are many advantages to this new question. For one thing, since it has no emotionally-charged word like "marijuana" in it, it avoids provocation. It invites thought, rather than potentially thoughtless passion, the proverbial knee-jerk response. For another, it almost requires consideration of whether, and how, the question has been answered already. For instance, alcohol and tobacco are currently legal for recreational use: Why? What criteria pertain in these cases?

As for medical use: Benzodiazepines, such as Valium, are legal. Not only are these drugs potentially addictive, but they are among the few addictive substances (along with alcohol) from which withdrawal can be lethal. Benzos are incomparably more dangerous than marijuana. Why are these drugs legal and in current use? What are the relevant criteria?

Similarly, Dilaudid, a semi-synthetic derivative of morphine--related to both it and heroin, and many times more potent than either--is legal and in current use. Cocaine is legal and in current use in every hospital emergency room. Why? What are the relevant criteria? Asking whether or not marijuana might have legitimate medical application is a lot less provocative once we've conceded that cocaine already does. (It is used, by the way, in dilute solution to treat severe nosebleeds.)

Should assisted suicide be legal? This is another poor question, inviting nothing but emotional responses and perhaps some religious moralizing. A better question is: Are there any circumstances in which allowing death to occur represents the best means of alleviating pain and preserving dignity? We should wrestle with that one--thinking of ourselves or the person we love best in the world in the hot seat--first. Then, we might constructively move on to: What, exactly, do we mean by "allowing"?

Even such words as "marijuana" and "suicide" are tepid in comparison to "abortion," a topic I broach only rarely, with caution and some degree of trepidation. But even where passions are most inflamed, we might turn down the heat by asking better questions. What evidence-based approaches most reliably reduce the frequency of abortions, legal or otherwise, in any given society? Since reducing the demand for abortions is good for all concerned, we might constructively start a discussion there, and might manage to avoid calling one another names or throwing things.

A recent, and already notorious, meta-analysis by Katherine Flegal and colleagues at the CDC suggests that death rates do not necessarily rise or fall with body weight. Asking, in reaction to this--is obesity important after all?--is misguided and silly. Good questions are: Who were the thin and heavy people with higher and lower death rates? Is extra body weight sometimes helpful, and if so, when? Is extra body weight sometimes harmful, and if so, when? We in fact have answers to all of these questions, but they are lost in a haze of hyperbolic nonsense if we fail to ask them.

And then, there is the vexing issue of the lingering, post-Newtown moment: What of that Constitutionally-protected right to bear arms? Asking this question, or any question remotely like it, is a surefire way to get everybody reaching for their respective triggers. It, and all variations on the theme of asking what the Founders really meant, may be diverting for Constitutional scholars, but it's mostly a boondoggle for the rest of us.

The exact words of the Second Amendment are: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."

What are good questions that derive from this? Two occur to me, and they are as obvious as they are simple: Which people? And, what arms?

I trust we can agree, from the most devout pacifist to the NRA leadership, that the Second Amendment cannot possibly mean all people. It can't mean prison inmates, incarcerated for violent crime. It probably can't mean felons on parole after violent crime, either. It can't mean people committed in a psychiatric ward for paranoid schizophrenia. It can't mean 2-year-olds. I don't think any of this is even remotely controversial.

But if "the people" does not, and cannot, mean all people, and if the Founders did not further specify which people, then that is a question we are obligated to ask and answer. Which people? Once we agree it requires an answer, a potentially constructive dialogue ensues.

And, similarly, what arms? Those who feel the Founders anticipated our era of Bushmasters must concede that if so, they envisioned nuclear weapons as well. The Founders either were omniscient and prescient, or they weren't. If they were, and they did envision nuclear weapons, why didn't they say: arms except nuclear weapons? Did they mean individuals should have a right to nuclear arms? Is this a case even the NRA wants to make?

If not, the Founders left it to us to determine what arms made sense. So that becomes a good question: What arms do make sense? We don't have to answer it. We just have to recognize the legitimacy of the question.

Again, I suspect that across the spectrum of opinion here we can agree that individual citizens should not possess a biological weapon, such as smallpox, capable of wiping out the entire population indiscriminately. Individual citizens should not have nuclear missile launch codes. No need to go on, the point is clear. The right to bear arms can't possibly mean all arms. So we are invited to ask: What arms? Again, the question could lead to constructive dialogue unencumbered of hostility.

When I write about the factors contributing to epidemic obesity and ill health, as I often do, I routinely get pushback. Those who think everything is a matter of personal responsibility--that we would all be thin and healthy if we weren't lazy gluttons--consider me a member of the nanny-state food police. Those who believe in the extreme of environmental determinism aren't shy about calling me a food industry apologist when I suggest that our food supply may have something to do with demand.

But the barrage of resistance and reproach from those who don't like my answers leads me perennially back to the same conclusion. I think we are wonderfully adept as a culture at asking all the wrong questions.

Epidemiology should trump ideology--mine as well as yours. If our efforts are to do good for real people in the real world, they have to be based on the actual evidence of what our actions do to and for actual people, not on hypothetical abstractions born of hazy hope or morbid fantasy.

Gertrude Stein famously told us: "A difference, to be a difference, must make a difference." Data-driven public policy would address what differences our differences of opinion actually make, and thereby give us a far better platform for action, based on better answers.

But, of course, our only hope of moving in that promising direction begins with asking the right questions.

David L. Katz, MD, FACP, MPH, FACPM, is an internationally renowned authority on nutrition, weight management, and the prevention of chronic disease, and an internationally recognized leader in integrative medicine and patient-centered care. He is a board certified specialist in both Internal Medicine, and Preventive Medicine/Public Health, and Associate Professor (adjunct) in Public Health Practice at the Yale University School of Medicine. He is the Director and founder (1998) of Yale University's Prevention Research Center; Director and founder of the Integrative Medicine Center at Griffin Hospital (2000) in Derby, Conn.; founder and president of the non-profit Turn the Tide Foundation; and formerly the Director of Medical Studies in Public Health at the Yale School of Medicine for eight years. This post originally appeared on his blog at The Huffington Post.

QD: News Every Day--More aggressive cancers striking young women more frequently

More young women ages 25 to 39 are being diagnosed with aggressive, more advanced forms of breast cancer, and they have lower survival rates, an analysis of breast cancer trends in the U.S. found.

Breast cancer is the most common malignant tumor in adolescent and young adult women ages 15 to 39, accounting for 14% of all cancers in men and women in that age group. The average risk of a woman in the U.S. developing breast cancer by the age of 40 years was 1 in 173 when assessed in 2008.

To analyze the trend, researchers reviewed data from three SEER registries for the years 1973-2009, 1992-2009, and 2000-2009, examining localized disease confined to the breast, regional disease that had spread to contiguous and adjacent structures such as the lymph nodes or chest wall, and distant disease that had metastasized to the bones, brain, or lungs, for example.

Results appeared in the Feb. 27 issue of JAMA.

Since 1976, there has been a steady increase in the incidence of distant disease breast cancer in 25- to 39-year-old women, from 1.53 per 100,000 in 1976 to 2.90 per 100,000 in 2009. The researchers note that this is an absolute difference of 1.37 per 100,000, representing an average compounded increase of 2.07% (95% confidence interval, 1.57% to 2.58%) per year over the 34-year interval.
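The reported trend can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below computes the compound annual growth rate implied by the two endpoint rates alone, assuming 33 elapsed years between 1976 and 2009; it lands near the study's reported 2.07% and inside its confidence interval, though the study's own estimate comes from a regression over all years, not just the endpoints.

```python
# Back-of-the-envelope check of the reported trend: the compound annual
# growth rate implied by the endpoint incidence rates (per 100,000).
rate_1976 = 1.53
rate_2009 = 2.90
years = 2009 - 1976  # 33 elapsed years between the two endpoints

# Solve rate_1976 * (1 + cagr) ** years = rate_2009 for cagr.
cagr = (rate_2009 / rate_1976) ** (1 / years) - 1
print(f"Implied average annual increase: {cagr:.2%}")  # about 2% per year
```

The endpoint-only estimate (roughly 1.96% per year) falls within the study's reported 95% confidence interval of 1.57% to 2.58%.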

"The trajectory of the incidence trend predicts that an increasing number of young women in the United States will present with metastatic breast cancer in an age group that already has the worst prognosis, no recommended routine screening practice, the least health insurance, and the most potential years of life," the authors wrote.

Also, the rate of increasing incidence of distant disease was inversely proportional to age at diagnosis. While the greatest increase occurred in 25- to 34-year-old women, progressively smaller increases occurred in older women by 5-year age intervals and no statistically significant incidence increase occurred in any group 55 years or older.

The increases were seen in all races, in urban and rural areas, and were statistically significant in black and non-Hispanic white populations. Incidence for women with estrogen receptor-positive subtypes increased more than for women with estrogen receptor-negative subtypes.

Researchers concluded, "Whatever the causes--and likely there are more than one--the evidence we observed for the increasing incidence of advanced breast cancer in young women will require corroboration and may be best confirmed by data from other countries. If verified, the increase is particularly concerning, because young age itself is an independent adverse prognostic factor for breast cancer, and the lowest 5-year breast cancer survival rates as a function of age have been reported for 20- to 34-year-old women. The most recent national 5-year survival for distant disease for 25- to 39-year-old women is only 31 percent according to SEER data, compared with a 5-year survival rate of 87 percent for women with locoregional breast cancer."
Wednesday, February 27, 2013

Vital statistic

I've always had nagging doubts about filling out death certificates.

An excellent article in the trade paper "American Medical News" by Carolyne Krupa explores the "inexactitude" of the custom.

As Krupa points out, doctors are never taught how to fill out the documents. She quotes Randy Hanzlick, MD, chief medical examiner for Fulton County, Ga.: "Training is a big problem. There are very few medical schools that teach it," he said. "For many physicians, the first time they see it is when they are doing their internship or residency and one of their patients dies. The nurse hands them a death certificate and says, 'Fill this out.'"

That's pretty much how it works. Though sometimes the person who comes calling with the death certificate is a hospital clerk. And she will make you fill out the form carefully, using only "allowable" causes of death.

Cause of death on this 1937 death certificate? "Senile gangrene."

Of course, everyone dies from the same thing: lack of oxygen to the brain. But you can't list that. Nor can you list common "jargon-y" favorites like "cardiopulmonary arrest," "respiratory failure," "sepsis," or "multi-system organ failure." All of which are true, but too inexact to be useful.

It's intimidating to be the one to "pronounce" someone dead, and be the final arbiter of the cause. Isn't that why we have medical examiners/pathologists?

We don't autopsy patients much anymore, a trend that concerns many in the industry but doesn't seem likely to change. That leaves interns and residents (at teaching hospitals) and community docs (in the real world) in charge of filling out these important statistical and historic documents.

When you care for a patient who dies in the hospital, your guess as to the cause can be pretty close. But because the form doesn't allow for processes and instead requires specifics ("pneumonia" instead of "respiratory failure"), it's no wonder that when I was a resident it seemed as though every patient died of a heart attack ("myocardial infarction"). This was one of the "allowable" causes that seemed to apply whether or not it made the most sense.

If someone is really old and their body starts giving out, we can nearly always choose to say it's because of their heart. But what they most likely die from is "brain failure"--and there's no category or term for that. The brain is the conductor of the body's orchestra, but aside from "stroke" ("cerebrovascular accident or disease") we usually don't list the brain among the causes (though stroke itself is #3 after heart disease and cancer).

Imagine getting a call from the police that a patient has died at home--a patient you may not even know (when covering for a colleague, for example). How could I possibly know the cause of death?

Turns out our best guesses have to suffice. I'd favor a system that produces more reliable data.

This post by John H. Schumann, MD, FACP, originally appeared at GlassHospital. Dr. Schumann is a general internist. His blog, GlassHospital, seeks to bring transparency to medical practice and to improve the patient experience.

Implementing the learning health care system can be facilitated using the principles of evidence-based medicine

The enthusiasm for big data, and for the use of analytics and business intelligence with that data, is reaching fever pitch. I share that enthusiasm, but I also know from both my clinical and my informatics experience that knowledge will not emanate just by turning on the data spigot from the growing number of electronic health record (EHR) systems now in operational use. However, if we approach the problem properly, I believe we can achieve the goals of the learning healthcare system as eloquently laid out in various reports from the Institute of Medicine (IOM) [1, 2].

One sensible approach was published recently in Annals of Internal Medicine [3]. The authors were from Group Health Cooperative in Seattle, a leader in the use of data and information systems to improve the quality and outcomes of care. The paper is summarized well by a figure that shows a continuous cycle of design-implementation-evaluation-adjustment of improved care, with interaction with the external environment through scanning for identification of problems and solutions and dissemination to share what has been learned in their setting.

A complementary approach to learning from EHR and other clinical data can be to apply the basic approach of evidence-based medicine (EBM) [4]. In some ways, EBM is antagonistic to EHR data analytics, with the former giving the most value to evidence from controlled experiments, especially randomized controlled trials (RCTs), while the latter makes use of real-world observational data that may be incomplete, incorrect, and inconsistent.

But I maintain that we can look to the process of EBM to guide us in how to best approach the "evidence" of EHR data analytics and the learning health system. EBM is not just about finding RCTs. Rather, it uses a principled approach to find and apply the best evidence to make clinical decisions. In particular, EBM done most effectively uses four steps:
--ask an answerable question,
--find the best evidence,
--critically appraise the evidence, and
--apply it to the patient situation.

When I teach EBM, I emphasize that the first step of asking an answerable question may be the most important. It is not enough, for example, to ask if a test or treatment works. Rather, we need to know at a minimum whether it works relative to some alternative approach in a particular patient population or setting. This same approach is obviously necessary in the learning health system. Just as RCTs do not inform us passively, neither will EHR data analytics approaches.

In the second step, the principle from EBM is very much the same, even if the techniques of obtaining evidence are very different. The "evidence" in the case of the learning health system is the data in EHR and other systems that, as noted above, may be incomplete, incorrect, and inconsistent. We therefore need to determine if we have the proper data and, if so, whether it can be applied to answer our question.

For the third step, just as with EBM, we must critically appraise our evidence. Can we trust the inferences and conclusions from the data? Are there confounding variables of which we may not be aware? This may be critical with EHR data, where assignment of cause and effect could be difficult, if not impossible. The solution likely comes back to asking the right question, i.e., one whose answer we can have confidence in.

Finally, we have to ask, can the data be applied in our setting? Just as some RCTs answer questions in patient populations very different from those of the clinician making decisions, it must be ascertained if the results obtained from this approach can be applied to a specific patient or setting.

The growing quantity of clinical data in operational clinical systems provides a foundation for the learning healthcare system. However, we must approach the questions we ask and how we answer them with caution and a sound methodology. The approach of EBM offers a framework for carrying out this very different but complementary work.

1. Eden J, Wheatley B, McNeil B, and Sox H, eds. Knowing What Works in Health Care: A Roadmap for the Nation. 2008, National Academies Press: Washington, DC.
2. Smith M, Saunders R, Stuckhardt L, and McGinnis JM, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. 2012, Washington, DC: National Academies Press.
3. Greene SM, Reid RJ, and Larson EB, Implementing the learning health system: from concept to action. Annals of Internal Medicine, 2012. 157: 207-210.
4. Straus SE, Glasziou P, Richardson WS, and Haynes RB, Evidence-Based Medicine: How to Practice and Teach It, 4e. 2010, New York, NY: Churchill Livingstone.

This post by William Hersh, MD, FACP, Professor and Chair, Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, appeared on his blog Informatics Professor, where he posts his thoughts on various topics related to biomedical and health informatics.

QD: News Every Day--Diagnostic errors may cause 150,000 cases of patient harm annually

As many as 150,000 missed diagnoses annually could result in considerable harm to patients, according to a study and editorial.

To determine what diseases providers missed and why, researchers reviewed medical records of diagnostic errors detected at a Veterans Affairs facility and a large private health system. Electronic health records were programmed to detect unexpected hospitalizations or emergency return visits after a primary care index visit from October 2006 through September 2007.

Results appeared online Feb. 25 at JAMA Internal Medicine.

From among 190 cases, 68 unique diagnoses were missed, with the most common being pneumonia (6.7%), decompensated congestive heart failure (5.7%), acute renal failure (5.3%), cancer (primary) (5.3%), and urinary tract infection or pyelonephritis (4.8%).

Reasons included a breakdown during the patient-practitioner clinical encounter (78.9%), referrals (19.5%), patient-related factors (16.3%), follow-up and tracking of diagnostic information (14.7%), and performance and interpretation of diagnostic tests (13.7%). A total of 43.7% of cases involved more than one factor.

Breakdowns involving the patient-practitioner clinical encounter were most often mistakes (such as cognitive errors) related to the medical history (56.3%), the physical exam (47.4%), ordering diagnostic tests for further workup (57.4%), and failure to review previous documentation (15.3%).

Researchers noted two more documentation-related problems. First, no differential diagnosis was documented at the first visit in 81.1% of cases. Second, practitioners copied and pasted previous progress notes into the index visit note in 7.4% of cases, which contributed to more than one-third (35.7%) of errors.

Potential severity of injury associated with the delayed or missed diagnosis was classified as moderate to severe in 86.8% of cases (ratings of 4-8 on an 8-point scale), with a mode of 5 (considerable harm). There was a modal severity rating of 5 across all 5 processes.

Most of the errors involved missed diagnosis of a large variety of common conditions as opposed to either a few selected conditions or rare or unusual diseases, the researchers noted.

And, the two sites differed in what was missed, because the VA center employed predominantly internists who cared for older veterans who generally had more comorbidities, while at the integrated health center, family practitioners cared for an overall younger population.

The findings show that there's still a need for clinical examination skills in an age of electronic health records and team-based care, researchers noted.

"Although the current literature highlights isolated cognitive difficulties among practitioners (eg, biases) and various interventions have been suggested to improve diagnostic decision making (eg, the use of checklists or second opinions), few cognitive obstacles have been sufficiently examined in the complex 'real-world' primary care environment, and few interventions have been satisfactorily tested," the researchers wrote. "Using the lens of missed opportunities in care rather than errors, institutions could create a new focus on discovering, learning from, and reducing diagnostic errors."

An editorial noted that with more than half a billion primary care visits annually in the United States, there could be at least 50,000 missed diagnostic opportunities annually, "most resulting in considerable harm."

Generalizing further, the editorial continued, autopsy-based estimates of hospital deaths from diagnostic errors and another half billion visits annually to non-primary care physicians suggest that more than 150,000 patients annually might have experienced misdiagnosis-related harm.

The editorial continued that improvements could include training physicians in the most effective use of computer-based diagnostic decision support tools, or enabling electronic health records to monitor diagnostic performance continuously and give timely, specific feedback to providers.

"One critical step toward this last approach would be mandatory, structured recording and coding of presenting symptoms, rather than simply diagnoses, in our electronic health record systems," the editorialist wrote. "This step alone, if consistently performed, would radically transform our ability to track and reduce diagnostic errors."
Tuesday, February 26, 2013

Hospitals are still awful: movement toward patient centered care and Eric Topol's idea

First, a disclaimer: People often receive compassionate, considerate, and effective care at hospitals. They have countless interactions that impart the miracle of human caring and enrich their lives. But haphazard care, with poor communication, near misses, and avoidable misery, is also institutionally prevalent.

I have been working at a university hospital emergency room as part of a mini-fellowship in bedside ultrasound. It is the first time I have spent significant time entirely dedicated to an emergency room since I was a medical resident about a quarter of a century ago. As an internal medicine physician who works in hospitals, I have spent one or two hours at a time relatively often in emergency rooms taking care of patients who were admitted to me on their way to the medical floor, but that is not the same as staying there, seeing the more and the less ill, the folks who may go home and may get admitted, watching the rhythm of the department over time.

People come to emergency rooms for many reasons. Often they come because they need to see a doctor but can't get in to see one in an office, because the doctor is busy or doesn't accept their insurance, or because they have no way to pay--no money and no insurance. They come in because there is something wrong that they have decided needs to be dealt with now. The problem may be a true emergency, something that if left another day will lead to death or disability, or just something that has become intolerable and appears, from the patient's view, to have reached a level where any delay in treatment is unthinkable.

They also come in, brought by ambulances or police or concerned family or friends, for drug overdoses, stab wounds, car and motorcycle accidents, assaults. They come in with no regard to whether the doctors in the emergency room are already busy, and they do not pace themselves. Three patients with stab wounds may arrive in 15 minutes, topped by a cardiac arrest. Usually the universe doesn't do this to us, but milder versions happen all the time. The acute treatment of the critically ill patient is often beautifully choreographed, efficient and successful. Treatment of the less critical patient, not so much.

Patients are brought back to the actual department where, in this ER, they are evaluated in curtained bays, with privacy of their stories ensured only by the ambient noises of crashing and yelling and beeping. Some newer emergency departments actually have rooms with doors, but not the one I'm hanging out in now.

They are evaluated by resident physicians, attending emergency room doctors, and sometimes students. They are cared for by nurses whose attention is constantly pulled in many directions by a constant flow of patients with varying urgency of need. After a patient is evaluated and an initial treatment plan is developed, they get IVs (usually), medications (sometimes), lab tests (usually), radiological procedures (frequently), and often a bedside ultrasound by someone like me, in training. Then they wait. And wait. And their relatives, who have to go to work in the morning, which is now only a few hours away, sit and wait. Occasionally someone comes by to tell them what they are waiting for, but not very often.

Their labs are completed, and if there is nobody else more critically ill, some doctor in the team checks them and thinks again about what should be their ultimate outcome. And they wait, not knowing what is happening. They wait, lying on plastic covered gurneys which are covered with sheets that slide down and bunch up underneath them. Sometimes, but not often, primary doctors or consultants who are familiar with them are contacted. If they are admitted to the hospital they are moved to a more comfortable room in a new building (which must seem like heaven in contrast with the ER) but they have to wait hours to be seen by the admitting physician and moved to said room.

After 25 years in internal medicine practice, I am much more familiar with what happens to patients when they do reach the hospital wards. They tell their story, which they have told at least five times already, with multiple interruptions, to a new crew of people, nurses, specialists, new doctors from a different shift. They worry that the whole story that they told before has not been communicated and that what is being done to them may be wrong or unnecessary because of miscommunication. They hear about planned tests, have tests, wait for hours for results, or days, or never hear the results at all. They get treatments delivered by nurses along with explanations given by the nurses, which only occasionally bear any resemblance to what the doctor was thinking when the treatment was ordered. (This is not the fault of the nurse, but due to the system in which nurses and doctors rarely discuss treatment plans in any meaningful way.) They also get explanations from specialists which differ from those given by hospitalists, and maybe get to spend a little more time talking to social workers or discharge planners who sometimes have a better idea of the big picture than anyone else on the team.

The inevitable result of all of this is that patients, except those who are unusually generous of spirit, are frustrated and often grouchy, occasionally spitting mad. They are also not made well in the most expedient of manners, and often are made sick on the way to being made well, or instead of being made well.

Eric Topol, MD, a renowned cardiologist and inventor of novel medications, and more recently a questioner of tradition, employed by the Scripps Research Institute studying innovative medicine, has given a brief video talk about ways in which hospital stays and doctor visits might be replaced by video chats and remote transmission of physiological data. I think that he is being short sighted and has forgotten that many people who end up in hospitals do so because there is no unpaid human who will or can care for them outside of a hospital, either because they have become so darn sick they can't even make it to the bathroom, or because they are homeless or marginally housed, and that three-dimensional health care is fundamentally what humans do for each other. Still, I love the fact that he is talking about ways to radically change medicine.

Many organizations are developing systems to make medical care more "patient centered." This term was initially coined in the 1960s and was defined as systems that "take into account the patient, the social context in which he lives, and the complementary system devised by society to deal with the disruptive effects of illness."

We physicians sometimes think of this kind of thing as fluff, and unworthy of our skills in fighting off death and disease in their myriad forms. Movement in the direction of patient centeredness, with attention to the systems which make medical care unkind, is vitally important, and should legitimately absorb a significant portion of physicians' considerable problem solving skills.

Janice Boughton, MD, ACP Member, practiced in the Seattle area for four years and in rural Idaho for 17 years before deciding to take a few years off to see more places, learn more about medicine and increase her knowledge base and perspective by practicing hospital and primary care medicine as a locum tenens physician. She lives in Idaho when not traveling. Disturbed by various aspects of the practice of medicine that make no sense and concerned about the cost of providing health care to every American, she blogs at Why is American Health care so expensive?, where this post originally appeared.

New definitions for CKD! Medrants version 1.0

This represents my first attempt at explaining the new CKD definitions. I invite my readers, especially my loyal renal readers, to suggest modifications. This rant will become the basis for a regular talk, and I want to get it right. Thanks in advance for your suggestions.

Ten years on, we have new definitions for CKD. Soon after the initial stages were established, authors began to argue that we should divide stage 3 into 3a and 3b. Now they have.

For those who want to read all the details: KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease.

Here is my synopsis of the diagnosis and definitions for CKD:

In order to make a diagnosis of CKD the patient should have at least one of the following:
1) An estimated GFR (eGFR) <60 mL/min/1.73 m² for 3 months
2) Clear evidence of kidney disease:
--Albuminuria (AER ≥30 mg/24 hours; ACR ≥30 mg/g [≥3 mg/mmol])
--Urine sediment abnormalities
--Electrolyte and other abnormalities due to tubular disorders
--Abnormalities detected by histology
--Structural abnormalities detected by imaging
--History of kidney transplantation

What are the main points here?
Do not diagnose CKD for patients admitted to the hospital with an eGFR of 75 unless they have clear evidence of kidney disease. Do not diagnose CKD until you have excluded acute kidney injury (AKI). Patients with a transient creatinine elevation (for example, from obstruction or volume contraction) do not have CKD, unless 3 months elapse with continued decreased eGFR.

How should we estimate eGFR?
KDIGO prefers the CKD-EPI formula. A recent JAMA article concluded: "The CKD-EPI equation classified fewer individuals as having CKD and more accurately categorized the risk for mortality and ESRD than did the MDRD Study equation across a broad range of populations." Comparison of Risk Prediction Using the CKD-EPI Equation and the MDRD Study Equation for Estimated Glomerular Filtration Rate

What do all these initials mean? We currently have three creatinine-based formulae for estimating GFR. Cockcroft-Gault predates the two more recent models. It uses age, weight, gender and creatinine for its estimates. The weight portion does lead to some challenges: ideal weight or actual weight? MDRD comes from the Modification of Diet in Renal Disease study. It uses age, gender, race and creatinine. The more recent CKD-EPI model uses the same variables as MDRD. Most labs still use MDRD.

Here is my favorite spot for renal calculations: eGFR.
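For readers who want to see the arithmetic behind those calculators, here is a rough Python sketch of the MDRD (IDMS-traceable, 4-variable) and 2009 CKD-EPI creatinine estimates. Treat this as illustrative only, not a clinical calculator; the coefficients are my transcription of the published equations:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """IDMS-traceable 4-variable MDRD estimate (mL/min/1.73 m^2)."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_ckd_epi(scr_mg_dl, age, female=False, black=False):
    """2009 CKD-EPI creatinine estimate (mL/min/1.73 m^2)."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine "knee"
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1) ** alpha    # applies below the knee
            * max(ratio, 1) ** -1.209   # applies above the knee
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 60-year-old non-black man with a creatinine of 1.0 mg/dL:
# MDRD gives roughly 76, CKD-EPI roughly 81 -- both stage 2.
```

Note how CKD-EPI's piecewise spline (the min/max terms) softens the curve at near-normal creatinine values, which is why it reclassifies fewer low-risk people as having CKD.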

When do the formulae not work?
Each formula works by estimating the numerator in the creatinine clearance formula: UV/P. Ucr * V gives the total production of creatinine in 24 hours. Each formula estimates that production, but really is estimating muscle mass, as that is the main source of creatinine. Therefore, if a formula either markedly underestimates or overestimates muscle mass, then the formula will fail. Here are my cautions, i.e., when I eschew any formula:
1) Excess muscle mass: these formulae likely would underestimate GFR in elite body builders or some professional athletes with incredible muscle mass
2) Decreased muscle mass:
--Muscular dystrophies
--Major spinal cord injuries
--Major amputations
--Anorexia nervosa patients

How have the classifications changed?
The old classification had 5 levels:
Stage -- eGFR
1-- >90
2 -- 60-89
3 -- 30-59
4 -- 15-29
5 -- <15

But important epidemiological analyses made clear that stage 3 was too broad. The new staging system starts by dividing stage 3 into 3a and 3b. 3a (45-59) patients have a much lower burden of renal associated complications than do 3b patients:
Stage -- eGFR
1 -- >90
2 -- 60-89
3A -- 45-59
3B -- 30-44
4 -- 15-29
5 -- <15

They go on further to combine the eGFR stages with albuminuria staging. Since I primarily use the urine protein/creatinine ratio (PCR), I will present that chart:
A1: PCR <0.15
A2: PCR 0.15–0.50
A3: PCR >0.50
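Putting the new eGFR stages and the PCR cut-points above into code makes the combined staging concrete. This is a minimal sketch using only the thresholds given in this post (the full KDIGO scheme grades albuminuria by ACR):

```python
def gfr_stage(egfr):
    """Map eGFR (mL/min/1.73 m^2) to the new KDIGO GFR stage."""
    if egfr >= 90:
        return "1"
    if egfr >= 60:
        return "2"
    if egfr >= 45:
        return "3a"   # lower burden of renal complications
    if egfr >= 30:
        return "3b"   # higher burden; the clinically important split
    if egfr >= 15:
        return "4"
    return "5"

def albuminuria_category(pcr):
    """Map urine protein/creatinine ratio to A1-A3 per the cut-points above."""
    if pcr < 0.15:
        return "A1"
    if pcr <= 0.50:
        return "A2"
    return "A3"

# A patient with an eGFR of 40 and a PCR of 0.3 is "Stage 3b, A2" --
# someone to follow more closely than a 3a/A1 patient.
```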

The higher the A level the more closely we should follow the patient and the more aggressively we should work to delay progression to end-stage.

How and when should we label a patient with the diagnosis CKD?

We should be conservative in applying this label. Once we label a patient, he or she is handicapped in obtaining life insurance, disability insurance, etc.

We should always provide the stage along with the label. Patients do not have CKD, rather the patient has Stage 3b CKD.

The normal estimation equation suffices for 3b or higher, but prior to labeling a patient as having CKD Stage 3a, KDIGO recommends adding a cystatin C measurement and calculating a new estimated GFR. Here is a calculator for this more complex formula.


KDIGO has made staging more complicated, but more useful. The new staging stresses the importance of Stage 3b, which I believe is an advance, and de-emphasizes the risks to Stage 3a patients. It also incorporates proteinuria, because proteinuria also predicts complications.

I suspect that it will take significant time to absorb this new staging. I hope this overview will help some of you find this more accessible and more interesting.

db is the nickname for Robert M. Centor, MD, FACP. db stands both for Dr. Bob and da boss. He is an academic general internist at the University of Alabama School of Medicine, and is the Associate Dean for the Huntsville Regional Medical Campus of UASOM. He also serves as a frequent ward attending at the Birmingham VA Hospital. This post originally appeared at his blog, db's Medical Rants.

QD: News Every Day--Computerized order entry reduces errors, but may not affect patient harm

Computerized provider order entry (CPOE) systems can prevent 17.4 million medication errors in inpatient acute-care settings in the U.S. annually, a study found, although it couldn't determine how this might reduce harm to patients.

Researchers conducted a systematic literature review and meta-analysis to examine how CPOE affected medication error rates. Results appeared online Feb. 21 in the Journal of the American Medical Informatics Association.

More than one-third of U.S. acute-care hospitals had adopted CPOE by 2008. Among the more than 2,800 hospitals that responded to a survey, larger hospitals (more than 400 beds) were more likely to have adopted CPOE (56%) compared with medium-sized (35%) or small hospitals (30%); urban hospitals were more likely than rural ones (41% vs 28%; P<0.001); major teaching hospitals were more likely than non-teaching hospitals (53% vs 32%; P<0.001); and not-for-profit hospitals (37%) were more likely to do so than public hospitals (31%) and private for-profit hospitals (32%).

After pooling data from nine studies about the impact of CPOE, researchers estimated that medication error rates were 48% lower after implementing a CPOE (95% confidence interval [CI], 41% to 55%).

"Given this effect size, and the degree of CPOE adoption and use in hospitals in 2008, we estimate a 12.5% reduction in medication errors, or 17.4 million medication errors averted annually," the researchers wrote. However, it is unclear whether reduced medication errors would translate into reduced patient harm from medications. Several studies looked at reduced patient harm, but there were too few to support a meta-analysis, the researchers added.
Monday, February 25, 2013

Help me, I'm deciding

I went to a meeting of a panel this past week under the auspices of the Institute of Medicine. Called the Evidence Communication Innovation Collaborative [yes, I know], the group discusses a number of topics around the general ambit of communicating medical evidence to patients. We spent a lot of time, productively, on the topic of decision aids. A lot of people at the meeting really like them. (Here's a collection of decision aids, which includes a short primer on what they are; the Wikipedia article is informative.)

I like them, too, and it's not hard to understand why. Decision aids are the fuel of shared decision making. Information should not be concentrated in the hands of the doctor; rather, it should be presented to the decider, which we presume is the patient, the ordinary person, in a way relevant to them.

But they are not the panacea.

We forward-thinking doctors know in our heart of hearts that decision-making should be shared with the patient; unfortunately, not all patients think that way. Some still rely on the physician to make their decisions.

Even the alternatives, and the risks and benefits attached to each one, are not so obvious without some thought. And that thought is not the view from nowhere, as a philosopher put it, but dependent on the point of view of the person thinking. The risks, benefits, and preferred alternatives depend on the kind of person doing the choosing. And who's to say that a patient from one race, say, or economic stratum, will react to alternatives the same way as another?

Count me encouraged but skeptical. There were a lot of people in that room ready to share decision making. But a decision aid is only as good as the decisions it includes. We need to know a lot more about how people make decisions, and how they talk to their doctors, before we can expect such aids to do more than reproduce our current health care system's inequalities and insensitivities.

Zackary Berger, MD, ACP Member, is a primary care doctor and general internist in the Division of General Internal Medicine at Johns Hopkins. His research interests include doctor-patient communication, bioethics, and systematic reviews. He is also a poet, journalist and translator in Yiddish and English. This post originally appeared at his blog.

Learning a new language: an insight into EMRs

OK, I'll admit it: I had no idea. I thought that the whining and griping by other doctors about EMRs was just petulance by a group of people who like to be in charge and who resist change. I thought that they were struggling because of their lack of insight into the real benefits of digital records, instead focusing on their insignificant immediate needs. I thought they were a bunch of dopes.

Yep. I am a jerk.

My transition to a new practice gave me the opportunity to dump my old EMR (with all the deficiencies I've come to hate) and get a new, more current system.* I figured that someone like me would be able to learn and master a new EMR with ease. After all, I do understand about data schema, structured and unstructured data, I know about MEDCIN, SNOMED, and HL-7 interfaces. Gosh darn it, I am a card-carrying member of the EMR elite! A new product should be a piece of cake! I'll put my credentials at the bottom of this post, in case you are interested.**

So, imagine my shock when I was confused and befuddled as I attempted to learn this new product. How could someone who could claim a bunch of product enhancements as my personal suggestions have any problem with a different system? The insight into the answer to this sheds light onto one of the basic problems with EMR systems.

Problem 1: different languages As I struggled to figure out my new system, it occurred to me that I felt a lot like a person learning a new language. Here I was: an expert in German linguistics and I was now having to learn Japanese. Both are systems of written and spoken code that accomplish the same task: communication of data from one person to another. Both do so using many of the same basic elements: subjects, objects, nouns, verbs. Both are learned by children and spoken by millions of people. But both are very, very different in many ways.

The reason for my feeling this way is that, at their core, EMR products are computer programs. They are written by engineers with physicians (many of whom have left clinical practice to work for the EMR company) consulting to help shape the product. The object of the program may be physician use, but its heart is that of an engineer. So the storage of the data, the organization of the medical information, the location of where anything can be found, is based much more on the nature of the programmer than anything else.

Problem 2: strengths vs. weaknesses The idea of an EMR is (reputedly) to simplify the task of health care providers in documenting care and retrieving the information quickly. The reality is that some things are of higher priority to one EMR manufacturer than another. Tasks that were simple in my old system (putting in labs, generating letters with structured data, getting a quick overview of a person's record) are difficult in the new system. The new system, however, does other tasks much better (auto-completion of lab data, management of referrals, interfacing with patient portal, etc.).

I am amazed at how many steps it takes to do tasks my old EMR vendor did quickly. Why did they make it so hard? It comes down to priorities, and for whatever reason (CCHIT, Meaningful Use, Moon Phase) some things get high priority, while others are consigned to the "later" pile.

Problem 3: the system The fundamental reason EMR systems are so difficult is not the nature of the programmers making it or the doctors using it; it is that EMRs are grown in the hot-house of a chaotic and arbitrary health care system.

It makes no clinical sense that there are a gazillion ICD-9 codes, but there are, and any EMR system that wants to succeed needs to devote lots of effort to ICD-9 (and soon to ICD-10, yippee). Most office notes are not structured to give the best clinical information in the simplest format; notes are generated for the sake of proper billing, with a 10:1 ratio of useless to useful information. Most notes are like a small gift contained in a large box of packing material, with the majority of information simply getting in the way of what is really wanted. EMR systems are well-designed to generate lots and lots of packing material.

The system I chose does the E/M office visit very well, but does so at the cost of hiding useful information and de-emphasizing what is most clinically helpful for the sake of E/M codes, or what will qualify the practice for "meaningful use" money. I don't fault the system for it, since we doctors spend far more of our time focused on E/M codes and "meaningful use" than on patient care. That is one of the big reasons I left my old practice.

The reality is that EMR systems are designed to finesse the payment system more than they are for patient care. That is because the thing we call "health care" refers to the payment system, not to actual patient care. My frustration with my current EMR system is not that it doesn't do its job well (it still is better than my old one, I think). It's that it is grown on a planet where the honor of being a healer is being consumed by the curse of being a provider. Patients don't matter as much as payment in our system, so EMR systems will follow those priorities. Those who don't will not succeed.

So to those I have scorned in the past, I bow my head in shame. I got good at using a complex tool that allowed me to manage the insanity of our system. It turns out that my skill was a very narrow one.

It makes me feel like a piece of scheisse (たわごと).

*For those wondering, I was on Centricity by GE and am now using eClinicalWorks.

**My Geek Credentials:
--I did my residency at Indiana University, the land where Clem McDonald, one of the pioneers of electronic records, made our records electronic when personal computers were still new (I attended from 1990 to 1994). It was there I became a believer in computerized records.
--In practice, I installed MedicaLogic's EMR in 1996, as one of the first users of their Windows-based product, Logician.
--Within 2 years I was on the user group board, and was elected president in 1998. I was a regular speaker at the conferences and known for my profuse production of clinical content (called "Encounter Forms").
--In 2003, I applied for and won the HIMSS Davies Award for ambulatory care for our practice, recognizing our achievements with EMR in an ambulatory setting.
--After that, I served on several committees for HIMSS and gave talks for multiple other groups (NHQA, National Governors Association), giving the keynote talks for the HIMSS series given around the country to convince docs to adopt EMR.
--In 2011, I participated in a CDC Public Health Grand Rounds as a speaker from the physician perspective on the subject of Electronic Medical Records and "Meaningful Use."

After taking a year-long hiatus from blogging, Rob Lamberts, MD, ACP Member, returned with "volume 2" of his personal musings about medicine, life, armadillos and Sasquatch at More Musings (of a Distractible Kind), where this post originally appeared.

The danger in CT scans

The United States uses computed tomography (CT) scans more than any other industrialized country. There were about 3 million scans done in the U.S. in 1980. By 2007 that number had risen to 70 million. A number of articles published in medical journals over the past few years have reported that excess radiation delivered by these scans will cause cancer deaths in some patients they were meant to help. One study from the National Cancer Institute estimated there would be about 29,000 future cancers related to scans done in 2007 alone. Experts have estimated that as many as a third of all imaging exams do not help the patient or contribute to better outcomes. Let me repeat that: Experts have estimated that as many as a third of all imaging exams do not help the patient or contribute to better outcomes.

Do patients understand the risk of CT scans? A new study from the University of Washington showed that one-third of people getting a CT scan didn't even know the test exposed their body to radiation. They also underestimated the amount of radiation delivered by a CT scan.

A CT scan delivers a mega-dose of radiation, as much as 500 times that of a conventional X-ray.

Patients, especially children, who have multiple CT scans are naturally at higher risk from excess ionizing radiation. In medicine we find "incidentalomas" all the time. If I get a chest X-ray on a patient I suspect has pneumonia, it might also show a small blip that cannot be explained. The recommendation from the radiologist may be to follow up with a CT scan. It takes clinical judgment to weigh the risks versus the benefits of getting that scan. Perhaps the better choice is to deal with the infection and repeat the chest X-ray in 3 months and see if it is still there.

Because CT scans are painless, usually covered by Medicare and insurance (at high $$ cost), protect physicians from "missing something" and facing malpractice risk, and are often recommended to follow up on an "incidentaloma," they are an overused test.

The CT scanner is a great advance in medicine and the ability to image the body in transverse sections and visualize organs and the brain has been truly life-saving. But patients should know and understand the risks. Here are some ways the patient can get involved:
--If a doctor orders a CT scan for a child, the parent should ask the technician to use pediatric-appropriate settings.
--Do not let a doctor or institution repeat a scan that was recently done (for example, if you get second opinions or are seen at a different place). All scans can be electronically shared, even via a flash drive if needed.
--Ask if a "low-dose" scan is appropriate.
--Try to avoid using the Emergency Department for health care. Your chances of getting a CT scan for a headache, car accident, stomach ache, pelvic pain or kidney stone are extremely high if you go to an ED. The doctor wants to cover all possibilities (even those that have low probability) in a short period of time. Bingo: order the CT scan.
--It's OK to ask, "How could the test result change my (or my child's care), if at all?"
--It's OK to ask, "Can you recommend an alternative, such as an ultrasound or MRI, that doesn't involve radiation?"

Understanding risks and benefits makes everyone healthier.
This post originally appeared at Everything Health. Toni Brayer, MD, FACP, is an ACP Internist editorial board member who blogs at EverythingHealth, designed to address the rapid changes in science, medicine, health and healing in the 21st Century.
Friday, February 22, 2013


Finally, the purists out there who require demonstration of efficacy by a randomized clinical trial before attempting a novel therapy can now breathe a great sigh of relief. The New England Journal of Medicine has just published a randomized, controlled trial that demonstrates the clinical utility of fecal transplantation for Clostridium difficile infection. In fact, fecal transplants worked so well that the trial was terminated early after an interim analysis.

Patients in the study all had C. difficile infection with at least one relapse. They were randomized to one of three study arms: (1) a 4-day course of oral vancomycin followed by bowel lavage then fecal transplant via nasoduodenal tube; (2) a 14-day course of oral vancomycin; or (3) oral vancomycin plus bowel lavage. In the transplant group, 13 of 16 patients were cured after 1 fecal infusion (2 of the remaining 3 were cured after a second infusion). In contrast, only 4 of 13 in the vanco group, and 3 of 13 in the vanco-plus-lavage group, were cured. Bottom line: fecal transplantation had an overall cure rate of 94% (15 of 16).

There remain two barriers for patients to access this highly effective therapy: (1) very few physicians perform the procedure, in part, I think, because there is no reimbursement despite the several person-hours required to prepare the fecal solution and administer it; and (2) insurance companies will not reimburse for donor testing, which costs approximately $1,500.

So we've proven what we already knew. Now it's time to look at more interesting questions: Does fecal transplantation work for irritable bowel syndrome and inflammatory bowel disease?

Michael B. Edmond, MD, FACP, is a hospital epidemiologist in Richmond, Va., with a focus on understanding why infections occur in the hospital and ways to prevent these infections, and sees patients in the inpatient and outpatient settings. This post originally appeared at the blog Controversies in Hospital Infection Prevention.

QD: News Every Day--With 7 million patients facing a primary care shortfall, two-thirds of pre-med students mull specialty careers

Seven million people live in areas where the expected increase in demand for providers is greater than 10% of supply, and 44 million people live in areas with an expected increase in demand above 5% of supply, a study found.

Health care reform will likely demand 7,200 additional primary care providers, or 2.5% of the current supply, researchers reported in Health Affairs. This is unlikely to cause disruptions across the board, but rural regions will likely bear the brunt of the shortfall.

Policies to encourage primary care providers to work in these areas of shortage may be as important as policies aimed at increasing the overall supply, the authors noted.

Meanwhile, more than two-thirds of students considering medical school plan to become specialists rather than go into primary care, according to a survey of those who used services of a test prep company.

The survey of 543 students who used Kaplan Test Prep services for their MCATs found that 68% plan to become specialists, with 86% citing academic/personal interest as the main reason. Only 2% cited better salary.

Also, the survey reported that 71% would prefer a three-year medical school program to a four-year option, with all other factors being equal. Medical schools are offering this option, often to encourage students to enter primary care and to get them into communities faster.

A Kaplan spokesperson noted that medical school debt is a notorious factor.

Curing Clostridium difficile with, um, feces

[Author's note: This post is grosser than most. You may not want to read it over lunch.]

Last year I warned that Clostridium difficile (C. diff) infections are becoming more common.

C. diff is a bacterium that infects the colon causing severe, sometimes life-threatening, diarrhea. C. diff infection is frequently a complication of antibiotic use. Antibiotics can kill the normal bacteria in the colon and establish an opportunity for C. diff to proliferate. After a course of antibiotics, a person can remain susceptible for a few months, and subsequent exposure to C. diff, usually in a health care setting, can lead to infection.

The mainstay of C. diff treatment is more antibiotics, typically vancomycin or metronidazole. But these antibiotics don't always work, and in many cases the C. diff infection is not eradicated and the diarrhea recurs.

For over 50 years investigators have suspected that restoring normal gut bacteria could treat C. diff infection. In the 1950s the bacterium C. diff had not yet been isolated, but the severe colon infection that sometimes followed antibiotic use was well known. In 1958, physicians in Denver treated patients with C. diff colitis with enemas containing feces from healthy people. They reported that their patients rapidly and dramatically improved and urged further study of this treatment.

Since then, antibiotic treatment for C. diff was discovered, and the idea of curing C. diff by restoring normal bacteria languished, mostly because the thought of treating a patient by giving him feces is aesthetically so unappealing. Nevertheless as C. diff became more prevalent in recent years, and as antibiotic treatments became less effective, many gastroenterologists have resorted in desperation to treating these very sick patients with donated feces, either by enema, or through a colonoscope, or through a tube inserted through the nose to the small intestine. Invariably the success rates were extremely high, but this treatment never gained legitimacy, partially because of the lack of a rigorous trial comparing it to accepted antibiotic treatment, and partially because of the enormous yuck factor.

Recently the New England Journal of Medicine published online a study that should convince the skeptics, if not the squeamish. Researchers in The Netherlands randomized patients with C. diff infection who had already failed one course of antibiotic treatment. The patients were randomized into three groups. One group received the standard antibiotic treatment of vancomycin for 14 days. A second group received vancomycin for 14 days followed by a solution that flushes out the intestines by causing diarrhea (similar to a colonoscopy preparation). The third group received vancomycin for 4 days, the solution that flushes out the intestines, and then an infusion of feces through a tube inserted through their nose into the small intestine.

The research protocol made many strides in minimizing the unpleasantness of the stool infusion, and patients tolerated it very well. The infused "material" was provided by anonymous donors who were screened for infectious diseases. I'll spare you the details of how the donated material was prepared, but the very curious can read the New York Times article about this study. Suffice it to say that the patients don't see the infused solution. They only experience a plastic tube in their nose.

The results were quite dramatic. In fact, the study was stopped early because the differences between groups were so great. 81% of the patients receiving the feces infusion were cured after the first infusion, and most of the rest were cured with a second. In the antibiotic group about a third were cured, and in the group receiving vancomycin followed by the intestinal flushing solution, only about a quarter were cured. Many of the patients receiving antibiotics requested the feces infusion after the trial ended.

This should convince physicians and patients that if a first course of antibiotic treatment has failed, fecal infusion is a rational next step. It is hoped that researchers will eventually find and culture the bacteria responsible for inhibiting the growth of C. diff, so that patients can simply swallow capsules of live cultured bacteria, eliminating the need to deal with human waste.

Learn more:
When Pills Fail, This, er, Option Provides a Cure (NY Times)
Faecal transplants succeed in clinical trial (Nature)
Duodenal Infusion of Donor Feces for Recurrent Clostridium difficile (NEJM Original Article)
Fecal Microbiota Transplantation — An Old Therapy Comes of Age (NEJM Editorial)
My previous posts about C. diff:

Clostridium difficile Infections on the Increase
A New Treatment for Clostridium difficile

Albert Fuchs, MD, FACP, graduated from the University of California, Los Angeles School of Medicine, where he also did his internal medicine training. Certified by the American Board of Internal Medicine, Dr. Fuchs spent three years as a full-time faculty member at UCLA School of Medicine before opening his private practice in Beverly Hills in 2000. Holding privileges at Cedars-Sinai Medical Center, he is also an assistant clinical professor at UCLA's Department of Medicine. This post originally appeared at his blog.
Thursday, February 21, 2013

Soft drinks create hard choices

Responding to our justifiably increasing preoccupation with widespread obesity, the Coca-Cola Company has released a masterful television ad on the subject. They characterize their own efforts, and invite us all to "come together" to combat this scourge. The whole "come together" concept receives great emphasis, with evocative images from the (presumably) good old days of: "I'd like to buy the world a Coke ..."

Predictably, the collective response of my friends and colleagues in public health has been less than warm and bubbly. Sensing a blend of propaganda, evasion, hypocrisy, and desperation in Coke's efforts, my clan has largely reacted with their own blend of dismissal, derision, and disgust. In essence, they have invited us all to lose this lunch, and roll our eyes.

I confess, I am sorely tempted to join them. But before we can lose our lunch, we are perhaps obligated to chew on it. And before rolling our eyes, we may need to read the writing on the wall, fine print, and all.

Before that chewing and reading begins, I do want to insert a disclaimer. I am the furthest thing from a food industry apologist. I have devoted years of my life to the development of programs for children and adults alike that reveal the all-too-often lamentable truth about the so-called "food" supply. At every opportunity, I have highlighted the fact that "betcha' can't eat just one" was far more than a clever ad campaign; it was a threat to public health, backed up, at least in the case of Kraft, by nutritional biochemists and neuroscientists using functional MRI scans to determine how to maximize the number of calories it takes for us to feel full. And I have noted repeatedly, as I will continue to do, that as we got fat and our kids got diabetes, somebody was chuckling about it all the way to the bank.

Nor do I have even a little love for the Coca-Cola Company. I consider their flagship offering a chemistry experiment in a cup. I haven't had a soda in some 35 years since I first saw that light. Coca-Cola has systematically opposed public health campaigns to reduce soda consumption, deflected criticism, denied epidemiologic truths, and distorted their own contributions to epidemic obesity. I have, at least in moments of private rage, considered them an evil empire. Regarding my brief encounter with their CEO, I can only say I felt the dark side of the Force was strong with him.

And when it comes to polished and compelling ads that obscure any semblance of truth, Coca-Cola has an impressive track record. They have given us polar bears enjoying Coke as they frolic in their winter wonderland.

This is wrong in so many ways it's hard to know where to start. For one thing, polar bears don't drink soda. For another, that's not likely to help them much, because we are blithely destroying their winter wonderland. And guess what? Concocting chemical potions in factories to drink out of plastic bottles when a glass of water would do nicely is part of the reason, as such industrial activity contributes to global warming and the melting of Arctic ice on which the livelihood of real polar bears depends. So, no, Coke is not offering polar bears a drink. It's part of the reason they may have nothing left to eat. But, of course, only part of a much bigger reason.

Reacting to Coke's misleading depiction of polar bears, the Center for Science in the Public Interest engaged musician Jason Mraz to give us the "real" bears. I fully support this campaign to show what might happen if polar bears actually did drink Coke. But of course, these aren't "real" bears, because as noted, polar bears don't drink soda. So, the "real" issue is that we may not be smarter than the average bear after all. Bears are still eating and drinking what bears should eat and drink, to the extent we aren't making it impossible for them. We, on the other hand, have been drinking Coca-Cola out of ever-larger containers.

This isn't just about bears and the choices they make. It's about us, and the choices we make. And we apparently have some hard ones. We have water, but choose to drink Coke. We have broccoli, but choose to eat bologna. There are no bears involved. We have met the enemy, and it is us.

Yes, we are also the victim. Yes, the food industry really has manipulated us with foods engineered to specifications born of functional MRI scans. But come on: Does anyone think Coke is good for them? Does anyone not living under a rock think you can drink a gallon of that stuff daily and not suffer any consequences? Is there really anyone left who has not heard the rumors about sugar? And does anyone bemoaning the unbearable (pun intended) burden of a soda tax truly not know where to find a water fountain?

Coke is quite right about one thing: We are all in this together.

Consider that when McDonald's, another good contender for the food industry's evil empire award, gave us McLean Deluxe, we didn't buy it. The product expired not for want of supply, but for want of demand. Folks, that's not McDonald's problem. It's yours, and mine. It's our kids' problem.

Similarly, remember Alpha-Bits cereal? If you haven't seen it lately, here's why, courtesy of some inside information. Post reduced both the salt and sugar content, actually making the product more nutritious, and people stopped buying it. Sales plummeted from about $80 million a year, to $10 million.

Most product reformulations that allegedly give us better nutrition are actually lateral moves, fixing one thing, breaking another. Salt is reduced, but sugar is increased. Sugar is reduced, but trans fat is increased, and so on. I have an intimate view of all this, courtesy of my work with the NuVal program, which has established a detailed nutrient database for over 100,000 foods it has scored. All too often, banner ads implying better nutrition are entirely misleading. Low-fat peanut butter is substantially less nutritious than regular. Multigrain breads may or may not be whole grain.

But on those rare occasions when the food industry actually gives us better products, we don't buy them.

Which brings us back to Coke: What, exactly, do we want from them?

As I see it, against a backdrop of a growing burden of national and global chronic disease in which they are complicit, Coke has four options. They can (1) ignore the public health problem, and keep on keeping on; (2) acknowledge the public health problem, but say it's not their problem, and keep on keeping on; (3) confess their corporate sins and absolve themselves with ceremonial suicide; or (4) change.

Choices one and two have pretty much run their course. Shareholders are unlikely to bless option three. Which leaves us with option four: change. Change their product formulations. Change their inventory. And change their messaging. Stop talking about frolicking polar bears, and start talking about obesity. And while we have cause to be suspicious about Coca-Cola's motives, that's just what the new ad appears to be doing.

Yes, they sell us chemistry experiments in a cup. Yes, they help us become fat diabetics. But they are also a large company, employing a lot of people. If we simply want to drive a stake through their corporate heart, the result would be a lot of newly-unemployed people, still prone to obesity and diabetes while drinking Pepsi, or Mountain Dew, or Dr. Pepper, while perusing the want ads.

And yes, the new ad about obesity is only in response to mounting pressure from a concerned public, and restive federal authorities. But is it bad or surprising that supply-side changes are responsive to a changing demand? The business of business, after all, is business, and keeping the customer satisfied.

If we want truly meaningful changes in the quality of our food and drink, we will in fact require changes in both supply and demand. It won't help if they build it, and we don't come. There are ways to propagate a shared taste for change, and such a course might allow for substantial improvements in the public health without blowing up the Fortune 500.

Admittedly, the new Coke ads addressing obesity are slick. Stunningly slick. In other words, they are just plain good, working over the chords of emotional response exactly as intended. A testimony to what really deep pockets and top advertising talent can do. This could be just another reason to hate Coke, I suppose.

But on the other hand, the simpler times when Coke was an innocent pleasure are not a Madison Avenue fabrication; they actually happened. We baby-boomers lived through them. There was a time before ultra-uber-gulps and widespread childhood obesity, and soda seemed an innocuous pleasure, whether or not it ever really was. If that has changed over time, then so must we, and so must Coca-Cola.

What would such change look like? Probably something like the new ad.

As a closing aside, I attended the meeting of my local school district wellness committee this week, as they took on the task of complying with Connecticut nutrition standards. The gentleman who runs the high school store noted that by complying with the new regulations, he would lose business to the array of fast-food outlets accessible to the students just across a parking lot. And, I suspect he's exactly right.

I share my colleagues' visceral opposition to everything Coke. But I think we may be letting our abdominal viscera get the better of vital organs situated higher up. Soft drinks do exist; they are big business. Doing something about that involves hard choices.

Change, incremental change, is the most promising and plausible of them. So we have to allow for it if what we want is progress. If we won't accept change without calling it hypocrisy, then we don't really want progress. We want revenge.

David L. Katz, MD, FACP, MPH, FACPM, is an internationally renowned authority on nutrition, weight management, and the prevention of chronic disease, and an internationally recognized leader in integrative medicine and patient-centered care. He is a board certified specialist in both Internal Medicine, and Preventive Medicine/Public Health, and Associate Professor (adjunct) in Public Health Practice at the Yale University School of Medicine. He is the Director and founder (1998) of Yale University's Prevention Research Center; Director and founder of the Integrative Medicine Center at Griffin Hospital (2000) in Derby, Conn.; founder and president of the non-profit Turn the Tide Foundation; and formerly the Director of Medical Studies in Public Health at the Yale School of Medicine for eight years. This post originally appeared on his blog at The Huffington Post.

The fact-filled infection control guideline

I'm not sure what about this tweet got me to thinking about infection control. Before hopping on Twitter this morning, I was happily building Lego scenes with my kids and thinking about this afternoon's Indiana-Iowa basketball game (Dan - thanks for the tickets!). In infection control, there isn't a direct equivalent to the "mindless symmetry" in political journalism mentioned by Jay Rosen, which treats talking points on both sides of the aisle as equivalent without considering the facts. However, there is a similar "mindless" glossing-over of the facts by public health and society guideline committee members that appears in every hospital-acquired infection (HAI) guideline: recommendations based on minimal data. Many (can I suggest most?) of the recommendations in HAI guidelines are based on uncontrolled before-after quasi-experimental studies, expert opinion and perpetuated dogma.

Mike Edmond, MD, FACP, pointed out a few days ago what can happen when a medical specialty, such as hospital epidemiology, recommends policies like mandatory masks for unvaccinated healthcare workers during influenza season, which are based on minimal data. I'm not even going to mention mandatory influenza vaccination for health care workers. But what about other claims in guidelines and by policy makers? Do we have enough evidence to support many of our interventions including most stewardship recommendations? And what about the claim that MDR-Gram negative outbreaks could be controlled if not for the unwilling health care worker?

What happens when we perpetuate opinion and dogma? Although 270-page hand hygiene guidelines may make us feel good, I'm worried that they prevent us from identifying areas where we need research (hand hygiene improvement interventions, anyone?) and lead us to spend days and weeks implementing ineffective or even harmful interventions. Does anyone stop to think how these fact-challenged guidelines might be hurting our patients and eroding our reputations? It seems to me that we shouldn't be spending our political capital implementing "expert opinion" since it will hinder our efforts when we actually are armed with evidence-based interventions. Imagine that day!

So my wish is that guideline committees only include recommendations based on evidence, not opinion or dogma, no matter how hard politically that is for them in the short term. In the long term, if we insist on evidence, we might actually get evidence; someone might notice and start funding infection prevention studies. (e.g. What do you mean we don't know how to halt the spread of MDR-GNRs??) And if our guidelines are shorter and filled with evidence-based recommendations, clinicians in the field will be able to focus on interventions that actually work and not spend their valuable time on willy-nilly dogma-of-the-day recommendations that harm our reputations or worse, our patients.

Eli N. Perencevich, MD, ACP Member, is an infectious disease physician and epidemiologist in Iowa City, Iowa, who studies methods to halt the spread of resistant bacteria in our hospitals (including novel ways to get everyone to wash their hands). This post originally appeared at the blog Controversies in Hospital Infection Prevention.

QD: News Every Day--More diabetics meeting goals in controlling their disease

More diabetic patients are controlling their A1c, blood pressure and LDL cholesterol levels, a study found.

To determine the prevalence of people with diabetes who meet the American Diabetes Association's hemoglobin A1c, blood pressure, and LDL cholesterol (ABC) recommendations, researchers reviewed data from the National Health and Nutrition Examination Surveys for nearly 5,000 people from 1988-1994, 1999-2002, 2003-2006, and 2007-2010.

Results appeared online Feb. 15 at Diabetes Care.

In 2007-2010, 52.5% of people with diabetes achieved A1c levels of 7% or less (53 mmol/mol), 51.1% achieved blood pressure levels less than 130/80 mmHg, 56.2% achieved LDL levels less than 100 mg/dL, and 18.8% achieved all three of the criteria. These were all significant improvements over the time period of 1988-1994 (all P less than 0.05).

Statin use significantly increased between 1988-1994 (4.2%) and 2007-2010 (51.4%, P less than 0.01). Compared with non-Hispanic whites, Mexican Americans were less likely to meet A1c and LDL goals (P less than 0.03), and non-Hispanic blacks were less likely to meet blood pressure and LDL goals (P less than 0.02). Compared with non-Hispanic blacks, Mexican Americans were less likely to meet A1c goals (P less than 0.01).
Wednesday, February 20, 2013


Twenty-five years ago this month, the New England Journal of Medicine published a special report on something that's become medical gospel: aspirin.

That's right. Not as in "take two and call me in the morning," but in the realm of the randomized double-blinded placebo-controlled trial. Or what we generally consider the gold standard of evidence in medical research.

If you've often heard that bit of jargon but always wondered why it's so exalted, break it down:
--randomized: the assignment of the treatment (aspirin) or placebo ('inert' sugar pill) is determined by chance, not by any planned sequence.
--double-blinded: neither the researchers nor the subjects know who is taking what (everything is coded so that analysts can find out at the end).
--placebo-controlled: the study compares the treatment against placebo to see if it's helpful or harmful.
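The three properties above can be sketched in a few lines of code. This is a toy illustration, not a real trial randomization system: each participant gets a randomly assigned, opaquely coded kit, and only the sealed code-to-arm key (held by the data monitors) can unblind the result at study's end.

```python
import random

def randomize_blinded(participant_ids, seed=None):
    """Toy sketch of blinded random assignment. Real trials use
    validated randomization systems; this only illustrates the idea."""
    rng = random.Random(seed)
    arms = ["aspirin", "placebo"]
    key = {}          # sealed: opened by analysts only at study end
    assignments = {}  # what subjects and clinicians see: just a kit code
    for pid in participant_ids:
        arm = rng.choice(arms)            # randomized: assignment by chance
        code = f"KIT-{rng.randrange(10**6):06d}"
        key[code] = arm                   # placebo-controlled: one arm is inert
        assignments[pid] = code           # double-blinded: code reveals nothing
    return assignments, key

assignments, key = randomize_blinded(range(8), seed=42)
```

Because neither the subject nor the clinician can infer the arm from the kit code, expectations can't bias how outcomes are reported or assessed.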

Even though acetylsalicylic acid's properties as a pain reliever and fever reducer had been known in the time of Hippocrates, it was in 1899 that Bayer first patented and marketed what came to be known as aspirin worldwide.

A mere 89 years later, researchers from the "Physicians Health Study" did something unusual. Citing aspirin's "extreme beneficial effects on non-fatal and fatal myocardial infarction"--doctor speak for heart attacks--the study's Data Monitoring Board recommended terminating the aspirin portion of the study early (the study also was looking at the effects of beta-carotene). In other words, the benefit in preventing heart attacks was so clear at 5 years instead of the planned 12 years of study that it was deemed unethical to continue blinding participants or using placebo.

Turns out that aspirin inhibits platelets, tiny specialized blood cells whose job it is to stop your cuts from bleeding. In heart attacks, platelets clump inside the arteries of the heart, depriving the heart muscle of vital oxygen. Using aspirin to inhibit their function is a key mechanism of preventing this phenomenon.

The amazing thing is that it took decades to organize an elegant and simple enough study with enough power (statistical heft) to show that good ol' aspirin could really make a difference.

And that it was "just" aspirin. Shows how far we have yet to go in building medical knowledge.

This post by John H. Schumann, MD, FACP, originally appeared at GlassHospital. Dr. Schumann is a general internist. His blog, GlassHospital, seeks to bring transparency to medical practice and to improve the patient experience.

A lesson on mental illness care: connecting two tragedies

For the past month I've been trying to formulate a blog that could capture my thoughts about mental illness and the prevention of violence. At this point my ideas are still not crystallized, but perhaps writing this will help.

A few days before Christmas I received a phone call from a former patient's mother. She called while I was at the mall with my family doing some last minute shopping. I had taken the day off work. My patient, who I will call "Mark," and his family had left the state of Georgia and my care approximately 6 months prior. Fighting to contain her grief, Mark's mother told me that her son, who was just 25 years old, had taken his own life.

It came as a shock, though admittedly during the brief time that I doctored Mark I had been very concerned about his well-being. His mother said that she wanted me to know because I had worked so hard to help her son. As I listened to the story of the months leading up to his suicide I was flooded with questions: Could I have prevented this? How did he kill himself? Had he found another physician after he moved? Had he been seeing a psychiatrist, as I had recommended?

I had only cared for Mark for three or four months. When we first met, early in 2012, he and his mother were desperate. She called me one evening after clinic hours. I was at my son's saxophone lesson and stepped outside to take the call. She found my medical practice and phone number on Google. She thought I might be able to help. He'd had a tough childhood. His sister was severely disabled. Then, he suffered a traumatic life event in college. Mark, though obviously very intelligent, had dropped out, unable to function. While he was my patient Mark confided that he was desperate to be independent and get back to normal functioning, but felt crippled by his health. He was a very likable young man who I connected with.

He described multiple symptoms: head pressure, mental fogginess, intense pain and burning all over his body coursing from his center outward and down his extremities, nausea, heartburn, post-nasal drip, an intensely dry mouth, insatiable thirst, difficulty swallowing, loss of appetite, change in his bowel habits, weight loss and muscle wasting. Mark felt that he was dying from a medical condition that remained undiagnosed. As he explained it, his trouble had started while he was under the care of a psychiatrist. He attributed some of his symptoms to a medication, a serotonin-norepinephrine reuptake inhibitor, Effexor, which he felt had permanently changed him.

He asked if I could test him for permanent damage caused by traces of the drug that might remain in his blood stream months after his last dose. He had left his psychiatrist's care wanting another opinion and a thorough evaluation of these physical symptoms that were relentless and incapacitating.

I embarked on a very thorough medical evaluation, including a plethora of blood tests, an MRI of his brain, a neurology and an allergy and immunology consultation. I knew all the while that the root problem was very likely his underlying psychiatric condition. Mark acknowledged ongoing depressed mood and severe long-standing anxiety, but was primarily concerned about his physical health. I asked to speak with his psychiatrist, but his preference was that I evaluate his condition independently, and he refused. When questioned about thoughts of self-harm or harming others Mark stated, "I could never do that to my mother."

After frequent lengthy office visits and phone calls over a period of several months I was not able to arrive at a unifying medical diagnosis that explained my patient's condition. I was, however, increasingly concerned about his psychological health and referred him to another psychiatrist. I had become aware of underlying paranoid overtones in his affect, which I felt were delusional. He had been concerned about a pharmacy contaminating his prescriptions with a substance that made him ill. He asked me if I knew what the substance was (I had never heard of it), and asked me to investigate it. He expressed suspicion about various commercial labs and preferred that I send his lab specimens to a smaller lab that he had researched and chosen. He felt this lab would do a more accurate job with his lab testing. He asked me my opinion on his future career. He said he was very interested in the military, and asked if I thought that might be a good direction for him. Inwardly I cringed at the thought, and tried to steer him toward a more flexible career choice, and one that would not involve use of firearms.

After several months of working closely with Mark his mother informed me that the family would be moving out of state. Although the timing was not ideal, his father could not turn down the job opportunity and Mark could not stay on his own. Despite my referrals he had never established with a new psychiatrist. In a last-ditch attempt to get him some help, I made a phone call to a psychiatrist who I knew and trusted. The psychiatrist agreed to see Mark several times prior to his move. It was the best we could think of.

I felt that I needed to clearly articulate my clinical impression to Mark's mother prior to their departure, which was that my patient was suffering from a psychiatric condition that caused a disorder of thinking in the form of paranoia and delusions. I mentioned schizophrenia. Mark's mother acknowledged that this diagnosis had been previously suggested, but that she and Mark wanted another opinion.

At the time of his last visit Mark brought in a fairly organized list of the symptoms he was suffering from and how they impacted his ability to function. He wanted me to write a letter attesting to the fact that he was unable to work or go to school because of his condition. I agreed to write a letter describing his condition, which was difficult given the fact that there was no psychiatrist involved and his diagnosis appeared to be primarily psychiatric. I explained this to Mark and had a direct conversation with him about my clinical impression.

The visits to my psychiatrist referral never occurred. My patient moved later that summer and I had no further contact until the phone call in December. The news about my patient's tragic suicide came one week after the shooting at Newtown, where, as we all know, another young man with significant psychiatric illness inexplicably sacrificed not only his life, but the lives of 26 children and teachers. I immediately wondered if my patient had shot himself, but somehow during our brief phone conversation, I could not bring myself to ask his mom how he died; it seemed irrelevant to her grief at the time. These two events cast a shadow over my holiday season.

I continue to try to make sense of these two tragedies, hoping that connecting them will yield a lesson I can bring to clinical practice to avoid future heartbreak. What makes it so difficult to get patients with psychiatric illness the help that they need? In this case it was not a problem of access; rather, the underlying disease process itself made my patient resistant to care.

I am still searching for broader answers, but perhaps I will start with a call back to my patient's mother to find out more details. In the meantime, I remain highly skeptical that improved mental health care alone, without restricting access to firearms, will be enough to curb gun violence in our country.

Juliet K. Mavromatis, MD, FACP, is a primary care physician in Atlanta, Ga. Previous to her primary care practice, she served on the general internal medicine faculty of Emory University, where she practiced clinical medicine and taught internal medicine residents for 12 years, and led initiatives to improve the quality of care for patients with diabetes. This work fostered an interest in innovative models of primary care delivery. Her blog, DrDialogue, acts as a conversation about health topics for patients and health professionals. This post originally appeared there.