Despite all of the talk about medical errors and patient safety, almost no one likes to talk about diagnostic errors. Yet doctors misdiagnose patients more often than we would like to think. Sometimes they diagnose patients with illnesses they don't have. Other times, the true condition is missed. All in all, diagnostic errors account for 17 percent of adverse events in hospitals, according to the "Harvard Medical Practice Study," a landmark study of medical errors.

Traditionally, these errors have not received much attention from researchers or the public. This is understandable. Thinking about missed diagnoses and wrong diagnoses makes everyone - patients as well as doctors - queasy, especially because there is no obvious solution. But this past weekend the American Medical Informatics Association (AMIA) made a brave effort to spotlight the problem, holding its first-ever "Diagnostic Error in Medicine" conference.

Hats off to Bob Wachter, Associate Chairman of the Department of Medicine at the University of California, San Francisco, and the keynote speaker at the conference. On Monday, Wachter shared some thoughts on diagnostic errors through his blog, "Wachter's World."

Wachter begins by pointing out that a misdiagnosis lacks the concentrated shock value that is needed to grab the public imagination. Diagnostic mistakes "often have complex causal pathways, take time to play out, and may not kill for hours [i.e., a missed myocardial infarction], days (missed meningitis), or even years (missed cancers)." In short, to understand diagnostic errors you need to pay attention for a longer period of time - not something that's easy to do in today's sound-bite-driven culture.

Diagnostic errors just aren't media friendly. When someone is prescribed the wrong medication and they die, the sequence of events is usually rapid enough that the story can be told soon after the tragedy occurs. But the consequences of a mistaken diagnosis are too diffuse to make a nice, punchy story. As Wachter puts it: "They don't pack the same visceral wallop as wrong-site surgery."

Finally, Wachter observes, it's hard to measure diagnostic errors. It's easy to get an audience's attention by telling them that "the average hospitalized patient experiences one medication error a day" or that "the average ICU patient has 1.7 errors per day in their care."

But we don't have equally clean numbers on missed diagnoses. As a result, he points out, "it's difficult to convince policy makers and hospital executives, who are now obsessing about lowering the rates of hospital-acquired infections and falls" to focus on a problem that is much more difficult to tabulate.

This is a recurring problem in programs that strive to improve the quality of care: we are mesmerized by the idea of "measuring" everything. Yet, too often, what is most important cannot be easily measured.

Wachter recognizes the urgency of the problem: "As quality and safety movements gallop along, the need to [address diagnostic errors] grows more pressing," he writes. "Until we do, we will face a fundamental problem: a hospital can be seen as a high quality organization - receiving awards for being a stellar performer and oodles of cash from P4P programs - if all of its 'pneumonia' patients receive the correct antibiotics, all its 'CHF' patients are prescribed ACE inhibitors, and all its 'MI' patients get aspirin and beta blockers.

"Even if every one of the diagnoses was wrong."

Why So Many Errors?

Medicine is shot through with uncertainty; diseases do not always present neatly, in textbook fashion, and every human body is unique. These are just a few reasons why diagnosis is, perhaps, the most difficult part of medicine.

But misdiagnosis can almost always be traced to cognitive errors in physicians' thinking. In specialties like radiology and pathology, where diagnosis rests largely on visual interpretation, error rates probably range from 2 percent to 5 percent, according to Drs. Eta S. Berner and Mark L. Graber, writing in the May issue of the American Journal of Medicine.

By contrast, in clinical specialties that rely on "data gathering and synthesis" rather than observation, error rates tend to run as high as 15 percent. After reviewing "an extensive and ever-growing literature" on misdiagnosis, Berner and Graber conclude that "diagnostic errors exist at nontrivial and sometimes alarming rates. These studies span every specialty and virtually every dimension of both inpatient and outpatient care."

As the table below reveals, numerous studies show that the rate of misdiagnosis is "disappointingly high" both "for relatively benign conditions" and "for disorders where rapid and accurate diagnosis is essential, such as myocardial infarction, pulmonary embolism, and dissecting or ruptured aortic aneurysms."

STUDY: Shojania et al (2002)
CONDITION: Tuberculosis of the lungs (bacterial infection)
FINDINGS: Reviewing autopsy studies specifically focused on the diagnosis of lung TB, researchers found that 50% of these diagnoses were not suspected by physicians before the patient died.

STUDY: Pidenda et al (2001)
CONDITION: Pulmonary embolism (a blood clot that blocks arteries in the lungs)
FINDINGS: This study reviewed diagnosis of fatal dislodged blood clots over a 5-yr period at a single institution. Of 67 patients who died of pulmonary embolism, clinicians didn't suspect the diagnosis in 37 (55%) of them.

STUDY: Lederle et al (1994), von Kodolitsch et al (2000)
CONDITION: Ruptured aortic aneurysm (when a weakened, bulging area in the aorta ruptures)
FINDINGS: These two studies reviewed cases at a single medical center over a 7-yr period. Of 23 cases of abdominal aortic aneurysm, the diagnosis of rupture was initially missed in 14 (61%); in patients presenting with chest pain, the diagnosis of aortic dissection was missed in 35% of cases.

STUDY: Edlow (2005)
CONDITION: Subarachnoid hemorrhage (bleeding in a particular region of the brain)
FINDINGS: This study, an updated review of published studies on this particular type of brain bleeding, shows about 30% are misdiagnosed on initial evaluation.

STUDY: Burton et al (1998)
CONDITION: Cancer detection
FINDINGS: Autopsy study at a single hospital: of the 250 malignant tumors found at autopsy, 111 were either misdiagnosed or undiagnosed, and in just 57 of the cases the cause of death was judged to be related to the cancer.

STUDY: Beam et al (1996)
CONDITION: Breast cancer
FINDINGS: Fifty accredited centers agreed to review mammograms of 79 women, 45 of whom had breast cancer. The centers missed the cancer in 21% of the patients.

STUDY: McGinnis et al (2002)
CONDITION: Melanoma (skin cancer)
FINDINGS: In this study, a second review of 5,136 biopsy samples found that the diagnosis changed in 11% of the samples over time (1.1% changed from benign to malignant, 1.2% from malignant to benign, and 8% had a change in how abnormal the cells were judged to be), suggesting a non-trivial initial error rate.

STUDY: Perlis (2005)
CONDITION: Bipolar disorder
FINDINGS: The initial diagnosis was wrong in 69% of patients with bipolar disorder and delays in establishing the correct diagnosis were common.

STUDY: Graff et al (2000)
CONDITION: Appendicitis (inflamed appendix)
FINDINGS: Retrospective study at 12 hospitals of patients with abdominal pain and operations for appendicitis. Of 1,026 patients who had surgery, there was no appendicitis in 110 (10.5%); of 916 patients with a final diagnosis of appendicitis, the diagnosis was missed or wrong in 170 (18.6%).

STUDY: Raab et al (2005)
CONDITION: Cancer pathology (microscopic examination of tissues and cells to detect cancer)
FINDINGS: The frequency of errors in diagnosing cancer was measured at 4 hospitals over a 1-yr period. The error rate of pathologic diagnosis was 2% - 9% for gynecology cases and 5% - 12% for non-gynecology cases; errors ranged from which tissues the doctors sampled, to preparation problems, to misinterpretations of the tissue when viewed under the microscope.

STUDY: Buchweitz et al (2005)
CONDITION: Endometriosis (tissue similar to the lining of the uterus is found elsewhere in the body)
FINDINGS: Digital videotapes of the inside of patients' bodies were shown to 108 gynecologic surgeons. Surgeons agreed only 18 percent of the time as to how many tissue areas were actually affected by this condition.

STUDY: Gorter et al (2002)
CONDITION: Psoriatic arthritis (red, scaly skin coupled with joint inflammation)
FINDINGS: 1 of 2 patients with psoriatic arthritis visited 23 joint and motor specialists; the diagnosis was missed or wrong in 9 visits (39%).

STUDY: Bogun et al (2004)
CONDITION: Atrial fibrillation (abnormal heart beat in the upper chambers of the heart)
FINDINGS: A review of electrocardiograms [graphical recordings of the heart's electrical activity] that had been read by machine as showing this abnormal heart beat found that 35% of the patients were misdiagnosed by the machine, and the error was detected by the reviewing clinician only 76% of the time.

STUDY: Arnon et al (2006)
CONDITION: Infant botulism (toxic bacterial infection in newborns' intestines)
FINDINGS: Study of 129 infants in California suspected of having botulism during a 5-yr period; only 50% of the cases were suspected at the time of admission.

STUDY: Edelman (2002)
CONDITION: Diabetes (high blood sugar due to insufficient insulin)
FINDINGS: Retrospective review of 1,426 patients with laboratory evidence of diabetes showed that there was no mention of diabetes in the medical record of 18% of patients.

STUDY: Russell et al (1988)
CONDITION: Chest x-rays in the Emergency Department
FINDINGS: One third of x-rays were incorrectly interpreted by the Emergency Department staff compared with the final readings by radiologists.

Overconfidence

Misdiagnosis rarely springs from a "lack of knowledge per se, such as seeing a patient with a disease that the physician has never encountered before," Berner and Graber explain. "More commonly, cognitive errors reflect problems gathering data, such as failing to elicit complete and accurate information from the patient; failure to recognize the significance of data, such as misinterpreting test results; or most commonly, failure to synthesize or 'put it all together.'"

The breakdown in clinical reasoning often occurs because the physician isn't willing or able to "reflect on [his] own thinking processes and critically examine [his] assumptions, beliefs, and conclusions." In a word, the physician is too "confident."

Indeed, Berner and Graber find an inverse relationship between confidence and skill. In one study they reviewed, the researchers looked at diagnoses made by medical students, residents and physicians and asked them how certain they were that they were correct. The good news is that while medical students were less accurate, they also were less confident; meanwhile the attending physicians were the most accurate, and highly confident. The bad news is that the residents were more confident than the others, but significantly less accurate than the attending physicians. In another study, researchers found that residents often stayed wedded to an incorrect diagnosis even when a diagnostic decision support system suggested the correct diagnosis.

In a third study of 126 patients who died in the ICU and underwent autopsy, physicians were asked to provide the clinical diagnosis and also their level of uncertainty. Level 1 represented complete certainty, level 2 indicated minor uncertainty, and level 3 designated major uncertainty. Here the punch line is alarming: clinicians who were "completely certain" of the diagnosis before death were wrong 40% of the time.

Overconfidence or the belief that "I know all I need to know" may help explain what the researchers describe as a "pervasive disinterest in any decision support or feedback, regardless of the specific situation." Studies show that "physicians admit to having many questions that could be important at the point of care, but which they do not pursue. Even when information resources are automated and easily accessible at the point of care with a computer, one study found that only a tiny fraction of the resources were actually used."

Research shows that physicians tend to ignore computerized decision-support systems, often in the form of guidelines, alerts, and reminders. "For many conditions, consensus exists on the best treatments and the recommended goals," Berner and Graber point out. Nevertheless, a comprehensive review of medical practice in the United States found that the care provided deviated from recommended best practices half of the time. In one study, the researchers suggest that the high rate of noncompliance with clinical guidelines relates to "the sociology of what it means to be a professional" in our health care system: "Being a professional connotes possessing expert knowledge in an area and functioning relatively autonomously." Many physicians have yet to learn that 21st century medicine is too complex for anyone to know everything - even in a single specialty. Medicine has become a team sport.

But while it's easy to blame medical "arrogance" for the high rate of errors, "there is substantial evidence that overconfidence - that is, miscalibration of one's own sense of accuracy and actual accuracy - is ubiquitous and simply part of human nature," Berner and Graber write. "A striking example derives from surveys of academic professionals, 94% of whom rate themselves in the top half of their profession. Similarly, only 1% of drivers rate their skills below that of the average driver."

In another study published in the same issue of the American Journal of Medicine, Pat Croskerry and Geoff Norman note that such equanimity regarding one's own skills can lead to what's called "confirmation bias." People "anchor" on findings that support their initial assumptions. Given a set of information, it's much easier to pull out the data that proves you right and pat yourself on the back than it is to look at the contradictory evidence and rethink your assumptions. Indeed, Croskerry and Norman observe: "it takes far more mental effort to contemplate disconfirmation - by considering all the other things it might be - than confirmation."

Making things all the more difficult is the fact that, at a certain point, the alternative to confirmation bias - what Croskerry and Norman call "consider the opposite" - becomes impractical. If a doctor embraces uncertainty he could easily become paralyzed.

What doctors need to do is to simultaneously make a decision - and keep an open mind. Often, a doctor must embark on a course of treatment as a way of diagnosing the condition - all the time knowing that he may be wrong.

Too often, Berner and Graber observe, physicians narrow the diagnostic hypotheses too early in the process, so that the correct diagnosis is never seriously considered. Reliance on advanced diagnostic tests can encourage what they call "premature closure." After all, high-tech diagnostic technologies offer up hard-and-fast data, fostering the illusion that the physician has vanquished medicine's ambiguity.

But in truth advanced diagnostic tools can miss critical information. The problem is not the technology, but how we use it. Some observers suggest that the newest and most sophisticated tools are more likely to produce false negatives because doctors accept the results so readily.

"In most cases, it wasn't the technology that failed," explains Dr. Atul Gawande in Complications: A Surgeon's Notes on an Imperfect Science. "Rather, the physician did not consider the right diagnosis in the first place. The perfect test or scan may have been available, but the physician never ordered it." Instead, he ordered another test - and believed it.

"We get this all the time," Bill Pellan of Florida's Penallas-Pasca County Medical Examiner's Office told the New York Times a few years ago. "The doctor will get our report and call and say: 'But there can't be a lacerated aorta. We did a whole set of scans.'

"We have to remind him we held the heart in our hands."

In the second part of the post, we'll address the fact that most physicians have no way of knowing how often they may be missing the diagnosis because they don't receive any feedback: they never find out how the story ends. Maggie will also talk about "the most powerful tool in the history of medicine" - the autopsy.

Part II

Sometimes physicians are overly confident; sometimes they narrow their hypothesis too early in the diagnostic process. Sometimes they rely too heavily on advanced diagnostic tests and accept the results too quickly. As I explained in part one of this post, these are some of the reasons why physicians misdiagnose their patients up to 15 percent of the time. Of all medical errors, misdiagnosis is the one that we talk about least - in part, because we don't know what to do about it, in part because most doctors have no way of knowing how many diagnostic errors they make.

"Complacency" (i.e. the attitude that "nobody's perfect") also is a factor, reports Drs. Eta S. Berner and Mark L. Graber in the May issue of the American Journal of Medicine. "Complacency reflects tolerance for errors, and the belief that errors are inevitable," they write, "combined with little understanding of how commonplace diagnostic errors are. Frequently, the complacent physician may think that the problem exists, but not in his own practice..."

Autopsies

It is crucial to recognize that physicians are not simply deceiving themselves: in our fragmented health care system many honestly don't know when they have misdiagnosed a patient. No one tells them - including the patient.

Sometimes a patient who isn't getting better simply leaves the doctor and finds someone else. His original doctor may well assume that he was finally cured. Or the patient may be discharged from the hospital, relapse three months later, and go to a different ER where he discovers that his symptoms have returned because he was, in fact, misdiagnosed. The doctors who cared for him at the first hospital have no way of knowing; they think they cured him. In other cases, the patient gets better despite the wrong diagnosis. (It is surprising how often bodies heal themselves.) Meanwhile, both doctor and patient assume that the diagnosis was right and that the treatment "worked."

In still other cases, the patient dies, and because everyone assumes that the diagnosis was correct, it is listed as the "cause of death" - when in fact, another condition killed the patient.

When giving talks to groups of physicians on diagnostic errors, Graber says that he frequently "asks whether they have made a diagnostic error in the past year. Typically, only 1% admit to having made such a mistake."

Here, we reach the heart of the problem: what Berner and Graber call "the remarkable discrepancy between the known prevalence of diagnostic error and physician perception of their own error rate." This gap "has not been formally quantified and is only indirectly discussed in the medical literature," they note, "but [it] lies at the crux of the diagnostic error puzzle, and explains in part why so little attention has been devoted to this problem."

One cannot expect doctors to learn from their mistakes unless they have feedback: At one time, autopsies provided physicians with the information they needed. And the results were regularly discussed at "mortality and morbidity" conferences where doctors became Monday-morning quarterbacks, discussing what they could have done differently.

But today, "autopsies are done in 10 percent of all deaths; many hospitals do none," notes Dr. Atul Gawande in Complications: A Surgeons Notes on an Imperfect Science. "This is a dramatic turnabout. Throughout much of the twentieth century, doctors diligently obtained autopsies in the majority of all death...Autopsies have long been viewed as a tool of discovery, one that has been used to identify the cause of tuberculosis, reveal how to treat appendicitis, and establish the existence of Alzheimer's disease.

"So what accounts for the decline?" Gawande asks. "In truth, it's not because families refuse - to judge from recent studies, they still grant their permission up to 80 percent of the time. Instead, doctors once so eager to perform autopsies that they stole bodies [from graves] have simply stopped asking.

"Some people ascribe this to shady motives," Gawande continues. "It has been said that hospitals are trying to save money by avoiding autopsies, since insurers don't pay for them, or that doctors avoid them in order to cover up evidence of malpractice. And yet," he points out, "autopsies lost money and uncovered malpractice when they were popular, too."

Gawande doesn't believe that fear of malpractice has driven the decline in autopsies. "Instead," he writes, "I suspect, what discourages autopsies is medicine's twenty-first-century, tall-in-the-saddle confidence."

This is an important point. Autopsies have fallen out of fashion in recent years: "Between 1972 and 1995, the last year for which statistics are available, the rate fell from 19.1 percent of all deaths to 9.4 percent." A major reason for the decline over this period is that "imaging technologies such as CT scanning and ultrasound have enabled doctors to 'see' such obvious internal causes of death as tumors before the patient dies," says Dr. Patrick Lantz, associate professor of pathology at Wake Forest University Baptist Medical Center. Nowadays an autopsy seems a waste of time and resources.

Gawande agrees: "Today we have MRI scans, ultrasound, nuclear medicine, molecular testing, and much more. When somebody dies, we already know why. We don't need an autopsy to find out...Or so I thought..." Gawande then goes on to tell the story of an autopsy that rocked him. He had completely misdiagnosed a patient.

What Autopsies Show

The autopsy has been described as "the most powerful tool in the history of medicine" and the "gold standard" for detecting diagnostic errors. Indeed, Gawande points out that three studies done in 1998 and 1999 reveal that autopsies "turn up a major misdiagnosis in roughly 40 percent of all cases."

A large review of autopsy studies concluded that "in about a third of the misdiagnoses the patients would have been expected to live if proper treatment had been administered," Gawande reports. "Dr. George Lundberg, a pathologist and former editor of the Journal of the American Medical Association, has done more than anyone to call attention to these figures. He points out the most surprising fact of all: the rate at which misdiagnosis is detected in autopsy studies has not improved since at least 1938."

When Gawande first heard these numbers he couldn't believe them. "With all of the recent advances in imaging and diagnostics . . . it's hard to accept that we have failed to improve over time." To see if this really could be true, he and other doctors at Harvard put together a simple study. They went back into their hospital records to see how often autopsies picked up missed diagnoses in 1960 and 1970, before the advent of CT, ultrasound, nuclear scanning and other technologies, and then in 1980, after those technologies became widely used.

Gawande reports the results of the study: "The researchers found no improvement. Regardless of the decade, physicians missed a quarter of fatal infections, a third of heart attacks and almost two-thirds of pulmonary emboli in their patients who died."

But these numbers may exaggerate the rate of error. As Berner and Graber observe, "autopsy studies only provide the error rate in patients who die." One can assume that the error rate is much lower in patients who survived.

"For example, whereas autopsy studies suggest that fatal pulmonary embolism is misdiagnosed approximately 55% of the time, the misdiagnosis rate for all cases of pulmonary embolism is only 4%..." a large discrepancy also exists regarding the misdiagnosis rate for myocardial infarction: although autopsy data suggest roughly 20% of these events are missed, data from the clinical setting (patients presenting with chest pain or other relevant symptoms) indicate that only 2% to 4% are missed."

Still, they acknowledge that when laymen are trained to pose as patients suffering from specific symptoms, studies show that "internists missed the correct diagnosis 13% of the time." Other studies have found that physicians can even disagree with themselves when presented again with a case they have previously diagnosed.

On the question of whether the diagnostic error rate has changed over time, Berner and Graber quote researchers who suggest that the near-constant rate of misdiagnosis found at autopsy over the years probably reflects two factors that offset each other:
1. diagnostic accuracy actually has improved over time (more knowledge, better tests, more skills);

2. but as the autopsy rate declines, there is a tendency to select only the more challenging clinical cases for autopsy, which then have a higher likelihood of diagnostic error.

A long-term study of autopsies in Switzerland (where the autopsy rate has remained constant at 90%) supports the theory that the absolute rate of diagnostic errors is, as suggested, decreasing over time. Nevertheless, nearly everyone agrees, the rate of diagnostic errors remains too high.

We need to revive the autopsy, Gawande argues. For "autopsies not only document the presence of diagnostic errors, they also provide an opportunity to learn from one's errors (errando discimus) if one takes advantage of the information.

"The rate of autopsy in the United States is not measured any more," he observes, "but is widely assumed to be significantly <10%. To the extent that this important feedback mechanism is no longer a realistic option, clinicians have an increasingly distorted view of their own error rates.

"Autopsy literally means "to see for oneself," Gawande observes, and despite our knowledge and technology, when we look we are often unprepared for what we find. Sometimes it turns out that we had missed a clue along the way or made a genuine mistake. Sometimes we turn out wrong despite doing everything right.

"Whether with living patients or dead, we cannot know until we look... But doctors are no longer asking such questions. Equally troubling, people seem happy to let us off the hook. In 1995, the United States National Center for Health Statistics stopped collecting autopsy statistics altogether. We can no longer even say how rare autopsies have become."

If they are going to reflect on their mistakes, physicians need to "see for themselves."