Which brings me to the false dichotomies. For an excellent example there's this piece in the Washington Monthly. You don't even have to read past the subhead to find the comparison:
Last year there wasn’t a single fatal airline accident in the developed world. So why is the U.S. health care system still accidentally killing hundreds of thousands? The answer is a lack of transparency.

I've added emphasis on a particularly important part here, and I'll get back to it in a bit.
Among the various passages in which Mr Allen compares medical care to airport safety, one stands out, although the others have similar problems:
In no other realm—certainly not any as inherently dangerous as health care—do we accept the argument that meaningful comparisons of results are impossible just because those being compared face somewhat different circumstances. Some airports have shorter runways and are more congested than others; some have to deal with frequent snow or thunderstorms, nearby mountain ranges, or lakes and rivers that attract unusual numbers of flocking birds. No two are exactly the same. Yet we don’t therefore conclude that there is no point in comparing the safety record of one airport versus another, much less say that it is acceptable for a certain number of people to be routinely killed on approach or takeoff. We demand that all airports, and everyone else involved in aviation, do what it takes to get accidents to as close to zero as possible, and that they use reams of performance data to make that happen.

The principal problem with this analogy is that pilots of Airbus A380s don't try to land at Courchevel Airport. Moreover, there are rigorous safety standards regarding airplanes; if your plane isn't deemed to be in good, safe condition, you don't take off. Patients in hospitals do not come in with such preconditions; their problems come in varying severity, but equally important is their underlying condition. It's a lot easier to treat a healthy 26-year-old for pneumonia than a 78-year-old with morbid obesity. In point of fact, Mr Allen's data-gathering prescription has been tried. Here's Atul Gawande reporting on the attempt:
For six years, from 1986 to 1992, the federal government released an annual report that came to be known as the Death List, which ranked all the hospitals in the country by their death rate for elderly and disabled patients on Medicare. The spread was alarmingly wide, and the Death List made headlines the first year it came out. But the rankings proved to be almost useless. Death among the elderly or disabled mostly has to do with how old or sick they are to begin with, and the statisticians could never quite work out how to apportion blame between nature and doctors. Volatility in the numbers was one sign of the trouble. Hospitals’ rankings varied widely from one year to the next based on a handful of random deaths. It was unclear what kind of changes would improve their performance (other than sending their sickest patients to other hospitals). Pretty soon the public simply ignored the rankings.

An uncomfortable truth in medicine, one doctors never admit to patients, is that much of its practice is capricious. What saves one patient may quite easily kill another.
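The volatility Gawande describes is easy to reproduce with a toy simulation. The numbers below are made up purely for illustration, not drawn from the Death List: give twenty identical hospitals the very same true mortality rate, rank them two years running, and the table reshuffles on chance alone.

```python
import random

random.seed(0)

# Made-up numbers for illustration: 20 hospitals with the SAME true
# mortality rate, each treating 200 high-risk patients per year.
TRUE_RATE = 0.05
HOSPITALS = 20
PATIENTS = 200

def yearly_ranking():
    """Rank hospitals best-to-worst by one year of simulated deaths."""
    deaths = {h: sum(random.random() < TRUE_RATE for _ in range(PATIENTS))
              for h in range(HOSPITALS)}
    return sorted(deaths, key=deaths.get)

year1 = yearly_ranking()
year2 = yearly_ranking()

# How many places does each hospital move in the table between years,
# even though every hospital is identical by construction?
moves = [abs(year1.index(h) - year2.index(h)) for h in range(HOSPITALS)]
print("average places moved out of 20:", sum(moves) / HOSPITALS)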
What makes this comparison so terribly wrong is that Mr Allen himself knows about these issues; he even prefaces the above excerpt by glibly brushing them aside: "But these are adjustments that can be made, and made all the more fairly and definitively the more data we have about just who is receiving what treatments and with what results." Not to put too fine a point on it, Mr Allen is simply wrong. In countless cases one patient will survive what killed dozens before, and vice versa. Medicine is not nearly as amenable to the types of statistical "adjustment" that Mr Allen prescribes, nor to the industrial-safety approach championed by former Treasury Secretary Paul O'Neill and gloated over by Mr Allen.
Furthermore, let's recall the subhead where I added that emphasis. Mr Allen is not, in fact, comparing health care to aviation safety; he is comparing health care to records of aviation safety. This is akin to the oft-maligned practice in medical research of abandoning, or burying, studies that do not show the desired results and publishing only those that do. In this case Mr Allen tells us that airline safety has achieved remarkable improvements by virtue of its openness and transparency. His evidence? The records gathered in an open and transparent system. Except that there are significant segments of aviation safety that are not included in his database of developed-world incidents.
Worsening the dichotomy further is the nature of each practice. Malcolm Gladwell, in his book Outliers, detailed the conditions present in most plane crashes:
In a typical crash, for example, the weather is poor— not terrible, necessarily, but bad enough that the pilot feels a little bit more stressed than usual. In an overwhelming number of crashes, the plane is behind schedule, so the pilots are hurrying. In 52 percent of crashes, the pilot at the time of the accident has been awake for twelve hours or more, meaning that he is tired and not thinking sharply. And 44 percent of the time, the two pilots have never flown together before, so they’re not comfortable with each other. Then the errors start—and it’s not just one error. The typical accident involves seven consecutive human errors. One of the pilots does something wrong that by itself is not a problem. Then one of them makes another error on top of that, which combined with the first error still does not amount to catastrophe. But then they make a third error on top of that, and then another and another and another and another, and it is the combination of all those errors that leads to disaster.

While Mr Allen tells us that there is a database that tracks even when pilots make wrong turns on runways, he either isn't aware, or omits, that such an error, by itself, is not particularly likely to cause an adverse event. Contrariwise, in medicine single mistakes are often enough to cause an adverse event. In fact, Mr Allen mentions a checklist used to manage infections from central line placement. The five steps all amount to ensuring that the central line is placed sterilely; the problem with sterility, though, is that skipping any one of these steps is likely to render the observance of the others moot.
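The structural difference can be put in rough arithmetic. With a hypothetical, purely illustrative per-step failure probability (the 0.01 below is an assumption, not a measured figure), a disaster that requires seven independent errors to stack up is astronomically rarer than one a single lapse can cause:

```python
# Hypothetical numbers, purely to illustrate the structural difference
# between the two failure modes; nothing here is real accident data.
p_err = 0.01  # assumed chance that any single step or safeguard fails

# Aviation-style failure: disaster only if seven independent errors
# stack up, as in Gladwell's typical-accident description.
p_crash = p_err ** 7

# Sterile-technique-style failure: the central line is contaminated if
# ANY one of the five checklist steps is skipped.
p_infection = 1 - (1 - p_err) ** 5

print(f"seven stacked errors needed: {p_crash:.1e}")
print(f"one lapse is enough:         {p_infection:.1e}")
```

Under these assumptions the chained failure comes out around fourteen orders of magnitude rarer than the single-lapse failure, which is the whole point: defense in depth only helps when the defenses are independent.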
Where Outliers conjectured that increases in aviation safety have been realized because of the industry's push to get junior flight officers to speak up, Mr Allen asserts that transparency is the reason that airline safety has improved:
If the airline industry and its regulators had clung to the [notion that average is good enough], the average rate of airline fatalities would likely be little better than it was in the 1950s, when flying was at least three times as dangerous, on average, as it is today.

What's notable about this, beyond its assertion of causality, is that it makes no sense. Imagine that we did not start keeping strict aviation safety records until the 1970s, but everything else remained the same. Would we now imagine that there were no safety improvements between 1950 and 1970? No. To be fair, Mr Allen does propose that we institutionalize a system allowing any team member to anonymously report errors and near misses; it is not, however, mentioned as responsible for the increase in aviation safety.
Beyond the problems with this dichotomy there are other issues in Mr Allen's reporting. Take his indictment of hospital infection rates in nursing home patients:
[Some will say] that the reason their hospital has such high infection rates is that many of their patients come from nursing homes, where lethal bacteria are rampant. (In the case of our investigation, I always pointed out that we were reporting the infections that their own employees had marked as not present at the time the patient arrived, meaning they were acquired in the hospital itself.)

The caveat here, which I've italicized, is wholly inadequate. Sure, the intake nurse noted that particular infections weren't present in a patient upon intake. What Mr Allen surely knows, because he discussed it earlier in the piece, is that bacteria–even drug-resistant strains–can be carried by asymptomatic patients. Here's how I know that he knows this:
[A] commonsense method used throughout Europe to drive down the number of hospital-acquired MRSA infections: swab the noses of patients before they are admitted, and if they test positive for MRSA, isolate them from other patients.

In medical parlance these patients are carrying MRSA; they are not infected with it. This is an important point because nursing homes, much like hospitals, leave many people carrying pathogens they did not carry before. As anyone who has ever had conjunctivitis in one eye and spread it to the other with their own fingers can tell you, infections can quite easily come from you.
This harkens back to my earlier point about collecting information on these things in a way that actually means something. If I have a patient who carries MRSA in his nares, why should my hospital's numbers reflect a nosocomial case of MRSA infection simply because he picked his nose and then also picked at his sutures? While he may have acquired that infection in the hospital, he did not acquire it because of the hospital, or the medical care–excepting that the latter may be the reason for his sutures.
It is undeniably true that there are issues and substantial room for improvement. The aforementioned Atul Gawande even wrote a book stating a prescription plainly in its title, The Checklist Manifesto. There is plenty of evidence that instituting checklists does wonders to ameliorate many of the problems Mr Allen rightly raises. What non-health care professionals often miss on this issue–and other related topics–is the sheer capriciousness of much of medical practice. In large part this is due to the way we culturally approach medicine; that, however, does not make perpetuating the misunderstanding acceptable.