Why remembering what worked doesn’t work

In the Second World War the US Army Air Forces had a growing problem: their bombers were being lost at too high a rate to sustain the operations the war effort demanded. A decision was taken to armour the bombers, but there is only so much armour you can put on a plane before it can’t fly. So the commanders reviewed planes returning from sorties and found the damage tended to cluster in very specific areas (as shown in figure 1).

Figure 1. Averaged location of damage on returning bombers. Red dots indicate increased density.


They saw the damage tended to accumulate around the wings, central fuselage and tail gunner, and moved to put the armour around these areas. It was a Hungarian mathematician who pointed out at the last minute that this would do nothing, as these regions were where a plane could get hit and survive; in fact the undamaged regions were the critical areas to protect, as the planes hit in the cockpit, engines and fuel tanks were the ones crashing. The commanders followed his advice and the planes’ survival rates did indeed go up (a more detailed description of this story and its implications can be found here).

When explained, this phenomenon, known as survivor bias, seems very obvious. But the instinct to overweight the factors behind survivors whilst discounting the factors behind non-survivors shapes our decision-making every day, and has been a powerful force in medical history. A real impact of this bias has been demonstrated in assessing surgery [1, 2], HIV treatment [3], lung cancer treatment [4] and genetic disorders [5]. Careful analysis of this phenomenon shows it can dramatically affect the quality of diagnosis (see [6] for a very detailed discussion) and hence medical outcomes.

Figure 2. Focusing on individual extreme cases is almost the worst possible way to fall victim to a survivor bias in selection (image from Ahoyuniverse).


Medical staff are regularly called upon to make diagnoses, very often under pressure of time and resources, and with varying degrees of patient co-operation and clarity. Survivor bias, however, can distort our understanding of what works and what doesn’t. Imagine the following scenario: over time, a group of patients are diagnosed with a specific knee injury. Half of them are quickly x-rayed and sent to surgery, so the consulting physiotherapist has little contact with them. The other half are deemed not to need surgery and follow a course of rehabilitation, and half of those recover well. It is natural for the physiotherapist to conclude that this treatment pathway has a 50% success rate, because that is what their own caseload shows. But across all presenting patients the figure is 25%, as half of them never went through this course of treatment at all. When under high pressure at work it is easy to fall prey to this without knowing it; judging only from the groups we recall through contact and recovery is a perfect example of survivor bias affecting our judgment.
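The arithmetic of this scenario can be checked with a short simulation. This is a minimal sketch with made-up numbers (10,000 patients, a 50/50 surgery split, a 50% rehabilitation recovery rate), chosen only to match the example above:

```python
import random

random.seed(0)

patients = 10_000
rehab_seen = 0       # patients the physiotherapist actually follows
rehab_recovered = 0

for _ in range(patients):
    # Half are x-rayed and sent straight to surgery: they never
    # appear in the physiotherapist's records.
    if random.random() < 0.5:
        continue
    rehab_seen += 1
    # Half of the rehabilitation patients recover well.
    if random.random() < 0.5:
        rehab_recovered += 1

# The rate as the physiotherapist perceives it (close to 50%)...
perceived = rehab_recovered / rehab_seen
# ...versus the rate across all presenting patients (close to 25%).
actual = rehab_recovered / patients

print(f"perceived: {perceived:.2f}, actual: {actual:.2f}")
```

The gap between the two numbers is exactly the half of the cohort that was filtered out before the physiotherapist ever saw it: the unseen group silently halves the true figure.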

Survivor bias is the primary mechanism by which our minds erase our failures behind the scenes and make us remember only our successes. Sure, this makes us feel good about ourselves, but it gives a largely false image of our abilities and track record. It is uncomfortable to consider, but the reality is that everyone has a rose-tinted view of their personal history. Having read this far you’re probably nodding away. The natural response is “sure, I’m not perfect, but I’m pretty good, and I know colleagues and friends who make these types of errors all the time”. The thing is: they do. But they think the same thing, and you’re both right. So how to overcome this constant error of the mind? As before, the key is not to rely on memory but to keep written records that honestly track the accuracy of diagnoses and patient recovery rates. It’s not easy, even though modern software makes it easier than it once was, and it requires a certain honesty about our own successes. But the reward can be better diagnoses for our patients and a greater understanding of our own abilities.

In 1950, having saved the lives of thousands of airmen he had never met, Abraham Wald and his wife were killed in a plane crash in India, where he was giving a lecture series at the invitation of the Indian government. A tragic circumstance, and a reminder that for all our analysis, coincidence will always be a factor too.

1. Sy, R.W., et al., Survivor treatment selection bias and outcomes research: a case study of surgery in infective endocarditis. Circ Cardiovasc Qual Outcomes, 2009. 2(5): p. 469-74.

2. Tleyjeh, I.M. and L.M. Baddour, The potential impact of survivor treatment selection bias on the perceived efficacy of valve surgery in the treatment of infective endocarditis. Clin Infect Dis, 2007. 44(10): p. 1392-3.

3. Glesby, M.J. and D.R. Hoover, Survivor treatment selection bias in observational studies: examples from the AIDS literature. Ann Intern Med, 1996. 124(11): p. 999-1005.

4. Naimi, A.I., et al., Estimating the effect of cumulative occupational asbestos exposure on time to lung cancer mortality: using structural nested failure-time models to account for healthy-worker survivor bias. Epidemiology, 2014. 25(2): p. 246-54.

5. Anderson, C.D., et al., The effect of survival bias on case-control genetic association studies of highly lethal diseases. Circ Cardiovasc Genet, 2011. 4(2): p. 188-96.

6. Miller, D.P., M. Gomberg-Maitland, and M. Humbert, Survivor bias and risk assessment. Eur Respir J, 2012. 40(3): p. 530-2.