There’s a nice paper just out in BMC Health Services Research by Kristofferson and colleagues where they looked at hospital mortality stats in Norway and counted deaths in three different ways:
- exclude patients transferred between hospitals and count deaths in hospital
- exclude patients transferred between hospitals and count deaths within 30 days wherever they happened
- count patients weighted by the proportion of time in each hospital, and count deaths within 30 days wherever they happened
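The three definitions can be sketched as toy code. To be clear, the data and field names below are invented for illustration and are not from the Kristofferson paper; the point is just that the same records give different per-hospital rates under each definition:

```python
from collections import defaultdict

# Hypothetical patient records: the hospitals each patient stayed in (with days),
# whether they died in hospital, and whether they died within 30 days of
# admission, wherever that happened. All values are made up for illustration.
patients = [
    {"stays": [("A", 10)],          "died_in_hospital": True,  "died_30d": True},
    {"stays": [("A", 5)],           "died_in_hospital": False, "died_30d": False},
    {"stays": [("A", 3), ("B", 7)], "died_in_hospital": False, "died_30d": True},
    {"stays": [("B", 8)],           "died_in_hospital": False, "died_30d": True},
    {"stays": [("B", 2)],           "died_in_hospital": False, "died_30d": False},
]

def mortality(death_key, include_transfers):
    """Per-hospital mortality rate under one of the three definitions."""
    deaths = defaultdict(float)
    denom = defaultdict(float)
    for p in patients:
        transferred = len(p["stays"]) > 1
        if transferred and not include_transfers:
            continue  # definitions 1 and 2 drop transferred patients entirely
        total_days = sum(d for _, d in p["stays"])
        for hosp, days in p["stays"]:
            # definition 3 weights a patient by the share of time in each hospital
            w = days / total_days if include_transfers else 1.0
            denom[hosp] += w
            deaths[hosp] += w * p[death_key]
    return {h: deaths[h] / denom[h] for h in denom}

# Definition 1: exclude transfers, count in-hospital deaths
print(mortality("died_in_hospital", include_transfers=False))
# Definition 2: exclude transfers, count 30-day deaths wherever they happened
print(mortality("died_30d", include_transfers=False))
# Definition 3: time-weighted patients, count 30-day deaths wherever they happened
print(mortality("died_30d", include_transfers=True))
```

Even on these five toy patients, hospital A's rate shifts between the definitions because the transferred patient's death is dropped, or shared between A and B, depending on which rule you pick.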
OK, that’s not every possibility but the point is to test how sensitive a league table would be to changing this definition. The assumption is often made that mortality is the best statistic to fall back on when all else fails, but the notion that a patient is either dead or alive is all very well until you get down to the fine details of how you count these deaths… and then it gets complicated.
They found a considerable number of hospitals moving in and out of being “outliers” when the definition of mortality changed. This is no great surprise to anyone who has analysed comparative hospital stats, or has looked into the methodological literature on it. But it remains the case that league tables get a lot of attention from both journalists and bureaucrats.
As further reading, I cannot recommend highly enough the book “Performance measurement for health system improvement” and the landmark JRSS paper.
PS: the graphs in the Kristofferson paper are bad: inadequately labelled, ugly and confusing.