Monthly Archives: December 2014

Best dataviz of 2014

I expect everyone in the dataviz world would tell you this year was better than ever. It certainly seemed that way to me. I’m going to separate excellent visualisation for communicating data from excellent visualisation for communicating methods.

In the first category, the minute I saw “How the Recession Reshaped the Economy, in 255 Charts”, it was so clearly head and shoulders above everything else that I could have started writing this post right then. It’s beautiful, intriguing and profoundly rich in information. And it’s quite unlike anything I’d seen in D3 before; or rather, it brings together a few hot trends, like scrolling to advance through a deck, in exemplary style.

[Image: screenshot of “How the Recession Reshaped the Economy, in 255 Charts”]

Next, there’s the use of JavaScript as a powerful programming language to do all manner of clever things in your web browser. Last year I was impressed by Rasmus Bååth’s MCMC in JavaScript, which let me do Bayesian analyses on my cellphone. This year I went off to ICOTS in Flagstaff, AZ and learnt about StatKey, a pedagogical collection of simulation / randomisation / bootstrap methods – but you can put your own data in, so why not use them in earnest? It is entirely written in JavaScript, and you know what that means – it’s open source, so take it and adapt it, making sure to acknowledge the work of this remarkable stats dynasty!

[Image: screenshot of StatKey]
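The bootstrap is a good example of why JavaScript is enough for this kind of tool. Here is a minimal sketch of percentile-interval bootstrapping in plain JavaScript – my own illustration, not StatKey’s code, with invented toy data:

```javascript
// Draw `reps` resamples with replacement and collect the statistic's
// bootstrap distribution, sorted so we can read off percentiles.
function bootstrap(data, statistic, reps) {
  const stats = [];
  for (let i = 0; i < reps; i++) {
    const resample = Array.from({ length: data.length },
      () => data[Math.floor(Math.random() * data.length)]);
    stats.push(statistic(resample));
  }
  return stats.sort((a, b) => a - b);
}

const mean = xs => xs.reduce((s, x) => s + x, 0) / xs.length;

// 95% percentile interval for the mean of some toy data
const draws = bootstrap([4.1, 5.0, 3.8, 6.2, 5.5, 4.9], mean, 1000);
const lo = draws[Math.floor(0.025 * draws.length)];
const hi = draws[Math.floor(0.975 * draws.length)];
```

Everything runs client-side, so the user’s data never leaves the browser – part of what makes tools like StatKey easy to adopt in a classroom.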

So, happy holidays. If the good Lord spares me, I expect to enjoy even more amazing viz in 2015.


Filed under JavaScript, Visualization

An audit of audits

In England, and to some extent other parts of the UK (it’s confusing over here), clinical audits with a national scope are funded by HM Government via the Healthcare Quality Improvement Partnership (HQIP). Today, they have released a report from ongoing work to find out how these different audits operate. You can download it here. I am co-opted onto one of the sub-groups of the NHS England committee that decides which projects to fund, and as a statistician I always look for methodological rigour in these applications. The sort of thing that catches my eye, or more often worries me by its absence: plans for sampling, plans for data linkage, plans for imputing missing data, plans for risk adjustment and how these will be updated as the project accumulates data. Also, it’s important that the data collected is available to researchers, in a responsible way, and that requires good record-keeping, archiving and planning ahead.

I’ve just looked through the audit-of-audits report for statistical topics (which are not its main focus) and want to pick up a couple of points. In Table 3, we see that the statistical analysis plan is the area most likely to be missed out of an audit’s protocol. It’s amazing really, considering how central that is to their function. 24/28 work streams provide a user manual including data dictionary to the poor devils who have to type in their patients’ details late at night when they were supposed to have been at their anniversary party long ago (that’s how I picture it anyway); this really matters because the results are only as good as what got typed in at 1 am. Four of them take a sample of patients, rather than aiming for everyone, and although they can all say how many they are aiming for, only one could explain how they check for external validity and none could say what potential biases existed in their process. 20/28 use risk-adjustment, 16 of which had done some form of validation.
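For readers who haven’t met risk adjustment: the idea is to compare the outcomes an audit observes against what a pre-fitted case-mix model would predict for those same patients. A minimal sketch in JavaScript, with entirely invented model coefficients and patient records:

```javascript
const logistic = z => 1 / (1 + Math.exp(-z));

// Hypothetical pre-fitted model: intercept plus effects of age and a
// comorbidity flag. Real audits would fit and validate these coefficients.
const expectedRisk = p => logistic(-3.0 + 0.04 * (p.age - 60) + 0.8 * p.comorbid);

// Observed deaths divided by the sum of model-expected risks;
// a ratio near 1 means outcomes are in line with the case mix.
function observedExpectedRatio(patients) {
  const observed = patients.filter(p => p.died).length;
  const expected = patients.reduce((s, p) => s + expectedRisk(p), 0);
  return observed / expected;
}

const ratio = observedExpectedRatio([
  { age: 72, comorbid: 1, died: true },
  { age: 55, comorbid: 0, died: false },
  { age: 80, comorbid: 1, died: false },
  { age: 64, comorbid: 0, died: true },
]);
```

The validation question in the report is essentially: does `expectedRisk` still predict well on data it wasn’t fitted to, and is it updated as the audit accumulates patients?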

Clearly there is some way to go, although a few audits achieve excellent standards. The problem is in getting those good practices passed along. Hopefully this piece of work will continue to get support and to feed into steady improvements in the audits.


Filed under healthcare