Monthly Archives: September 2016

Futility audit

Theresa May’s “racial disparity audit”, announced on 27 August, is really just a political gesture that works best if it never delivers findings. I’m reminded of the scene in Yes, Minister (or is it The Thick Of It? Or both?) where the protagonists are all in trouble for something, and when the prime minister announces that there will be a public inquiry to find out what went wrong, they are delighted. They know that inquiries are the political equivalent of long grass, the intention being that everybody involved has retired by the time it reports*.

larry-and-theresa

Larry knows better than to look for mice in 300,000 different places.

It’s not entirely clear what is meant by audit here. Not in the accountants’ sense, surely. Something more like clinical audit? Audit, done properly, is pretty cool. Timely information on performance can get fed back to professionals who run public services, and they can use those data to examine potential problems and improve what they do. But when central agencies examine the data and make the call, it is not the same thing. The trouble is that, whatever indicators you measure, indicators can only indicate; it takes understanding of the local context to see whether it really is a problem.

But there’s another, more statistical problem in this plan: it is impossible to deliver all the goals in the announcement from the prime minister’s office:

  • audit to shine a light on how our public services treat people from different backgrounds
  • public will be able to check how their race affects how they are treated on key issues such as health, education and employment, broken down by geographic location, income and gender
  • the audit will show disadvantages suffered by white working class people as well as ethnic minorities
  • the findings from this audit will influence government policy to solve these problems

So that pulls together data across the country from all providers of health services, all schools and colleges, all employers. There need to be sufficient numbers of people to break them down into categories by ethnicity (18 categories are used by the Census in England), location at a scale sufficient to influence policy (152 local authorities, presumably), income (deciles, maybe?) and gender (in this context, they probably need more than two; let’s allow four). Also, social class has been dropped into the objectives, so they will need to collect at least three categories there.

This gives about 300,000 combinations. Within each of these, sufficient data are needed to give precise estimates of fairly rare (one hopes) adverse outcomes. Let’s say maybe 200 people’s data. In total, data from 60,000,000 people, which is just short of the entire UK population, but that includes babies etc., who are not relevant to some of the indicators above. Oh dear. Now, those data need to be collected in a consistent and comparable way, analysed and fed back, including a public-friendly league table from the sounds of it, in a timely fashion, say within six months of starting.
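The back-of-envelope arithmetic above can be checked in a few lines. The category counts are the assumptions made in the text (18 ethnicities, 152 local authorities, income deciles, four genders, three social classes), not official figures, and the 200-per-cell sample size is equally a guess:

```python
# Rough count of the subgroup combinations implied by the audit's objectives.
ethnicity = 18       # Census categories in England
location = 152       # local authorities
income = 10          # deciles
gender = 4           # allowing more than two
social_class = 3

cells = ethnicity * location * income * gender * social_class
print(f"{cells:,} combinations")            # 328,320 -- "about 300,000"

per_cell = 200       # rough sample needed per cell for a precise estimate
print(f"{cells * per_cell:,} people")       # 65,664,000 -- the rounded
                                            # 300,000 x 200 gives the
                                            # 60,000,000 quoted above
```

Either way, the required sample is on the order of the whole UK population, which is the point.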

I’m being fast and loose with the required sample size, because there are some efficiency savings through excluding irrelevant combinations, multilevel modelling, assumptions of linearity or conditional independence etc., but it is still hopeless. I suspect, then, that this was never intended actually to happen, but just to be a sop to critics who regard our current government as representing the interests of white UK citizens only, while throwing some scraps to disenchanted white working-class voters who chose Brexit and might now be disappointed that police are not going door to door rounding up Johnny Foreigner.

One more concern and then I’ll be done: when politicians ask experts to do something, and everybody says no, they sometimes like to look for trimmed-down versions, such as a simpler analysis based on previously collected data. After all, it would be embarrassing to admit that you couldn’t do a project. However, that would be a serious mistake because of the inconsistencies and problems in making the extant sources commensurate. I hope any agency or academic department approached says no to this foolish quest.

* – you might like to compare with Nick Bostrom’s criticism of the great number of twenty-year predictions for technology: close enough to be exciting, but still after the predictor’s retirement.

1 Comment

Filed under Uncategorized

email list vs RSS feed vs Twitter vs periodical

A year or two ago, I signed off my last email list and rather assumed that they were a thing of the past. They were increasingly choked with announcements of self-promoting hype ‘articles’, of the “5 Amazing Things Every Great Data Scientist Does While Taking A Dump” variety. Now, to promote a workshop I’m organising, I find myself back on a couple and they’re far, far better than they were. In fact, there seem to be things on them that I hadn’t heard about by other means. It’s so hard to keep up with all the cool developments around the data world now, much harder than 10 years ago, and that’s wonderful but also time-consuming and potentially distracting from the kind of Deep Work that we are actually paid to do.

I got into Twitter instead (@robertstats), and that also served as an outlet for many little quick points I wanted to make, that were too small to constitute a blog post. And through Twitter I have learned about more people and ideas than I can even begin to count. But at the same time, that massively cut my blog output, which I regret somewhat, and intend to boost again a bit more.

The third source was other people’s blogs. It feels to me (without any data) that blogs are declining in popularity, but the ones that make a genuine substantive contribution remain active. I used to get RSS feeds of new postings through Google and then later through WordPress.com (who host this blog), and I suppose I still do get those feeds, but never look at them. I really mean never! It’s just not immediate in the way that email is, and not compelling in the way that Twitter is. But it’s easy to post to Twitter every time you blog, and you could even set up some kind of bot to do it for you. So, I have to accept that those blogs that are not syndicated in any other way are going to get missed. It’s unfortunate, but you can’t catch everything. The really good ones get tweeted by their readers if nothing else.

The crappy websites full of self-promotion still exist, and perhaps there are even more of them now, but somehow they seem to be controlled better and don’t sneak through. Maybe they fell foul of their own One Deep Learning Trick That Will Change Everything You Know About Everything, and got classified in the trash with 0.01 loss function. For my part, I only follow people who retweet with discretion. There are plenty of data people out there who seem to fire off everything that passes through their own feeds without reading it first, and although you feel you’re missing out on a great party, it’s best to just unfollow them. They won’t notice. And if you look a little deeper, you realise these people often have no Amazing Data Science to show for themselves but a whole lotta tweets; don’t forget what our former Prime Minister said on the subject.

I don’t read magazines on these sorts of subjects, except for Significance, which I am obliged to receive as an RSS (different kind of RSS there, folks) fellow, and that often has something good. But I have started subscribing to the New York Times (digital). At the time it was far and away the best newspaper in the world for data journalism, dataviz and such, and I think they still have the lead, but they have lost some of their best team members while competitors grew into the field. Nevertheless, I learn quite a lot from it as a well-curated, wide-ranging international newspaper.

So, now I have two carefully chosen mailing lists, which send a daily digest, and I read them maybe once a week, taking no more than 10 seconds (literally) on each email. I get some tables of contents from journals, which are almost never interesting, but have occasional gems, so they get the same rough treatment. I read the paper but probably not as much as I should, and I am (as my homeboy Giovanni Cerulli put it) an avid consumer of Twitter, which signposts me off to all the blogs and publications and websites I might need.

I think the message here is that, as a data person, you need to think carefully about how you curate your own flow of information about new developments. It can easily take up too much of your time and disrupt your powers of concentration, but at the same time you can’t cloister yourself away or you will soon be a dinosaur. Our field is moving faster than ever and it’s a really exciting time to be working in it.

Leave a comment

Filed under Uncategorized

How the REF hurts isolated statisticians

In the UK, universities are rated by expert panels on the basis of their research activities, in a process called the REF (Research Excellence Framework). The resulting league table not only influences prospective students’ choices of where to study, but also the government’s allocation of funding. More money goes to research-active institutions in a ‘back the winner’ approach that aims explicitly to produce a small number of excellent institutions out of the dense (and possibly over-supplied) field that exists at present. The recent publication of the Stern Review into this process has been widely welcomed. I have been involved with institutional rankings, albeit hospitals rather than universities, for a long time, and of all the scoring systems and league tables that could be produced, the REF’s 2014 iteration is as close to a perfectly bad system as could be conceived. It might have been written by a room full of demons pounding at infernal typewriters until a sufficient level of distortion and perversity was achieved. Universities are incentivised to neglect junior researchers and save the money for a last-minute frenzied auction to headhunt established academics nearing retirement. The only thing that counts is a few peer-reviewed papers by a few academics, and despite assurances of holistic, touchy-feely assessment, everybody knows it comes down to some kind of summary statistic of the journal impact factors.

Stern tries to tackle some of that, and I won’t rehash the basics as you can read that elsewhere. I want to focus on the situation that isolated statisticians, in the ASA’s sense of the term, find themselves in. Many statisticians in academia end up ‘isolated’, in that they are the only statistician in another department. Whatever their colleagues’ background, and whatever the job description may say, the isolated statistician exists to some extent as a helpdesk for the colleagues who are lacking in stats skills. I am one such, the only statistician in a faculty of 282 academic staff. Most of my publications are the result of colleagues’ projects, and only occasionally as a result of my own methodological interests. Every university department has to submit its best (as defined by REF) outputs into one particular “unit of assessment”, which in our case is “Allied Health Professions, Dentistry, Nursing and Pharmacy”.

This mapping of departments into units goes largely uncriticised — because it largely doesn’t matter — but it excludes people like isolated statisticians who don’t belong to the same profession as the rest of the unit. All my applied work with clinical / social worker colleagues, which is the bulk of the day job, can count (and of course, I chip into so many applied projects that I actually look like a superhero in the metric of the REF), but any methodological spin-offs do not, yet they are the bit that really is Statistics, the bit that I would want to be acknowledged if I were looking for a job in a statistics department. I’m not looking for that job, but a lot of young applied jobbing statisticians are. Why is it necessary to have that crude categorisation of whole departments to a unit of assessment? It doesn’t strike me as making the assessment any easier for the REF staff, because they rate the individual submissions and then aggregate them across units. The work-around would be to have joint appointments into different university departments, so applied work counts here and methodological there, except that the REF would not allow that: you must belong to one unit. This may not matter so much to statisticians, who have the most under-supplied and sexiest job of the new century, because we can always up sticks and head for Silicon Valley or the City, but is it really the intention of the REF to promote professional ghettos free from methodologists throughout academia? We have seen from the psychology replication crisis what happens when people get A Little Knowledge and only ever talk to others like themselves.

Leave a comment

Filed under Uncategorized