In the UK, universities are rated by expert panels on the basis of their research activities, in a process called the REF (Research Excellence Framework). The resulting league table influences not only prospective students’ choices of where to study but also the government’s allocation of funding. More money goes to research-active institutions in a ‘back the winner’ approach that aims explicitly to produce a small number of excellent institutions out of the dense (and possibly over-supplied) field that exists at present. The recent publication of the Stern Review into this process has been widely welcomed. I have been involved with institutional rankings, albeit hospitals rather than universities, for a long time, and of all the scoring systems and league tables that could be produced, the REF’s 2014 iteration is as close to a perfectly bad system as could be conceived. It might have been written by a room full of demons pounding at infernal typewriters until a sufficient level of distortion and perversity was achieved. Universities are incentivised to neglect junior researchers and save the money for a frenzied last-minute auction to headhunt established academics nearing retirement. The only thing that counts is a few peer-reviewed papers by a few academics, and despite assurances of holistic, touchy-feely assessment, everybody knows it comes down to some kind of summary statistic of the journal impact factors.
Stern tries to tackle some of that, and I won’t rehash the basics, which you can read elsewhere. I want to focus on the situation that isolated statisticians, in the ASA’s sense of the term, find themselves in. Many statisticians in academia end up ‘isolated’, in that they are the only statistician in another department. Whatever their colleagues’ background, and whatever the job description may say, the isolated statistician exists to some extent as a helpdesk for colleagues who lack stats skills. I am one such, the only statistician in a faculty of 282 academic staff. Most of my publications result from colleagues’ projects; only occasionally do they arise from my own methodological interests. Every university department has to submit its best (as defined by REF) outputs into one particular “unit of assessment”, which in our case is “Allied Health Professions, Dentistry, Nursing and Pharmacy”.
This mapping of departments into units goes largely uncriticised — because it largely doesn’t matter — but it excludes those people, like isolated statisticians, who don’t belong to the same profession as the rest of the unit. All my applied work with clinical and social-worker colleagues, which is the bulk of the day job, can count (and because I chip in to so many applied projects, I actually look like a superhero by the REF’s metric), but any methodological spin-offs do not. Yet they are the bit that really is statistics, the bit I would want acknowledged if I were looking for a job in a statistics department. I’m not looking for that job, but a lot of young applied jobbing statisticians are. Why is it necessary to have that crude categorisation of whole departments into units of assessment? It doesn’t strike me as making the assessment any easier for the REF staff, because they rate the individual submissions and then aggregate them across units. The work-around would be joint appointments across different university departments, so applied work counts here and methodological work there, except that the REF does not allow that: you must belong to one unit. This may not matter so much to statisticians, who have the most under-supplied and sexiest job of the new century, because we can always up sticks and head for Silicon Valley or the City, but is it really the intention of the REF to promote professional ghettos free from methodologists throughout academia? We have seen from psychology’s replication crisis what happens when people get A Little Knowledge and only ever talk to others like themselves.