Category Archives: research

The peer-review log

As an academic, I started a page on this blog that documented each peer review I did for a journal. I never quite got round to going back and filling in the earliest ones, but there isn’t much of interest there that you won’t get from the stuff I did capture. Now that I am hanging up my mortarboard, it doesn’t make sense for it to be a page any more, so I am moving it here. Enjoy the schadenfreude if nothing else.


Statisticians are in short supply, so scientific journals find it hard to get one of us to review the papers submitted to them. And yet the vast majority of these papers rely heavily on stats for their conclusions. As a reviewer, I see the same problems appearing over and over, but I know how hard it is for most scientists to find a friendly statistician to help them improve their work. So, I present this log of all the papers I have reviewed, anonymised, giving the month of review, the study design and a broad outline of what was good or bad from a stats point of view. I hope this helps some authors improve the presentation of their work and avoid the most common problems.

I started this in November 2013, and am working backwards as well as recording new reviews, although the retrospective information might be patchy.

  • November 2012, randomised controlled trial, recommended rejection. The sample size was based on an unrealistic Minimum Clinically Important Difference, drawn from prior research that was not characteristic of the primary outcome, so the study was never able to demonstrate benefit. It was also unethical: the primary outcome was about efficiency of the health system, benefit to patients had already been demonstrated, and yet the intervention was withheld from the control group. Power to detect adverse events was even lower as a result, yet bold statements about safety were made. A flawed piece of work that put hospital patients at risk with no chance of ever demonstrating anything; this study should never have been approved in the first place. Of interest to scholars of evidence-based medicine, it has since been printed by Elsevier in a lesser journal, unchanged from the version I reviewed. Such is life; I only hope the authors learnt something from the review to outweigh the reward they felt at finally getting it published.
  • November 2013, cross-sectional survey, recommended rejection. Estimates were adjusted for covariates (not confounders) when it was not relevant to do so; the grammar was poor and confusing in places; odds ratios were used when relative risks would have been clearer; and t-tests and chi-squared tests were carried out and reported without any hypothesis being clearly stated or justified.
  • November 2013, exploratory / correlation study, recommended major revision, then rejection when the authors declined to revise the analysis. Ordinal data were analysed as nominal, introducing an error that pushed a result across p=0.05.
  • March 2014, randomised controlled trial, recommended rejection. Estimates were adjusted for covariates when it was not relevant to do so, and bold conclusions were made without justification.
  • April 2014, mixed methods systematic review, recommended minor changes around clarity of writing and details of one calculation.
  • May 2014, meta-analysis, recommended acceptance – conducted to current best practice, clearly written and on a useful topic.
  • July 2014, ecological analysis, recommended major revision. Pretty ropy on several fronts, but perhaps most importantly, any variable the authors could find had been thrown into an “adjusted” analysis with clearly no concept of what that meant or was supposed to do. Wildly optimistic conclusions too. It came back for re-review in September 2014 with toned-down conclusions and clarity about what had been included as covariates, but the same kitchen-sink problem. More “major revisions”; and don’t even think about sending it voetstoots to a lesser journal, because I’ll be watching for it! (As of September 2015, I find no sign of it online.)
  • July 2014, some other study I can’t find right now…
  • September 2014, cohort study. Clear, appropriate, important. Just a couple of minor additions to the discussion requested.
  • February 2015, secondary analysis of routine data. No clear question, no clear methods, no justification of the adjustment, and it doesn’t contribute anything that hasn’t been known for 20 years or more. Reject.
  • February 2015, revision of a previously rejected paper in which the authors try to wriggle out of any work by disputing basic statistical facts. Straight to the 5th circle of hell.
  • March 2015, statistical methods paper. Helpful, practical, clearly written. Only the very merest of amendments.
  • April 2015, secondary analysis of public-domain data. Inappropriate analysis, leading to meaningless conclusions. Reject.
  • April 2015, retrospective cohort study; I can’t find the comments any more, but I think I recommended some level of revisions.
  • September 2015, survey of a specific health service in a hard-to-reach population. Appropriate to the question, novel and important. Some amendments to graphics and tables were suggested. Minor revisions.
  • March 2016, case series developing a prognostic score. Nice analysis, written very well, and a really important topic. My only quibbles were about assuming linear effects. Accept subject to discretionary changes.
  • October 2016, cohort study. Adjusted for things that probably aren’t confounders, and handled competing risks by adjustment in a Cox regression when they should have been recognised as competing risks. Various facts about the participants were not declared. Major revisions.
  • October 2016 diagnostic study meta-analysis. Well done, clearly explained. A few things could be spelled out more. Minor revisions.
  • November 2016, kind of a diagnostic study… well done and well written, but very limited in scope, and hard to tell what the implications for practice might be. Left in the lap of the gods (the editors).
  • December 2016, observational study of risk factors, using binary outcomes but would be more powerful with time-to-event if possible. Competing risks would have to be used in that case. Otherwise, nice.


Performance indicators and routine data on child protection services

The parts of social services that do child protection in England get inspected by Ofsted on behalf of the Department for Education (DfE). The process is analogous to the Care Quality Commission inspections of healthcare and adult social care providers, and both give out ratings of ‘Inadequate’, ‘Requires Improvement’, ‘Good’ or ‘Outstanding’. In the health setting, there are many years of experience with quantitative quality (or performance) indicators, often through a local process called clinical audit and sometimes nationally. I’ve been involved with clinical audit for many years. One general trend over that time has been away from de novo data collection and towards recycling routinely collected data. Especially in the era of big data, lots of organisations are very excited about Leveraging Big Data Analytics to discover who’s outstanding, who sucks, and how to save lives all over the place. Now, it may not be that simple, but there is definitely merit in using existing data.

This trend is only just appearing on the horizon for social care, though, because records are less organised and less often electronic, and because there just hasn’t been the same culture of profession-led audit. Into this scene came my colleagues Rick Hood (complex systems thinker) and Ray Jones (now-retired professor and general Colossus of UK social care). They wanted to investigate the recently released open data on child protection services and asked if I would be interested in joining in. I was – and I wanted to consider this question: could routine data replace Ofsted inspections? I suspected not! But I also suspected that question would soon be asked in the cash-strapped corridors of the DfE, and I wanted to head it off with some facts and some proper analysis.

We hired master data wrangler Allie Goldacre, who combed through, tested, verified and combined the various sources:

  • Children in Need census, and its predecessor the Child Protection and Referrals returns
  • Children and Family Court Advisory and Support Service records of care proceedings
  • DfE’s Children’s Social Work Workforce statistics
  • SSDA903 records of looked-after children
  • Spending statements from local authorities
  • Local authority statistics on child population, deprivation and urban/rural locations.

Just because the data were ‘open’ didn’t mean they were usable. Each dataset had its own quirks, and in some cases individual local authorities had their own problems and definitions. The data wrangling was painstaking and painful! As it’s all in the public domain, I’m going to add the data and code to my website here very soon.
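For a flavour of what that involved, here is a minimal sketch of the kind of merging step that came at the end of the wrangling, once each source had been cleaned. The file and column names are hypothetical, not the real variable names in the published dataset.

```r
# A minimal sketch of combining local-authority-level open datasets by
# authority code and year. File and column names are hypothetical, and each
# source needed its own cleaning before any join like this was possible.
library(readr)
library(dplyr)

cin       <- read_csv("children_in_need.csv")       # e.g. la_code, year, referrals, re_referrals
workforce <- read_csv("social_work_workforce.csv")  # e.g. la_code, year, agency_worker_rate
spending  <- read_csv("la_spending.csv")            # e.g. la_code, year, spend_per_child

combined <- cin %>%
  left_join(workforce, by = c("la_code", "year")) %>%
  left_join(spending,  by = c("la_code", "year")) %>%
  arrange(la_code, year)
```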

Then we wrote this paper investigating the system and this paper trying to predict ‘Inadequate’ ratings. The second of these took all the predictors in 2012 (the most complete year for data) and tried to predict ‘Inadequate’ ratings awarded in 2012 or 2013. We used the marvellous glmnet package in R and got it down to three predictors:

  • Initial assessments within the target of 10 days
  • Re-referrals to the service
  • The use of agency workers

Together they get 68% of teams right, and that could not be improved on. We concluded that 68% was not good enough to replace inspection, and called it a day.
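For readers who want to try something similar, here is a minimal sketch of the kind of cross-validated elastic-net logistic regression that glmnet makes easy. It is not the actual analysis code from the paper; the object names are hypothetical and the mixing parameter alpha is purely illustrative.

```r
# A minimal sketch of elastic-net logistic regression with glmnet.
# Object names are hypothetical; this is not the code from the paper.
library(glmnet)

x <- as.matrix(indicators_2012)   # one row per local authority, columns = candidate indicators
y <- inadequate_2012_13           # binary: rated Inadequate in 2012 or 2013

set.seed(1)
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)  # alpha mixes lasso (1) and ridge (0)

# Coefficients at the cross-validated penalty; most are shrunk to exactly zero
coef(cvfit, s = "lambda.1se")

# Rough in-sample accuracy from the predicted classes
pred <- predict(cvfit, newx = x, s = "lambda.1se", type = "class")
mean(pred == y)
```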

But lo! Soon afterwards, the DfE announced that they had devised a new Big Data approach to predicting Inadequate Ofsted scores, and that (what a coincidence!) it used the same three indicators. Well, I never. We were not credited for this, nor indeed had our conclusion (that it’s a stupid idea) sunk in. Could they have just followed a parallel route to ours? Highly unlikely, unless they had an Allie at work on it, and I get no impression of the nuanced understanding of the data that would have resulted from that.

Ray noticed that the magazine Children and Young People Now was running an article on the DfE prediction, and I got in touch. They asked for a comment and we stuck it in here.

A salutary lesson that cash-strapped Gradgrinds, starry-eyed with the promises of big data after reading some half-cocked article in Forbes, will clutch at any positive message that suits them and ignore the rest. This is why careful curation of predictive models matters: the consumer is generally not equipped to make judgements about using them.

A closing aside: Thomas Dinsmore wrote a while back that a fitted model is intellectual property. I think it would be hard to argue that coefficients from an elastic-net regression are mine and mine alone, although the distinction may well lie in how they are used, and that will be played out in courts around the world now that fitted models are viewed as commercially advantageous.


Complex systems reading

Tomorrow I’ll be giving a seminar in our faculty on inference in complex systems (like the health service, or social services, or local government, or society more generally). It’s the latest talk on this subject, which is really gelling now into something of a manifesto. Rick Hood and I intend to send off the paper version before Xmas, so I won’t say more about the substance of it here (and the slides are just a bunch of aide-memoire images), other than to list the references, which contain some of my favourite sources on data+science:

[image: mr-death]

I deliberately omit the methodologically detailed papers from this list, but in the main you should look into Bayesian modelling, generalised coarsening, generalised instrumental variable models, structural equation models, and their various intersections.


Roman dataviz and inference in complex systems

I’m in Rome at the International Workshop on Computational Economics and Econometrics. I gave a seminar on Monday on the ever-popular subject of data visualization. Slides are here. In a few minutes, I’ll be speaking on Inference in Complex Systems, a topic of some interest arising from practical research experience that my colleague Rick Hood and I have had in health and social care.

Here’s a link to my handout for that: iwcee-handout

In essence, we draw on realist evaluation and mixed-methods research to emphasise understanding the complex system and how the intervention works inside it. Unsurprisingly for regular readers, I try to promote transparency around subjectivities, awareness of philosophy of science, and Bayesian methods.


Complex interventions: MRC guidance on researching the real world

The MRC has had advice on evaluating “complex interventions” since 2000, updated in 2008. By complex interventions, they mean things like encouraging children to walk to school: not complex in the sense of being made up of many parts, but complex in the sense that the way it happens and the effect it has are hard to predict because of non-linearities, interactions and feedback loops. Complexity is something I have been thinking and reading about a lot recently; it really is unavoidable in most of the work I do (I never do simple RCTs; I mean, how boring is it if your life’s work is comparing drug X to placebo using a t-test?), and although it is supertrendy and a lot of nonsense is said about it, there is some wisdom out there too. However, I always found the 2000/2008 guidance facile: engage stakeholders, close the loop, take forward best practice. You know you’re not in for a treat when you see a diagram like this:

[image: bobbins-flowchart]

Now, there is a new guidance document out that gets into the practical details and the philosophical underpinnings at the same time: wonderful! There’s a neat summary in the BMJ.

What I particularly like about this, and why it should be widely read, is that it urges all of us researchers to be explicit a priori about our beliefs and mental causal models. You can’t measure everything in a complex system, so you have to reduce it to the stuff you think matters, and you’d better be able to justify or at least be clear about that reduction. It acknowledges the role that context plays in affecting the results observed and also the inferences you choose to make. And it stresses that the only decent way of finding out what’s going on is to do both quantitative and qualitative data collection. That last part is interesting because it argues against the current fashion for gleeful retrospective analysis of big data. Without talking to people who were there, you know nothing.

My social worker colleague Rick Hood and I are putting together a paper on this subject of inference in complex systems. First I’ll be talking about it in Rome at IWcee (do come! Rome is lovely in May), picking up ideas from economists, and then we’ll write it up over the summer. I’ll keep you posted.


It pays to be Bayes

Yes, indeed. I’m looking forward to getting this sent off for publication. Wilson asked my advice about his pilot RCT in stroke rehab, and the outcomes were so complicated that it took me a long time to work out how to use all the information without dropping any of it. Thankfully I had been reading Song & Lee’s book “Basic and Advanced Bayesian Structural Equation Modeling (with applications in the medical and behavioral sciences)” and was able to fit one of these wonderfully flexible models to the data.
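Purely to give a flavour of the approach (the real model will be described in the paper), here is a minimal sketch of a Bayesian structural equation model in R. It uses the blavaan package as one readily available tool; the outcome names, latent factor and treatment variable are all hypothetical.

```r
# A minimal sketch of a Bayesian SEM, NOT the model from the trial.
# Variable names (y1-y4, treatment) are hypothetical; blavaan is used here
# simply as one convenient route to Bayesian SEM in R.
library(blavaan)

model <- '
  # one latent recovery factor measured by several correlated outcomes
  recovery =~ y1 + y2 + y3 + y4
  # regression of the latent factor on the randomised treatment arm
  recovery ~ treatment
'

fit <- bsem(model, data = trial_data, n.chains = 3, burnin = 1000, sample = 2000)
summary(fit)
```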

It was truly one of the most satisfying projects I’ve ever contributed to, because his clinical expertise and my stats added up to something that neither of us could have done alone, and actually changed the results substantively! But you’ll have to wait to read our findings…


New paper: ethnicity of newly-qualified nurses and their job prospects

I have a paper just out in the International Journal of Nursing Studies in which colleagues and I surveyed newly qualified nurses who studied in London, first on the last day of their course and then six months later. We asked about confidence and feeling prepared for various aspects of job hunting, and what success they had experienced. The causality is complex, but it appeared very consistently across all the ‘outcome’ measures that ethnic minority nurses had worse prospects. A lot of questions arise from this that justify new research, for example focussing on the work placement and the peer support environment.

This is part of a larger project run by NHS London which will have a press release and launch at the King’s Fund on 19 November.
