
The peer-review log

As an academic, I started a page on this blog that documented each peer review I did for a journal. I never quite got round to going back and filling in the earliest reviews, but there isn’t much of interest there that you won’t get from the ones I did capture. Now that I am hanging up my mortarboard, it doesn’t make sense for it to be a page any more, so I am moving it here. Enjoy the schadenfreude if nothing else.


Statisticians are in short supply, so scientific journals find it hard to get one of us to review the papers that have been submitted to them. And yet the huge majority of these papers rely heavily on stats for their conclusions. As a reviewer, I see the same problems appearing over and over, but I know how hard it is for most scientists to find a friendly statistician to help them make it better. So, I present this log of all the papers I have reviewed, anonymised, giving the month of review, study design and broad outline of what was good or bad from a stats point of view. I hope this helps some authors improve the presentation of their work and avoid the most common problems.

I started this in November 2013, and am working backwards as well as recording new reviews, although the retrospective information might be patchy.

  • November 2012, randomised controlled trial, recommended rejection. The sample size was based on an unrealistic Minimum Clinically Important Difference, taken from prior research that was not characteristic of the primary outcome, so the study was never able to demonstrate benefit. It was also unethical: the primary outcome was about efficiency of the health system while benefit to patients had already been demonstrated, yet the intervention was withheld from the control group. Power to detect adverse events was even lower as a result, yet bold statements about safety were made. A flawed piece of work that put hospital patients at risk with no chance of ever demonstrating anything; this study should never have been approved in the first place. Of interest to scholars of evidence-based medicine: it has since been printed by Elsevier in a lesser journal, unchanged from the version I reviewed. Such is life; I only hope the authors learnt something from the review to outweigh the reward they felt at finally getting it published.
  • November 2013, cross-sectional survey, recommended rejection. Estimates were adjusted for covariates (not confounders) when it was not relevant to do so; the grammar was poor and confusing in places; odds ratios were used when relative risks would have been clearer; and t-tests and chi-squared tests were carried out and reported without any hypothesis being clearly stated or justified.
  • November 2013, exploratory / correlation study, recommended major revision, then rejection when the authors declined to revise the analysis. Ordinal data were analysed as nominal, an error large enough to move the result across p=0.05.
  • March 2014, randomised controlled trial, recommended rejection. Estimates were adjusted for covariates when it was not relevant to do so, and bold conclusions were made without justification.
  • April 2014, mixed methods systematic review, recommended minor changes around clarity of writing and details of one calculation.
  • May 2014, meta-analysis, recommended acceptance – conducted to current best practice, clearly written and on a useful topic.
  • July 2014, ecological analysis, recommended major revision. Pretty ropy on several fronts, but perhaps most importantly, any variables the authors could find had been thrown into an “adjusted” analysis with clearly no concept of what that meant or was supposed to do. Wildly optimistic conclusions too. Came back for re-review in September 2014 with toned-down conclusions and clarity about what had been included as covariates, but the same kitchen-sink issue remained. More “major revisions”; and don’t even think about sending it voetstoots to a lesser journal, because I’ll be watching for it! (As of September 2015, I find no sign of it online.)
  • July 2014, some other study I can’t find right now…
  • September 2014, cohort study. Clear, appropriate, important. Just a couple of minor additions to the discussion requested.
  • February 2015, secondary analysis of routine data. No clear question, no clear methods, no justification of the adjustment, and it doesn’t contribute anything we haven’t known for 20 years or more. Reject.
  • February 2015, revision of a previously rejected paper in which the authors try to wriggle out of any work by denying basic statistical facts. Straight to the 5th circle of hell.
  • March 2015, statistical methods paper. Helpful, practical, clearly written. Only the very merest of amendments.
  • April 2015, secondary analysis of public-domain data. Inappropriate analysis, leading to meaningless conclusions. Reject.
  • April 2015, retrospective cohort study, can’t find the comments any more… but I think I recommended some level of revisions
  • September 2015, survey of a specific health service in a hard-to-reach population. Appropriate to the question, novel and important. Some amendments to graphics and tables were suggested. Minor revisions.
  • March 2016, case series developing a prognostic score. Nice analysis, written very well, and a really important topic. My only quibbles were about assuming linear effects. Accept subject to discretionary changes.
  • October 2016, cohort study. Adjusted for things that probably aren’t confounders, and adjusted (Cox regression) for competing risks when they should have been recognised as such. Various facts about the participants were not declared. Major revisions.
  • October 2016, diagnostic study meta-analysis. Well done, clearly explained. A few things could be spelled out more. Minor revisions.
  • November 2016, kind of a diagnostic study… well done and well written, but very limited in scope and hard to tell what the implications for practice might be. Left in the lap of the gods (that is, the editors).
  • December 2016, observational study of risk factors, using binary outcomes but would be more powerful with time-to-event if possible. Competing risks would have to be used in that case. Otherwise, nice.


Performance indicators and routine data on child protection services

The parts of social services that do child protection in England get inspected by Ofsted on behalf of the Department for Education (DfE). The process is analogous to the Care Quality Commission inspections of healthcare and adult social care providers, and both give out ratings of ‘Inadequate’, ‘Requires Improvement’, ‘Good’ or ‘Outstanding’. In the health setting, there are many years of experience with quantitative quality (or performance) indicators, often through a local process called clinical audit and sometimes nationally. I’ve been involved with clinical audit for many years. One general trend over that time has been away from de novo data collection and towards recycling routinely collected data. Especially in the era of big data, lots of organisations are very excited about Leveraging Big Data Analytics to discover who’s outstanding, who sucks, and how to save lives all over the place. Now, it may not be that simple, but there is definitely merit in using existing data.

This trend is only just appearing on the horizon for social care, though, because records are less organised and less often electronic, and because there just hasn’t been that culture of profession-led audit. Into this scene came my colleagues Rick Hood (complex systems thinker) and Ray Jones (now retired professor and general Colossus of UK social care). They wanted to investigate recently released open data on child protection services and asked if I would be interested in joining in. I was – and I wanted to consider this question: could routine data replace Ofsted inspections? I suspected not! But I also suspected that question would soon be asked in the cash-strapped corridors of the DfE, and I wanted to head it off with some facts and some proper analysis.

We hired master data wrangler Allie Goldacre, who combed through, tested, verified and combined the various sources:

  • Children in Need census, and its predecessor the Child Protection and Referrals returns
  • Children and Family Court Advisory and Support Service records of care proceedings
  • DfE’s Children’s Social Work Workforce statistics
  • SSDA903 records of looked-after children
  • Spending statements from local authorities
  • Local authority statistics on child population, deprivation and urban/rural locations.

Just because the data were ‘open’ didn’t mean they were usable. Each set had its own quirks, and in some cases each local authority had its own problems and definitions. The data wrangling was painstaking and painful! As it’s all in the public domain, I’m going to add the data and code to my website here, very soon.

Then we wrote this paper investigating the system and this paper trying to predict ‘Inadequate’ ratings. The second of these took all the predictors in 2012 (the year with the most complete data) and tried to predict Inadequate ratings in 2012 or 2013. We used the marvellous glmnet package in R and got down to three predictors:

  • Initial assessments within the target of 10 days
  • Re-referrals to the service
  • The use of agency workers

Together they get 68% of teams right, and that could not be improved on. We concluded that 68% was not good enough to replace inspection, and called it a day.
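In case anyone wants to try something similar on the open data, here is a minimal sketch of the kind of penalised logistic regression involved, using cv.glmnet. The file name and indicator variable names below are made up for illustration; the real definitions are in the papers.

    # Minimal sketch (R): elastic-net logistic regression with glmnet.
    # 'la_indicators_2012.csv' and the column names below are hypothetical.
    library(glmnet)

    dat <- read.csv("la_indicators_2012.csv")
    x <- as.matrix(dat[, c("init_assess_within_10_days", "rereferral_rate",
                           "agency_worker_rate", "spend_per_child")])
    y <- dat$ofsted_inadequate_2012_13   # 0/1 outcome

    set.seed(1)
    cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)  # elastic net

    coef(cvfit, s = "lambda.1se")   # most coefficients shrink to exactly zero
    pred <- predict(cvfit, newx = x, s = "lambda.1se", type = "class")
    mean(pred == y)                 # crude in-sample classification accuracy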

But lo! Soon afterwards, the DfE announced that they had devised a new Big Data approach to predict Inadequate Ofsted scores, and that (what a coincidence!) it used the same three indicators. Well I never. We were not credited for this, nor indeed had our conclusion (that it’s a stupid idea) sunk in. Could they have just followed a parallel route to ours? Highly unlikely, unless they had an Allie at work on it, and I get no impression of the nuanced understanding of the data that would result from that.

Ray noticed that the magazine Children and Young People Now were running an article on the DfE prediction, and I got in touch. They asked for a comment and we stuck it in here.

A salutary lesson that cash-strapped Gradgrinds, starry-eyed with the promises of big data after reading some half-cocked article in Forbes, will clutch at any positive message that suits them and ignore the rest. This is why careful curation of predictive models matters: the consumer is generally not equipped to make judgements about using them.

A closing aside: Thomas Dinsmore wrote a while back that a fitted model is intellectual property. I think it would be hard to argue that coefficients from an elastic-net regression are mine and mine only; the distinction may well lie in how they are used, and no doubt that will be argued in courts around the world now that fitted models are viewed as commercially advantageous.


Dataviz of the week, 26/4/17

This chart of population density across Europe by Henrik Lindberg has been very popular online this last week.

[Image: http://i.imgur.com/nhVJqwk.jpg]

Long-standing readers will recall my stab at this, but nowadays everybody just does it in ggplot2. It’s good to have options. While you’re at his Gist page, check out his other stuff too.
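For anyone who fancies the ggplot2 route, the ggridges package does most of the work. This is just a generic ridgeline sketch on made-up data, not Lindberg’s actual code (that is on his Gist page).

    # Generic ridgeline sketch in R with ggplot2 + ggridges; data are invented.
    library(ggplot2)
    library(ggridges)

    df <- data.frame(
      region  = rep(c("North", "Midlands", "South"), each = 200),
      density = c(rnorm(200, 50, 15), rnorm(200, 80, 20), rnorm(200, 120, 30))
    )

    ggplot(df, aes(x = density, y = region)) +
      geom_density_ridges(scale = 2, fill = "grey80") +
      labs(x = "Population density (illustrative units)", y = NULL) +
      theme_ridges()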


Dataviz of the week, 19/4/17

 

This is just the greatest thing I’ve seen in a while, and definitely in the running for dataviz o’ the year already. Emoji scatterplot:

[Screenshot: emoji scatterplot]

And another:

[Screenshot: another emoji scatterplot]

There’s also a randomisation test which I’ll leave you to discover for yourself.

 


Dataviz of the week, 29/3/17

Here’s a graphic of a really deep oil well by Fuel Fighter via Visual Capitalist. This is rather reminiscent (ahem) of the long, tall graphics by the Washington Post (and the eerily similar one from the Guardian a few days later, which they had to admit they had nicked) about flight MH370 at the bottom of the ocean. The WP graphic works because you have to scroll down, and down, and down, and down, and down (wow, that’s deep!), and down, and down (no way), and down before you get to the sea bed. Yes, all the usual references are there, hot air balloons and Burj Khalifas and Barad-Dûrs and what have you, but they don’t matter, because it’s the scrolling that does it, giving you GU2 (“Conveying the sense of the scale and complexity of a dataset”) and GU6 (“Attracting attention and stimulating interest”).

The references don’t mean anything to me (or probably to you). I may have seen the Burj Khalifa and thought it was amazingly tall, but I have no grasp of how tall, and that is what matters: I’d have to have an intuitive feel for what 3 BKs are compared to the height of a jet aircraft, and I don’t have that, so why should I care about the references?


My problem with the Fuel Fighter graphic is that it doesn’t have that same sense of depth. The image file is 796 x 4554 pixels, which is an aspect ratio of only about 1:6. The WP image (SVG FTW) is 539 x 16030, or about 1:30, which is pretty extreme! It feels to me like you’d have to get past 1:20 before it started to have enough impact.
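For the record, the back-of-envelope sums, worked from the stated pixel dimensions:

    # Aspect ratios (height / width) from the stated pixel dimensions
    fuel_fighter <- 4554 / 796    # about 5.7, i.e. roughly 1:6
    wapo_mh370   <- 16030 / 539   # about 29.7, i.e. roughly 1:30
    c(fuel_fighter, wapo_mh370)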

 


Dataviz of the week, 22/3/17

The Washington Post have an article out about the US budget, by Kim Soffen and Denise Lu. It’s not long, but it brings in four different graphical formats to tell different aspects of the data story. A bar showing parts of the whole (see, you don’t need a pie for this!)

[Screenshot: bar chart of parts of the whole]

then a line/dot/whatever-you-want-to-call-it chart of the change in relative terms

[Screenshot: dot chart of the relative change]

then a waffle of that change in absolute terms, plus a sparkline of the past.

[Screenshot: waffle chart of the absolute change, with a sparkline]

There’s also a link to full department-specific stories under each graphic. I think this is really good stuff, though I can imagine some design-heads wanting to reduce it further. It shows how you can make a good data-driven story out of not many numbers.


Dataviz of the week: 15/3/17

This front-page graphic in the Arizona Republic, by Aviva Loeb, was spotted and blogged by Michael Sandberg.

[Image: the Arizona Republic front page, “A deadly year”]

I’m a fan of old-school pictograms, and there’s something of the shock tactic of sheer scale here. Of course, a newspaper page does not permit the kind of space that xkcd’s global warming comic or CarbonVisuals’ mountain of CO2 had, but this is a good compromise in the space available. Kudos to the editors for running with such a bold idea.

The text is really good too, mixing numbers with individual stories and then bringing in the more subtle facts as you get further in. “A swath of deadly violence snakes from Interstate 10 north to Bethany Home Road…” is crying out for a map.
