Tag Archives: clinical trials

Dataviz of the week, 5/7/17

This week we look at a clinical trial of treatments for tuberculosis, the PanACEA MAMS-TB study. I’ve been involved with TB on and off since project-managing and statisticizing the original NICE guideline back in the day. I won’t go into detail on TB treatments, but the trial compares various combinations of drugs, and there’s a new candidate drug called SQ109 in the mix. The paper is here (I hope it is not paywalled). You can see the Kaplan-Meier plot on page 44. Kaplan-Meier plots are the classic format for clinical trials looking at time-to-event data: as time goes by, people either get recurrence of the disease or disappear out of the trial, and the numbers at risk go down. You want to be in a group whose curve descends less steeply.
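If you want to play with this kind of plot yourself, here is a minimal sketch in Python using the lifelines package and entirely made-up data (nothing to do with the trial’s actual results); the arm labels, times and event rates are just placeholders:

```python
# Minimal sketch: Kaplan-Meier curves with a numbers-at-risk table,
# using made-up data -- NOT the MAMS-TB trial's results.
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.plotting import add_at_risk_counts

rng = np.random.default_rng(2017)
n = 100

# Fake follow-up times (weeks) and event indicators for two arms
t_control = rng.exponential(scale=60, size=n)
t_new     = rng.exponential(scale=90, size=n)
e_control = rng.random(n) < 0.7   # True = unfavourable outcome observed
e_new     = rng.random(n) < 0.7   # False = censored (lost, or trial ended)

kmf_control = KaplanMeierFitter().fit(t_control, e_control, label="control regimen")
kmf_new     = KaplanMeierFitter().fit(t_new, e_new, label="new regimen")

ax = kmf_control.plot_survival_function()
kmf_new.plot_survival_function(ax=ax)
add_at_risk_counts(kmf_control, kmf_new, ax=ax)  # the shrinking numbers-at-risk table
ax.set_xlabel("Weeks since randomisation")
ax.set_ylabel("Proportion without an unfavourable outcome")
plt.tight_layout()
plt.show()
```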

But there are different ways of measuring and counting events, so the authors made an interactive web page showing these as a sensitivity analysis. Hooray!

[Screenshot of the interactive Kaplan-Meier sensitivity-analysis web page]

It’s a pity the Lancet paid such lip service to it, tucked away as a link in the margin of page 45. Boo!

I found the transitions in the table of patients at risk weird – I guess that’s the D3 transition deciding to move the numbers horizontally, and it might be clearer to fade them out, remove them, then put them back from scratch. It’s also clear that Mike Bostock never had to deal with step functions in transitions. But otherwise it’s a really nice example of how trials can provide more layers of info.


Filed under healthcare, JavaScript, Visualization

Every sample size calculation

A brief one this week, as I’m working on the dataviz book.

I’m a medical statistician, so I get asked about sample size calculations a lot. This is despite them being nonsense much of the time (wholly exploratory studies, no hypothesis, pilot studies, feasibility studies, qualitative studies, validating a questionnaire…). In the case of randomised, experimental studies they’re fine, especially if there’s a potentially dangerous intervention or lack thereof. But we have a culture now where reviewers, ethics committees and such ask to see one for any quant study. No sample size, no approval.

So, I went back through six years of e-mails (I throw nothing out) and found all the sample size calculations. Others might have been on paper and lost forever, and there are many occasions where I’ve argued successfully that no calculation is needed. If it’s simple, I let students do it themselves. Those do not appear here, but what we do have (79 numbers from 21 distinct requests) gives an idea of the spread.

[Scatterplot and histogram of every sample size calculation]

You see, I am so down on these damned things that I started thinking I could just draw sizes from the distribution in the above histogram like a prior, given that I think it is possible to tweak the study here and there and make it as big or as small as you like. If the information the requesting person lavishes on me makes no difference to the final size, then the sizes must be identically distributed even conditional on the study design etc., and so a draw from this prior will suffice. (Pedants: this is a light-hearted remark.)
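To labour the joke, here is what that “prior draw” might look like in Python; the vector of past sizes below is made up, standing in for the 79 real numbers in the histogram:

```python
# Tongue-in-cheek: treat previously calculated sample sizes as an empirical
# prior and just draw from it. These numbers are hypothetical placeholders,
# not my actual 79.
import numpy as np

past_sizes = np.array([12, 24, 31, 40, 56, 65, 80, 96, 120, 150, 200, 250, 400])

rng = np.random.default_rng()
print(f"Your sample size is {rng.choice(past_sizes)}. You're welcome.")
```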

You might well ask why there are multiple — and often very different — sizes for each request, and that is because there are usually unknowns in the values required for calculating error rates, so we try a range of values. We could get Bayesian! Then it would be tempting to include another level of uncertainty, being the colleague/student’s desire to force the number down by any means available to them. Of course I know the tricks but don’t tell them. Sometimes people ask outright, “how can we make that smaller”, to which my reply is “do a bad job”.
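For concreteness, here is the kind of thing that turns one request into several very different answers: a bog-standard two-sample calculation repeated over a range of guessed effect sizes. This is a sketch using statsmodels, and the effect sizes and error rates are illustrative assumptions, not from any real study:

```python
# One 'sample size calculation', several answers, because the true effect
# size is unknown, so we try a range of guesses. Illustrative values only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
alpha, power = 0.05, 0.80

for d in (0.2, 0.3, 0.5, 0.8):   # standardised (Cohen's d) effect size guesses
    n_per_group = analysis.solve_power(effect_size=d, alpha=alpha,
                                       power=power, alternative="two-sided")
    print(f"d = {d}: about {round(n_per_group)} per group")
```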

And on those occasions where I argue that no calculation is relevant and the reviewers still come back asking for one, I just throw in any old rubbish. Usually 31. (I would say 30, but off-round numbers are more convincing.) It doesn’t matter.

If you want to read people (other than me) saying how terrible sample size calculations are, start with “Current sample size conventions: Flaws, harms, and alternatives” by Peter Bacchetti, in BMC Medicine 2010, 8:17 (open access). He pulls his punches, minces his words, and generally takes mercy on the calculators:

“Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives.”

In a paper called “Sample size calculations: should the emperor’s clothes be off the peg or made to measure”, which wasn’t nearly as controversial as it should have been, Geoffrey Norman, Sandra Monteiro and Suzette Salama (no strangers to the ethics committee) point out that they are such guesswork that we should just spare people the anxiety, the delays waiting for a reply from the near-mythical statistician, and the brain work, and let them pick some standard numbers. 65! 250! These sound like nice numbers to me; why not? In fact, their paper backs up these numbers pretty well.

In the special case of ex-post “power” calculations, see “The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis” by John M. Hoenig and Dennis M. Heisey, in The American Statistician (2001); 55(1): 19-24.

This is not a ‘field’ of scientific endeavour; it is a malarial swamp of steaming assumptions and reeking misunderstandings. Apart from multiple testing in its various guises, it’s hard to think of a worse problem in biomedical research today.


Filed under healthcare

It pays to be Bayes

Yes, indeed. I’m looking forward to getting this sent off for publication. Wilson asked my advice about his pilot RCT in stroke rehab, and the outcomes were so complicated it took me a long time to work out how to use all the information without dropping any of it. Thankfully I had been reading Song & Lee’s book “Basic and Advanced Bayesian Structural Equation Modeling (with applications in the medical and behavioral sciences)” and was able to fit one of these wonderfully flexible models to the data.

It was truly one of the most satisfying projects I’ve ever contributed to, because his clinical expertise and my stats added up to something that neither of us could have done alone, and actually changed the results substantively! But you’ll have to wait to read our findings…


Filed under research

Parliamentary inquiry into clinical trials

From the Radstats mailing list:

The Parliamentary Science and Technology Committee has started an inquiry into clinical trials/disclosure of clinical trial data and transparency. It was announced on 13 December; the deadline for submissions is noon on Friday 22 February.
Submissions can be up to 3,000 words.

Questions asked:
1. Do the European Commission’s proposed revisions to the Clinical Trials Directive address the main barriers to conducting clinical trials in the UK and EU?
2. What is the role of the Health Research Authority (HRA) in relation to clinical trials and how effective has it been to date?
3. What evidence is there that pharmaceutical companies withhold clinical trial data and what impact does this have on public health?
4. How could the occurrence and results of clinical trials be made more open to scrutiny? Who should be responsible?
5. Can lessons about transparency and disclosure of clinical data be learned from other countries?

The Committee is encouraging written submissions for this inquiry to be sent by email to scitechcom@parliament.uk and marked ‘Clinical Trials’.

http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news/121213-clinical-trials-inquiry-announced/


Filed under Uncategorized