A follow-up to my post on Andrew Gelman’s encounter with social psychology and the color of women’s clothes: a short editorial that is really worth a read is “Promoting healthy skepticism in the news: helping journalists get it right” from the Journal of the National Cancer Institute in December 2009.
This tells the story of a Phase I trial of a new cancer drug, olaparib (in other words, they just wanted to see that it didn’t harm humans or make the condition worse, so they gave it to 19 people whose cancers had not responded to previous treatments). Twelve of those 19 people didn’t get any worse for four months. Now, that’s a valuable first step, but it doesn’t really tell you why they didn’t deteriorate, whether the effect would be sustained longer term, or whether the drug works for a particular subgroup of people. All of that was years away in the Phase II and III trials, and then, if all went well, the post-marketing surveillance. Yet the study hit the news in a big way, with NBC opening its TV coverage with “some are calling this the most important cancer breakthrough of the decade”. Unhelpful!
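As a rough illustration of just how little 12 out of 19 tells us on its own, here is a quick sketch (mine, not anything from the paper) of a 95% Wilson score interval for that proportion — a standard choice for small samples:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# 12 of 19 patients with no progression at four months
lo, hi = wilson_ci(12, 19)
print(lo, hi)  # roughly 0.41 to 0.81
```

An interval stretching from about 41% to 81% is consistent with anything from a modest effect to a large one, which is exactly why nobody should be calling a Phase I result a breakthrough.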
They also highlight the case of the alcohol and cancer paper that came out of the Million Women Study in March 2009, also in JNCI. I use this paper for teaching quite a lot. It’s a nice example of a good-quality cohort study that went a little wrong at the last minute. Check out Table 2, where the non-drinkers have higher risks than the 1-2 drinks a week crowd. Now look at Figure 3. Whoa, where did those non-drinkers go? Maybe the graph just didn’t look so good without a nice straight line! (I am indebted to Doug Altman for first criticising this graph.)
And on the basis of this straight line, the printing presses rolled and rolled. The invited editorial got a bit over-excited about it, coining the phrase “there is no level of alcohol consumption that can be considered safe”. The PR department picked up on that, then the TV and the radio stations and the newspapers (viz. the BBC; I’m not even going to speculate on the alarming quote from the first author in this). Even august medical campaigners on alcohol jumped on the bandwagon. Messy.
Back to the JNCI December 2009 editorial. Well done to them for tackling this important topic in the year when their own journal had fallen victim to one of these health news feeding frenzies. They suggest that a range of handy guides to accurate and fair reporting will help journalists raise their game, but I find this hard to believe. In business, even if you think what you are doing is wrong or not a great long-term brand-building choice, if the competitor is going to steal a march on you with an alarming, attention-grabbing headline, you’ve got to print it too. In fact, you’ve got to print it first (no time for that handy guide) and it’s got to be even more alarming!
The only way to tackle this is at the PR source. Journalists for the most part (until a story gets hot) simply copy out of the press release. Get that right and everything else will follow. And best of all, we are actually in a position where we can influence this. Without patronising them, set up a free stats workshop for your university / organisation’s PR and comms people. They can stick it on their CVs. Make yourself available for stats queries when they are drafting stuff. Build out from there to a sustainable posse of stats people who are happy to answer marketing / PR / comms queries. And accept that you are not going to appear on the TV news (followed by years of opprobrium from the likes of me).