Most statistics teachers would agree that our face-to-face time with students needs to get more ‘active’. The concepts and the critical thinking so essential to what we do only sink in when you try them out. That applies as much to reading and critiquing others’ statistics as it does to working out your own. One area of particular interest to me is communicating statistical findings, a topic for which evidence of effective strategies is sorely lacking, so learning by doing remains especially valuable there.
It’s so easy to stand there and talk about what you do, but there’s no guarantee the students get it, or retain that information a week later. I always enjoy reading Andrew Gelman’s blog, and a couple of interesting discussions about active learning came up there recently, which I’ll signpost and briefly summarise.
Firstly, thinking aloud about activating a survey class (and a graphics / comms one, but most of the responses are about the familiar survey topics). The consensus seems to be to let the students discover – painfully if necessary – for themselves. That means letting them collect and grapple with messy data, not contrived examples. There are some nice pointers in there about stage-managing the student group experience (obviously we don’t really let them grapple unaided).
The statistical communication course came back next, with a refreshing theme that we don’t know how to do this (me neither, but we’re getting closer, I’d like to think). Check out O’Rourke’s suggested documents if nothing else!
Then, the problem of hypothesis testing. The dialogue between Vasishth and Gelman particularly crystallises the issue for practising analysts. It came back a couple of weeks later; I particularly like the section about a third of the way down, after Deborah Mayo appears, like an avenging superhero, to demolish the widely used, over-simplified interpretation of hypothesis testing in a single sentence. After that, Anonymous and Gelman cover a situation where two researchers look at the same data. Dr Good has a pre-specified hypothesis, tests it, finds a significant result, stops there and reports it. Dr Evil intends to keep fishing until he or she finds something sexy they can publish, but happens by chance to start with the same test as Dr Good. Satisfied with the magical p<0.05, they too stop and write it up. Is Evil’s work equivalent to Good’s? Is the issue with motivation or selection? Food for thought, but we have strayed from teaching into some kind of Socratic gunfight (doubly American!). However, I think there is no harm in exposing students (especially those already steeped in some professional practice, like the healthcare professionals I teach) to these problems, because they already recognise them from published literature, although they might not formulate them quite so clearly. Along the way, someone linked to this rather nice post by Simine Vazire.
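If you want to show students what is at stake in the Good/Evil puzzle, a quick simulation does it nicely. This little sketch (mine, not from the blog discussion) uses the fact that under the null hypothesis p-values are uniform on (0, 1), so the chance that a fishing expedition of k independent tests turns up at least one “significant” result is 1 − 0.95^k:

```python
import random

random.seed(42)

def fishing_expedition(n_tests, n_sims=100_000, alpha=0.05):
    """Estimate the probability that at least one of n_tests
    independent null tests comes out 'significant' at level alpha.
    Under the null, each p-value is Uniform(0, 1), so we can draw
    p-values directly rather than simulating data and test statistics."""
    hits = 0
    for _ in range(n_sims):
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

# Dr Good runs one pre-specified test: roughly a 5% false positive rate.
print(f"1 test  : {fishing_expedition(1):.3f}")
# Dr Evil is willing to run up to 10 tests if need be:
# roughly a 40% chance of finding something 'sexy' (1 - 0.95**10 ≈ 0.40).
print(f"10 tests: {fishing_expedition(10):.3f}")
```

The punchline for the classroom: the single test Dr Evil actually ran is numerically identical to Dr Good’s, but the *procedure* Evil was prepared to follow has a very different error rate, which is exactly why the motivation-versus-selection question bites.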
(I don’t want you to think I’ve wimped out, so here’s my view, although that’s really not what this post is about: Rahul wrote “The reasonable course might be for [Dr Evil] to treat this analysis as exploratory in the light of what he observed. Then collect another data set with the express goal of only testing for that specific hypothesis. And if he again gets p<0.01 then publish.” – which I agree with, but for me all statistical results are exploratory. They might be hypothesis testing as well, but they are never proving or disproving stuff, always stacking evidence quantitatively in the service of a fluffier mental process called abduction or Inference to the Best Explanation. They are merely a feeble attempt to make a quantitative, systematic, less biased representation of our own thoughts.)