# Monthly Archives: March 2015

## A thinking exercise to teach about cryptic multiplicity

It’s Pi Day, and yesterday I saw a tweet from Mathematics Mastery, my sister-in-law’s brainchild, which pointed out that the number zero does not occur in the first 31 digits of pi. I wondered “what are the chances of that?” and then realised it was a fine example to get students of statistics to think through. Not because the probability is difficult to work out, but because the model and assumptions are not obvious. Pi is a transcendental number, meaning that it was discovered by Walt Whitman, or something like that. The digits 0-9 are believed (though it’s not proven) to appear without any pattern, so the chance that a particular digit is a particular symbol is 0.1. The chance it’s not “0” is 0.9, and the 30 that follow are independent and identically distributed, so that comes to 0.9^31, or about 3.8%.

But you’d be just as surprised to find that “3” does not appear. Or “8”. There was nothing special a priori about “0”. Students will hopefully spot this if you have shown them real-life examples like “women wear red or pink shirts when they ovulate”. (Your alarm bells might start going here, detecting an approaching rant about the philosophy of inference, but relax. I’m giving you a day off.) So we crunch up some nasty probability theory (if you’ve taught them that sort of stuff) and get the chance of one or more symbols being completely absent at just over 38%. Then you can subtract some unpleasant multiple absences and get back to about 34%, or just simulate it!:

```r
iter <- 1000000
pb <- txtProgressBar(min = 1, max = iter, style = 3)
count <- matrix(NA, iter, 10)
for (i in 1:iter) {
  setTxtProgressBar(pb, i)
  # draw 31 digits uniformly and independently from 0-9
  x <- sample(0:9, 31, replace = TRUE)
  # count how many times each of the ten symbols appears
  for (j in 1:10)
    count[i, j] <- sum(x == (j - 1))
}
close(pb)
# how many of the ten symbols were completely absent in each simulated run
noneofone <- apply(count == 0, 1, sum)
table(noneofone)
```
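You can also check the simulation against the exact answer by inclusion-exclusion; here is a quick sketch, in Python rather than R just for brevity (the function name is mine):

```python
from math import comb

def p_some_symbol_absent(n_digits=31, n_symbols=10):
    """P(at least one of the symbols is missing from n_digits
    i.i.d. uniform digits), by inclusion-exclusion."""
    total = 0.0
    for k in range(1, n_symbols + 1):
        sign = (-1) ** (k + 1)
        total += sign * comb(n_symbols, k) * ((n_symbols - k) / n_symbols) ** n_digits
    return total

print(round(p_some_symbol_absent(), 3))  # about 0.339
```

The first term of the sum, 10 × 0.9^31, is the “just over 38%” figure; the later terms knock off the double-counted multiple absences and land you at about 34%, in line with the simulation.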

But there’s another issue, and I hope that someone in a class would come up with it. Why 31? That’s just because the 32nd digit was the first “0”. So isn’t that also capitalising on chance? Yes, I think it is. It is an exploratory look-see analysis that suddenly turned into a hard-nosed let’s-prove-something analysis because we got excited about a pattern. What we really need to examine is the chance of coming up with an n-free run of length 31 or greater, where n is any of the ten symbols we call numbers. This is starting to sound more like a hypothesis test now, and you can get students to work with a negative binomial distribution to get it. But the important message is not how to do this particular example, or that coincidences, being ill-defined a priori, happen a lot (though that’s important too: “million-to-one chances crop up nine times out of ten”, wrote Terry Pratchett). Rather, it is that our belief about the data-generating process determines how we analyse, and it is vital to stop and think about where the data came from and why we believe that particular mental/causal model before diving into the eureka stuff.
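For one pre-specified symbol, the waiting time to its first appearance is geometric (the simplest case of the negative binomial), so the chance of getting through 31 or more digits before the first “0” is just 0.9^31. A minimal sketch, again in Python for brevity (the function name is my own):

```python
def p_symbol_still_absent(n, p=0.1):
    """P(a given symbol has not yet appeared in the first n digits),
    i.e. P(geometric waiting time > n) = (1 - p)^n."""
    return (1 - p) ** n

# chance a pre-specified symbol, e.g. "0", is absent from the first 31 digits
print(round(p_symbol_still_absent(31), 4))  # about 0.0382
```

That recovers the 3.8% from earlier; the “any of the ten symbols” version is the inclusion-exclusion sum, at about 34%.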

Filed under learning

## Seminar at Kingston University, Wednesday 18 March 2015, 2:15 pm

Come and hear me talk about emerging work with Bayesian latent variable models and SEMs. Email Luluwah al-Fagih if you want to attend:

Applying Bayesian latent variable models to imperfect healthcare data

Abstract: Analysis of routinely collected or observational healthcare data is increasingly popular but troubled by poor data quality from a number of sources: human error, coding habits, and missing and coarse data all play a part. I will describe the application of Bayesian latent variable models to tackle issues like these, in various forms, in four projects: a pilot clinical trial in stroke rehabilitation, a meta-analysis including mean differences in depression scores alongside odds of reaching a threshold, an exploratory study of predictors of ocular tuberculosis, and an observational study of the timing of imaging in initial treatment of major trauma patients. The motivation for the Bayesian approach is the ease of flexible modelling, and I will explain the choices of software and algorithms currently available. Using latent variables allows us to draw inferences based on the unknown true values that are not available to us, while explicitly taking all sources of uncertainty into account.

Filed under Uncategorized

## Extract data from Dementia Care Mapping spreadsheets into Stata

March 2017 edit: I removed chunks of code now that I’m freelancing – if you’re interested in an automated, fast, reliable way of getting LOTS of DCM data into one place, contact me! I can do it for you or provide software to do it. Stata and R are both options and then data can go into SPSS or whatever, or it can be a stand-alone executable program.

I’ve recently been involved in three projects using the Dementia Care Mapping data collection tool. This is a neat way of getting really detailed longitudinal observations on the activity, mood, interactions and wellbeing of people with dementia in group settings. The makers of the DCM provide a spreadsheet which looks like this:

Copied from the Bradford DCM manual. These are not real data (I hope!)

that is, there is a row of BCC behaviour codes and a row of ME quality of life codes for each person, and they are captured typically every five minutes. Date and time and other stuff that might be important are floating above the main body of the data. Subsequent worksheets provide descriptive tables and graphs, but we will ignore those as we want to extract data into Stata for more detailed analysis. (But let me, in passing, point out this work in Bradford, which is moving towards electronic collection.)

The good news is that Stata versions 12 and 13 have improved import commands, with specific support for Excel spreadsheets. You can specify that you want to take only particular cells, which allows us to pull in the stuff at the top like date and time, and then go back and get the bulk of the data.

In the most recent project, there were several care homes, and each was visited several times, and within each visit there was at least a before, during, and after spreadsheet. Thankfully, my colleague who did the collection had filed everything away methodically, so the file and folder structure was very consistent, and that is crucial if you want to automate the process of extracting and compiling the data.


Filed under Stata

## More active learning in statistics classes – and hypothesis testing too

Most statistics teachers would agree that our face-to-face time with students needs to get more ‘active’. The concepts and the critical thinking so essential to what we do only sink in when you try them out. That applies as much to reading and critiquing others’ statistics as it does to working out your own. One area of particular interest to me is communicating statistical findings, something for which evidence of effective strategies is sorely lacking, so it remains most valuable to learn by doing.

It’s so easy to stand there and talk about what you do, but there’s no guarantee they get it or retain that information a week later. I always enjoy reading Andrew Gelman’s blog and a couple of interesting discussions about active learning came up there recently, which I’ll signpost and briefly summarise.

Firstly, thinking aloud about activating a survey class (and a graphics / comms one, but most of the responses are about the familiar survey topics). The consensus seems to be to let the students discover – painfully if necessary – for themselves. That means letting them collect and grapple with messy data, not contrived examples. There are some nice pointers in there about stage-managing the student group experience (obviously we don’t really let them grapple unaided).

The statistical communication course came back next, with a refreshing theme that we don’t know how to do this (me neither, but we’re getting closer, I’d like to think). Check out O’Rourke’s suggested documents if nothing else!

Then, the problem of hypothesis testing. The dialogue between Vasishth and Gelman particularly crystallises the issue for practising analysts. It came back a couple of weeks later; I particularly like the section about a third of the way down after Deborah Mayo appears, like an avenging superhero, to demolish the widely used, over-simplified interpretation of hypothesis testing in a single sentence, after which Anonymous and Gelman cover a situation where two researchers look at the same data. Dr Good has a pre-specified hypothesis, tests it and finds a significant result, stops there and reports it. Dr Evil intends to keep fishing until he or she finds something sexy they can publish, but happens by chance to start with the same test as Dr Good. Satisfied with the magical p<0.05, they too stop and write it up. Is Evil’s work equivalent to Good’s? Is the issue with motivation or selection? Food for thought, but we have strayed from teaching into some kind of Socratic gunfight (doubly American!). However, I think there is no harm in exposing students (especially those already steeped in some professional practice like the healthcare professionals I teach) to these problems, because they already recognise them from published literature, although they might not formulate them quite so clearly. Along the way, someone linked to this rather nice post by Simine Vazire.
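Dr Evil’s keep-fishing strategy is easy to quantify, by the way. If every null hypothesis is true and he is prepared to run up to k independent tests, stopping at the first p < 0.05, the chance he “finds” something grows alarmingly with k. A back-of-envelope illustration of my own (Python, assuming independent tests at the 5% level):

```python
# Familywise chance of at least one p < 0.05 among k independent
# tests when every null hypothesis is true: 1 - 0.95^k.
for k in (1, 5, 10, 20):
    print(k, round(1 - 0.95 ** k, 2))
```

By 20 tests the chance of at least one “significant” result is nearly two-thirds, even though nothing is going on; that is the sense in which Evil’s intentions matter even when his first test happens to match Good’s.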

(I don’t want you to think I’ve wimped out, so here’s my view, although that’s really not what this post is about: Rahul wrote “The reasonable course might be for [Dr Evil] to treat this analysis as exploratory in the light of what he observed. Then collect another data set with the express goal of only testing for that specific hypothesis. And if he again gets p<0.01 then publish.” – which I agree with, but for me all statistical results are exploratory. They might be hypothesis testing as well, but they are never proving or disproving stuff, always stacking evidence quantitatively in the service of a fluffier mental process called abduction or Inference to the Best Explanation. They are merely a feeble attempt to make a quantitative, systematic, less biased representation of our own thoughts.)

Now, if you like a good hypothesis testing debate, consider the journal that banned tests, and keep watching StatsLife for some forthcoming opinions on the matter.

Filed under learning

## Adjust brightness in LXDE Linux

This is a little diversion from the usual stats.

I’ve been running LXDE Debian Linux on my small laptop for a while, and I’m really pleased with it. It handles all sorts of stuff and the fact that L stands for Lightweight hardly ever holds me back. But it doesn’t have any screen brightness controls, and it seems lots of people have asked about this on forums. Usually the issue gets mixed up with binding brightness control to a combination of keys, but that’s a bigger problem which depends on exactly the hardware you have. I just fixed it with a simple, crude hack, and as the question comes up so often, I thought I’d share it here.

Take a look in your /sys/class/backlight folder. I’ve got a samsung folder inside that; you might have something different, but whatever you have, look around until you find a file called brightness, and another called max_brightness. Open them in your text editor of choice. In my case max_brightness simply contains the number 8, and brightness contains 1. To change the brightness of the screen, you change the number inside the brightness file (note that writing to files under /sys generally needs root permissions). Make a new text file called go-dim, which contains this:

`echo 1 > /sys/class/backlight/samsung/brightness`

Then, one called go-bright, which contains this:

`echo 4 > /sys/class/backlight/samsung/brightness`

You don’t have to use 4 as the bright value; you can choose something else (less than or equal to the value inside max_brightness). Then save them somewhere easily accessible, like the Desktop, open the terminal and type:

`chmod a+x Desktop/go-dim`

and

`chmod a+x Desktop/go-bright`

Now, you can double click those files on your desktop, choose “execute” and they will do their thing for you. Obviously, if you save them somewhere else, you need to type the correct path in the chmod command.