# Monthly Archives: August 2014

## Should every nonparametric test be accompanied by a bootstrap confidence interval?

Well, duh. Obviously. Because (a) every test should have a CI and (b) bootstrap CIs are just awesome. You can get a CI around almost any statistic, and they account for non-normality and boundaries in the data.

But you might have to be a little careful in the interpretation, because they might not be measuring the same thing as the test.

Take a classic Wilcoxon rank-sum / [Wilcoxon-]Mann-Whitney independent-samples test (don’t you just love those consistent and memorable names?). This ranks all the data and compares the ranks across the two groups. Every bit of the distribution contributes, and there isn’t an intuitive summary statistic: what you’re testing is the W statistic. Do you know what a W of 65000 looks like? No, neither do I. If there’s a difference in location somewhere, the test might pick it up.
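To see just how uninformative that is, here is a minimal sketch in R (the data are invented for illustration):

```r
# Two made-up samples with a modest shift between them
set.seed(1)
x <- rnorm(60)
y <- rnorm(60, 0.4)

wt <- wilcox.test(x, y)
wt$statistic  # W: the only "effect" the test hands you, and not an intuitive one
```

The test tells you whether the ranks differ; it does not hand you a quantity you can picture.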

It’s so much simpler for the jolly old t-test. You take means and compare them. You get CIs around those means with a simple formula. And everybody knows what a mean is, even if they don’t really want to grapple with a t-statistic and Satterthwaite’s degrees of freedom.
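In R that whole package comes out of one call (again with invented data; `t.test` does Welch/Satterthwaite by default):

```r
# Invented data: two groups with a difference in means
set.seed(3)
x <- rnorm(30, 10, 2)
y <- rnorm(30, 11, 2)

tt <- t.test(x, y)  # Welch's t-test, i.e. Satterthwaite degrees of freedom
tt$conf.int         # a CI for the difference in means, straight out of the box
```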

So, in the Mann-Whitney case, the most sensible measure might be the difference between the medians. There is no simple formula for a CI for this, though undoubtedly we could cook up a rough approximation by the usual techniques. So, we reach for the bootstrap. In fact, perhaps we should just be using it all the time…?
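A minimal percentile-bootstrap sketch for the median difference might look like this in R (data invented; the `boot` package offers fancier intervals such as BCa if you want them):

```r
# Invented skewed data in two groups
set.seed(2014)
a <- rexp(40)
b <- rexp(40) + 0.3

obs_diff <- median(b) - median(a)

# Resample each group independently and recompute the median difference
boots <- replicate(2000,
  median(sample(b, replace = TRUE)) - median(sample(a, replace = TRUE)))

ci <- quantile(boots, c(0.025, 0.975))  # percentile bootstrap 95% CI
```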

So the problem here is that you could have a significant Mann-Whitney but a median-difference CI that crosses zero. Interpreting that is not so easy, and I found one of my students in just that pickle recently. It was my fault really; I’d suggested the bootstrap CI. How could we deal with this situation? Running the risk of cliché, it’s not a problem but an opportunity. Because the test and the CI look at the data in slightly different ways, you’re actually getting more insight into the distribution, not less. Consider this situation:

Spend hours making it just so in R? No. Use a pencil.

Here, the groups have the same median but should get a significant Mann-Whitney result if the sample size is not tiny. You can surely imagine the opposite too, with a bimodal distribution where the median flips from one clump to another through only a tiny movement in the distribution as a whole.
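You can cook up the first situation in a couple of lines of R (a deliberately artificial example): one group symmetric around zero, the other with the same median but much of its mass dragged off to one side.

```r
set.seed(1)
a <- runif(200, -1, 1)                       # symmetric around zero
b <- c(runif(100, -5, 0), runif(100, 0, 1))  # median still near zero, left tail dragged out

median(a) - median(b)     # medians barely differ
wilcox.test(a, b)$p.value # yet the Mann-Whitney is emphatically significant
```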

So, in conclusion:

• my enthusiasm for bootstrapping is undimmed
• there is still no substitute for drawing lots of graphs to explore your data (and for this, pencils are probably best avoided)

Filed under learning

## Measuring nothing and saying the opposite: Stats in the service of the Scottish independence debate

I was half-heartedly considering writing about the ways that GDP can be twisted to back up any argument, when what should come along but this unedifying spectacle.

The Unionists (No campaign) have produced a league table of GDP, showing how far down Scotland would be. So, the argument goes, you should vote for them. This is, however, irrelevant to whether Scots would be better off. The GDP would drop because it would be a small country. GDP is the total sum of economic activities. If you have fewer people, you do less. That doesn’t make you poor, as Monaco or the Vatican City will tell you. If Manhattan declared independence from the rest of the USA, its GDP would go down. If Scotland not only stayed in the UK but convinced North Korea to join too, the UK’s GDP would go up. On this logic, every time a border is removed, people immediately get rich.

Meanwhile, the Separatists (Yes campaign) have produced a league table of GDP per capita, showing how far up Scotland would be. So, the argument goes, you should vote for them. This is, however, irrelevant to whether Scots would be better off, despite being a far less bad guess than GDP in toto. It tells the individual voter (let’s call him Mr Grant) practically nothing about whether the Grants would be better off, because that depends on a million other factors. Small countries with some very wealthy people, relying on foreign investment, will have inflated GDP per capita. It’s just the same thing kids learn in primary school now: when there are outliers, the mean is not so useful.

Anyway, the whole measure is used to form arguments about wellbeing, which is just nonsense. Otherwise all our ex-colonies would be kicking themselves at trading the jackpot of Saxe-Coburg-Gotha rule for silly stuff like identity and self-determination. (Except the USA, Canada, Australia and Ireland, who have higher GDP per capita than us and so are presumably happier. We should try to join them instead, as that will make us happy – though they will feel sad when we arrive.) No wonder there are still plenty of people who, having asked what I do for a living, gleefully say “lies, damned lies…”

PS: This Langtonian doesn’t get a vote because I live in London – that “giant suction machine” – and here’s a great post at Forbes about the UK joining the USA and becoming the lowest of the low.

Filed under Uncategorized

## Beeps and progress alerts to your phone

[Note: this post first went up in April 2014, but today I noticed it was missing. No idea why! Maybe it got taken down because I had an embedded video with copyrighted music, I don’t know. Anyway, I copied it back from r-bloggers.com]

[Another note: pingr is now replaced with beepr]

[Yet another note: Julia now has an excellent package called ProgressMeter. This not only gives you % progress, but also ETA and, when it’s finished, the total time taken. Nice.]

Recently I encountered an R package called pingr, made by Rasmus Bååth (the same guy who did MCMC in a web page, my visualization of 2013). You install it, you type ping(), and it goes ping. Nice.

In fact there are nine built-in pingr noises. It’s more useful than it may seem; I was using it within minutes of reading the blog post because I had a series of Bayesian models running on my laptop while I wrote some stuff on my desktop PC. When the models finished, they went ping, making everything as efficient as possible. It got me thinking about beeping alerts in all sorts of data analysis software.

In Stata, you can just type ‘beep’. Job done. In fact, that locates the system’s general alert sound (in Windows, at least) and plays it. I spent some time extracting data from a primary care database recently, where there were several computers grinding through the big data for different researchers in a windowless room. Every now and then, a lion’s roar would emanate from one of them. I found it a bit disconcerting but played it cool until someone told me they had replaced the Windows alert beep with this .wav file for a laugh.

SPSS used to have sound alerts in the General Options menu, but they were quietly (?) dropped sometime around version 20. The pain was that it was either on, beeping every time output was added, or off entirely. There didn’t seem to be a syntax command for beeping. However, there is now one (STATS SOUND) in the extension commands package; it’s not clear whether one has to pay extra for that, and frankly, I’m not going to bother finding out.

When I’m able to glance at the computer regularly, perhaps because I’m eating what passes for lunch in Stats HQ, I particularly like R’s txtProgressBar with style=3. Stata users can easily display dots in a similar fashion, although it’s interesting to look online and see the alternative solutions, such as displaying progress in the window title, which could have advantages in some situations.
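For the avoidance of doubt, that R progress bar takes only a few lines (the `Sys.sleep` is a stand-in for real work):

```r
pb <- txtProgressBar(min = 0, max = 50, style = 3)  # style=3 gives the full-width bar with %
for (i in 1:50) {
  Sys.sleep(0.01)        # stand-in for the real computation
  setTxtProgressBar(pb, i)
}
close(pb)
```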

My latest long-running simulation made me try something quite different. I wanted progress reports but I was going to be in another room. If something went wrong, I would go back to the office and try to fix it. On my (Android) cellphone I have an app called Minutes. It’s a basic text editor that syncs very easily to Dropbox. So all I needed to do was have the stats software write periodically to a text file in the Minutes folder, and the update appears on my phone!

How 21st-century is that! This is how I’ve done it in R:

```r
progress <- 0
fileConn <- file("C:/Users/Robert/Documents/Dropbox/Apps/Minutes/progress.txt")
for (k in 1:1000) {
  if (floor(k/100) > progress) {
    writeLines(paste("Now on iteration ", k, sep=""), fileConn)
    progress <- floor(k/100)
  }
  # more complicated stuff follows, and then ...
}
close(fileConn)
ping()
```

and in Stata:

```stata
local progress=0
forvalues i=1/1000 {
    if (floor(`i'/100)>`progress') {
        file open minutes using "progress.txt", write text replace
        file seek minutes tof
        file write minutes "Now on iteration `i'"
        file close minutes
        local progress=floor(`i'/100)
    }
    // some complicated time-consuming stuff...
}
beep
```

Notice how the file is written to disk each time you call writeLines in R, even without closing fileConn, whereas in Stata you have to close the file inside the if branch. Also, R will carry on trying to run commands after an error, so it’ll (probably) go ping, while Stata will stop and therefore you will hear no beep.

It will get a little more complicated to catch errors, but not much. If your program grinds to an unpleasant halt, your progress.txt file will just be stuck there on the last number, and it could be a while before you get suspicious and go to check. One simple solution is to write all your output to the progress.txt file, but this will slow things down if you can’t avoid (or don’t want to avoid) writing lots of lines to the output; this was the case for my simulation with rstan. You only want one special line written in case of an error that says

```
I'm afraid I can't do that, Dave.
```
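In R, one way to sketch that error trap is with base `tryCatch` (the `tempfile` here stands in for the Dropbox/Minutes path, and the `stop()` stands in for the long-running job falling over):

```r
progress_file <- tempfile(fileext = ".txt")  # stand-in for the Dropbox/Minutes path

result <- tryCatch({
  # ... the long-running simulation would go here ...
  stop("chain 3 diverged")                   # stand-in for a real failure
}, error = function(e) {
  writeLines(paste0("I'm afraid I can't do that, Dave. (",
                    conditionMessage(e), ")"),
             progress_file)
  NULL
})
```

The one line lands on your phone, and you know to head back to the office.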

You could send an SMS too, or tweet, if you prefer…