Roosters don’t lay p-values

I’ve just started teaching an online course, and one module is a very, very introductory statistics module. At a couple of points we ask the students to describe how they interpret some hypothesis tests and p-values, and a few students have written very lengthy responses describing all the factors that weren’t controlled in the experiments outlined in the problem, and why that means the confidence intervals and p-values are meaningless. When all we wanted was “we are 95% confident that the mean outcome in this situation is between here and here”.
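For concreteness, here is a minimal sketch in Python (my choice, not the course's; the sample values and the null value of 5.0 are made up purely for illustration) of the sort of calculation whose output we were asking them to interpret:

```python
# A hypothetical one-sample t-test: the data are invented, and the null
# hypothesis value (population mean = 5.0) is arbitrary, for illustration only.
import numpy as np
from scipy import stats

outcomes = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7])

mean = outcomes.mean()
sem = stats.sem(outcomes)   # standard error of the mean
df = len(outcomes) - 1

# 95% confidence interval for the population mean
lo, hi = stats.t.interval(0.95, df, loc=mean, scale=sem)

# p-value for the null hypothesis that the population mean is 5.0
t_stat, p_value = stats.ttest_1samp(outcomes, popmean=5.0)

print(f"We are 95% confident the mean outcome is between {lo:.2f} and {hi:.2f}.")
print(f"p-value = {p_value:.3f}")
```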

It’s happened to me a lot before. Many students in various disciplines are extremely good at coming up with worries about experimental design or the validity of measurement processes, and so they never get to the part where they deal with the statistics itself. They seem to treat every problem like the classic “rooster on the barn roof” riddle (if a rooster lays an egg on the peak of a barn roof, which way does the egg roll? The trick is that roosters don’t lay eggs), essentially declaring that “roosters don’t lay p-values” and choosing not to answer the question at all.

Don’t get me wrong, I really do want the students to be good worriers: they should be able to think about experimental design and validity and bias and all those things that affect whether the statistics answers the question you think it does. But they can’t use that worrying to avoid talking about statistics at all! Quite a few students seem to be using those worries to discount all statistical calculations, and to sidestep the need to understand the calculation processes involved. “Your question is stupid, and I refuse to learn until it’s less stupid,” they implicitly say.

The weirdest part is that the assignment or discussion questions don’t usually give enough detail for the students to actually conclude there is a problem. They say “the groups were not kept in identical conditions”, but nowhere does it say they weren’t. I realise that in a published article, if it doesn’t say they were then you might worry, but this is just an assignment question whose goal is to make sense of what a p-value means! Why not give the fictitious researchers the benefit of the doubt? And also, take some time to learn what a p-value means!

I do realise it’s a bit of a paradox. In one part of the intro stats course, we spend time getting them to think about bias and representativeness and control, and in another part, we get grumpy when they think about that at the expense of the detail we want them to focus on today. It must be a confusing message for quite a few students. But on the other hand, even when reading a real paper, you still do need to suspend all of that stuff temporarily to assess what claim the writer is at least trying to make. It’s a good skill to be able to do this, even if you plan to tear down that claim afterwards!

I am thinking one way to deal with this is to start asking the questions the other way around. Instead of asking only “what ways could this be wrong?”, ask “how would you set this up so it’s right?” And when I ask about interpreting a p-value, maybe I need to say “What things should the researcher have considered when they collected this data? Good. Now, suppose they did consider all those things: how would you interpret this p-value?” Then maybe I could honour their worries, but also get them to consider the things I need them to learn.
