This blog post is about a moving maths activity that I have wanted to do for years and finally got the opportunity to do this year (2018). It’s a model of a concept called a “Markov Chain” using human movement.

In a Markov chain, there is a thing that can be in any one of a number of states, and the probability of switching to some other state depends only on which state it is in now. So maybe there are three states the thing can be in, called A, B and C. If it’s in state A, it could switch to state B or C, or it could stay in state A. If it’s in state B, it could switch to state A or C, or it could stay in state B. But the probabilities of going from A to the other states might be totally different from the probabilities of going from B to the other states. Described using the word “state”, you might imagine a thing changing a property it has, like colour or size, but it can also describe things changing location. Imagine the states as literal states in a country, and the thing as a person who moves around between them.
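As a sketch in code (the probabilities here are my own invention; any numbers that sum to 1 from each state would do), one step of such a chain looks like this:

```javascript
// From each state, the three numbers give the chance of moving to A, B or C;
// each state's probabilities sum to 1. These values are invented for illustration.
var transitions = {
  A: { A: 0.5, B: 0.3, C: 0.2 },
  B: { A: 0.2, B: 0.6, C: 0.2 },
  C: { A: 0.1, B: 0.4, C: 0.5 }
};

// The next state depends only on the current one: draw a random number
// and walk through the cumulative probabilities until it is exceeded.
function step(state) {
  var r = Math.random(), cumulative = 0;
  for (var next in transitions[state]) {
    cumulative += transitions[state][next];
    if (r < cumulative) return next;
  }
  return state; // guard against floating-point rounding
}

// Follow one thing for ten switches, starting in state A.
var path = ["A"];
for (var i = 0; i < 10; i++) {
  path.push(step(path[path.length - 1]));
}
console.log(path.join(" -> "));
```

Notice that `step` only ever looks at the current state, never the history: that is the whole Markov property.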

Just like with any other probability situation, you might not be able to predict particularly well what any one individual does, but you might be able to tell what a large collection is doing. There are methods involving matrices and eigenvalues that allow you to make these predictions, but they can be viscerally illustrated with the Human Markov Chain!

**The idea of Human Markov Chains**

The basic idea is to mark out the states as big zones on the floor/ground, and have people walk around between them, using random numbers to tell them which state to go to at each turn. With a large group of people, the number in each state at any given moment settles down to be roughly unchanging from turn to turn, even though there are always people moving. The official term for this is the “steady state”. Note you really do need at least 20 people for this to work properly, preferably at least 30.
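A quick simulation gives a feel for why the numbers settle down. This is my own sketch, with invented movement probabilities standing in for the signs:

```javascript
// 30 people walking a three-state chain; the movement probabilities are
// invented for illustration (each state's three options sum to 1).
var move = {
  A: { A: 0.4, B: 0.3, C: 0.3 },
  B: { A: 0.3, B: 0.4, C: 0.3 },
  C: { A: 0.3, B: 0.3, C: 0.4 }
};

function nextState(state) {
  var r = Math.random(), total = 0;
  for (var s in move[state]) {
    total += move[state][s];
    if (r < total) return s;
  }
  return state; // guard against floating-point rounding
}

// Everyone starts in state A.
var people = [];
for (var i = 0; i < 30; i++) people.push("A");

// Tally how many people are in each state.
function counts(ps) {
  var c = { A: 0, B: 0, C: 0 };
  ps.forEach(function (p) { c[p] += 1; });
  return c;
}

// Call "go" fifteen times; the tallies soon hover around 10-10-10,
// even though individual people keep moving every turn.
for (var turn = 1; turn <= 15; turn++) {
  people = people.map(nextState);
  console.log("turn " + turn + ": " + JSON.stringify(counts(people)));
}
```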

**The setup**

I used cloth tape to mark out three lines on the floor coming out from a central point. This created the three states of the Markov Chain where the participants would walk. (I don’t have a photo of this, sorry, but there is a quick view of it in the first seconds of the video below.)

I created posters to tell the people in each state where to go based on what their random number is. They were printed on paper, laminated, and stuck on a pole so they were high enough for everyone to see. You can download a Word document with the signs, including a blank set for making your own.

I created a sheet of random numbers for each person. The idea is that on each turn, you look at the next random number, look at the sign to see where to go, and then move. Each page of random numbers was different; the students took only a few minutes to notice this, and they were very intrigued. This Word file has 100 pages of 100 random digits each.
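If you would rather generate your own digits than use the Word file, a few lines of code will do it (the page dimensions here are arbitrary):

```javascript
// Build one page of random digits: `rows` lines of `cols` digits each.
function randomDigitPage(rows, cols) {
  var lines = [];
  for (var r = 0; r < rows; r++) {
    var line = "";
    for (var c = 0; c < cols; c++) {
      line += Math.floor(Math.random() * 10); // a digit from 0 to 9
    }
    lines.push(line);
  }
  return lines.join("\n");
}

// One 10-by-10 page, ready to print.
console.log(randomDigitPage(10, 10));
```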

**The activity**

I passed out a sheet of random numbers and a pencil to each student, and got Nicholas to hold the sign where everyone could see it. I explained how the game would work by walking it out myself: everyone would look at their first digit, find that digit on the sign, then go left or right or stay depending on where the digit appeared on the sign. Then we’d do it again with the next digit in our new places, and the next digit, and so on.

Thinking back I reckon I should have gotten a volunteer student to do it for several steps, so that we could see what was happening for one person more clearly. However, this time we just launched into it. I got everyone to start in the corner labelled “A” and then we set off, with me calling out “go” every few seconds, after I had seen everyone arrive in their new locations.

After about ten moves, I asked the students to notice and wonder about how people were arranged and they noticed that everyone was roughly evenly spread across the three corners, even though to start with they were all in one corner. We did a couple more turns so that we could illustrate how this roughly even spread is maintained even though most individual people move on every turn. We also discussed how individual people never settle down to one place and we can’t predict what happens to them, but we can predict what happens to how many there are in each corner.

The coolness of Markov Chains is how we can use the nine probabilities of going from one state to another to create a matrix and use matrix methods to pull out the information of how it will settle down.

Actually, you can investigate it from the other side too. The steady state isn’t just a description of how many people end up in each corner, it’s also a description of how much time any one person spends in each state on average. You could possibly get each person to record which state they end up in and notice that on average they spend roughly equal time in every state.
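To see this time-average idea in code, here is a small sketch (my own, with invented probabilities) where one walker records how long it spends in each state over many turns:

```javascript
// One walker on a three-state chain; the transition probabilities are
// invented for illustration, chosen symmetrically so each state is equally likely
// in the long run.
var move = {
  A: { A: 0.4, B: 0.3, C: 0.3 },
  B: { A: 0.3, B: 0.4, C: 0.3 },
  C: { A: 0.3, B: 0.3, C: 0.4 }
};

function nextState(state) {
  var r = Math.random(), total = 0;
  for (var s in move[state]) {
    total += move[state][s];
    if (r < total) return s;
  }
  return state; // guard against floating-point rounding
}

// Walk for many turns, tallying the time spent in each state.
var state = "A";
var time = { A: 0, B: 0, C: 0 };
var turns = 100000;
for (var t = 0; t < turns; t++) {
  state = nextState(state);
  time[state]++;
}

// With these symmetric probabilities, each fraction comes out near 1/3,
// matching the steady state of the whole group.
console.log({
  A: time.A / turns,
  B: time.B / turns,
  C: time.C / turns
});
```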

Then we switched up the signs and put everyone back in corner A and did another game. This time one corner ended up with twice as many people as either of the other corners. The point here is that it doesn’t always settle down to an even spread; you can have other distributions.

Then we switched up the signs one last time, put everyone back in corner A again, and did one last game. This time, even though everyone started in corner A, everyone ended up in corner C eventually. It was quite funny to see the people arriving in corner C realise that they couldn’t escape, and to see the people who hadn’t yet gotten to C all confused about why nobody was moving there, only to exclaim “oh” loudly when they arrived themselves, because it was obvious once they looked up at the sign. (This sort of situation is called an “absorbing state”.)
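This third game can be sketched in code too. The probabilities below are my invention, not the actual third sign, but corner C traps people in the same way:

```javascript
// Three states where C is absorbing: anyone who reaches C stays forever.
// The other probabilities are invented for illustration.
var move = {
  A: { A: 0.4, B: 0.4, C: 0.2 },
  B: { A: 0.4, B: 0.4, C: 0.2 },
  C: { C: 1.0 } // the absorbing state
};

function nextState(state) {
  var r = Math.random(), total = 0;
  for (var s in move[state]) {
    total += move[state][s];
    if (r < total) return s;
  }
  return state;
}

// 30 people all start in corner A.
var people = [];
for (var i = 0; i < 30; i++) people.push("A");

// After enough turns, everyone has wandered into C and stayed there.
for (var turn = 0; turn < 200; turn++) {
  people = people.map(nextState);
}
console.log(people.every(function (p) { return p === "C"; })); // true, with overwhelming probability
```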

Here is a little video of how this last game went:

Today we did Human Markov Chains for an activity with high school students. It went way better than we expected! Here's a quick video of the one with an absorbing state. #MLCUoA pic.twitter.com/4jlPfMe6Ij

— David Butler (@DavidKButlerUoA) May 10, 2018

This last version was pretty cool because we had one person who bounced around for quite some time before arriving at C. If I had had the inclination, I might have investigated just how many steps someone takes on average before arriving in the absorbing state, first by getting everyone to do it again and counting how many steps it took. This is another thing you can investigate with the matrix methods.
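Counting steps like this is easy to do by simulated repetition. In this sketch I have invented a 2-in-10 chance of landing in C on each turn, which makes the wait a geometric distribution with average 1/0.2 = 5 steps:

```javascript
// Simulate one walker until absorption in C and count the steps taken.
// The 0.2 chance of reaching C each turn is invented for illustration;
// with it, the theoretical average wait is 1/0.2 = 5 steps.
function stepsToAbsorption() {
  var steps = 0;
  var absorbed = false;
  while (!absorbed) {
    steps++;
    if (Math.random() < 0.2) absorbed = true; // landed in C this turn
  }
  return steps;
}

// Average over many repetitions; the estimate settles near 5.
var trials = 100000;
var total = 0;
for (var i = 0; i < trials; i++) total += stepsToAbsorption();
console.log("average steps to absorption: " + total / trials);
```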

All in all it was a nice little activity to do for half an hour to get a feel for a probability situation quite different to the sorts of things you normally do. I really like the tension between individuals doing wildly random things, but the system being predictable. If I were actually teaching Markov Chains, I’d love to do it with my actual class so that I could give them a body-scale experience to draw on when imagining these things!

**Some theory**

Just in case you’re wondering: you can set up a nice little matrix describing the probability of moving from any state to any other state. If you have a certain number or percentage in each state, you can write that as a column vector, and then the number you expect in each state one step later is found by multiplying the matrix by the vector.
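Here is that matrix description as a sketch. The numbers are invented, using the convention that entry [i][j] is the probability of moving from state j to state i, so each column sums to 1:

```javascript
// Transition matrix: column j gives the probabilities of leaving state j,
// so each column sums to 1. These particular numbers are invented.
var P = [
  [0.5, 0.2, 0.1], // chances of ending up in A
  [0.3, 0.6, 0.4], // chances of ending up in B
  [0.2, 0.2, 0.5]  // chances of ending up in C
];

// Multiply the matrix by a column vector: one step of the chain.
function applyMatrix(M, v) {
  return M.map(function (row) {
    return row[0] * v[0] + row[1] * v[1] + row[2] * v[2];
  });
}

// 30 people all start in state A.
var v = [30, 0, 0];
console.log(applyMatrix(P, v)); // expect 15, 9 and 6 people in A, B and C
```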

You can find the steady state by starting with any vector and applying the matrix over and over until the result settles down. This is one of the reasons why you might want to know the powers of a matrix! On the other hand, knowing that the steady state itself is unchanged by the system, you can solve a matrix-vector equation to go straight to the answer. If A is the matrix and v is the steady state vector, then you must have Av = v, so 0 = v – Av, and hence 0 = (I – A)v. Now we have a matrix times a vector equal to 0, and that can be solved by good old row operations!
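The “apply it over and over” method takes only a few lines in code. This sketch uses the same invented column-stochastic matrix idea as above (the entries are not from the actual activity), and checks the defining property Av = v at the end:

```javascript
// A column-stochastic matrix: each column sums to 1. Entries invented.
var P = [
  [0.5, 0.2, 0.1],
  [0.3, 0.6, 0.4],
  [0.2, 0.2, 0.5]
];

// Multiply the matrix by a column vector.
function applyMatrix(M, v) {
  return M.map(function (row) {
    return row[0] * v[0] + row[1] * v[1] + row[2] * v[2];
  });
}

// Start with everyone in state A (written as proportions) and iterate.
var v = [1, 0, 0];
for (var i = 0; i < 200; i++) {
  v = applyMatrix(P, v);
}
console.log(v); // the steady state proportions

// One more application barely changes anything: Av = v.
var w = applyMatrix(P, v);
console.log(w);
```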

In fact, I did all of those operations on a general matrix in order to produce a formula for the steady state of a 3×3 matrix, then put it into this Desmos graph so I could fiddle with the matrix to get the results I wanted. I had to do this because I needed my probabilities to be multiples of 1/10 in order to use random digits as my way of telling students how to get around!

**Conclusion**

All in all the activity went much better than I had expected, and I am looking forward to doing it again sometime soon! I want to spend more time discussing what happened and trying to do some predicting and checking our predictions. I just need enough students to make it worthwhile! My hope is that I can convince the Maths 1B lecturer to let me do it with his class(es). We’ll see…

Hello,

I did this today with my Year 12 Maths C class in Queensland and it was very successful. I have a class of 17 and we used our calculators to generate random digits (potentially not truly random as some students ended up with the same sequence of values – but a good side discussion happening there) and we walked through 10 stages of the first situation given. Students made good observations around what was occurring and predictions of what would happen if we had a larger group and walked through a larger number of stages.

Thank you for sharing! And thanks to Jim at QUT that pointed me in your direction.

WOW! How exciting! I am so glad it was a success for you. Also mildly amazed to think of something I created being actually in someone’s classroom. Thank you for letting me know.

As a teacher, you may be happy to know that working out this scenario actually just taught me Markov chains! Here’s a little JavaScript I used to play around with it 🙂

```javascript
// Get the button.
window.onload = function() {
  document.getElementsByTagName("button")[0].addEventListener("click", function() {
    doMarkov();
  });
}

// Define the initial vector.
var x = 11;
var y = 63;

// Define the matrix.
var matrix = [[0.6, 0.8],
              [0.4, 0.2]];

// A function to perform a Markov step
// and print the results.
function doMarkov() {
  var newX = x*matrix[0][0] + y*matrix[0][1];
  var newY = x*matrix[1][0] + y*matrix[1][1];
  x = newX;
  y = newY;
  console.log("x, y: " + x + ", " + y);
}
```

Wow! Cool! I don’t know what is going on with the JavaScript, but I am glad this helped you.