
Markov Chains

Part 1: Television Viewers

Suppose there are two regional news shows in the local television viewing area, and we have conducted a survey of viewers to determine which channel the viewers have been watching. The first survey revealed that 40% of the viewers watched station X and 60% watched station Y. Subsequent surveys revealed that each week 15% of the X viewers switched to station Y and 5% of the Y viewers switched to station X.

We will use transition matrices and Markov chains to make a prediction about the future television market from this information. Our conclusions at the end of the example may be more meaningful if you take time now to make a guess as to what proportions of the population will be watching each station in the long run. We assume that the 15%-5% switching trends will continue indefinitely.

Let p_k be a two-dimensional column vector whose entries give the proportion of people who watch station X and the proportion who watch station Y, in that order, during week k. Thus, p_0 = [0.4, 0.6]^T. From the assumptions above, we see that the proportion of people watching station X in week k = 1 will be 85% of the X viewers in week k = 0 (which is 85% of 40%, or 34%) plus 5% of the Y viewers in week k = 0 (which is 5% of 60%, or 3%), for a total of 37%:

0.85(0.4) + 0.05(0.6) = 0.37.

Similarly,

0.15(0.4) + 0.95(0.6) = 0.63,

is the proportion of people watching station Y in week k = 1. So we have computed p_1 = [0.37, 0.63]^T. Similarly, we could compute the sequence of vectors p_2, p_3, ..., but this process would be quite tedious if we continued in the manner above. Andrei Markov (1856-1922) described a much more efficient way of handling such a problem.

First, we think of the movement of viewers as being described by the following array, which gives the weekly proportion of viewers who change from one station to another:

                From:
                 X      Y
    To:   X     0.85   0.05
          Y     0.15   0.95

The matrix A formed from the entries in this table is called the transition matrix (or Markov matrix) for this problem.

Looking carefully at the definition of matrix multiplication, we see that p_1 = A p_0. Indeed, for k = 0, 1, 2, ..., we have p_{k+1} = A p_k. Each p_k is called a probability vector, and the sequence p_0, p_1, p_2, p_3, ... is called a Markov chain.
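
As a quick check on this matrix form of the recursion, here is a minimal computational sketch (assuming Python with NumPy, which is not part of this module; the variable names are illustrative) that builds the transition matrix A and computes p_1 = A p_0:

    import numpy as np

    # Transition matrix A: rows are "to X" and "to Y",
    # columns are "from X" and "from Y", as in the table above.
    A = np.array([[0.85, 0.05],
                  [0.15, 0.95]])

    # Initial probability vector: 40% watch X, 60% watch Y in week 0.
    p0 = np.array([0.40, 0.60])

    # One step of the Markov chain: p_{k+1} = A p_k.
    p1 = A @ p0
    print(p1)  # expected: [0.37 0.63]

Repeating the multiplication p = A @ p produces p_2, p_3, and so on.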

  1. Calculate p_1, p_2, p_3, p_4, and p_5.
  2. If there are 50,000 people who watch the news each week, how many will watch station X during the fifth week? How many will watch station Y?
  3. Because matrix multiplication is associative, we find
    p_2 = A p_1 = A(A p_0) = A^2 p_0,
    p_3 = A p_2 = A(A^2 p_0) = A^3 p_0,
    and, in general,
    p_k = A^k p_0, for k = 1, 2, ... .
    Compute A^5. Then compute p_5 directly from A^5 without computing p_1, ..., p_4. Check that you get the same values for p_5 here that you did in step 1. (A computational sketch for this and the later steps appears after this list.)
  4. Suppose p is a probability vector with the property that Ap = p. If this p describes the current viewers, then what does this equation say about the proportion of viewers for each station from one week to the next?
  5. We can solve the equation Ap = p by rewriting it as Ap - p = 0 and factoring out p. We get the matrix equation (A - I)p = 0, where I is the 2x2 identity matrix. Using our given transition matrix, solve this matrix equation for p. From the solutions pick the one which is a probability vector (i.e., its entries must sum to 1). This vector is called the steady-state vector. Check your answer by computing Ap.
  6. The fact that p is the steady-state vector does not imply that the Markov chain will actually reach that state from p_0. Compute p_k for some large values of k, describe what you see, and explain your conclusion about whether this chain will approach a steady state.
  7. At the beginning of this example, you guessed the long-term behavior of these television news viewers. Was your guess correct? Did anything about the subsequent mathematical prediction surprise you? If so, what?
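
The following sketch (again assuming Python with NumPy, not part of this module; replacing a redundant row of A - I with the normalization constraint is just one reasonable way to solve step 5) carries out the computations asked for in steps 3, 5, and 6:

    import numpy as np

    A = np.array([[0.85, 0.05],
                  [0.15, 0.95]])
    p0 = np.array([0.40, 0.60])

    # Step 3: compute A^5, then p_5 = A^5 p_0 directly.
    A5 = np.linalg.matrix_power(A, 5)
    p5 = A5 @ p0
    print("A^5 =\n", A5)
    print("p_5 =", p5)

    # Step 5: solve (A - I) p = 0 together with the constraint that the
    # entries of p sum to 1.  The two rows of A - I are redundant, so we
    # replace one of them with the normalization row [1, 1].
    M = A - np.eye(2)
    M[1, :] = [1.0, 1.0]
    steady = np.linalg.solve(M, np.array([0.0, 1.0]))
    print("steady-state vector =", steady)
    print("check: A p =", A @ steady)  # should reproduce the same vector

    # Step 6: iterate p_{k+1} = A p_k for many weeks and watch whether
    # the chain approaches the steady-state vector.
    p = p0.copy()
    for k in range(1, 101):
        p = A @ p
    print("p_100 =", p)

Comparing p_100 with the steady-state vector gives a numerical answer to step 6; whether the result matches your initial guess is the question posed in step 7.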

