11 Predicting future states with Markov analysis

 

This chapter covers

  • States, state probabilities, and transition matrices
  • Vector of state probabilities
  • Equilibrium conditions
  • Absorbing states
  • Matrix operations in Markov chains

Markov analysis is a mathematical technique for predicting the future states of a system—such as a market, a machine, or a population—from its current state and the probabilities of transitioning from one state to another. It applies specifically to systems that exhibit the Markov property: future states depend only on the present state, not on the sequence of past events. This memoryless characteristic makes Markov analysis particularly useful for modeling dynamic processes in which past influences can be effectively summarized by the present state. The technique originated in the early 20th century with the work of the Russian mathematician Andrey Markov and has since been widely applied in fields ranging from market share prediction to delinquency forecasting.
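The core calculation behind this idea can be sketched in a few lines: multiply a vector of current state probabilities by a matrix of transition probabilities to get the next period's state probabilities. The two-state market example below is a minimal illustration with made-up numbers, not data from this chapter.

```python
import numpy as np

# Hypothetical two-state market example (illustrative numbers only):
# state 0 = customer shops at store A, state 1 = customer shops at store B.
# Each row gives the probabilities of moving from that state to every state,
# so each row must sum to 1.
P = np.array([
    [0.8, 0.2],   # from A: P(stay at A), P(switch to B)
    [0.3, 0.7],   # from B: P(switch to A), P(stay at B)
])

# Current vector of state probabilities (market shares this period).
pi0 = np.array([0.5, 0.5])

# One-step prediction: next period's state probabilities.
pi1 = pi0 @ P
print(pi1)   # -> [0.55 0.45]
```

Because the transition matrix fully captures the system's dynamics, repeating this multiplication projects the state vector further into the future—the operation the rest of the chapter builds on.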

11.1 Understanding the mechanics of Markov analysis

11.2 States and state probabilities

11.2.1 Understanding the vector of state probabilities for multistate systems

11.2.2 Matrix of transition probabilities

11.3 Equilibrium conditions

11.3.1 Predicting equilibrium conditions programmatically

11.4 Absorbing states

11.4.1 Obtaining the fundamental matrix

11.4.2 Predicting absorbing states

11.4.3 Predicting absorbing states programmatically

Summary