This paper provides a brief introduction to Markov chains and their fundamental properties. Building on this foundation, it constructs Markov Decision Processes (MDPs), which give a framework for modeling sequential decision-making in optimization problems. The policy iteration and value iteration algorithms are introduced to compute optimal policies for such models. We finish by introducing hidden Markov models and applying them to the part-of-speech tagging problem.
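As a companion to the abstract, the following is a minimal sketch of value iteration on a hypothetical two-state, two-action MDP; the transition probabilities, rewards, and discount factor are illustrative choices, not taken from the paper.

```python
# Value iteration on a toy MDP (all numbers below are illustrative assumptions).
# P[s][a] = list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)],                  # stay in state 0, no reward
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},  # attempt to move to state 1
    1: {0: [(1.0, 1, 2.0)],                  # stay in state 1, reward 2
        1: [(1.0, 0, 0.0)]},                 # move back to state 0
}
gamma = 0.9                # discount factor (assumed)
V = {s: 0.0 for s in P}    # initial value estimates

# Repeatedly apply the Bellman optimality update until the values converge.
for _ in range(1000):
    delta = 0.0
    for s in P:
        best = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-8:
        break

# Extract the greedy (optimal) policy from the converged values.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```

Because the Bellman update is a contraction with modulus `gamma`, the loop converges geometrically; here state 1's value approaches 2/(1 - 0.9) = 20, and the extracted policy moves toward state 1 and stays there.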
Shirai, T. (2014). Finite Markov Chains and Markov Decision Processes (pp. 189–206). https://doi.org/10.1007/978-4-431-55060-0_15