1. Most sequential decision-making problems in conservation can be viewed conceptually and modelled as a Markov decision process. The goal in this context is to construct a policy that associates each state of the system with a particular action. This policy should offer optimal performance in the sense of maximizing or minimizing a specified conservation objective.
2. Dynamic programming algorithms rely on explicit enumeration of the state space to derive the optimal policy. This is problematic from a computational perspective because the size of the state space grows exponentially with the number of state variables.
3. We present a state aggregation method in which the idea is to capture the most important aspects of the original Markov decision process in a smaller, abstract model, find an optimal policy over this reduced state space, and use it as an approximate solution to the original problem (see the sketch below).
4. Applying the aggregation method to a species reintroduction problem, we reduced the number of states by 75% and the size of the transition matrices by almost 94% (324 vs. 5184 entries), and the action prescribed by the abstract policy matched the optimal action more than 86% of the time.
5. We conclude that the aggregation method is not a panacea for the curse of dimensionality, but it does advance our ability to construct approximately optimal policies in systems with large state spaces.

© 2012 The Authors. Methods in Ecology and Evolution © 2012 British Ecological Society.
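Point 3 describes the core workflow: group the ground states of the full Markov decision process into a smaller set of abstract states, solve the reduced problem, and lift the resulting policy back to the original state space. Below is a minimal sketch of that generic aggregate-then-solve idea on a toy problem; the hand-picked grouping, the uniform within-group averaging, and all names (`groups`, `P_abs`, etc.) are illustrative assumptions, not the specific aggregation scheme of Schapaugh & Tyre (2012).

```python
import numpy as np

# Toy MDP: 6 ground states, 2 actions, random transitions and rewards.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.95
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0, 1, size=(n_actions, n_states))                 # R[a, s]

# Aggregation map (assumed for illustration): 6 ground states -> 3 abstract states.
groups = np.array([0, 0, 1, 1, 2, 2])
n_abs = groups.max() + 1

# Build the abstract MDP by averaging transitions and rewards within each group
# (uniform weighting over member states; the paper's own aggregation may differ).
P_abs = np.zeros((n_actions, n_abs, n_abs))
R_abs = np.zeros((n_actions, n_abs))
for g in range(n_abs):
    members = np.flatnonzero(groups == g)
    for a in range(n_actions):
        avg_row = P[a, members].mean(axis=0)          # average outgoing probabilities
        for h in range(n_abs):
            P_abs[a, g, h] = avg_row[groups == h].sum()  # lump mass into target groups
        R_abs[a, g] = R[a, members].mean()

# Value iteration on the (much smaller) abstract MDP.
V = np.zeros(n_abs)
for _ in range(1000):
    Q = R_abs + gamma * P_abs @ V        # Q[a, g]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
abstract_policy = Q.argmax(axis=0)       # best action per abstract state

# Lift the abstract policy back to the original states via the aggregation map.
ground_policy = abstract_policy[groups]
print("abstract policy:", abstract_policy)
print("policy on original states:", ground_policy)
```

How ground states are grouped largely determines how well the abstract policy approximates the optimal one; the sketch simply hard-codes a grouping for illustration.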
CITATION STYLE
Schapaugh, A. W., & Tyre, A. J. (2012). A simple method for dealing with large state spaces. Methods in Ecology and Evolution, 3(6), 949–957. https://doi.org/10.1111/j.2041-210X.2012.00242.x