Policy Gradient Planning for Environmental Decision Making with Existing Simulators

by Mark Crowley, David Poole
Association for the Advancement of Artificial Intelligence (AAAI)


In environmental and natural resource planning domains, actions are taken at a large number of locations over multiple time periods. These problems have enormous state and action spaces, spatial correlation between actions, uncertainty, and complex utility models. We present an approach for modeling these planning problems as factored Markov decision processes. The reward model can contain local and global components as well as spatial constraints between locations. The transition dynamics can be provided by existing simulators developed by domain experts. We propose a landscape policy defined as the equilibrium distribution of a Markov chain built from many locally-parameterized policies. This policy is optimized using a policy gradient algorithm. Experiments using a forestry simulator demonstrate the algorithm's ability to devise policies for sustainable harvest planning of a forest.
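The core idea of the abstract — locally-parameterized policies over many locations, trained with a policy gradient against a reward with local and global components — can be sketched in miniature. The following is a toy REINFORCE-style illustration, not the paper's actual algorithm: the reward function, sigmoid Bernoulli policy, and all parameter values are invented stand-ins (the paper's landscape policy is the equilibrium distribution of a Markov chain, which this sketch omits).

```python
import math
import random

random.seed(0)

N = 5              # number of locations in a toy landscape (hypothetical)
theta = [0.0] * N  # one policy parameter per location
alpha = 0.1        # learning rate (arbitrary choice)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reward(actions):
    # Invented utility: a local payoff per harvested location plus a
    # global penalty once more than half the landscape is cut -- a
    # crude stand-in for a sustainability constraint.
    harvested = sum(actions)
    return harvested - 4.0 * max(0, harvested - N // 2)

for step in range(2000):
    # Sample a joint action: harvest location i with prob. sigmoid(theta[i]).
    probs = [sigmoid(t) for t in theta]
    actions = [1 if random.random() < p else 0 for p in probs]
    r = reward(actions)
    # REINFORCE update: for a Bernoulli policy,
    # d/d theta_i log pi(a_i) = a_i - p_i.
    for i in range(N):
        theta[i] += alpha * r * (actions[i] - probs[i])

print([round(sigmoid(t), 2) for t in theta])
```

With this invented reward, the harvest probabilities settle so that the expected number of harvested locations stays near the penalty threshold, i.e. the policy learns not to cut the whole landscape. In the paper, the per-location policies are coupled spatially and the transition dynamics come from an external simulator rather than a closed-form reward.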
