Stochastic planning with lifted symbolic trajectory optimization

Abstract

This paper investigates online stochastic planning for problems with large factored state and action spaces. One promising approach in recent work estimates the quality of applicable actions in the current state through aggregate simulation from the states they reach. This leads to a significant speedup compared to search over concrete states and actions, and suffices to guide decision making in cases where the performance of a random policy is informative of the quality of a state. The paper makes two significant improvements to this approach. The first, taking inspiration from lifted belief propagation, exploits the structure of the problem to derive a more compact computation graph for aggregate simulation. The second improvement replaces the random policy embedded in the computation graph with symbolic variables that are optimized simultaneously with the search for high-quality actions. This expands the scope of the approach to problems that require deep search and where information is lost quickly with random steps. An empirical evaluation shows that these ideas significantly improve performance, leading to state-of-the-art results on hard planning problems.
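The core idea of the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's algorithm or domains: a walker on a line must end near a target after a fixed horizon. Instead of sampling concrete rollouts, we propagate the *expected* position through time (an aggregate simulation of marginals), and we treat the rollout policy's step probability `p` as a symbolic variable tuned by gradient ascent, rather than fixing a random policy with `p = 0.5`. The domain, horizon, target, and all function names here are hypothetical.

```python
# Hedged sketch (assumed toy domain): aggregate simulation with an
# optimized symbolic rollout policy, in the spirit of the abstract.

H, TARGET = 10, 9.5  # horizon and goal position (both made up for the toy)

def aggregate_value(start_mean, p, horizon):
    """Propagate the mean position under a policy that steps +1 w.p. p.
    This is the aggregate simulation: we track E[x_t], not sampled states."""
    mu = start_mean
    for _ in range(horizon):
        mu = mu + p                    # aggregate update: E[x_{t+1}] = E[x_t] + p
    return -(mu - TARGET) ** 2         # reward: negative squared distance to goal

def optimize_policy(start_mean, horizon, lr=0.01, iters=200):
    """Gradient ascent on the symbolic policy parameter p (finite differences),
    starting from the random policy p = 0.5."""
    p, eps = 0.5, 1e-4
    for _ in range(iters):
        grad = (aggregate_value(start_mean, p + eps, horizon)
                - aggregate_value(start_mean, p - eps, horizon)) / (2 * eps)
        p = min(1.0, max(0.0, p + lr * grad))
    return p

# Rank the applicable first actions by the value of the aggregate rollout
# from their successor states, re-optimizing the rollout policy for each.
first_actions = {"stay": 0.0, "step": 1.0}
q = {}
for name, delta in first_actions.items():
    p = optimize_policy(delta, H - 1)
    q[name] = aggregate_value(delta, p, H - 1)

best = max(q, key=q.get)
```

Under the random policy both first actions look similar (expected position ~5 either way), but the optimized rollout policy reveals that only "step" can still reach the target, illustrating why optimizing the embedded policy helps when information is lost quickly with random steps.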

Citation (APA)

Cui, H., Keller, T., & Khardon, R. (2019). Stochastic planning with lifted symbolic trajectory optimization. In Proceedings International Conference on Automated Planning and Scheduling, ICAPS (pp. 119–127). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/icaps.v29i1.3467
