Incremental learning of planning operators in stochastic domains


Abstract

In this work we assume that there is an agent in an unknown environment (domain). This agent has a set of predefined actions and can fully perceive its current state in the environment. The agent's mission is to fulfill the tasks (goals) assigned to it as quickly as possible. Acting is costly, and planning with a simulated model of the environment can usually reduce this cost. In this paper we present a new approach for incremental induction of probabilistic planning operators from this environment while the agent tries to reach its current goals. Previous work has addressed incremental induction of deterministic planning operators and batch learning of probabilistic planning operators, but the problem of incremental induction of probabilistic planning operators has not been studied before. We also address the trade-off between exploration (acting, for better learning of stochastic operators) and exploitation (planning, for fast achievement of goals), and we explain that a good decision in this trade-off depends on the stability and accuracy of the learned planning operators. © Springer-Verlag Berlin Heidelberg 2007.
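To make the abstract's last point concrete, the sketch below (not the authors' algorithm; all class names, the stability heuristic, and the smoothing constant are illustrative assumptions) shows one way an agent could incrementally maintain outcome statistics for probabilistic operators and use the stability of those estimates to decide between exploring with a random action and exploiting the planner's suggestion.

```python
"""Minimal sketch, assuming a discrete action set and observable effects:
incrementally estimate P(effect | action) from experience and use a crude
stability measure to balance exploration and exploitation."""

import random
from collections import defaultdict


class OperatorModel:
    """Per-action outcome counts used to estimate probabilistic operators."""

    def __init__(self):
        # outcome_counts[action][effect] = times 'effect' was observed after 'action'
        self.outcome_counts = defaultdict(lambda: defaultdict(int))

    def update(self, action, effect):
        """Incremental update after executing 'action' and observing 'effect'."""
        self.outcome_counts[action][effect] += 1

    def probability(self, action, effect):
        total = sum(self.outcome_counts[action].values())
        return self.outcome_counts[action][effect] / total if total else 0.0

    def stability(self, action):
        """Stability proxy: more observations of an action -> more stable estimates."""
        total = sum(self.outcome_counts[action].values())
        return total / (total + 10.0)  # 10 is an arbitrary smoothing constant


def choose_action(model, actions, planner_suggestion):
    """Exploit (follow the planner) when the relevant operator looks stable,
    otherwise explore with a random action to improve the learned model."""
    if random.random() < model.stability(planner_suggestion):
        return planner_suggestion   # exploitation: trust the learned operators
    return random.choice(actions)   # exploration: gather more experience
```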

Cite (APA)

Safaei, J., & Ghassem-Sani, G. (2007). Incremental learning of planning operators in stochastic domains. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4362 LNCS, pp. 644–655). Springer Verlag. https://doi.org/10.1007/978-3-540-69507-3_56
