The design of autonomous agents situated in real-world domains involves dealing with uncertainty in terms of dynamism, observability and non-determinism. These three types of uncertainty, when combined with the real-time requirements of many application domains, imply that an agent must be capable of effectively coordinating its reasoning. As such, situated belief-desire-intention (BDI) agents need an efficient intention reconsideration policy, which defines when computational resources are spent on reasoning, i.e., deliberating over intentions, and when resources are better spent on either object-level reasoning or action. This paper presents an implementation of such a policy by modelling intention reconsideration as a partially observable Markov decision process (POMDP). The motivation for a POMDP implementation of intention reconsideration is that the two processes have similar properties and functions, as we demonstrate in this paper. We show empirically that our approach achieves better results than existing intention reconsideration frameworks. © Springer-Verlag Berlin Heidelberg 2001.
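The core operation behind a POMDP-based reconsideration policy is the Bayesian belief update, which lets the agent track how likely its current intention is still worth pursuing. The sketch below is illustrative only: the two-state model, the action and observation names, and the threshold rule are all assumptions for the sake of the example, not the paper's actual formulation.

```python
def belief_update(belief, action, observation, T, O):
    """Bayesian belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        predicted = sum(T[(s, action)][s2] * belief[s] for s in states)
        new_belief[s2] = O[(s2, action)][observation] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Toy model (assumed, not from the paper): is the agent's current
# intention still "valid", or has the environment invalidated it?
T = {
    ("valid", "act"): {"valid": 0.9, "invalid": 0.1},       # acting lets the world drift
    ("invalid", "act"): {"valid": 0.0, "invalid": 1.0},
    ("valid", "deliberate"): {"valid": 1.0, "invalid": 0.0},   # deliberation restores a valid intention
    ("invalid", "deliberate"): {"valid": 1.0, "invalid": 0.0},
}
O = {
    ("valid", "act"): {"ok": 0.8, "anomaly": 0.2},          # noisy, partial observations
    ("invalid", "act"): {"ok": 0.3, "anomaly": 0.7},
    ("valid", "deliberate"): {"ok": 0.8, "anomaly": 0.2},
    ("invalid", "deliberate"): {"ok": 0.3, "anomaly": 0.7},
}

belief = {"valid": 0.95, "invalid": 0.05}
belief = belief_update(belief, "act", "anomaly", T, O)

# A simple reconsideration rule: deliberate once doubt crosses a threshold.
reconsider = belief["invalid"] > 0.3
```

Under this sketch, a single anomalous observation pushes the probability that the intention is invalid above the threshold, triggering deliberation; the trade-off between acting and deliberating is exactly what the POMDP's policy optimises.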
Schut, M., Wooldridge, M., & Parsons, S. (2001). Reasoning about intentions in uncertain domains. Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), 2143, 84–95. https://doi.org/10.1007/3-540-44652-4_9