Reasoning about risk in agent's deliberation process: A Jadex implementation


Abstract

Autonomous agents and multi-agent systems have proved useful in several safety-critical applications. However, in current agent architectures (particularly BDI architectures) the deliberation process does not include any form of risk analysis. In this paper, we propose guidelines for implementing Tropos Goal-Risk reasoning. Our proposal introduces risk reasoning into the deliberation process of a BDI agent so that the set of possible plans is evaluated with respect to risk. When the level of risk is too high, agents can consider and introduce additional plans, called treatments, that produce an overall reduction of risk. Side effects of treatments are also considered as part of the model. To make the discussion more concrete, we illustrate the proposal with a case study of an Unmanned Aerial Vehicle agent. © 2008 Springer-Verlag Berlin Heidelberg.
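To illustrate the idea sketched in the abstract, the following is a minimal, framework-agnostic Java sketch of risk-aware plan selection with treatments. It is not the authors' Jadex implementation; the class names, the 0.4 risk threshold, the additive risk-reduction model, and the UAV plan names are illustrative assumptions.

```java
// Minimal sketch: choose the lowest-risk plan for a goal; if its risk is
// still above an acceptable threshold, consider treatment plans that reduce
// risk while accounting for their side effects. Illustrative only, not the
// paper's Jadex code; names and numbers are assumed.
import java.util.*;

public class RiskAwareDeliberation {

    // A candidate plan for the agent's goal, with an estimated risk in [0, 1].
    record Plan(String name, double risk) {}

    // A treatment: an extra plan that lowers a plan's risk but may add a
    // side-effect risk of its own.
    record Treatment(String name, double riskReduction, double sideEffectRisk) {}

    static final double RISK_THRESHOLD = 0.4; // assumed acceptable-risk bound

    // Pick the lowest-risk plan; if its risk exceeds the threshold, apply the
    // treatment that yields the best net risk after side effects.
    static String deliberate(List<Plan> plans, Map<String, List<Treatment>> treatments) {
        Plan best = plans.stream()
                .min(Comparator.comparingDouble(Plan::risk))
                .orElseThrow();
        if (best.risk() <= RISK_THRESHOLD) {
            return best.name();
        }
        double bestNetRisk = best.risk();
        String choice = best.name() + " (risk remains " + best.risk() + ")";
        for (Treatment t : treatments.getOrDefault(best.name(), List.of())) {
            double netRisk = best.risk() - t.riskReduction() + t.sideEffectRisk();
            if (netRisk < bestNetRisk) {
                bestNetRisk = netRisk;
                choice = best.name() + " + " + t.name() + " (net risk " + netRisk + ")";
            }
        }
        return choice;
    }

    public static void main(String[] args) {
        // Hypothetical UAV-style example: two ways to reach a waypoint.
        List<Plan> plans = List.of(
                new Plan("flyDirectRoute", 0.7),
                new Plan("flySafeCorridor", 0.5));
        Map<String, List<Treatment>> treatments = Map.of(
                "flySafeCorridor", List.of(
                        new Treatment("increaseAltitude", 0.3, 0.05)));
        System.out.println(deliberate(plans, treatments));
    }
}
```

In this toy run, both plans exceed the assumed threshold, so the agent adds the increaseAltitude treatment to the safer plan, mirroring the abstract's point that treatments are introduced only when the evaluated risk is too high and that their side effects count against them.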

Citation (APA)

Asnar, Y., Giorgini, P., & Zannone, N. (2008). Reasoning about risk in agent's deliberation process: A Jadex implementation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4951 LNCS, pp. 118–131). https://doi.org/10.1007/978-3-540-79488-2_9