Probabilistic Selection of Case-Based Explanations in an Underwater Mine Clearance Domain


Abstract

Autonomous agents should formulate and achieve goals with minimal support from humans. Although this might be feasible in a perfectly static world, it is not as easy in the real world, where uncertainty is bound to occur. One approach to this problem is to formulate goals based on cases that explain discrepancies observed in the environment. However, in an uncertain world, multiple such cases often apply (i.e., as alternative explanations). Moreover, agents in the real world often have limited resources to achieve their missions, so it is risky to generate and achieve goals for every applicable explanatory case. Our solution to these problems is to down-select the retrieved cases based on probabilities derived using Bayesian inference, and then to monitor the selected cases’ validity against observed evidence. We evaluate the performance of an agent in an underwater mine clearance domain and compare it to an agent that selects a random case from the candidate set.
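The down-selection step described above can be sketched as a Bayesian update over the retrieved candidate cases: each case gets a posterior proportional to its prior times the likelihood of the observed evidence, the agent commits to the most probable case, and it keeps monitoring that case as new evidence arrives. The sketch below is illustrative only and is not the authors' implementation; the case names, priors, likelihoods, and the validity threshold are all hypothetical.

```python
def posterior(priors, likelihoods, evidence):
    """P(case | evidence) proportional to P(case) * product of P(obs | case), normalized."""
    unnorm = {}
    for case, prior in priors.items():
        p = prior
        for obs in evidence:
            # Small floor for observations a case's model does not cover.
            p *= likelihoods[case].get(obs, 1e-6)
        unnorm[case] = p
    total = sum(unnorm.values())
    return {case: p / total for case, p in unnorm.items()}

# Hypothetical candidate explanations for a discrepancy (e.g., a vehicle
# stops responding during mine clearance).
priors = {"mine_strike": 0.2, "sensor_fault": 0.5, "comms_loss": 0.3}
likelihoods = {
    "mine_strike": {"loud_noise": 0.9, "debris": 0.8},
    "sensor_fault": {"loud_noise": 0.1, "debris": 0.05},
    "comms_loss": {"loud_noise": 0.2, "debris": 0.1},
}

evidence = ["loud_noise"]
post = posterior(priors, likelihoods, evidence)
selected = max(post, key=post.get)  # down-select one case to pursue

# Monitoring: re-evaluate the chosen case as new evidence arrives and
# abandon it if its posterior drops below a (hypothetical) threshold.
evidence.append("debris")
post = posterior(priors, likelihoods, evidence)
still_valid = post[selected] > 0.5
```

With these illustrative numbers, the new evidence reinforces the selected explanation rather than invalidating it; in the paper's setting, a drop below threshold would instead trigger re-selection among the remaining candidates.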

APA

Gogineni, V. R., Kondrakunta, S., Brown, D., Molineaux, M., & Cox, M. T. (2019). Probabilistic Selection of Case-Based Explanations in an Underwater Mine Clearance Domain. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11680 LNAI, pp. 110–124). Springer Verlag. https://doi.org/10.1007/978-3-030-29249-2_8
