Improving competence for reliable autonomy

2 citations · 13 Mendeley readers

Abstract

Given the complexity of real-world, unstructured domains, it is often impossible or impractical to design models that include every feature needed to handle all possible scenarios that an autonomous system may encounter. For an autonomous system to be reliable in such domains, it should have the ability to improve its competence online. In this paper, we propose a method for improving the competence of a system over the course of its deployment. We specifically focus on a class of semi-autonomous systems known as competence-aware systems that model their own competence, the optimal extent of autonomy to use in any given situation, and learn this competence over time from feedback received through interactions with a human authority. Our method exploits such feedback to identify important state features missing from the system's initial model, and incorporates them into its state representation. The result is an agent that better predicts human involvement, leading to improvements in its competence and reliability, and as a result, its overall performance.
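The abstract describes the approach only at a high level. The following minimal Python sketch illustrates the general idea under stated assumptions; the class, method names, and the simple override-rate heuristic are illustrative choices, not the authors' algorithm. The agent records, per abstract state, how often the human authority overrides it; states whose override rate is neither clearly low nor clearly high suggest that a feature distinguishing those situations is missing, and a newly identified feature can then be folded into the state representation.

```python
# Hypothetical illustration of feedback-driven feature discovery.
# Names, thresholds, and structure are assumptions, not the method
# from Basich et al. (2020).
from collections import defaultdict
from typing import Dict, FrozenSet, List, Tuple


class CompetenceModel:
    """Tracks how often the human authority overrides the agent in each
    abstract state, and flags states where its predictions are unreliable."""

    def __init__(self, features: List[str]):
        self.features = features            # state features currently modeled
        self.overrides = defaultdict(int)   # abstract state -> override count
        self.visits = defaultdict(int)      # abstract state -> visit count

    def abstract_state(self, raw_state: Dict[str, object]) -> FrozenSet[Tuple[str, object]]:
        """Project a raw observation onto the currently modeled features."""
        return frozenset((f, raw_state.get(f)) for f in self.features)

    def record_feedback(self, raw_state: Dict[str, object], overridden: bool) -> None:
        """Record one interaction and whether the human overrode the agent."""
        s = self.abstract_state(raw_state)
        self.visits[s] += 1
        self.overrides[s] += int(overridden)

    def override_rate(self, raw_state: Dict[str, object]) -> float:
        s = self.abstract_state(raw_state)
        return self.overrides[s] / self.visits[s] if self.visits[s] else 0.0

    def inconsistent_states(self, low: float = 0.2, high: float = 0.8, min_visits: int = 10):
        """Abstract states whose override rate is neither clearly low nor
        clearly high, hinting that a distinguishing feature is missing."""
        return [s for s, n in self.visits.items()
                if n >= min_visits and low < self.overrides[s] / n < high]

    def add_feature(self, name: str) -> None:
        """Refine the state representation with a newly identified feature.
        Counts are reset because old abstract states no longer align."""
        if name not in self.features:
            self.features.append(name)
            self.overrides.clear()
            self.visits.clear()
```

In this sketch, an intermediate override rate is treated as evidence that the current features cannot separate situations the human handles differently; the paper's actual criterion for identifying missing features and incorporating them is more principled than this heuristic.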

Cite

APA: Basich, C., Svegliato, J., Zilberstein, S., Wray, K. H., & Witwicki, S. J. (2020). Improving competence for reliable autonomy. In Electronic Proceedings in Theoretical Computer Science, EPTCS (Vol. 319, pp. 37–53). Open Publishing Association. https://doi.org/10.4204/eptcs.319.4
