Machine Self-confidence in Autonomous Systems via Meta-analysis of Decision Processes


Abstract

Algorithmic assurances assist human users in trusting advanced autonomous systems appropriately. This work explores one approach to creating assurances in which systems self-assess their decision-making capabilities, resulting in a ‘self-confidence’ measure. We present a framework for self-confidence assessment and reporting using meta-analysis factors, and then develop a new factor pertaining to ‘solver quality’ in the context of solving Markov decision processes (MDPs), which are widely used in autonomous systems. A novel method for computing solver quality self-confidence is derived, drawing inspiration from empirical hardness models. Numerical examples show our approach has desirable properties for enabling an MDP-based agent to self-assess its performance for a given task under different conditions. Experimental results for a simulated autonomous vehicle navigation problem show significantly improved delegated task performance outcomes in conditions where self-confidence reports are provided to users.
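As context for the abstract, the solver being self-assessed is an MDP planner. The sketch below is purely illustrative (not the paper's method, and the MDP numbers are hypothetical): value iteration for a tiny MDP, the kind of solver whose quality the proposed self-confidence factor would assess.

```python
# Illustrative sketch only: value iteration for a small MDP.
# Transition tensor P and reward matrix R are hypothetical example values.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition tensor; R: (S, A) immediate rewards.
    Returns the converged value function and a greedy policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_j P[a, s, j] * V[j]
        Q = R + gamma * np.einsum('asj,j->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Tiny 2-state, 2-action MDP (made-up numbers for illustration)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # transitions under action 1
R = np.array([[0.0, 1.0],   # rewards in state 0 for actions 0/1
              [1.0, 2.0]])  # rewards in state 1 for actions 0/1
V, policy = value_iteration(P, R)
```

A self-confidence measure in the paper's sense would then sit on top of such a solver, assessing (for example) how solution quality degrades under limited computation or model mismatch, rather than being part of the solver itself.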

Citation (APA)

Israelsen, B., Ahmed, N., Frew, E., Lawrence, D., & Argrow, B. (2020). Machine Self-confidence in Autonomous Systems via Meta-analysis of Decision Processes. In Advances in Intelligent Systems and Computing (Vol. 965, pp. 213–223). Springer Verlag. https://doi.org/10.1007/978-3-030-20454-9_21
