Trustworthy Human-Centered Automation Through Explainable AI and High-Fidelity Simulation

Abstract

As we become more competent developers of artificially intelligent systems, the level of deployment and the associated implicit trust in these systems will increase in kind. While this is an attractive prospect, with an already-demonstrated capability to positively disrupt industries around the world, it remains a dangerous premise that demands attention and intentional resource allocation to ensure that these systems' behaviors match our expectations. Until we can develop explainable AI techniques or high-fidelity simulators that enable us to examine a model's underlying logic in the situations in which we intend to deploy it, it will be irresponsible to place our trust in its ability to act on our behalf. In this work we describe and provide guidelines for ongoing efforts in using novel explainable AI techniques and high-fidelity simulation to help establish shared expectations between autonomous systems and the humans who interact with them, with discussion of the collaborative robotics and cybersecurity domains.

Citation (APA)

Hayes, B., & Moniz, M. (2021). Trustworthy Human-Centered Automation Through Explainable AI and High-Fidelity Simulation. In Advances in Intelligent Systems and Computing (Vol. 1206 AISC, pp. 3–9). Springer. https://doi.org/10.1007/978-3-030-51064-0_1
