Trust Engineering for Human-AI Teams


Abstract

Human-AI teaming refers to systems in which humans and artificial intelligence (AI) agents collaborate to achieve significant mission performance improvements beyond what humans or AI can achieve alone. The goal is faster and more accurate decision-making, attained by integrating the rapid data ingest, learning, and analysis capabilities of AI with the creative problem-solving and abstraction capabilities of humans. The purpose of this panel is to discuss research directions in Trust Engineering for building appropriate bidirectional trust between humans and AI. Discussions focus on the challenges posed by systems that are increasingly complex and operate within imperfect information environments. Panelists offer their perspectives on addressing these challenges through concepts such as dynamic relationship management, adaptive systems, co-discovery learning, and algorithmic transparency. Mission scenarios in command and control (C2), piloting, cybersecurity, and criminal intelligence analysis demonstrate the importance of bidirectional trust in human-AI teams.

Citation (APA)

Ezer, N., Bruni, S., Cai, Y., Hepenstal, S. J., Miller, C. A., & Schmorrow, D. D. (2019). Trust Engineering for Human-AI Teams. In Proceedings of the Human Factors and Ergonomics Society (Vol. 63, pp. 322–326). SAGE Publications Inc. https://doi.org/10.1177/1071181319631264
