Advances in robotics may reshape the landscape of daily life, yet the military has been part of the robotics revolution for some time now. One cannot traverse far within military echelons, nor listen to the popular press, without hearing planning, discussion, and, for some, a great deal of concern regarding the military's latest push toward autonomous systems. The military's use of drones (uninhabited aerial systems, or UASs) has been a ubiquitous topic of discussion and criticism within the popular media for several years, owing to their highly publicized use in regions such as Pakistan, Yemen, and Afghanistan. Much of the chagrin surrounding these systems, despite the fact that they are currently teleoperated with human oversight and command, has to do with whether we can, or should, trust them in a combat environment. Robotic systems within the military may be operated in hostile, complex situations and may someday be given the authority to execute lethal decisions within the battle space (Arkin 2009). However, future concepts of operations (CONOPS) will likely inject greater autonomy into these systems, which will ultimately increase the need to understand the trust dynamics that exist between humans and machines. As will be discussed in this article, the challenge of understanding these trust dynamics is more complicated than simply increasing the system's reliability.
Lyons, J. B., Clark, M. A., Wagner, A. R., & Schuelke, M. J. (2017). Certifiable trust in autonomous systems: Making the intractable tangible. AI Magazine, 38(3), 37–49. https://doi.org/10.1609/aimag.v38i3.2717