Special issue on “Human-Autonomy Teaming in Military Contexts”

  • Chen J
  • Schulte A

Abstract

Since the publication of a Theoretical Issues in Ergonomics Science special issue on "Human-Autonomy Teaming in Military Settings" in 2018 (Chen 2018), there have been tremendous advancements in artificial intelligence (AI) and autonomous systems technologies. In the military domain, these capabilities are increasingly implemented in systems that Warfighters use during missions, both on and off the battlefield, aboard manned vehicles, and in the context of unmanned systems mission management. In order to deploy these systems effectively, the military must critically consider human-autonomy teaming (HAT). Indeed, many intelligent systems are able to conduct sophisticated planning, sense-observe-orient-decide-act, and work side by side (literally or figuratively) with Warfighters; cognitive capabilities, formerly the domain of humans, have increasingly become incorporated into highly automated/autonomous systems (Schulte et al. 2016). This trend clearly shifts the human-machine relationship from a supervisory hierarchy to a partner-like collaboration. In other words, intelligent systems increasingly play the role of a peer teammate rather than that of a subordinate tool. However, there are challenges associated with military operations utilizing intelligent systems embedded in human socio-technical environments: potential lethality, responsibility and controllability, high stress and workload, and time pressure, to name a few. More often than not, military HAT issues pose unique challenges that have not been addressed in civilian applications. This special issue (SI) tackles military HAT issues on multiple fronts: theoretical frameworks and considerations related to team processes and performance (e.g., communications, trust, and workload), Warfighter-machine interaction and interface designs, and simulation-based experimentation.
The contexts examined in the studies cover a wide range of military operations: humans working with small robotic systems, cyber analysts, human interaction with virtual agents, management of multiple heterogeneous unmanned systems, human-swarm interaction, and helicopter pilots working with an adaptive agent in the cockpit environment. While multiple articles are authored by U.S. Department of Defense researchers, there are also contributions from (or supported by) military agencies in Australia, Canada, and Germany. Hou et al. present a conceptual framework for developing trustworthy agents and effective HAT. Their model, IMPACTS, describes system design principles that include intention, measurability, performance, adaptivity, communication, transparency, and security. An actual system, the Authority Pathway for Weapon Engagement, is used to illustrate how the framework can be applied to the design of an intelligent adaptive decision aid. The authors also describe a field test in a multinational military exercise, in which feedback about the decision aid was obtained from participating subject matter experts. Baker et al. review eleven team communication assessment techniques that are particularly relevant to HAT. The authors provide examples of efforts to apply those techniques in military HAT contexts and discuss issues associated with team cohesion, trust, and other team performance outcomes in those settings. Based on this extensive review, Baker et al. identify four critical areas for future research that can provide useful guidance for researchers interested in team communication. Lyons and Wynne examine the efficacy of the Autonomous Agent Teammate-likeness (AAT) scales for assessing six aspects of perceptions of machine agents' teammate-likeness: agency, benevolence, communication, interdependence, synchrony, and team orientation.
An online experiment was conducted in which participants were provided a brief narrative of a new technology with either high or low teaming characteristics. The results demonstrate the utility of the AAT scales for reliably assessing perceptions of agents' teammate-likeness. Holder and Wang discuss user interface design requirements for an AI agent to work effectively and transparently

Citation (APA)

Chen, J., & Schulte, A. (2021). Special issue on “Human-Autonomy Teaming in Military Contexts.” Human-Intelligent Systems Integration, 3(4), 287–289. https://doi.org/10.1007/s42454-021-00032-4
