Human–machine teaming is rapidly becoming standard in many occupational settings, yet it still requires careful consideration of how autonomous teammates (ATs) are designed. Transparency of system processes is important for human–machine interaction and reliance, but standards for its implementation are still being explored. Embedding social cues is one potential design approach that may capture the social benefits of a team environment, although its effects may vary with the task setting. The current study manipulated the transparency of benevolent intent from an AT within a piloting task requiring suppression of enemy defenses. Specifically, the benevolent AT maintained the same task communication as in a neutral condition but added messages of support and awareness of errors. Benevolent communication reduced reported workload and increased reported team collaboration, indicating that conveying this team intent was beneficial. In addition, trust in and acceptance of the AT were rated higher by individuals who depended on the system to protect them from missile threats. Information from ATs is thus beneficial, although the information needed may vary depending on team type.
Citation
Panganiban, A. R., Matthews, G., & Long, M. D. (2020). Transparency in Autonomous Teammates: Intention to Support as Teaming Information. Journal of Cognitive Engineering and Decision Making, 14(2), 174–190. https://doi.org/10.1177/1555343419881563