In this work we explore how Augmented Reality annotations can be used as a form of Mixed Reality gesture, how neurophysiological measurements can inform the decision of whether to use such gestures, and whether and how to adapt language when they are used. We present a preliminary investigation of how decisions about robot-to-human communication modality in mixed reality environments might be made on the basis of humans' perceptual and cognitive states. Specifically, we propose using brain data acquired with high-density functional near-infrared spectroscopy (fNIRS) to measure the neural correlates of cognitive and emotional states of particular relevance to adaptive human-robot interaction (HRI). We describe several states of interest that fNIRS is well suited to measure and that have direct implications for HRI adaptations, and we leverage a framework developed in our prior work to explore how different neurophysiological measures could inform the selection of different communication strategies. We then report results from a feasibility experiment in which multilabel Convolutional Long Short-Term Memory networks were trained to classify the target mental states of 10 participants, and we close with a research agenda for adaptive human-robot teams based on our findings.
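The abstract does not specify the network architecture or framework used for the multilabel Convolutional LSTM classifier. The sketch below is only an illustrative PyTorch approximation of that general approach, with assumed channel counts, window length, state labels, and hyperparameters, not the authors' implementation.

```python
# Hypothetical sketch: a multilabel Conv-LSTM classifier for windowed fNIRS
# time series. Shapes, label set, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

N_CHANNELS = 40      # assumed number of fNIRS channels
WINDOW_LEN = 100     # assumed samples per analysis window
N_STATES = 4         # assumed number of target mental states

class ConvLSTMClassifier(nn.Module):
    def __init__(self, n_channels=N_CHANNELS, n_states=N_STATES,
                 conv_filters=32, lstm_hidden=64):
        super().__init__()
        # Temporal convolution extracts local hemodynamic features per window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # LSTM models longer-range temporal structure over the pooled features.
        self.lstm = nn.LSTM(conv_filters, lstm_hidden, batch_first=True)
        # One logit per mental state; multilabel, so no softmax coupling.
        self.head = nn.Linear(lstm_hidden, n_states)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.conv(x)                   # (batch, filters, time/2)
        feats = feats.permute(0, 2, 1)         # (batch, time/2, filters)
        _, (h_n, _) = self.lstm(feats)         # final hidden state summarizes the window
        return self.head(h_n[-1])              # raw logits, one per state

if __name__ == "__main__":
    model = ConvLSTMClassifier()
    # BCEWithLogitsLoss treats each state as an independent binary prediction,
    # which is what a multilabel formulation requires (states may co-occur).
    criterion = nn.BCEWithLogitsLoss()
    x = torch.randn(8, N_CHANNELS, WINDOW_LEN)       # synthetic batch
    y = torch.randint(0, 2, (8, N_STATES)).float()   # synthetic multilabel targets
    loss = criterion(model(x), y)
    loss.backward()
    print(float(loss))
```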
Hirshfield, L., Williams, T., Sommer, N., Grant, T., & Gursoy, S. V. (2018). Workload-driven modulation of mixed-reality robot-human communication. In Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data, MCPMD 2018. Association for Computing Machinery, Inc. https://doi.org/10.1145/3279810.3279848