The prospect of human commanders teaming with mobile robots "smart enough" to undertake joint exploratory tasks, especially tasks that neither commander nor robot could perform alone, requires novel methods of preparing and testing human-robot teams for these ventures prior to real-time operations. In this paper, we report work in progress that maintains the face validity of selected configurations of resources and people, as would be available in emergency circumstances. More specifically, from an off-site post, we ask human commanders (C) to perform an exploratory task in collaboration with a remotely located human robot-navigator (Rn), who controls the navigation of, but cannot see, the physical robot (R). We impose network bandwidth restrictions comparable to real circumstances in two mission scenarios by varying the availability of sensor, image, and video signals to Rn, in effect limiting the human Rn to functioning as an automation stand-in. To better understand the capabilities and language required in such configurations, we constructed multi-modal corpora of time-synced dialog, video, and LIDAR files recorded during task sessions. We can now examine commander/robot dialogs while replaying what C and Rn saw, to assess their task performance under these varied conditions.
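As a minimal sketch, not the authors' actual implementation, the following Python code illustrates one way the time-synced, multi-modal session records and the bandwidth-limited scenarios described above might be represented for replay and analysis. All class, field, and file names here (BandwidthCondition, TimedRecord, Session, and the example paths) are hypothetical assumptions introduced for illustration only.

```python
# Hypothetical sketch of time-synced multi-modal session records and
# bandwidth conditions; names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Iterator, List


class Modality(Enum):
    """Signal types whose availability to Rn is varied across scenarios."""
    SENSOR = "sensor"
    IMAGE = "image"
    VIDEO = "video"


@dataclass
class BandwidthCondition:
    """A mission scenario defined by which signals reach the robot-navigator."""
    name: str
    available: List[Modality]


@dataclass
class TimedRecord:
    """One time-stamped record in a session stream (dialog turn, frame, or scan)."""
    timestamp: float   # seconds from session start, on a shared clock
    stream: str        # e.g. "dialog", "video", "lidar"
    payload: str       # utterance text or a path to a media file


@dataclass
class Session:
    """A recorded C/Rn task session, replayable in time order."""
    condition: BandwidthCondition
    records: List[TimedRecord] = field(default_factory=list)

    def replay(self, start: float = 0.0, end: float = float("inf")) -> Iterator[TimedRecord]:
        """Yield records from all streams in timestamp order within a time window."""
        for rec in sorted(self.records, key=lambda r: r.timestamp):
            if start <= rec.timestamp <= end:
                yield rec


# Example: a scenario in which Rn receives only sensor readings (no imagery).
low_bandwidth = BandwidthCondition("sensor-only", [Modality.SENSOR])
session = Session(condition=low_bandwidth)
session.records.append(TimedRecord(12.4, "dialog", "C: move toward the doorway"))
session.records.append(TimedRecord(13.1, "lidar", "scans/scan_000481.pcd"))
for rec in session.replay(10.0, 20.0):
    print(rec.timestamp, rec.stream, rec.payload)
```

A representation along these lines would let dialog turns be inspected alongside whatever imagery or LIDAR Rn actually received under a given bandwidth condition, which is the kind of replay-based assessment the abstract describes.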
Summers-Stay, D., Cassidy, T., & Voss, C. R. (2014). Joint Navigation in Commander/Robot Teams: Dialog & Task Performance When Vision is Bandwidth-Limited. In V and L Net 2014 - 3rd Annual Meeting of the EPSRC Network on Vision and Language and 1st Technical Meeting of the European Network on Integrating Vision and Language, A Workshop of the 25th International Conference on Computational Linguistics, COLING 2014 - Proceedings (pp. 9–16). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/w14-5402