Abstract
Group meetings can suffer from serious problems that undermine performance, including bias, “groupthink”, fear of speaking, and unfocused discussion. To better understand these issues, propose interventions, and thus improve team performance, we need to study human dynamics in group meetings. However, this process currently depends heavily on manual coding and video cameras. Manual coding is tedious, inaccurate, and subjective, while active video cameras can affect the natural behavior of meeting participants. Here, we present a smart meeting room that combines microphones and unobtrusive ceiling-mounted Time-of-Flight (ToF) sensors to understand group dynamics in team meetings. We automatically process the multimodal sensor outputs with signal, image, and natural language processing algorithms to estimate participant head pose, visual focus of attention (VFOA), non-verbal speech patterns, and discussion content. We derive metrics from these automatic estimates and correlate them with user-reported rankings of emergent group leaders and major contributors to produce accurate predictors. We validate our algorithms and report results on a new dataset of lunar survival tasks performed by 36 individuals across 10 groups, collected in the multimodal-sensor-enabled smart room.
Citation
Bhattacharya, I., Zhang, T., Ji, H., Foley, M., Ku, C., Riedl, C., … Welles, B. F. (2018). A multimodal-sensor-enabled room for unobtrusive group meeting analysis. In ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction (pp. 347–355). Association for Computing Machinery, Inc. https://doi.org/10.1145/3242969.3243022