Gesture-based recognition systems are susceptible to input recognition errors and user errors, both of which negatively affect the user experience and can be frustrating to correct. Prior work has suggested that user gaze patterns following an input event could be used to detect input recognition errors and subsequently improve interaction. However, to be useful, error detection systems would need to detect various types of high-cost errors. Furthermore, to build a reliable detection model, gaze behavior following these errors must manifest consistently across different tasks. Using data analysis and machine learning models, this research examined gaze dynamics following input events in virtual reality (VR). Across three distinct point-and-select tasks, we found differences in user gaze patterns following three input events: correctly recognized input actions, input recognition errors, and user errors. These differences were consistent across tasks, across selection versus deselection actions, and across naturally occurring versus experimentally injected input recognition errors. A multi-class deep neural network discriminated between these three input events using only gaze dynamics, achieving an AUC-ROC-OVR score of 0.78. Together, these results demonstrate the utility of gaze for detecting interaction errors and have implications for the design of intelligent systems that can assist with adaptive error recovery.
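The classification setup described above can be illustrated with a short sketch. The snippet below is not the authors' model or data pipeline; it is a minimal stand-in, assuming generic per-event gaze-dynamics feature vectors and a small feed-forward classifier, that shows how a three-class event classifier (correct input, recognition error, user error) would be scored with the one-vs-rest AUC-ROC metric the abstract reports. All names, dimensions, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a small multi-class
# classifier over per-event gaze-dynamics features, evaluated with
# one-vs-rest AUC-ROC as in the abstract. Data and feature layout are
# illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per input event, columns are gaze-dynamics
# features (e.g., fixation durations, saccade amplitudes) in a fixed window
# after the event. Labels: 0 = correct input, 1 = input recognition error,
# 2 = user error.
n_events, n_features = 600, 16
X = rng.normal(size=(n_events, n_features))
y = rng.integers(0, 3, size=n_events)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# A small feed-forward network stands in for the paper's deep model.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# One-vs-rest AUC-ROC over the three event classes (the abstract's
# AUC-ROC-OVR metric). On this random noise it sits near the 0.5 chance
# level; the paper reports 0.78 on real gaze data.
scores = clf.predict_proba(X_test)
auc_ovr = roc_auc_score(y_test, scores, multi_class="ovr")
print(f"AUC-ROC (OVR): {auc_ovr:.2f}")
```

The same scoring call applies regardless of the underlying model, so a richer architecture could be swapped in for the `MLPClassifier` without changing the evaluation.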
Citation
Sendhilnathan, N., Zhang, T., Lafreniere, B., Grossman, T., & Jonker, T. R. (2022). Detecting Input Recognition Errors and User Errors using Gaze Dynamics in Virtual Reality. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST '22). Association for Computing Machinery. https://doi.org/10.1145/3526113.3545628