Abstract
In a guided virtual field trip, students often need to pay attention to the correct objects in a 3D scene. Distractions or misunderstandings of a virtual agent's spatial guidance may cause students to miss critical information. We present a generalizable virtual reality (VR) avatar animation architecture that responds to a viewer's eye gaze, and we evaluate the rated effectiveness (e.g., naturalness) of the enabled agent responses. Our novel annotation-driven sequencing system modifies the playing, seeking, rewinding, and pausing of teacher recordings to produce appropriate teacher avatar behavior based on a viewer's eye-tracked visual attention. Annotations are contextual metadata that modify sequencing behavior at critical time points and can be adjusted in a timeline editor. We demonstrate the effectiveness of our architecture with a study that compares three teacher agent behavioral responses as the agent points to and explains objects on a virtual oil rig, while an in-game mobile device serves as an experimental control for two levels of distraction. Results suggest that users consider teacher agent behaviors with increased interactivity to be more appropriate, more natural, and less strange than default agent behaviors, implying that more elaborate agent behaviors can improve a student's educational VR experience. Results also provide insight into how and why a minimal response (Pause) and a more dynamic response (Respond) are perceived differently.
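The annotation-driven sequencing the abstract describes (default playback, Pause, and Respond behaviors tied to annotated time spans) can be pictured with a small sketch. The Python below is a hypothetical illustration only, not the authors' implementation: names such as GazeSequencer, Annotation, and the per-frame update() loop are assumptions, and the real system drives avatar recording playback in a VR engine with a timeline editor for authoring annotations.

```python
# Minimal sketch (assumptions, not the paper's code) of gaze-responsive
# sequencing: annotated spans on a teacher-recording timeline pause or
# rewind playback when eye-tracked gaze is off the target object.

from dataclasses import dataclass
from enum import Enum, auto


class Response(Enum):
    CONTINUE = auto()   # default: keep playing regardless of gaze
    PAUSE = auto()      # minimal response: hold until gaze returns
    RESPOND = auto()    # dynamic response: rewind and replay the guidance


@dataclass
class Annotation:
    start: float        # seconds into the teacher recording
    end: float
    target: str         # object the student should be looking at
    response: Response  # behavior when gaze misses the target


class GazeSequencer:
    def __init__(self, annotations: list[Annotation]):
        self.annotations = sorted(annotations, key=lambda a: a.start)
        self.time = 0.0
        self.paused = False

    def update(self, dt: float, gazed_object: str | None) -> None:
        """Advance playback time, modified by the active annotation."""
        active = next((a for a in self.annotations
                       if a.start <= self.time < a.end), None)
        if active is not None and gazed_object != active.target:
            if active.response is Response.PAUSE:
                self.paused = True          # freeze until attention returns
                return
            if active.response is Response.RESPOND:
                self.time = active.start    # seek back; guidance replays
                                            # once gaze lands on the target
        self.paused = False
        self.time += dt                     # normal playback
```

A driver loop would call update() once per frame with the eye tracker's currently gazed object; CONTINUE here stands in for the default agent behavior that the study compares against Pause and Respond.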
Citation
Khokhar, A., & Borst, C. (2022). Modifying Pedagogical Agent Spatial Guidance Sequences to Respond to Eye-Tracked Student Gaze in VR. In Proceedings - SUI 2022: ACM Conference on Spatial User Interaction. Association for Computing Machinery, Inc. https://doi.org/10.1145/3565970.3567697