Abstract
Predicting the user's visual attention enables a virtual reality (VR) environment to provide a context-aware and interactive user experience. Researchers have attempted to understand visual attention using eye-tracking data in a 2D plane. In this poster, we present the first 3D eye-tracking dataset for visual attention modelling in the context of a virtual museum. It comprises about 7 million records and may facilitate visual attention modelling in a 3D VR space.
Citation
Zhou, Y., Feng, T., Shuai, S., Li, X., Sun, L., & Duh, H. B. L. (2019). An eye-tracking dataset for visual attention modelling in a virtual museum context. In Proceedings - VRCAI 2019: 17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. Association for Computing Machinery, Inc. https://doi.org/10.1145/3359997.3365738