An eye-tracking dataset for visual attention modelling in a virtual museum context


Abstract

Predicting a user's visual attention enables a virtual reality (VR) environment to provide a context-aware and interactive user experience. Prior research has modelled visual attention from eye-tracking data on 2D planes. In this poster, we present the first 3D eye-tracking dataset for visual attention modelling in the context of a virtual museum. It comprises about 7 million records and may facilitate visual attention modelling in 3D VR space.

Citation (APA)

Zhou, Y., Feng, T., Shuai, S., Li, X., Sun, L., & Duh, H. B. L. (2019). An eye-tracking dataset for visual attention modelling in a virtual museum context. In Proceedings - VRCAI 2019: 17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. Association for Computing Machinery, Inc. https://doi.org/10.1145/3359997.3365738
