Semantic 3D gaze mapping for estimating focused objects

Abstract

Eye trackers are expected to appear in portable, daily-use devices. For human-computer interaction and quantitative analysis, however, object information must be registered and a unified coordinate system defined in advance. We therefore propose semantic 3D gaze mapping, which collects gaze information from multiple people on a unified map and detects focused objects automatically. The semantic 3D map is reconstructed using keyframe-based semantic segmentation and structure-from-motion, and the 3D point-of-gaze is computed on the same map. An experiment confirmed that the fixation time on a focused object can be calculated without any prior information.
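
The pipeline the abstract describes can be summarized in three steps: label the structure-from-motion point cloud using keyframe-based segmentation, intersect each gaze ray with the labeled map to obtain the 3D point-of-gaze, and accumulate per-object fixation time. The following sketch illustrates the last two steps under simplifying assumptions (gaze rays are already expressed in the map's coordinate frame, and map points already carry semantic labels from the segmented keyframes); all function and variable names are illustrative, not the authors' implementation.

import numpy as np

def estimate_pog(origin, direction, points, labels, max_perp_dist=0.05):
    """Return the semantic label at the 3D point-of-gaze.

    Assumed inputs: `points` is an (N, 3) array of map points, `labels` an
    (N,) array of their semantic labels, `origin`/`direction` a gaze ray in
    map coordinates. Picks the labeled point closest to the ray (within
    max_perp_dist meters) that lies nearest the eye along the ray.
    """
    d = direction / np.linalg.norm(direction)
    v = points - origin                                 # eye-to-point vectors
    t = v @ d                                           # distance along the ray
    perp = np.linalg.norm(v - np.outer(t, d), axis=1)   # distance off the ray
    hits = (t > 0) & (perp < max_perp_dist)
    if not hits.any():
        return None                                     # gaze misses the map
    return labels[hits][np.argmin(t[hits])]

def fixation_times(gaze_stream, points, labels, frame_dt=1 / 30):
    """Accumulate per-object fixation time over a sequence of gaze rays.

    `gaze_stream` yields (origin, direction) pairs, one per video frame;
    frame_dt is the assumed frame period in seconds.
    """
    times = {}
    for origin, direction in gaze_stream:
        obj = estimate_pog(origin, direction, points, labels)
        if obj is not None:
            times[obj] = times.get(obj, 0.0) + frame_dt
    return times

A thresholded ray-to-point distance is used here only as a simple stand-in for computing the point-of-gaze on the reconstructed map; the paper itself should be consulted for the authors' actual method.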

Citation

Matsumoto, R., & Takemura, K. (2019). Semantic 3D gaze mapping for estimating focused objects. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2019. Association for Computing Machinery, Inc. https://doi.org/10.1145/3338286.3344396
