Exploitation of gaze data for photo region labeling in an immersive environment

Abstract

Metadata describing the content of photos are of high importance for applications like image search, and as part of training sets for object detection algorithms. In this work, we apply tags to image regions for a more detailed description of the photo semantics. This region labeling is performed without additional effort from the user, solely by analyzing eye tracking data recorded while users play a gaze-controlled game. In the game EyeGrab, users classify and rate photos falling down the screen. The photos are classified according to a given category under time pressure. The game has been evaluated in a study with 54 subjects. The results show that it is possible to assign the given categories to image regions with a precision of up to 61%. This demonstrates that region labeling in an immersive environment like EyeGrab performs almost as well as a previous, much more controlled classification experiment.
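The abstract does not detail how gaze data is mapped onto image regions. The following Python sketch illustrates one plausible approach, assuming rectangular region annotations and fixations given as (x, y, duration) triples: the Region class, the label_region helper, and the duration-weighted scoring are hypothetical illustrations, not the paper's actual method.

# Hypothetical sketch: tag the image region that attracts the most gaze.
# Region geometry, fixation format, and duration weighting are assumptions.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        # Axis-aligned bounding-box containment test.
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def label_region(fixations, regions, category):
    """fixations: list of (x, y, duration_ms) triples.
    Returns (region_name, category) for the most-fixated region."""
    scores = {r.name: 0.0 for r in regions}
    for x, y, dur in fixations:
        for r in regions:
            if r.contains(x, y):
                scores[r.name] += dur  # weight fixations by their duration
    best = max(scores, key=scores.get)
    return best, category

# Toy usage: two annotated regions, three recorded fixations.
regions = [Region("dog", 10, 10, 120, 90), Region("tree", 130, 5, 220, 160)]
fixations = [(50, 40, 300), (60, 55, 250), (150, 80, 120)]
print(label_region(fixations, regions, "animal"))  # ('dog', 'animal')

Weighting by fixation duration rather than raw fixation count is one common heuristic in gaze analysis; whichever measure is used, the category shown during the game is then attached to the winning region as its label.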

Citation (APA)

Walber, T., Scherp, A., & Staab, S. (2014). Exploitation of gaze data for photo region labeling in an immersive environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8325 LNCS, pp. 424–435). https://doi.org/10.1007/978-3-319-04114-8_36
