Training object class detectors from eye tracking data

Abstract

Training an object class detector typically requires a large set of images annotated with bounding-boxes, which is expensive and time consuming to create. We propose a novel approach to annotating object locations which can substantially reduce annotation time. We first track the eye movements of annotators instructed to find the object, and then propose a technique for deriving object bounding-boxes from these fixations. To validate our idea, we collected eye tracking data for the trainval part of 10 object classes of Pascal VOC 2012 (6,270 images, 5 observers). Our technique correctly produces bounding-boxes in 50% of the images, while reducing the total annotation time by a factor of 6.8x compared to drawing bounding-boxes. Any standard object class detector can be trained on the bounding-boxes predicted by our model. Our large scale eye tracking dataset is available at groups.inf.ed.ac.uk/calvin/eyetrackdataset/. © 2014 Springer International Publishing.
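The sketch below is only an illustrative baseline, not the fixation-to-bounding-box model described in the paper: it derives a rough box for one image by enclosing the central mass of fixation points, trimming a fraction of outlying fixations on each side. The function name, the `trim` parameter, and the toy data are assumptions introduced here for illustration.

```python
# Illustrative sketch (assumed baseline, not the paper's learned model):
# derive a rough bounding-box from eye-fixation points by enclosing the
# central fixations, discarding a fraction of outliers on each side.
import numpy as np


def bbox_from_fixations(fixations, trim=0.1):
    """Return (xmin, ymin, xmax, ymax) enclosing the central fixations.

    `trim` drops a fraction of the most outlying fixations per side,
    a crude stand-in for a learned fixation-to-box prediction.
    """
    pts = np.asarray(fixations, dtype=float)   # shape (n_fixations, 2)
    lo = np.quantile(pts, trim, axis=0)        # robust lower corner
    hi = np.quantile(pts, 1.0 - trim, axis=0)  # robust upper corner
    return float(lo[0]), float(lo[1]), float(hi[0]), float(hi[1])


if __name__ == "__main__":
    # Toy example: fixations concentrated on an object around (200, 150).
    rng = np.random.default_rng(0)
    fixations = rng.normal(loc=(200, 150), scale=(30, 20), size=(25, 2))
    print(bbox_from_fixations(fixations))
```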

Citation (APA)

Papadopoulos, D. P., Clarke, A. D. F., Keller, F., & Ferrari, V. (2014). Training object class detectors from eye tracking data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8693 LNCS, pp. 361–376). Springer Verlag. https://doi.org/10.1007/978-3-319-10602-1_24
