This study explores the role that vision plays in sequential object interactions. We used a head-mounted eye tracker and upper-limb motion capture to quantify visual behavior while participants performed two standardized functional tasks. By recording eye and motion tracking simultaneously, we precisely segmented participants' visual data using the movement data, yielding a consistent and highly functionally resolved data set of real-world object-interaction tasks. Our results show that participants spend nearly the full duration of a trial fixating on objects relevant to the task, little time fixating on their own hand when reaching toward an object, and slightly more time, although still very little, fixating on the object in their hand when transporting it. A consistent spatial and temporal pattern of fixations was found across participants. In brief, participants fixate an object to be picked up at least half a second before their hand arrives at it and stay fixated on the object until they begin to transport it, at which point they shift their fixation directly to the drop-off location of the object, where they stay fixated until the object is successfully released. This pattern provides additional evidence of a common system for the integration of vision and object interaction in humans, and is consistent with theoretical frameworks hypothesizing the distribution of attention to future action targets as part of eye- and hand-movement preparation. Our results thus aid the understanding of visual attention allocation during the planning of object interactions both inside and outside the field of view.
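To make the segmentation approach concrete, the sketch below illustrates one way synchronized gaze and movement data could be combined: gaze samples are partitioned by the movement phase active at each timestamp, and fixation proportions are computed per phase. This is a minimal illustration, not the authors' actual pipeline; all field names, phase labels, and the example data are assumptions introduced here.

```python
# Minimal sketch (not the paper's pipeline): segmenting synchronized
# eye-tracking samples by movement phases derived from motion capture.
# All field names, phase labels, and example values are illustrative.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float        # timestamp (s) on a clock shared with motion capture
    target: str     # fixated object label, e.g. "cup", "hand", "drop_off"

@dataclass
class MovementPhase:
    label: str      # e.g. "reach", "transport"
    t_start: float  # phase onset (s)
    t_end: float    # phase offset (s)

def segment_gaze_by_phase(gaze, phases):
    """Assign each gaze sample to the movement phase active at its timestamp."""
    segments = {p.label: [] for p in phases}
    for sample in gaze:
        for phase in phases:
            if phase.t_start <= sample.t < phase.t_end:
                segments[phase.label].append(sample)
                break
    return segments

def fixation_proportion(samples, target):
    """Fraction of a segment's samples spent fixating a given target."""
    if not samples:
        return 0.0
    return sum(s.target == target for s in samples) / len(samples)

# Hypothetical usage: how much of the transport phase is spent fixating
# the in-hand object versus its drop-off location?
gaze = [GazeSample(0.1, "cup"), GazeSample(0.6, "cup"),
        GazeSample(1.1, "drop_off"), GazeSample(1.6, "drop_off")]
phases = [MovementPhase("reach", 0.0, 1.0), MovementPhase("transport", 1.0, 2.0)]
segments = segment_gaze_by_phase(gaze, phases)
print(fixation_proportion(segments["transport"], "cup"))       # 0.0
print(fixation_proportion(segments["transport"], "drop_off"))  # 1.0
```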
Lavoie, E. B., Valevicius, A. M., Boser, Q. A., Kovic, O., Vette, A. H., Pilarski, P. M., … Chapman, C. S. (2018). Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks. Journal of Vision, 18(6), 1–20. https://doi.org/10.1167/18.6.18