Gaze Inputs for Targeting: The Eyes Have It, Not With a Cursor

9 citations · 11 Mendeley readers

This article is free to access.

Abstract

Can eye tracking enable VR users to target and select elements on par with, or better than, controller- or head-based targeting? We explored visual feedback methods (none, cursor, outline, and resize) for gaze targeting with a button press for selection, using an ecologically valid representation of commercially available user interfaces: a body-locked, grid-based design. Forty participants interacted with a 5x5 grid of square elements that subtended 3, 6, or 9 degrees of visual angle. If a participant's gaze fell outside the grid boundary at the moment of the button press, the last targeted element was selected; no other performance-enhancing algorithms were employed. We also assessed signal quality requirements by degrading accuracy with a fixed 1.5-degree offset. Participants completed 36 blocks, in each of which they targeted and selected 15 successive elements. We found that gaze targeting, with appropriate feedback methods and a button press, can perform on par with or better than the controller in cases intended to replicate targeting and selection in world- or body-locked paradigms in AR/VR. We anticipate that design improvements or algorithmic mitigations could improve this significantly.
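As a rough illustration only (not the authors' implementation, and all function and parameter names here are hypothetical), the two mechanisms the abstract describes — falling back to the last targeted element when gaze leaves the grid at button press, and degrading signal quality with a fixed angular offset — might be sketched as:

```python
import math

def select_element(gaze_deg, grid_origin_deg, cell_size_deg, n=5, last_target=None):
    """Map a 2D gaze point (in degrees of visual angle, grid-plane coordinates)
    to a (row, col) cell of an n x n grid. If the gaze falls outside the grid
    boundary at button press, fall back to the last targeted element."""
    col = int((gaze_deg[0] - grid_origin_deg[0]) // cell_size_deg)
    row = int((gaze_deg[1] - grid_origin_deg[1]) // cell_size_deg)
    if 0 <= row < n and 0 <= col < n:
        return (row, col)
    return last_target  # out-of-bounds: reuse the last targeted element

def degrade(gaze_deg, offset_deg=1.5, direction_rad=0.0):
    """Simulate reduced tracker accuracy by adding a fixed 1.5-degree offset
    in a chosen direction to the reported gaze point."""
    return (gaze_deg[0] + offset_deg * math.cos(direction_rad),
            gaze_deg[1] + offset_deg * math.sin(direction_rad))
```

For example, with a 5x5 grid of 3-degree cells anchored at the origin, a gaze point of (7.5, 7.5) lands in the center cell, while a gaze point left of the grid returns whatever element was targeted last.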

Citation (APA)
Fernandes, A. S., Schütz, I., Murdison, T. S., & Proulx, M. J. (2025). Gaze Inputs for Targeting: The Eyes Have It, Not With a Cursor. International Journal of Human-Computer Interaction, 41(19), 12251–12269. https://doi.org/10.1080/10447318.2025.2453966
