Head-mounted displays (HMDs) with integrated eye trackers have opened up a new realm for gaze-contingent rendering. Accurate estimation of gaze depth is essential when modeling the optical capabilities of the eye. Most recently, multifocal displays have been gaining importance, requiring focus estimates to control displays or lenses. Deriving the gaze depth solely by sampling the scene’s depth at the point-of-regard fails for complex or thin objects, as eye tracking suffers from inaccuracies. Gaze-depth measures based only on the eye’s vergence provide an accurate depth estimate for the first meter. In this work, we combine vergence measures and multiple depth measures into feature sets. This data is used to train a regression model that delivers improved estimates. We present a study showing that using multiple features allows for an accurate estimation of the focused depth (MSE < 0.1 m) over a wide range (the first 6 m).
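The abstract does not specify the feature layout or the choice of regressor, so the following Python sketch is only one plausible reading of the described pipeline: a vergence-based depth estimate is stacked with several scene-depth samples around the point-of-regard into a feature vector, and a generic regressor is fit on recorded data. The interpupillary distance, the vergence-depth formula for symmetric fixation, the random-forest regressor, and all variable names are assumptions, not details taken from the paper.

```python
# Hypothetical sketch: combine a vergence-based depth estimate with several
# scene-depth samples near the point-of-regard and fit a regression model.
# Feature layout, regressor choice, and constants are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

IPD = 0.064  # assumed average interpupillary distance in meters


def vergence_depth(vergence_angle_rad):
    """Depth from the vergence angle between the two gaze rays,
    assuming symmetric fixation: d = IPD / (2 * tan(theta / 2))."""
    return IPD / (2.0 * np.tan(vergence_angle_rad / 2.0))


def build_features(vergence_angle_rad, depth_samples):
    """Stack the vergence estimate with depth-buffer samples around the point-of-regard."""
    return np.concatenate(([vergence_depth(vergence_angle_rad)], depth_samples))


# Placeholder training data: one feature vector per recorded gaze sample (X)
# and the ground-truth focus depth in meters (y).
rng = np.random.default_rng(0)
angles = rng.uniform(0.01, 0.6, size=200)          # vergence angles (rad), placeholder
samples = rng.uniform(0.2, 6.0, size=(200, 8))     # sampled scene depths (m), placeholder
X = np.array([build_features(a, s) for a, s in zip(angles, samples)])
y = rng.uniform(0.2, 6.0, size=200)                # ground-truth focus depth (m), placeholder

model = RandomForestRegressor(n_estimators=100).fit(X, y)
predicted_depth = model.predict(X[:1])             # estimated focus depth in meters
```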
Weier, M., Roth, T., Hinkenjann, A., & Slusallek, P. (2018). Predicting the gaze depth in head-mounted displays using multiple feature regression. In Eye Tracking Research and Applications Symposium (ETRA). Association for Computing Machinery. https://doi.org/10.1145/3204493.3204547