LiteGaze: Neural architecture search for efficient gaze estimation


Abstract

Gaze estimation plays a critical role in human-centered vision applications such as human-computer interaction and virtual reality. Although deep convolutional neural networks have made significant progress on automatic gaze estimation, deep-learning-based gaze estimation models remain difficult to deploy directly across different edge devices because of their high computational cost and the devices' varied resource constraints. This work proposes LiteGaze, a deep learning framework that learns architectures for efficient gaze estimation via neural architecture search (NAS). Inspired by the once-for-all model (Cai et al., 2020), it decouples model training and architecture search into two stages: first, a supernet is trained to support diverse architectural settings; then, specialized sub-networks are selected from the trained supernet under different efficiency constraints. Extensive experiments on two gaze estimation datasets demonstrate the superiority of the proposed method over previous works, advancing real-time gaze estimation on edge devices.
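The two-stage scheme the abstract describes (train a weight-sharing supernet once, then search for sub-networks that satisfy a given efficiency budget) can be illustrated with a minimal sketch. This is not the authors' implementation: the tiny elastic-width MLP, the WIDTH_CHOICES search space, and the parameter-count budget below are illustrative assumptions standing in for the full once-for-all CNN supernet and device-specific latency constraints.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

WIDTH_CHOICES = [16, 32, 64]  # assumed elastic widths for the hidden layer

class SuperNet(nn.Module):
    """Weight-sharing supernet: one oversized hidden layer, sliced per sampled width."""
    def __init__(self, in_dim=6, out_dim=2, max_width=max(WIDTH_CHOICES)):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, max_width)
        self.fc2 = nn.Linear(max_width, out_dim)

    def forward(self, x, width):
        # Slice the shared weights down to the sampled sub-network width.
        h = F.relu(F.linear(x, self.fc1.weight[:width], self.fc1.bias[:width]))
        return F.linear(h, self.fc2.weight[:, :width], self.fc2.bias)

# Stage 1: train the supernet once, sampling a random sub-network per step
# so every architectural setting learns to work with the shared weights.
net = SuperNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(32, 6)   # stand-in for eye-image features
    y = torch.randn(32, 2)   # stand-in for (yaw, pitch) gaze labels
    width = random.choice(WIDTH_CHOICES)
    loss = F.mse_loss(net(x, width), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: without retraining, pick the most accurate sub-network that fits
# an efficiency budget (a parameter count stands in for the device constraint).
def n_params(width):
    return (6 * width + width) + (width * 2 + 2)  # fc1 + fc2 weights and biases

budget = 300  # hypothetical per-device budget
candidates = [w for w in WIDTH_CHOICES if n_params(w) <= budget]
x_val, y_val = torch.randn(256, 6), torch.randn(256, 2)
with torch.no_grad():
    best = min(candidates, key=lambda w: F.mse_loss(net(x_val, w), y_val).item())
print(f"selected width {best} ({n_params(best)} params) under a {budget}-param budget")
```

The appeal of the decoupling is visible even in this toy: the costly training loop runs once, while the selection step is cheap and can be repeated for every new device budget.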

Cite (APA)

Guo, X., Wu, Y., Miao, J., & Chen, Y. (2023). LiteGaze: Neural architecture search for efficient gaze estimation. PLoS ONE, 18(5), e0284814. https://doi.org/10.1371/journal.pone.0284814
