GazeGraph: Graph-based few-shot cognitive context sensing from human visual behavior

Abstract

In this work, we present GazeGraph, a system that leverages human gazes as the sensing modality for cognitive context sensing. GazeGraph is a generalized framework that is compatible with different eye trackers and supports various gaze-based sensing applications. It ensures high sensing performance in the presence of heterogeneity of human visual behavior, and enables quick system adaptation to unseen sensing scenarios with few-shot instances. To achieve these capabilities, we introduce the spatial-temporal gaze graphs and the deep learning-based representation learning method to extract powerful and generalized features from the eye movements for context sensing. Furthermore, we develop a few-shot gaze graph learning module that adapts the 'learning to learn' concept from meta-learning to enable quick system adaptation in a data-efficient manner. Our evaluation demonstrates that GazeGraph outperforms the existing solutions in recognition accuracy by 45% on average over three datasets. Moreover, in few-shot learning scenarios, GazeGraph outperforms the transfer learning-based approach by 19% to 30%, while reducing the system adaptation time by 80%.
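To make the gaze-graph idea concrete, below is a minimal, hypothetical sketch (in Python/NumPy) of one way a sequence of gaze fixations could be turned into a spatial-temporal graph: each fixation becomes a node with (x, y, duration) features, and edge weights combine spatial proximity with temporal adjacency. The function name, the Gaussian kernel, and the equal weighting of the two terms are illustrative assumptions, not the construction actually used by GazeGraph.

```python
import numpy as np

def build_gaze_graph(fixations, sigma=0.1):
    """Build a simple spatial-temporal graph from a gaze fixation sequence.

    fixations: sequence of (x, y, duration) tuples, with (x, y) as
    normalized gaze coordinates. Nodes are fixations; edge weights mix
    spatial proximity and temporal adjacency.
    (Illustrative only; the paper defines its own graph construction.)
    """
    fixations = np.asarray(fixations, dtype=float)
    n = len(fixations)
    xy = fixations[:, :2]

    # Spatial affinity: Gaussian kernel on pairwise gaze-point distances.
    dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    spatial = np.exp(-(dists ** 2) / (2 * sigma ** 2))

    # Temporal affinity: link fixations that follow each other in time.
    temporal = np.zeros((n, n))
    idx = np.arange(n - 1)
    temporal[idx, idx + 1] = 1.0
    temporal[idx + 1, idx] = 1.0

    # Combined adjacency; node features are the (x, y, duration) rows.
    adjacency = 0.5 * spatial + 0.5 * temporal
    np.fill_diagonal(adjacency, 0.0)
    return fixations, adjacency

# Example: five synthetic fixations (x, y, duration in seconds).
if __name__ == "__main__":
    fix = [(0.10, 0.20, 0.25), (0.15, 0.22, 0.30), (0.60, 0.55, 0.18),
           (0.62, 0.50, 0.40), (0.12, 0.21, 0.22)]
    nodes, adj = build_gaze_graph(fix)
    print(adj.round(2))
```

In the paper, graphs of this kind are then fed to a deep representation-learning model, and the few-shot module meta-learns how to adapt that model to unseen sensing scenarios from only a handful of labeled instances.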

Citation (APA)

Lan, G., Heit, B., Scargill, T., & Gorlatova, M. (2020). GazeGraph: Graph-based few-shot cognitive context sensing from human visual behavior. In SenSys 2020 - Proceedings of the 2020 18th ACM Conference on Embedded Networked Sensor Systems (pp. 422–435). Association for Computing Machinery, Inc. https://doi.org/10.1145/3384419.3430774
