An ever-growing body of work has demonstrated the rich information content available in eye movements for user modelling, e.g. for predicting users' activities, cognitive processes, or even personality traits. We show that state-of-the-art classifiers for eye-based user modelling are highly vulnerable to adversarial examples: small artificial perturbations of the gaze input that can dramatically change a classifier's predictions. On the sample task of eye-based document type recognition, we study the success of adversarial attacks with and without targeting the attack to a specific class.
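The abstract does not specify how the perturbations are computed, so the following is only an illustrative sketch of one common way such attacks are mounted: a fast-gradient-sign (FGSM-style) perturbation applied to a vector of gaze features fed to a differentiable classifier. The model, feature representation, and epsilon value are all assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y_true, epsilon=0.01, y_target=None):
    """Return an adversarially perturbed copy of the gaze feature batch x.

    Untargeted (y_target is None): step in the direction that increases
    the loss of the true class. Targeted: step in the direction that
    decreases the loss of an attacker-chosen class.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    if y_target is None:
        loss = F.cross_entropy(logits, y_true)    # push away from true label
        sign = 1.0
    else:
        loss = F.cross_entropy(logits, y_target)  # pull towards target label
        sign = -1.0
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + sign * epsilon * x_adv.grad.sign()
    return x_adv.detach()
```

In an untargeted attack the adversary only needs the prediction to change, while a targeted attack additionally requires the classifier to output a specific (e.g. less sensitive) class, which is generally harder to achieve with the same perturbation budget.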
CITATION STYLE
Hagestedt, I., Backes, M., & Bulling, A. (2020). Adversarial Attacks on Classifiers for Eye-based User Modelling. In Eye Tracking Research and Applications Symposium (ETRA). Association for Computing Machinery. https://doi.org/10.1145/3379157.3390511