Adversarial Attacks on Classifiers for Eye-based User Modelling

Abstract

An ever-growing body of work has demonstrated the rich information content available in eye movements for user modelling, e.g. for predicting users' activities, cognitive processes, or even personality traits. We show that state-of-the-art classifiers for eye-based user modelling are highly vulnerable to adversarial examples: small artificial perturbations of the gaze input that can dramatically change a classifier's predictions. On the example task of eye-based document type recognition, we study the success of adversarial attacks both with and without targeting the attack to a specific class.
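
To make the attack setting concrete, the sketch below shows a standard FGSM-style perturbation (Goodfellow et al.) applied to a toy linear softmax classifier over a fixed-length gaze feature vector, in both untargeted and targeted variants. The feature layout, class count, model, and perturbation budget are illustrative assumptions for this sketch and are not the classifier or attack configuration evaluated in the paper.

```python
# Hypothetical FGSM-style attack on a toy linear softmax classifier over a
# gaze feature vector (e.g. fixation/saccade statistics). All model details
# here are assumptions for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 20, 3            # assumed: 20 gaze features, 3 document types
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

def loss_grad(x, label):
    """Gradient of the cross-entropy loss w.r.t. the input x for a given label."""
    p = predict(x)
    onehot = np.eye(n_classes)[label]
    return W.T @ (p - onehot)            # d(loss)/dx for a linear softmax model

x = rng.normal(size=n_features)          # a synthetic gaze feature vector
true_label = int(np.argmax(predict(x)))
eps = 0.1                                # perturbation budget (assumed)

# Untargeted attack: step in the direction that increases loss on the true label.
x_untargeted = x + eps * np.sign(loss_grad(x, true_label))

# Targeted attack: step in the direction that decreases loss on a chosen target class.
target_label = (true_label + 1) % n_classes
x_targeted = x - eps * np.sign(loss_grad(x, target_label))

print("clean prediction:     ", np.argmax(predict(x)))
print("untargeted prediction:", np.argmax(predict(x_untargeted)))
print("targeted prediction:  ", np.argmax(predict(x_targeted)))
```

In practice the same gradient-sign idea applies to deeper gaze classifiers, with the gradient obtained by backpropagation through the trained model rather than in closed form as in this linear toy example.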

Citation (APA)

Hagestedt, I., Backes, M., & Bulling, A. (2020). Adversarial Attacks on Classifiers for Eye-based User Modelling. In Eye Tracking Research and Applications Symposium (ETRA). Association for Computing Machinery. https://doi.org/10.1145/3379157.3390511
