Abstract
Fast and accurate human intention prediction can significantly advance the performance of assistive devices for patients with limited motor or communication abilities. Among available modalities, eye movement can be valuable for inferring the user's intention, as it can be tracked non-invasively. However, the limited existing studies in this domain do not achieve the level of accuracy required for the reliable operation of assistive systems. Taking a data-driven approach, this paper presents a new framework that utilizes the spatial and temporal patterns of eye movement, along with deep learning, to predict the user's intention. In the proposed framework, the spatial patterns of gaze are identified by clustering the gaze points based on their density over displayed images to find regions of interest (ROIs). The temporal patterns of gaze are identified via hidden Markov models (HMMs) to find the transition sequence between ROIs. Transfer learning is utilized to identify the objects of interest in the displayed images. Finally, models are developed to predict the user's intention both after completion of the task and at its early stages. The proposed framework is evaluated in an experiment involving the prediction of intended daily-life activities. Results indicate that an average classification accuracy of 97.42% is achieved, considerably higher than that reported in existing gaze-based intention prediction studies.
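To make the first two stages of the pipeline concrete, the sketch below shows one plausible realization in Python: density-based clustering of 2-D gaze points into ROIs, followed by an empirical ROI-to-ROI transition matrix over the resulting visit sequence. This is a minimal illustration, not the authors' implementation: DBSCAN stands in for the paper's density-based clustering, and the raw transition matrix is a simplified proxy for the HMM-based sequence modeling; the parameter values (eps, min_samples) and the synthetic gaze data are assumptions for demonstration only.

import numpy as np
from sklearn.cluster import DBSCAN

def find_rois(gaze_xy, eps=40.0, min_samples=20):
    """Cluster 2-D gaze points by density; each cluster is a candidate ROI.
    Points labeled -1 by DBSCAN are treated as noise (saccades, stray samples).
    eps/min_samples are illustrative values, not from the paper."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(gaze_xy)

def roi_transition_matrix(labels, n_rois):
    """Empirical transition probabilities between consecutive ROI visits.
    Repeated samples within the same ROI are collapsed into a single visit,
    so the matrix reflects the scanpath between ROIs, not dwell time."""
    visits = [l for l in labels if l >= 0]
    visits = [v for i, v in enumerate(visits) if i == 0 or v != visits[i - 1]]
    counts = np.zeros((n_rois, n_rois))
    for a, b in zip(visits[:-1], visits[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Synthetic example: gaze alternating between two ROIs (A -> B -> A scanpath).
rng = np.random.default_rng(0)
roi_a = rng.normal([200, 300], 15, size=(60, 2))
roi_b = rng.normal([600, 350], 15, size=(60, 2))
gaze = np.vstack([roi_a[:30], roi_b, roi_a[30:]])
labels = find_rois(gaze)
print(roi_transition_matrix(labels, labels.max() + 1))

In the full framework, the ROI visit sequences produced this way would feed the HMMs for temporal modeling, while a pretrained network (via transfer learning) labels the object each ROI covers; the resulting features drive the final intention classifier.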
Citation
Koochaki, F., & Najafizadeh, L. (2021). A Data-Driven Framework for Intention Prediction via Eye Movement with Applications to Assistive Systems. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 974–984. https://doi.org/10.1109/TNSRE.2021.3083815