The number of patients with diseases that cause severe movement disabilities is noticeably increasing. These disabilities leave patients unable to carry out their daily activities or interact with their external environment. However, human-computer interfaces (HCI) give such patients new hope of interacting once again. An HCI enables these patients to communicate with their environment by recognizing the movements of their eyes. Eye movements are recorded by an electro-oculogram (EOG) through electrodes placed vertically and horizontally around the eyes. In this paper, vertical and horizontal EOG signals were analyzed to detect six eye movements (up, down, right, left, double blinking, and center). Three deep learning models, namely a convolutional neural network (CNN), visual geometry group (VGG), and Inception, were examined on the filtered EOG signals. The experimental results reveal the superiority of the Inception model, which provided the best average accuracy of 96.4%. Accordingly, a writing system based on the detected movements is presented.
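As an illustration of the classification step described above, the following is a minimal sketch of a 1-D CNN that maps two-channel (horizontal and vertical) filtered EOG segments to the six movement classes. It is not the authors' architecture; the segment length, sampling assumptions, layer sizes, and training settings are illustrative placeholders.

```python
# Minimal sketch of a 1-D CNN for six-class EOG movement classification.
# All shapes and hyperparameters are assumptions, not the paper's settings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 6      # up, down, right, left, double blink, center
SEGMENT_LEN = 250    # assumed number of samples per EOG segment
NUM_CHANNELS = 2     # horizontal and vertical EOG channels

def build_eog_cnn():
    """Build a small 1-D CNN classifier for EOG segments."""
    model = keras.Sequential([
        layers.Input(shape=(SEGMENT_LEN, NUM_CHANNELS)),
        layers.Conv1D(32, kernel_size=7, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random placeholder data standing in for filtered EOG segments.
    x = np.random.randn(100, SEGMENT_LEN, NUM_CHANNELS).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=100)
    model = build_eog_cnn()
    model.fit(x, y, epochs=2, batch_size=16, verbose=0)
    print(model.predict(x[:1]).argmax(axis=-1))  # predicted movement class
```

A VGG- or Inception-style variant, as compared in the paper, would replace the two convolutional blocks with deeper stacked or multi-branch convolutions while keeping the same six-way softmax output.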