Head-computer interface: A multimodal approach to navigate through real and virtual worlds


Abstract

This paper presents a novel approach to multimodal interaction that combines the user's mental activity (thoughts and emotions), facial expressions, and head movements. To avoid problems associated with computer vision (sensitivity to lighting changes, reliance on camera position, etc.), the proposed approach does not use optical techniques. Furthermore, to keep human communication and control smooth and to avoid other environmental artifacts, the information used is non-verbal. Head movements (rotations) are detected by a bi-axial gyroscope; facial expressions and gaze are identified by electromyography and electrooculography; emotions and thoughts are monitored by electroencephalography. To validate the proposed approach, we developed an application in which the user can navigate through a virtual world using his head; we chose Google Street View as the virtual world. The application was designed with a view to later integration with an electric wheelchair, replacing the virtual world with the real one. A first evaluation of the system is provided. © 2011 Springer-Verlag.
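To illustrate the kind of head-based control the abstract describes, the following is a minimal sketch of mapping bi-axial gyroscope readings to navigation commands. The axis conventions, dead-zone threshold, and command labels are assumptions for illustration, not the authors' actual implementation:

```python
# Hypothetical sketch: turning bi-axial gyroscope rotation angles into
# discrete navigation commands, in the spirit of the head-driven
# Street View navigation described in the paper.
# Thresholds, axis names, and command labels are assumed, not taken
# from the original system.

def head_to_command(pitch_deg: float, yaw_deg: float,
                    dead_zone: float = 10.0) -> str:
    """Map head rotation angles (degrees from a neutral pose) to a command.

    A dead zone around the neutral pose suppresses unintentional drift;
    outside it, the axis with the larger deflection determines the command.
    """
    if abs(pitch_deg) < dead_zone and abs(yaw_deg) < dead_zone:
        return "idle"
    if abs(pitch_deg) >= abs(yaw_deg):
        return "forward" if pitch_deg > 0 else "backward"
    return "turn_right" if yaw_deg > 0 else "turn_left"

if __name__ == "__main__":
    print(head_to_command(2.0, -3.0))    # small drift -> "idle"
    print(head_to_command(25.0, 5.0))    # nod forward -> "forward"
    print(head_to_command(-4.0, -30.0))  # head turned left -> "turn_left"
```

A real system would feed such commands to either the virtual-world viewer or, as the paper anticipates, an electric wheelchair controller, and would fuse them with the EMG/EOG and EEG channels for confirmation or veto.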

Citation (APA)

Carrino, F., Tscherrig, J., Mugellini, E., Abou Khaled, O., & Ingold, R. (2011). Head-computer interface: A multimodal approach to navigate through real and virtual worlds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6762 LNCS, pp. 222–230). https://doi.org/10.1007/978-3-642-21605-3_25
