We present our current research on the implementation of gaze as an efficient and usable pointing modality, supplementary to speech, for interacting with augmented objects in our daily environment or with large displays, especially immersive virtual reality environments such as reality centres and CAVEs. We also address issues relating to the use of gaze as the main interaction input modality. We have designed and developed two operational user interfaces: one provides motor-disabled users with easy gaze-based access to map applications and graphical software; the other serves to iteratively test and improve the usability of gaze-contingent displays. © Springer-Verlag Berlin Heidelberg 2007.
Citation:
Gepner, D., Simonin, J., & Carbonell, N. (2007). Gaze as a supplementary modality for interacting with ambient intelligence environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4555 LNCS, pp. 848–857). Springer Verlag. https://doi.org/10.1007/978-3-540-73281-5_93