Visual human-machine interaction


Abstract

It is envisaged that computers of the future will have smart interfaces, such as speech and vision, that will facilitate natural and easy human-machine interaction. Gestures of the face and hands could become a natural way to control the operations of a computer or a machine, such as a robot. In this paper, we present a vision-based interface that tracks a person's facial features and eye-gaze point in real time. The system robustly tracks facial features, detects tracking failures, and has an automatic mechanism for error recovery. It is insensitive to lighting changes and to occlusions or distortions of the facial features. The system is user independent and can automatically calibrate for each user. An application of this technology for driver fatigue detection and for evaluating the ergonomic design of motor vehicles has been developed. Our human-machine interface has enormous potential in other applications that control machines and processes or measure human performance. For example, product possibilities exist in assistive technology for the disabled and in video game entertainment.

Citation (APA)

Zelinsky, A. (1999). Visual human-machine interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1747, pp. 440–452). Springer Verlag. https://doi.org/10.1007/3-540-46695-9_37
