Building human-friendly robots that can interact and cooperate with humans has been an active research field in recent years. A major challenge in this field is developing robots that understand human communication modalities. However, the human face is a dynamic object with a high degree of variability in its appearance, which makes face detection a difficult problem. In this paper, we present a real-time vision-based framework that detects the human face and analyzes the face direction within the image window to enable interaction with a robot. A cascade of feature detectors trained with a boosting technique is employed. Experimental results using servo motors connected to an SD21 board and a PIC16F887A microcontroller, as well as the MIABOT Pro, have validated our approach. Our future work is to build an intelligent wheelchair whose motion can be controlled by the user's face direction. © 2011 Springer-Verlag.
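The abstract's "cascade of feature detectors trained with boosting" refers to the attentional-cascade idea: each stage is a boosted sum of cheap weak classifiers, and a window is rejected as soon as any stage's score falls below its threshold. The following is a minimal illustrative sketch of that structure, not the paper's implementation; the feature functions, thresholds, and weights are hypothetical stand-ins for trained Haar-like features.

```python
# Illustrative sketch of a boosted classifier cascade (Viola-Jones style).
# All features, thresholds, and weights below are toy values, not trained ones.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WeakClassifier:
    feature: Callable[[list], float]  # stand-in for a Haar-like feature response
    threshold: float
    polarity: int                     # +1 or -1: direction of the inequality
    alpha: float                      # boosting weight from training

    def vote(self, window) -> float:
        # Weak classifier fires when the feature response is on the
        # polarity-selected side of its threshold.
        fired = self.polarity * self.feature(window) > self.polarity * self.threshold
        return self.alpha if fired else 0.0

@dataclass
class Stage:
    weak: List[WeakClassifier]
    stage_threshold: float

    def passes(self, window) -> bool:
        # Boosted score: weighted votes of the stage's weak classifiers.
        score = sum(w.vote(window) for w in self.weak)
        return score >= self.stage_threshold

def detect_face(cascade: List[Stage], window) -> bool:
    # Attentional cascade: a window must pass every stage; most non-face
    # windows are rejected cheaply by an early stage.
    return all(stage.passes(window) for stage in cascade)

# Toy usage: "window" is a pair of region intensity sums, and the single
# feature checks that region 0 is brighter than region 1.
contrast = WeakClassifier(feature=lambda w: w[0] - w[1],
                          threshold=0.0, polarity=1, alpha=1.0)
toy_cascade = [Stage(weak=[contrast], stage_threshold=0.5)]

print(detect_face(toy_cascade, [2.0, 1.0]))  # stage passes
print(detect_face(toy_cascade, [1.0, 2.0]))  # rejected at first stage
```

In practice (e.g. OpenCV's `CascadeClassifier`), the same early-rejection structure is what makes scanning every window position and scale feasible in real time.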
CITATION STYLE
Lam, M. C., Prabuwono, A. S., Arshad, H., & Chan, C. S. (2011). A real-time vision-based framework for human-robot interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7066 LNCS, pp. 257–267). https://doi.org/10.1007/978-3-642-25191-7_25