Human-robot interaction is an interdisciplinary research area that is becoming increasingly relevant as robots enter our homes, workplaces, and schools. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction that integrates several components: visual saliency, stereo vision, face and hand detection, and gesture recognition. Visual saliency is computed using color, motion, and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex, and end-stopped cells. Hand and face detection is achieved using a linear SVM classifier. The system was tested on a child-sized robot.
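The abstract states that saliency is computed from color, motion, and disparity but does not give the fusion details here. A minimal sketch of how per-feature conspicuity maps might be fused into a single saliency map, assuming simple min-max normalization and a weighted sum (function names and equal weights are illustrative assumptions, not the paper's method):

```python
import numpy as np

def normalize_map(m):
    """Scale a feature map to [0, 1]; constant maps become all zeros."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency(color_map, motion_map, disparity_map, weights=(1.0, 1.0, 1.0)):
    """Fuse normalized color, motion, and disparity maps by weighted sum.

    Each input is a 2D array of per-pixel feature responses; the result
    is a single saliency map renormalized to [0, 1].
    """
    maps = [normalize_map(m) for m in (color_map, motion_map, disparity_map)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize_map(fused)
```

A real system would typically compute these maps at multiple scales and apply center-surround competition before fusion; the sketch above only shows the final combination step.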
Citation:
Saleiro, M., Farrajota, M., Terzić, K., Krishna, S., Rodrigues, J. M. F., & du Buf, J. M. H. (2015). Biologically inspired vision for human-robot interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9176, pp. 505–517). Springer Verlag. https://doi.org/10.1007/978-3-319-20681-3_48