In this paper, we explore how a humanoid robot equipped with two cameras can learn to improve its depth perception by itself. We propose an approach through which the robot autonomously improves its depth estimation: it tunes the parameters of its binocular vision system and refines depth perception automatically through interaction with the environment. To set these parameters, the robot uses sensory invariant driven action (SIDA): actions that differ from one another yet produce an identical sensory stimulus. These actions are generated autonomously by the robot, without external control, in order to improve depth perception, and they provide the training data needed to tune the binocular vision system. Object size invariance (OSI) is used to examine whether the current depth estimate is correct; if the estimate is reliable, the robot tunes the parameters of the binocular vision system based on OSI again. By interacting with the environment, the robot learns the relation between the size of an object and its distance from the robot. Our approach shows that action plays an important role in perception. Experimental results show that the proposed approach successfully and automatically improves the depth estimation of the humanoid robot.
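The abstract does not give the authors' formulation, but the two ingredients it names can be sketched with the standard pinhole stereo model: depth from binocular disparity (Z = fB/d), and an OSI check that the estimated depth is consistent with an object's known physical size. All constants below (focal length, baseline, sizes, tolerance) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): depth from binocular disparity,
# plus an object-size-invariance (OSI) consistency check.
# Focal length, baseline, and object sizes are illustrative assumptions.

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.07):
    """Pinhole stereo model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def osi_consistent(apparent_size_px, depth_m, known_size_m,
                   focal_px=500.0, tol=0.1):
    """Under OSI, apparent size in pixels ~ f * physical size / depth.
    Returns True when the depth estimate is consistent with the object's
    known physical size, within relative tolerance tol."""
    predicted_size_px = focal_px * known_size_m / depth_m
    return abs(apparent_size_px - predicted_size_px) <= tol * predicted_size_px

# Example: an object seen with 35 px disparity...
z = depth_from_disparity(35.0)  # -> 1.0 m with the assumed f and B
# ...whose 0.1 m physical size should appear as ~50 px at that depth.
ok = osi_consistent(apparent_size_px=50.0, depth_m=z, known_size_m=0.1)
```

In a setup like this, a reliable depth estimate (one that passes the OSI check) could be used as a supervisory signal for retuning the stereo parameters, which is the role OSI plays in the approach described above.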
Jin, Y., Rammohan, M., Lee, G., & Lee, M. (2015). Autonomous depth perception of humanoid robot using binocular vision system through sensorimotor interaction with environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9490, pp. 554–561). Springer Verlag. https://doi.org/10.1007/978-3-319-26535-3_63