SLAM and Vision-based Humanoid Navigation


Abstract

For humanoid robots to move autonomously in a complex environment, they have to perceive it, build an appropriate representation of it, localize themselves within it, and decide which motion to perform. The relationship between the environment and the robot is rather complex: some parts are obstacles to avoid, others are possible supports for locomotion, and still others are objects to manipulate. The affordances of objects and the environment may give rise to quite complex motions, ranging from bimanual manipulation to whole-body motion generation. In this chapter, we introduce tools for realizing vision-based humanoid navigation. The general structure of such a system is depicted in Fig. 1. It follows the classical perception-action loop: based on the sensor signals, various pieces of information are extracted; this information is used to localize the robot and to build a representation of the environment, a process that is the subject of the second section. Finally, a motion is planned and sent to the robot control system; the third section describes several approaches to implementing visual navigation in the context of humanoid robotics.
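As an illustration of the perception-action loop just described, the following is a minimal sketch in Python. All class and function names here (WorldModel, extract_features, localize_and_map, plan_motion, and the robot interface) are hypothetical placeholders, not an API from the chapter; a real system would plug a feature extractor, a SLAM back end, and a whole-body motion planner into the corresponding steps.

    # Minimal sketch of the perception-action loop for vision-based
    # humanoid navigation. All interfaces are hypothetical placeholders
    # standing in for real perception, SLAM, and planning components.

    from dataclasses import dataclass, field


    @dataclass
    class WorldModel:
        pose: tuple = (0.0, 0.0, 0.0)                  # estimated robot pose (x, y, yaw)
        landmarks: list = field(default_factory=list)  # mapped visual features


    def extract_features(image):
        """Extract visual features (e.g., keypoints) from a camera image."""
        return []  # placeholder: a real system would run e.g. ORB or SIFT


    def localize_and_map(world, features):
        """Update the pose estimate and environment map (the SLAM step)."""
        return world  # placeholder: a real SLAM back end goes here


    def plan_motion(world, goal):
        """Plan the next motion, e.g., a footstep or whole-body trajectory."""
        return "step_toward_goal"  # placeholder plan


    def perception_action_loop(robot, goal):
        """Run the loop: sense, extract, localize/map, plan, act."""
        world = WorldModel()
        while not robot.at(goal):
            image = robot.read_camera()                 # perception
            features = extract_features(image)          # information extraction
            world = localize_and_map(world, features)   # localization + mapping
            motion = plan_motion(world, goal)           # motion planning
            robot.execute(motion)                       # send to control system

Each iteration mirrors one pass through Fig. 1: sensor signals in, a planned motion out to the control system.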

Cite (APA)

Stasse, O. (2018). SLAM and Vision-based Humanoid Navigation. In Humanoid Robotics: A Reference (pp. 1739–1761). Springer Netherlands. https://doi.org/10.1007/978-94-007-6046-2_59
