Humanoid odometric localization integrating kinematic, inertial and visual information

Abstract

We present a method for odometric localization of humanoid robots using standard sensing equipment, i.e., a monocular camera, an inertial measurement unit (IMU), joint encoders and foot pressure sensors. Data from all these sources are integrated using the prediction-correction paradigm of the Extended Kalman Filter. Position and orientation of the torso, defined as the representative body of the robot, are predicted through kinematic computations based on joint encoder readings; an asynchronous mechanism triggered by the pressure sensors is used to update the placement of the support foot. The correction step of the filter uses as measurements the torso orientation, provided by the IMU, and the head pose, reconstructed by a VSLAM algorithm. The proposed method is validated on the humanoid NAO through two sets of experiments: open-loop motions aimed at assessing the accuracy of localization with respect to a ground truth, and closed-loop motions where the humanoid pose estimates are used in real-time as feedback signals for trajectory control.
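
To make the prediction-correction structure described above concrete, here is a minimal sketch in Python. It is not the authors' implementation: the state is reduced to a planar torso pose (x, y, theta) rather than the full position and orientation estimated in the paper, and all class, function, and variable names (PlanarEKF, predict, correct, wrap) are illustrative assumptions.

# Minimal sketch of the EKF prediction-correction loop from the abstract,
# NOT the authors' implementation. State is a planar torso pose for brevity;
# the paper estimates full position and orientation. Names are illustrative.
import numpy as np

def wrap(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PlanarEKF:
    def __init__(self, x0, P0):
        self.x = np.asarray(x0, float)   # state: [x, y, theta]
        self.P = np.asarray(P0, float)   # state covariance

    def predict(self, dx, dy, dth, Q):
        """Prediction: propagate the torso pose by the displacement
        (dx, dy, dth) obtained from kinematic computations based on
        joint encoder readings and the current support foot."""
        th = self.x[2]
        c, s = np.cos(th), np.sin(th)
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dth])
        self.x[2] = wrap(self.x[2])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + Q

    def correct(self, z, h, H, R, angular=()):
        """Correction: z is a measurement (e.g. IMU torso orientation, or a
        VSLAM head pose mapped to the torso), h the predicted measurement,
        H its Jacobian, R the measurement noise covariance."""
        y = z - h
        for i in angular:                 # wrap angular residuals
            y[i] = wrap(y[i])
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PlanarEKF(x0=[0.0, 0.0, 0.0], P0=np.eye(3) * 1e-3)
# Prediction from odometric kinematics (one control cycle)
ekf.predict(dx=0.01, dy=0.0, dth=0.002, Q=np.diag([1e-5, 1e-5, 1e-6]))
# Correction with an IMU yaw measurement
H_imu = np.array([[0.0, 0.0, 1.0]])
ekf.correct(z=np.array([0.0025]), h=ekf.x[2:3].copy(), H=H_imu,
            R=np.array([[1e-4]]), angular=(0,))
print(ekf.x)

In the scheme the abstract describes, predict would run at the control rate from encoder-based kinematics, with the support-foot placement updated asynchronously on pressure-sensor events, while correct would be invoked whenever an IMU orientation or a VSLAM head-pose measurement becomes available.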

Citation (APA)

Oriolo, G., Paolillo, A., Rosa, L., & Vendittelli, M. (2016). Humanoid odometric localization integrating kinematic, inertial and visual information. Autonomous Robots, 40(5), 867–879. https://doi.org/10.1007/s10514-015-9498-0
