Visual motion capturing for kinematic model estimation of a humanoid robot

Abstract

Controlling a tendon-driven robot like the humanoid Ecce is a difficult task, even more so when its kinematics and its pose are not known precisely. In this paper, we present a visual motion capture system that allows both real-time measurement of the robot's joint angles and model estimation of its kinematics. Unlike other humanoid robots, Ecce (see Fig. 1A) is completely molded by hand and its joints are not equipped with angle sensors. This anthropomimetic robot design [5] demands both (i) real-time measurement of joint angles and (ii) model estimation of the kinematics. The underlying principle of this work is that all kinematic model parameters can be derived from visual motion data. The joint angle data ultimately lay the foundation for physics-based simulation and control of this novel musculoskeletal robot. © 2011 Springer-Verlag.
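To illustrate the underlying principle, the following minimal sketch shows how a single hinge joint angle can be recovered from tracked marker positions. This is an assumption-laden toy example, not the paper's actual method: it assumes three optical markers (proximal link, joint center, distal link) whose 3D positions are already reconstructed, and computes the angle between the two link vectors.

```python
import math

def angle_between(u, v):
    # Angle between two 3D vectors via the dot product;
    # the clamp guards acos against floating-point round-off.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def joint_angle(p_proximal, p_joint, p_distal):
    """Hypothetical hinge-joint angle from three tracked marker
    positions: one on the proximal link, one at the joint center,
    and one on the distal link (each a 3-tuple of coordinates)."""
    u = tuple(a - b for a, b in zip(p_proximal, p_joint))
    v = tuple(a - b for a, b in zip(p_distal, p_joint))
    return angle_between(u, v)

# Markers forming a right angle at the joint:
theta = joint_angle((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(math.degrees(theta))  # 90.0
```

A full kinematic model estimation would additionally fit joint axes and link lengths to whole marker trajectories rather than single frames.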

Citation (APA)
Gaschler, A. (2011). Visual motion capturing for kinematic model estimation of a humanoid robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6835 LNCS, pp. 438–443). https://doi.org/10.1007/978-3-642-23123-0_45
