Learning to Use a Ratchet by Modeling Spatial Relations in Demonstrations


Abstract

We introduce a framework where visual features, describing the interaction among a robot hand, a tool, and an assembly fixture, can be learned efficiently using a small number of demonstrations. We illustrate the approach by torquing a bolt with the Robonaut-2 humanoid robot using a handheld ratchet. The difficulties include the uncertainty of the ratchet pose after grasping and the high precision required for mating the socket to the bolt and replacing the tool in the tool holder. Our approach learns the desired relative position between visual features on the ratchet and the bolt. It does this by identifying goal offsets from visual features that are consistently observable over a set of demonstrations. With this approach we show that Robonaut-2 is capable of grasping the ratchet, tightening a bolt, and putting the ratchet back into a tool holder. We measure the accuracy of the socket-bolt mating subtask over multiple demonstrations and show that a small set of demonstrations can decrease the error significantly.
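The core idea of identifying goal offsets that are consistently observable across demonstrations can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it assumes each demonstration supplies 3D offset vectors between candidate tool/fixture feature pairs at the goal state, keeps only pairs whose offset varies little across demonstrations, and uses the mean offset as the learned goal.

```python
import numpy as np

def learn_goal_offsets(demos, max_std=0.01):
    """Estimate consistent goal offsets between tool and target features.

    demos: list of dicts mapping a feature-pair name -> observed offset
           vector (3,) at the goal state of one demonstration.
    Returns {name: mean_offset} for pairs observable in every demonstration
    whose per-axis standard deviation is below max_std (meters).
    Feature names and thresholds here are illustrative assumptions.
    """
    # Only feature pairs observed in all demonstrations are candidates.
    names = set.intersection(*(set(d) for d in demos))
    goals = {}
    for name in names:
        offsets = np.stack([d[name] for d in demos])   # shape (n_demos, 3)
        if np.all(offsets.std(axis=0) < max_std):      # consistent across demos
            goals[name] = offsets.mean(axis=0)         # learned goal offset
    return goals

# Example: the socket-to-bolt offset repeats across demos, while a
# distractor offset (e.g., ratchet handle to bolt) varies and is rejected.
demos = [
    {"socket_to_bolt": np.array([0.0, 0.0, 0.050]),
     "handle_to_bolt": np.array([0.10, 0.00, 0.0])},
    {"socket_to_bolt": np.array([0.0, 0.0, 0.051]),
     "handle_to_bolt": np.array([0.30, 0.10, 0.0])},
    {"socket_to_bolt": np.array([0.0, 0.0, 0.049]),
     "handle_to_bolt": np.array([-0.20, 0.00, 0.1])},
]
goals = learn_goal_offsets(demos)
```

Averaging over more demonstrations reduces the error in the learned offset, consistent with the paper's observation that a small set of demonstrations significantly decreases the socket-bolt mating error.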

Citation (APA)

Ku, L. Y., Jordan, S., Badger, J., Learned-Miller, E., & Grupen, R. (2020). Learning to Use a Ratchet by Modeling Spatial Relations in Demonstrations. In Springer Proceedings in Advanced Robotics (Vol. 11, pp. 398–410). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-030-33950-0_35
