Environment-adaptive interaction primitives through visual context for human–robot motor skill learning

Abstract

In situations where robots must cooperate closely with human partners, taking the task into account alongside partner observation maintains robustness when partner behavior is erratic or ambiguous. This paper documents our approach to capturing human–robot interactive skills by combining demonstration data with additional environmental parameters derived automatically from observation of the task context, without the need for heuristic assignment, as an extension that overcomes shortcomings of the interaction primitives framework. These parameters reduce the partner observation period required before suitable robot motion can commence, and also enable success in cases where partner observation alone is inadequate for planning actions suited to the task. Validation in a collaborative object-covering exercise with a humanoid robot demonstrates the robustness of our environment-adaptive interaction primitives when augmented with parameters drawn directly from visual data of the task scene.
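The abstract does not spell out the underlying machinery, but interaction primitives are commonly modeled as a joint Gaussian over human and robot trajectory parameters that is conditioned on a partial observation of the partner; the extension described here adds environment parameters obtained from visual context to that joint distribution. The sketch below is only a minimal illustration of that conditioning step under assumed dimensions and stand-in data (all variable names and sizes are hypothetical), not the authors' implementation:

```python
import numpy as np

def fit_joint_gaussian(W):
    """Fit a Gaussian over stacked demonstration parameters W (n_demos x dim)."""
    mu = W.mean(axis=0)
    Sigma = np.cov(W, rowvar=False) + 1e-6 * np.eye(W.shape[1])  # regularize
    return mu, Sigma

def condition(mu, Sigma, obs_idx, obs_val):
    """Condition the joint Gaussian on the observed dimensions obs_idx = obs_val,
    returning the mean and covariance of the remaining (robot) dimensions."""
    all_idx = np.arange(len(mu))
    hid_idx = np.setdiff1d(all_idx, obs_idx)
    mu_o, mu_h = mu[obs_idx], mu[hid_idx]
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_ho = Sigma[np.ix_(hid_idx, obs_idx)]
    S_hh = Sigma[np.ix_(hid_idx, hid_idx)]
    K = S_ho @ np.linalg.solve(S_oo, np.eye(len(obs_idx)))  # S_ho S_oo^{-1}
    mu_post = mu_h + K @ (obs_val - mu_o)
    Sigma_post = S_hh - K @ S_ho.T
    return mu_post, Sigma_post

# Hypothetical sizes: 5 human weights, 5 robot weights, 2 environment parameters.
n_demos, d_h, d_r, d_c = 20, 5, 5, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(n_demos, d_h + d_r + d_c))  # stand-in for real demo parameters

mu, Sigma = fit_joint_gaussian(W)

# At runtime: partner weights estimated from a short observation window,
# plus environment parameters extracted from the task scene image.
w_human_est = rng.normal(size=d_h)
env_params  = rng.normal(size=d_c)

obs_idx = np.concatenate([np.arange(d_h),
                          np.arange(d_h + d_r, d_h + d_r + d_c)])
obs_val = np.concatenate([w_human_est, env_params])

w_robot_mean, w_robot_cov = condition(mu, Sigma, obs_idx, obs_val)
print("Predicted robot primitive weights:", w_robot_mean)
```

Because the environment parameters are available from the scene before the partner starts moving, conditioning on them alone already narrows the posterior over robot parameters, which is consistent with the claim that less partner observation is needed before suitable robot motion can commence.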

Citation (APA)

Cui, Y., Poon, J., Miro, J. V., Yamazaki, K., Sugimoto, K., & Matsubara, T. (2019). Environment-adaptive interaction primitives through visual context for human–robot motor skill learning. Autonomous Robots, 43(5), 1225–1240. https://doi.org/10.1007/s10514-018-9798-2
