Learning Hand Movement Interaction Control Using RNNs: From HHI to HRI


Abstract

A key problem in robotics is enabling an autonomous agent to perform human-like arm movements in close proximity to another human. However, modeling the human decision and control process of the movement during dyadic interaction presents a challenge. Whereas most prior approaches rely on multicomponent robot motion-planning architectures, we use data of two humans performing interfering arm-reaching movements to extract interaction behavior control skills and transfer them to a robotic agent. A recurrent neural network-based framework is constructed to learn a policy that computes control signals for a robot end effector in order to replace one human. The learned policy is benchmarked against unseen interaction data and a state-of-the-art learning-from-demonstration framework in simulated scenarios. We compare several architectures and investigate a new activation function composed of three stacked tanh() units. The results show that the proposed framework successfully learns a policy that imitates human movement behavior control during dyadic interaction. The policy is transferred to a real robot, and its feasibility for close-proximity human-robot interaction is demonstrated.
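The abstract mentions an activation function built from three stacked tanh() units. The paper's exact formulation is not given here; a minimal sketch, assuming "stacked" means repeated composition of tanh, could look like the following (the function name and NumPy implementation are illustrative, not from the paper):

```python
import numpy as np

def stacked_tanh(x, depth=3):
    """Hypothetical sketch: apply tanh() repeatedly `depth` times.

    The abstract refers to 'three stacked tanh()'; whether this means
    composition (as here) or another stacking scheme is an assumption.
    """
    for _ in range(depth):
        x = np.tanh(x)
    return x

# Repeated composition keeps outputs in (-1, 1) but compresses the
# response further than a single tanh for large inputs.
```

Such a composed activation saturates more gently near zero and flattens faster for large inputs than a single tanh, which may change the gradient profile of the RNN.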

Citation (APA)

Oguz, O. S., Pfirrmann, B. M., Guo, M., & Wollherr, D. (2018). Learning Hand Movement Interaction Control Using RNNs: From HHI to HRI. IEEE Robotics and Automation Letters, 3(4), 4100–4107. https://doi.org/10.1109/LRA.2018.2862923
