View-invariant robot adaptation to human action timing

Abstract

In this work we describe a novel method that enables robots to adapt their action timing to the concurrent actions of a human partner in a repetitive joint task. We propose to exploit purely motion-based information to detect view-invariant dynamic instants of observed actions, i.e. moments in which the action dynamics undergo an abrupt change. We model such instants as local minima of the movement velocity profile; they mark temporal locations that are preserved under projective transformations, i.e. they survive the mapping onto the image plane and can therefore be considered view-invariant. Their generality also allows them to adapt easily to a variety of human dynamics and settings. We first validate a computational method to detect such instants offline, on a new dataset of cooking activities. We then propose an online implementation of the method and integrate the new functionality into the software framework of the iCub humanoid robot. The experimental testing of the online method demonstrates its robustness in predicting the right intervention time for the robot and in supporting the adaptation of its action durations in Human-Robot Interaction (HRI) sessions.
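
To illustrate the core idea, the sketch below detects candidate dynamic instants as local minima of the speed profile of a tracked 2D point. This is a minimal illustration, not the authors' implementation: the function name, smoothing window, and minimum-separation parameter are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# detect candidate "dynamic instants" as local minima of the speed
# profile computed from a tracked 2D image trajectory.
import numpy as np
from scipy.signal import find_peaks


def dynamic_instants(points, fps=30.0, smooth_win=5, min_separation=0.3):
    """Return frame indices where the tracked point's speed reaches a
    local minimum, i.e. candidate view-invariant dynamic instants.

    points: (N, 2) array of image coordinates over time.
    fps: video frame rate.
    smooth_win: moving-average window (frames) applied to the speed.
    min_separation: minimum time (seconds) between two instants.
    """
    points = np.asarray(points, dtype=float)
    # Frame-to-frame displacement -> speed (pixels per second).
    speed = np.linalg.norm(np.diff(points, axis=0), axis=1) * fps
    # Moving-average smoothing to suppress tracking jitter.
    kernel = np.ones(smooth_win) / smooth_win
    speed = np.convolve(speed, kernel, mode="same")
    # Local minima of the speed = peaks of the negated speed.
    minima, _ = find_peaks(-speed, distance=int(min_separation * fps))
    return minima, speed
```

Because local minima of the velocity profile are preserved under the projection onto the image plane, the same detection can in principle be run on trajectories observed from different viewpoints, which is the property the method exploits.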

Cite

APA

Noceti, N., Odone, F., Rea, F., Sciutti, A., & Sandini, G. (2018). View-invariant robot adaptation to human action timing. In Advances in Intelligent Systems and Computing (Vol. 868, pp. 804–821). Springer Verlag. https://doi.org/10.1007/978-3-030-01054-6_56
