Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation


Abstract

This work devises an optimized machine learning approach for human arm pose estimation from a single smartwatch. Our approach yields a distribution of possible wrist and elbow positions, which provides a measure of uncertainty and enables the detection of multiple plausible arm posture solutions, i.e., multimodal pose distributions. Combining estimated arm postures with speech recognition, we turn the smartwatch into a ubiquitous, low-cost, and versatile robot control interface. We demonstrate in two use cases that this intuitive control interface enables users to swiftly intervene in robot behavior, to temporarily adjust their goal, or to train completely new control policies by imitation. Extensive experiments show that the approach reduces prediction error by 40% over the current state of the art and achieves a mean error of 2.56 cm for wrist and elbow positions.
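As an illustration of the multimodality idea described above (not the paper's implementation), one way to detect multiple plausible arm postures is to draw an ensemble of position predictions and group nearby samples into modes; the sample coordinates, grouping radius, and helper below are all hypothetical:

```python
# Illustrative sketch: group sampled 3-D wrist-position predictions (metres)
# into modes to detect a multimodal pose distribution. The ensemble values
# and the 10 cm grouping radius are assumptions for demonstration only.
import math

def cluster_modes(samples, radius=0.10):
    """Greedily assign each sample to the first mode whose centroid
    lies within `radius`; otherwise start a new mode."""
    modes = []  # each mode is a list of 3-D points
    for p in samples:
        for mode in modes:
            centroid = [sum(q[i] for q in mode) / len(mode) for i in range(3)]
            if math.dist(p, centroid) < radius:
                mode.append(p)
                break
        else:
            modes.append([p])
    return modes

# Hypothetical ensemble output: two plausible wrist heights (arm raised vs. lowered)
samples = [(0.30, 0.10, 0.95), (0.31, 0.11, 0.94), (0.29, 0.09, 0.96),
           (0.30, 0.12, 0.40), (0.32, 0.10, 0.41)]
modes = cluster_modes(samples)
print(len(modes))  # → 2 distinct posture hypotheses
```

A mode count greater than one signals an ambiguous pose, which a downstream controller could treat with extra caution.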

Citation (APA)

Weigend, F. C., Sonawani, S., Drolet, M., & Amor, H. B. (2023). Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation. In IEEE International Conference on Intelligent Robots and Systems (pp. 3811–3818). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS55552.2023.10341624
