Understanding and Transfer of Human Skills in Robotics Using Deep Learning and Musculoskeletal Modeling

Abstract

With the application of deep learning, prosthetic rehabilitation can not only emulate human manipulation skills and performance, but can also be carried out more efficiently. In this study, we introduced a computer vision capability for a rehabilitation robot using a convolutional neural network (CNN). The human skill of scooping was studied by dividing it into four motion primitives, or sub-tasks. For each primitive, the optimum human posture was identified in terms of muscular effort. Human motion skills were analyzed in terms of physiological parameters, including the wrist pronation-supination angle, elbow flexion angle, shoulder rotation/abduction/flexion angles, and hand accelerations, using three-dimensional musculoskeletal modeling. This analysis identified how humans execute the same activity for eight different materials. The optimum human motion for each material was mapped to a robotic arm with six degrees of freedom (DOFs), equipped with a camera. The success ratio of the scooping motion across all trials was 85%. Consequently, the activity can be performed efficiently, based on human intuition, in a dynamic environment.
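The abstract does not specify the network architecture or the robot interface, but the described pipeline (camera image → CNN material classification → replay of the optimum human-derived primitive sequence on a 6-DOF arm) can be sketched as follows. This is a minimal illustration only, assuming PyTorch; the names MaterialCNN, PRIMITIVE_LIBRARY, and send_joint_targets are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of the pipeline described
# in the abstract: a CNN classifies the material seen by the arm-mounted
# camera, and the result selects a pre-recorded optimum human trajectory
# (segmented into four motion primitives) to replay on a 6-DOF arm.
# MaterialCNN, PRIMITIVE_LIBRARY, and send_joint_targets are hypothetical.

import torch
import torch.nn as nn

NUM_MATERIALS = 8    # eight materials studied in the paper
NUM_PRIMITIVES = 4   # four scooping sub-tasks (labels illustrative)
NUM_JOINTS = 6       # 6-DOF robotic arm


class MaterialCNN(nn.Module):
    """Small CNN mapping a 64x64 RGB camera image to one of eight materials."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, NUM_MATERIALS)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))


# Hypothetical library: one joint-angle trajectory per material, recorded from
# the minimum-effort human posture and segmented into the four primitives.
# Shape per entry: (NUM_PRIMITIVES, timesteps, NUM_JOINTS).
PRIMITIVE_LIBRARY = {
    m: torch.zeros(NUM_PRIMITIVES, 50, NUM_JOINTS) for m in range(NUM_MATERIALS)
}


def scoop(image: torch.Tensor, model: MaterialCNN, send_joint_targets) -> int:
    """Classify the material in the image and replay its primitive sequence."""
    with torch.no_grad():
        material = int(model(image.unsqueeze(0)).argmax(dim=1))
    for primitive in PRIMITIVE_LIBRARY[material]:
        for joint_targets in primitive:      # stream joint angles to the arm
            send_joint_targets(joint_targets.tolist())
    return material
```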

Citation (APA)

Chaudhari, D., Bhagat, K., & Demircan, E. (2020). Understanding and Transfer of Human Skills in Robotics Using Deep Learning and Musculoskeletal Modeling. In Springer Proceedings in Advanced Robotics (Vol. 11, pp. 45–55). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-030-33950-0_5
