Deep learning-based upper limb functional assessment using a single Kinect v2 sensor

26 citations · 58 Mendeley readers

Abstract

We develop a deep learning-refined kinematic model for accurately assessing upper limb joint angles using a single Kinect v2 sensor. We train a long short-term memory (LSTM) recurrent neural network in a supervised learning framework to compensate for the systematic error of the Kinect kinematic model, taking a marker-based three-dimensional motion capture system (3DMC) as the gold standard. A series of upper limb functional task experiments was conducted: hand to the contralateral shoulder, hand to mouth or drinking, combing hair, and hand to back pocket. Our deep learning-based model significantly improves the performance of a single Kinect v2 sensor for all investigated upper limb joint angles across all functional tasks. Using a single Kinect v2 sensor, our deep learning-based model could measure shoulder and elbow flexion/extension waveforms with mean coefficients of multiple correlation (CMC) >0.93 for all tasks, and shoulder adduction/abduction and internal/external rotation waveforms with mean CMCs >0.8 for most tasks. The mean deviations of the angles at the point of target achievement and of the range of motion are under 5° for all investigated joint angles during all functional tasks. Compared with the 3DMC, our system is easier to operate and requires less laboratory space.
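As described above, the correction model is a supervised LSTM that learns to map Kinect v2 joint-angle sequences onto synchronised, marker-based 3DMC reference angles. The PyTorch sketch below illustrates one plausible form of such a model; the layer sizes, the four-angle input/output, the MSE loss, and the training loop are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed architecture, not the authors' code): an LSTM that
    # refines Kinect v2 joint-angle sequences toward 3DMC gold-standard angles.
    import torch
    import torch.nn as nn

    class KinectAngleRefiner(nn.Module):
        def __init__(self, n_angles=4, hidden_size=64, num_layers=2):
            super().__init__()
            # Input per frame: raw Kinect joint angles (e.g. shoulder/elbow DOFs)
            self.lstm = nn.LSTM(n_angles, hidden_size, num_layers, batch_first=True)
            # Output per frame: corrected angles aligned with the 3DMC reference
            self.head = nn.Linear(hidden_size, n_angles)

        def forward(self, x):           # x: (batch, frames, n_angles)
            h, _ = self.lstm(x)
            return self.head(h)         # corrected angles, same shape as x

    def train(model, loader, epochs=50, lr=1e-3):
        # Supervised training: minimise the error between refined Kinect angles
        # and synchronised 3DMC angles (loader yields paired sequences).
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for kinect_angles, mocap_angles in loader:  # both (batch, frames, n_angles)
                opt.zero_grad()
                loss = loss_fn(model(kinect_angles), mocap_angles)
                loss.backward()
                opt.step()

In such a setup, the Kinect and 3DMC recordings would need to be time-synchronised and resampled to a common frame rate before forming the training pairs; those preprocessing details are assumptions here, as the abstract does not specify them.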

Citation (APA)

Ma, Y., Liu, D., & Cai, L. (2020). Deep learning-based upper limb functional assessment using a single Kinect v2 sensor. Sensors (Switzerland), 20(7). https://doi.org/10.3390/s20071903
