Augmentation of Segmented Motion Capture Data for Improving Generalization of Deep Neural Networks


Abstract

This paper presents a method for augmenting motion capture trajectories to improve the generalization performance of recurrent long short-term memory (LSTM) neural networks. The presented algorithm is based on interpolation between existing time series and is applicable only to segmented or easy-to-segment data, since it relies on blending similar motion trajectories that are not significantly time-shifted. The paper reports classification performance with and without augmentation for two publicly available databases: the Multimodal Kinect-IMU Dataset and the National Chiao Tung University Multisensor Fitness Dataset. The former contains data representing separate human–computer interaction gestures, while the latter comprises data of unsegmented series of body exercises. Using the presented algorithm, classification accuracy increased by approximately 11 percentage points for the first dataset and 8 percentage points for the second.
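The core idea of interpolation-based augmentation can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the uniform resampling step, and the single blending coefficient `alpha` are all assumptions made for the sketch. It blends two segmented trajectories of similar motion, which, as the abstract notes, is only safe when the segments are not significantly time-shifted.

```python
import numpy as np

def blend_trajectories(traj_a, traj_b, alpha=0.5):
    """Create a synthetic trajectory by linearly interpolating two
    segmented motion trajectories of the same class.

    Hypothetical sketch: resampling strategy and alpha schedule are
    assumptions, not details taken from the paper.
    """
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    # Resample both segments to a common length so frames align;
    # this assumes the two segments are roughly time-aligned.
    n = max(len(traj_a), len(traj_b))

    def resample(t):
        idx_old = np.linspace(0.0, 1.0, len(t))
        idx_new = np.linspace(0.0, 1.0, n)
        # Interpolate each joint/sensor channel independently.
        return np.stack(
            [np.interp(idx_new, idx_old, t[:, d]) for d in range(t.shape[1])],
            axis=1,
        )

    a, b = resample(traj_a), resample(traj_b)
    # Convex combination of the two aligned trajectories.
    return (1.0 - alpha) * a + alpha * b
```

Sampling `alpha` in (0, 1) for many same-class pairs yields new, plausible trajectories that enlarge the training set for the LSTM classifier.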


CITATION STYLE

APA

Sawicki, A., & Zieliński, S. K. (2020). Augmentation of Segmented Motion Capture Data for Improving Generalization of Deep Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12133 LNCS, pp. 278–290). Springer. https://doi.org/10.1007/978-3-030-47679-3_24
