Abstract
The growth of self-fitness mobile applications has encouraged people to manage their own fitness, integrating self-tracking applications with exercise motion data to reduce fatigue and mitigate the risk of injury. Advances in computer vision and motion capture technologies hold great promise for improving exercise classification performance. This study investigates the performance of a supervised deep learning model, a Graph Convolutional Network (GCN), in classifying three workouts from motion data captured with the Azure Kinect device. The model represents the skeleton as a graph and combines GCN layers, a readout layer, and multi-layer perceptrons into an end-to-end framework for graph classification. The model achieves an accuracy of 95.86% in classifying 19,442 frames. The current model exchanges feature information only between each joint and its 1-nearest neighbor, an effect that fades in graph-level classification. A future study on improved feature utilization could therefore enhance the model's performance in classifying inter-user exercise variation.
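The architecture outlined in the abstract (GCN layers over a skeleton graph, a readout layer, and an MLP classifier) can be sketched roughly as follows. This is a minimal illustration in plain PyTorch, not the authors' implementation: the joint count, edge list, hidden sizes, and two-layer depth are assumptions made for the example.

```python
# Minimal sketch of a GCN-based skeleton graph classifier (illustrative, not the paper's code).
# Joint count (32, as on Azure Kinect), edge list, and layer sizes are assumptions.
import torch
import torch.nn as nn

def normalized_adjacency(edges, num_joints):
    """Build D^{-1/2} (A + I) D^{-1/2} from an undirected edge list."""
    a = torch.eye(num_joints)                 # self-loops
    for i, j in edges:                        # symmetric adjacency
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, joints, features); a_hat: (joints, joints)
        return torch.relu(a_hat @ self.linear(x))

class SkeletonGCN(nn.Module):
    """GCN layers -> mean readout over joints -> MLP classifier."""
    def __init__(self, a_hat, in_dim=3, hidden=64, num_classes=3):
        super().__init__()
        self.register_buffer("a_hat", a_hat)
        self.gcn1 = GCNLayer(in_dim, hidden)
        self.gcn2 = GCNLayer(hidden, hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, x):
        h = self.gcn2(self.gcn1(x, self.a_hat), self.a_hat)
        graph_repr = h.mean(dim=1)            # readout: average joint features
        return self.mlp(graph_repr)           # per-frame exercise class logits

# Toy usage: 2 frames, 32 joints, 3-D coordinates, placeholder chain of edges
# (not the real Azure Kinect skeleton topology).
edges = [(i, i + 1) for i in range(31)]
model = SkeletonGCN(normalized_adjacency(edges, 32))
logits = model(torch.randn(2, 32, 3))         # -> shape (2, 3)
```

In this sketch, each GCN layer mixes features only between a joint and its immediate neighbors in the skeleton graph, which mirrors the 1-nearest-neighbor limitation the abstract identifies as a direction for future work.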
Cite
Haghighat, P., Prince, A., & Jeong, H. (2021). Graph Convolutional Networks for Exercise Motion Classification. In Proceedings of the Human Factors and Ergonomics Society (Vol. 65, pp. 685–689). SAGE Publications Inc. https://doi.org/10.1177/1071181321651255