Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion

Abstract

We propose the use of self-supervised learning for human activity recognition with smartphone accelerometer data. Our proposed solution consists of two steps. First, the representations of unlabeled input signals are learned by training a deep convolutional neural network to predict a segment of accelerometer values. Our model exploits a novel scheme that leverages past and present motion in the x and y dimensions, as well as past values of the z axis, to predict values in the z dimension. This cross-dimensional prediction approach results in effective pretext training with which our model learns to extract strong representations. Next, we freeze the convolution blocks and transfer the weights to our downstream network aimed at human activity recognition. For this task, we add a number of fully connected layers to the end of the frozen network and train the added layers with labeled accelerometer signals to learn to classify human activities. We evaluate the performance of our method on three publicly available human activity datasets: UCI HAR, MotionSense, and HAPT. The results show that our approach outperforms the existing methods and sets new state-of-the-art results.
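To make the two-step recipe concrete, the sketch below shows one way the cross-dimensional pretext task and the frozen-encoder transfer could be implemented. This is a minimal illustration in PyTorch under stated assumptions: the framework, layer sizes, window length, prediction horizon, and (x, y, z) channel ordering are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the two-step approach described in the abstract.
# All architecture choices (layer widths, kernel sizes, horizon) are
# assumptions; the abstract does not specify them.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """1D-CNN encoder over an accelerometer window (assumed shapes)."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                    # x: (batch, 3, time)
        h = self.blocks(x).squeeze(-1)       # (batch, 64)
        return self.proj(h)                  # (batch, feat_dim)

class PretextModel(nn.Module):
    """Pretext task: regress a future segment of z-axis values."""
    def __init__(self, horizon=16):
        super().__init__()
        self.encoder = ConvEncoder()
        self.head = nn.Linear(128, horizon)

    def forward(self, x):
        return self.head(self.encoder(x))

def pretext_loss(model, window, horizon=16):
    # window: (batch, 3, T) raw accelerometer; channel order (x, y, z) assumed.
    # Hide the "future" z segment from the input and predict it, so the model
    # sees past/present x and y plus past z only, as the abstract describes.
    inp = window.clone()
    target = window[:, 2, -horizon:]         # future z values to predict
    inp[:, 2, -horizon:] = 0.0               # mask future z in the input
    pred = model(inp)
    return nn.functional.mse_loss(pred, target)

class DownstreamClassifier(nn.Module):
    """Step two: frozen transferred encoder + trainable FC layers."""
    def __init__(self, encoder, num_classes=6):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze the conv blocks
            p.requires_grad = False
        self.fc = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        with torch.no_grad():
            z = self.encoder(x)
        return self.fc(z)                     # trained on labeled windows
```

In this reading, the pretext stage needs no activity labels at all, since the regression target is carved out of the raw signal itself; only the small FC head in `DownstreamClassifier` is trained on labeled data.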

Citation (APA)

Rahimi Taghanaki, S., Rainbow, M. J., & Etemad, A. (2020). Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion. In Proceedings of the International Symposium on Wearable Computers (ISWC) (pp. 23–27). Association for Computing Machinery. https://doi.org/10.1145/3460421.3480417
