Robust Feature Representation Using Multi-Task Learning for Human Activity Recognition

Citations: 8
Mendeley readers: 11

Abstract

Learning underlying patterns from sensory data is crucial in the Human Activity Recognition (HAR) task to avoid poor generalization when coping with unseen data. A key solution to this issue is representation learning, which becomes essential when input signals contain activities with similar patterns or when the patterns generated by different subjects for the same activity vary. To address these issues, we seek to increase generalization by learning the underlying factors of each sensor signal. We develop a novel multi-channel asymmetric auto-encoder that reconstructs input signals precisely and extracts informative unsupervised features. Furthermore, we investigate the role of various activation functions in signal reconstruction to ensure the model preserves the patterns of each activity in the output. Our main contribution is a multi-task learning model that enhances representation learning through layers shared between signal reconstruction and the HAR task, improving the robustness of the model when coping with users not included in the training phase. The proposed model learns shared features across tasks that are, in effect, the underlying factors of each input signal. We validate our multi-task learning model on several publicly available HAR datasets (UCI-HAR, MHealth, PAMAP2, and USC-HAD) and an in-house alpine skiing dataset collected in the wild, achieving 99%, 99%, 95%, 88%, and 92% accuracy, respectively. Compared to the state of the art, our proposed method shows consistent performance and good generalization across all datasets.
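
The shared-layer design described in the abstract can be illustrated with a minimal sketch: one convolutional encoder feeds both a reconstruction decoder (the auto-encoder branch) and an activity classifier, and the two losses are optimized jointly. Everything below is an assumption made for illustration — the use of PyTorch, the layer sizes, the MultiTaskHAR name, and the equal loss weighting — not the authors' published architecture.

# Minimal multi-task HAR sketch (illustrative only; layer sizes, names, and
# PyTorch itself are assumptions, not the paper's actual implementation).
import torch
import torch.nn as nn

class MultiTaskHAR(nn.Module):
    """Shared encoder feeding a reconstruction decoder and an activity classifier."""

    def __init__(self, in_channels=6, n_classes=6):
        super().__init__()
        # Shared layers: features learned here serve both tasks.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Task 1: reconstruct the input signal. The decoder is shallower than
        # the encoder (hence "asymmetric"); Tanh assumes signals in [-1, 1].
        self.decoder = nn.Sequential(
            nn.Conv1d(64, in_channels, kernel_size=5, padding=2), nn.Tanh(),
        )
        # Task 2: classify the activity from the shared representation.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        z = self.encoder(x)  # shared representation used by both heads
        return self.decoder(z), self.classifier(z)

# One joint training step on dummy data: windows of 6 sensor channels x 128 samples.
model = MultiTaskHAR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 6, 128)                 # batch of sensor windows
y = torch.randint(0, 6, (8,))              # activity labels
x_hat, logits = model(x)
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.cross_entropy(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Because the encoder's gradients receive signal from both losses, its features must simultaneously support precise reconstruction and class separation — the mechanism the abstract credits for better generalization to users unseen during training.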

Citation (APA)

Azadi, B., Haslgrübler, M., Anzengruber-Tanase, B., Sopidis, G., & Ferscha, A. (2024). Robust Feature Representation Using Multi-Task Learning for Human Activity Recognition. Sensors, 24(2). https://doi.org/10.3390/s24020681

Readers' Seniority

PhD / Postgrad / Masters / Doc: 2 (100%)

Readers' Discipline

Computer Science: 2 (40%)
Medicine and Dentistry: 1 (20%)
Psychology: 1 (20%)
Materials Science: 1 (20%)

Article Metrics

Blog Mentions: 1
News Mentions: 1
