A Sensor-Independent Multimodal Fusion Scheme for Human Activity Recognition

Abstract

Human Activity Recognition (HAR) provides the foundations for Ambient Intelligence and Assisted Living applications. Multimodal HAR methods utilize several different sensors and fuse their data to achieve higher recognition accuracy, but they typically require data from every employed sensor in order to operate. In this work we present a sensor-independent scheme (with respect to the number of sensors used) for designing multimodal methods that continue to operate when sensor data are missing. Furthermore, we present a data augmentation method that increases the fusion model's accuracy by up to 11% when it operates with missing sensor data. The proposed method's effectiveness is evaluated on the ExtraSensory dataset, which contains over 300,000 samples from 60 users, collected from smartphones and smartwatches. In addition, the methods are evaluated for different numbers of sensors used at the same time. However, the maximum number of sensors must be known beforehand.
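The abstract does not detail the architecture, but one common way to realize such a scheme is to give each sensor its own encoder, zero-mask the features of sensors whose data are absent, and apply training-time "sensor dropout" as the augmentation. Below is a minimal, hypothetical PyTorch sketch under those assumptions; all names, dimensions, and design choices (MaskedFusionHAR, sensor_dropout, the dropout rate) are illustrative and not taken from the paper.

# Hypothetical sketch: per-sensor encoders fused with masking so the model
# tolerates missing sensor data, plus sensor-dropout augmentation at training
# time. Architecture and names are illustrative assumptions, not the paper's
# actual implementation.
import torch
import torch.nn as nn


class MaskedFusionHAR(nn.Module):
    """Fuses a fixed maximum number of sensor streams; missing sensors are
    zero-masked so any subset of sensors can be presented at inference."""

    def __init__(self, sensor_dims, hidden_dim=64, n_classes=8):
        super().__init__()
        # One small encoder per sensor modality. The encoder list is fixed to
        # the maximum sensor count, known beforehand.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU())
            for d in sensor_dims
        )
        self.classifier = nn.Linear(hidden_dim * len(sensor_dims), n_classes)

    def forward(self, inputs, mask):
        # inputs: list of (batch, sensor_dim) tensors, one per sensor; missing
        #         sensors can be fed zeros since their features are masked out.
        # mask:   (batch, n_sensors) float tensor, 1 = present, 0 = missing.
        feats = [
            enc(x) * mask[:, i : i + 1]  # zero out features of missing sensors
            for i, (enc, x) in enumerate(zip(self.encoders, inputs))
        ]
        return self.classifier(torch.cat(feats, dim=-1))


def sensor_dropout(mask, p=0.3):
    """Augmentation: randomly mark present sensors as missing during training,
    so the fusion model learns to cope with absent modalities."""
    keep = (torch.rand_like(mask) > p).float()
    return mask * keep


if __name__ == "__main__":
    sensor_dims = [6, 3, 10]  # e.g. accelerometer, gyroscope, audio features
    model = MaskedFusionHAR(sensor_dims)
    batch = [torch.randn(4, d) for d in sensor_dims]
    mask = torch.ones(4, len(sensor_dims))
    logits = model(batch, sensor_dropout(mask))  # train-time augmented pass
    print(logits.shape)  # torch.Size([4, 8])

Fixing the encoder list to the maximum sensor count mirrors the abstract's stated requirement that the maximum number of sensors be known beforehand, while the mask lets any subset of them be absent at inference time.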

Citation (APA)

Alexiadis, A., Nizamis, A., Giakoumis, D., Votis, K., & Tzovaras, D. (2022). A Sensor-Independent Multimodal Fusion Scheme for Human Activity Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13364 LNCS, pp. 28–39). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-09282-4_3
