Robust and Affordable Deep Learning Models for Multimodal Sensor Fusion


Abstract

Deep fusion networks have received considerable attention lately due to the growing adoption of IoT devices, smartphones, and wearables that incorporate multiple sensing modalities, and due to their promising applications, from human activity recognition to smart home automation. Despite recent advances in this area, several practical requirements are often overlooked. Specifically, fusion networks must maintain their performance during both momentary and long-term changes in the environment, be robust to sensor data quality issues, and be small enough to deploy on resource-constrained devices. My PhD research aims to address these challenges by building robust multimodal fusion networks that rapidly generalize to new environments and have fewer trainable weights, hence lower memory and carbon footprints.
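To make the idea of a multimodal fusion network concrete, the following is a minimal sketch (not the author's actual architecture) of late fusion over two hypothetical sensing modalities, say accelerometer and audio features: each modality is embedded separately, the embeddings are concatenated, and a final layer produces class scores. The dimensions, weight initialization, and modality names are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: a 32-feature accelerometer window and a 64-feature
# audio window, fused to classify one of 5 activities (all hypothetical).
rng = np.random.default_rng(0)
ACC_DIM, AUDIO_DIM, EMBED_DIM, N_CLASSES = 32, 64, 16, 5

# Per-modality projection weights; in a real network these are trained jointly.
W_acc = rng.standard_normal((ACC_DIM, EMBED_DIM)) * 0.1
W_audio = rng.standard_normal((AUDIO_DIM, EMBED_DIM)) * 0.1
W_fuse = rng.standard_normal((2 * EMBED_DIM, N_CLASSES)) * 0.1


def fuse(acc, audio):
    """Late fusion: embed each modality, concatenate, then classify."""
    z_acc = np.maximum(acc @ W_acc, 0.0)      # ReLU embedding of modality 1
    z_audio = np.maximum(audio @ W_audio, 0.0)  # ReLU embedding of modality 2
    z = np.concatenate([z_acc, z_audio], axis=-1)
    return z @ W_fuse  # class logits


logits = fuse(rng.standard_normal(ACC_DIM), rng.standard_normal(AUDIO_DIM))
print(logits.shape)  # one logit per class
```

Keeping the per-modality embeddings small (here 16 dimensions) is one way the total number of trainable weights, and thus the memory footprint on a constrained device, can be kept low; robustness to degraded or missing modalities would require additional mechanisms beyond this sketch.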

Citation (APA)
Xaviar, S. (2021). Robust and Affordable Deep Learning Models for Multimodal Sensor Fusion. In SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems (pp. 403–404). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485730.3492897
