Analyzing Modality Robustness in Multimodal Sentiment Analysis

Abstract

Building robust multimodal models is crucial for reliable deployment in the wild. Despite its importance, little attention has been paid to identifying and improving the robustness of Multimodal Sentiment Analysis (MSA) models. In this work, we hope to address this by (i) proposing simple diagnostic checks for modality robustness in a trained multimodal model; using these checks, we find MSA models to be highly sensitive to a single modality, which undermines their robustness; and (ii) analyzing well-known robust training strategies that alleviate these issues. Critically, we observe that robustness can be achieved without compromising the original performance. We hope that our extensive study, performed across five models and two benchmark datasets, and the proposed procedures will make robustness an integral component of MSA research. Our diagnostic checks and robust training solutions are simple to implement and available at https://github.com/declare-lab/MSA-Robustness.
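The paper's full diagnostic procedure is given in the linked repository; as a rough illustration of the idea of probing sensitivity to a single modality, one can re-evaluate a trained model with one modality perturbed (e.g., zeroed out or replaced by noise) and measure the performance drop. The sketch below is a hypothetical minimal example, not the authors' implementation; `toy_model` and the helper names are invented stand-ins.

```python
import numpy as np

# Hypothetical stand-in for a trained MSA model: a naive fusion
# "model" that averages per-modality feature means into one score.
# Illustration only; a real MSA model would be a trained network.
def toy_model(inputs):
    return np.mean([inputs[m].mean(axis=1) for m in inputs], axis=0)

def accuracy(model, inputs, labels):
    # Binary sentiment decision: positive score => positive class.
    preds = (model(inputs) > 0).astype(int)
    return float((preds == labels).mean())

def modality_robustness_check(model, inputs, labels, modality,
                              mode="zero", seed=0):
    """Re-evaluate with a single modality perturbed and return the
    accuracy drop relative to clean inputs. A large drop suggests
    the model over-relies on that modality."""
    rng = np.random.default_rng(seed)
    perturbed = dict(inputs)
    if mode == "zero":
        perturbed[modality] = np.zeros_like(inputs[modality])
    else:  # replace the modality's features with Gaussian noise
        perturbed[modality] = rng.normal(size=inputs[modality].shape)
    return accuracy(model, inputs, labels) - accuracy(model, perturbed, labels)
```

Running this check per modality and comparing the drops shows which input stream dominates the model's predictions, which is the kind of single-modality sensitivity the study investigates.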

Citation (APA)

Hazarika, D., Li, Y., Cheng, B., Zhao, S., Zimmermann, R., & Poria, S. (2022). Analyzing Modality Robustness in Multimodal Sentiment Analysis. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 685–696). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.50
