Our individual senses work in a coordinated fashion to express our emotional intentions. In this work, we experiment with modeling modality-specific sensory signals that attend to our latent multimodal emotional intentions, and vice versa, via low-rank multimodal fusion and multimodal transformers. The low-rank factorization of multimodal fusion among the modalities helps represent approximate multiplicative latent signal interactions. Motivated by the work of Tsai et al. (2019) and Liu et al. (2018), we present our transformer-based cross-fusion architecture without any over-parameterization of the model. The low-rank fusion helps represent the latent signal interactions, while the modality-specific attention helps the model focus on relevant parts of the signal. We present two methods for Multimodal Sentiment and Emotion Recognition, report results on the CMU-MOSEI, CMU-MOSI, and IEMOCAP datasets, and show that our models have fewer parameters, train faster, and perform comparably to many larger fusion-based architectures.
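To make the multiplicative latent interactions concrete, below is a minimal sketch of low-rank multimodal fusion in the spirit of Liu et al. (2018), not the authors' released code. It assumes PyTorch, and the module name, feature dimensions, and rank are illustrative choices: each modality vector (with an appended constant 1) is projected by r modality-specific factors, the per-rank projections are multiplied element-wise across modalities, and a learned weighting over the r ranks produces the fused representation.

```python
# Sketch of low-rank multimodal fusion (assumed PyTorch implementation;
# names and dimensions are hypothetical, chosen for illustration only).
import torch
import torch.nn as nn


class LowRankFusion(nn.Module):
    def __init__(self, input_dims, output_dim, rank):
        super().__init__()
        self.rank = rank
        # One low-rank factor per modality: (rank, d_m + 1, output_dim).
        # The extra +1 row corresponds to the constant 1 appended to each
        # modality vector, as in tensor-fusion-style models.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, output_dim) * 0.1)
             for d in input_dims]
        )
        # Learned weights that collapse the rank dimension, plus a bias.
        self.rank_weights = nn.Parameter(torch.randn(1, rank))
        self.bias = nn.Parameter(torch.zeros(1, output_dim))

    def forward(self, modality_feats):
        # modality_feats: list of tensors, each of shape (batch, d_m)
        batch = modality_feats[0].size(0)
        fused = None
        for z, factor in zip(modality_feats, self.factors):
            ones = torch.ones(batch, 1, device=z.device, dtype=z.dtype)
            z1 = torch.cat([z, ones], dim=-1)               # (batch, d_m + 1)
            proj = torch.einsum('bd,rdo->rbo', z1, factor)  # (rank, batch, out)
            # Element-wise product across modalities approximates the
            # multiplicative latent signal interactions.
            fused = proj if fused is None else fused * proj
        # Weighted sum over the rank dimension gives the fused vector.
        return torch.einsum('kr,rbo->bo', self.rank_weights, fused) + self.bias


# Hypothetical usage with audio/visual/text feature sizes picked for illustration.
fusion = LowRankFusion(input_dims=[74, 35, 300], output_dim=64, rank=4)
h = fusion([torch.randn(8, 74), torch.randn(8, 35), torch.randn(8, 300)])
print(h.shape)  # torch.Size([8, 64])
```

Because the full cross-modal weight tensor is never materialized, the parameter count grows linearly with the number of modalities and the chosen rank rather than multiplicatively, which is what keeps the fusion lightweight.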