Multi-Modal Sentiment Classification with Independent and Interactive Knowledge via Semi-Supervised Learning

Abstract

Multi-modal sentiment analysis extends the conventional text-based formulation of sentiment analysis to a multi-modal setting in which multiple relevant modalities are leveraged jointly. In real applications, however, acquiring annotated multi-modal data is labor-intensive and time-consuming. In this paper, we aim to reduce the annotation effort for multi-modal sentiment classification via semi-supervised learning. The key idea is to leverage semi-supervised variational autoencoders to mine additional information from unlabeled data. Specifically, the mined information includes both the independent knowledge within a single modality and the interactive knowledge among different modalities. Empirical evaluation demonstrates the effectiveness of the proposed semi-supervised approach to multi-modal sentiment classification.
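The abstract describes combining per-modality (independent) representations with a cross-modality (interactive) representation inside a semi-supervised VAE objective: unlabeled data contributes reconstruction and KL terms, while labeled data additionally contributes a classification loss on the fused latents. A minimal NumPy sketch of such an objective follows; the linear encoders/decoders, dimensions, and two-modality (text + audio) setup are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_TEXT, D_AUDIO, D_Z, N_CLASSES = 8, 6, 4, 3

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def gaussian_encode(x, w_mu, w_logvar):
    """Hypothetical linear Gaussian encoder: mean and log-variance of q(z|x)."""
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def recon_error(x, z, w_dec):
    """Squared error of a linear decoder (Gaussian NLL up to constants)."""
    return 0.5 * np.sum((x - z @ w_dec) ** 2, axis=-1)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative random parameters: one VAE per modality ("independent"),
# one VAE over the concatenated modalities ("interactive"), one classifier.
w = {name: rng.normal(scale=0.1, size=shape) for name, shape in {
    "t_mu": (D_TEXT, D_Z), "t_lv": (D_TEXT, D_Z), "t_dec": (D_Z, D_TEXT),
    "a_mu": (D_AUDIO, D_Z), "a_lv": (D_AUDIO, D_Z), "a_dec": (D_Z, D_AUDIO),
    "i_mu": (D_TEXT + D_AUDIO, D_Z), "i_lv": (D_TEXT + D_AUDIO, D_Z),
    "i_dec": (D_Z, D_TEXT + D_AUDIO),
    "clf": (3 * D_Z, N_CLASSES),
}.items()}

def semi_supervised_loss(x_text, x_audio, y=None):
    """ELBO-style terms for every sample; cross-entropy only when labels exist."""
    xs = {"t": x_text, "a": x_audio,
          "i": np.concatenate([x_text, x_audio], axis=-1)}
    zs, loss = [], 0.0
    for m, x in xs.items():
        mu, lv = gaussian_encode(x, w[m + "_mu"], w[m + "_lv"])
        z = reparameterize(mu, lv)
        zs.append(z)
        loss += np.mean(recon_error(x, z, w[m + "_dec"])
                        + kl_to_standard_normal(mu, lv))
    if y is not None:  # labeled batch: classify on the fused latents
        probs = softmax(np.concatenate(zs, axis=-1) @ w["clf"])
        loss += -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))
    return loss

# Labeled and unlabeled mini-batches feed a single training objective.
x_t = rng.normal(size=(5, D_TEXT))
x_a = rng.normal(size=(5, D_AUDIO))
labeled_loss = semi_supervised_loss(x_t, x_a, y=np.array([0, 2, 1, 0, 2]))
unlabeled_loss = semi_supervised_loss(x_t, x_a)
```

In a real implementation both losses would be minimized jointly by gradient descent, so the unlabeled pool shapes the shared latent spaces even though only the labeled subset drives the classifier.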

Citation (APA)
Zhang, D., Li, S., Zhu, Q., & Zhou, G. (2020). Multi-Modal Sentiment Classification with Independent and Interactive Knowledge via Semi-Supervised Learning. IEEE Access, 8, 22945–22954. https://doi.org/10.1109/ACCESS.2020.2969205
