Abstract
The development of multimodal media compensates for the limited expressive capacity of any single modality, and multimodal content has thus gradually become the main carrier of sentiment. Automatic assessment of the sentiment conveyed by multimodal content is therefore increasingly important for many applications. To this end, we propose a joint sentiment part topic regression model (JSP) based on latent Dirichlet allocation (LDA), which effectively exploits the complementary information between modalities and strengthens the relationship between the sentiment layer and the multimodal content. Specifically, a linear regression module shares latent variables between image–text pairs, so that one modality can predict the other. Moreover, a sentiment label layer models the relationship between the sentiment distribution parameters and the multimodal content. Experimental results on several datasets verify the feasibility of the proposed approach for multimodal sentiment analysis.
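The cross-modal regression idea can be illustrated with a toy sketch. This is not the authors' JSP implementation: the synthetic topic proportions, the noise model, and the least-squares fit are all assumptions standing in for the latent variables that LDA inference would produce; the sketch only shows how a linear regression over shared topic proportions lets one modality predict the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-document topic proportions (in JSP these would
# be the shared latent variables inferred for each image-text pair).
n_docs, n_topics = 200, 5
image_topics = rng.dirichlet(np.ones(n_topics), size=n_docs)

# Assumption for this sketch: text topics are a noisy linear function
# of image topics, so one modality is predictable from the other.
W_true = rng.normal(size=(n_topics, n_topics))
text_topics = image_topics @ W_true + 0.01 * rng.normal(size=(n_docs, n_topics))

# Linear regression module: fit a map from image topic proportions
# to text topic proportions by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(image_topics, text_topics, rcond=None)
pred = image_topics @ W_hat
mse = float(np.mean((pred - text_topics) ** 2))
print(f"cross-modal prediction MSE: {mse:.4f}")
```

In the full model this regression is learned jointly with the topic and sentiment layers rather than fitted after the fact, but the per-pair sharing of latent variables plays the same predictive role.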
Li, M., Zhu, Y., Gao, W., Cao, M., & Wang, S. (2020). Joint sentiment part topic regression model for multimodal analysis. Information (Switzerland), 11(10), 1–16. https://doi.org/10.3390/info11100486