Recently, Target-oriented Multimodal Sentiment Classification (TMSC) has gained significant attention among scholars. However, current multimodal models have reached a performance bottleneck. To investigate the causes of this problem, we perform extensive empirical evaluation and in-depth analysis of the datasets to answer the following questions: Q1: Are the modalities equally important for TMSC? Q2: Which multimodal fusion modules are more effective? Q3: Do existing datasets adequately support the research? Our experiments and analyses reveal that current TMSC systems rely primarily on the textual modality, as the sentiment of most targets can be determined by text alone. Consequently, we point out several directions for future work on the TMSC task in terms of model design and dataset construction. The code and data can be found at https://github.com/Junjie-Ye/RethinkingTMSC.
CITATION STYLE
Ye, J., Zhou, J., Tian, J., Wang, R., Zhang, Q., Gui, T., & Huang, X. (2023). RethinkingTMSC: An Empirical Study for Target-Oriented Multimodal Sentiment Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 270–277). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.21