Multidimensional Extra Evidence Mining for Image Sentiment Analysis

Abstract

Image sentiment analysis is a hot research topic in the field of computer vision. However, two key issues need to be addressed. First, high-quality training samples are scarce: the original datasets contain numerous ambiguous images owing to the diverse subjective cognitions of different annotators. Second, the cross-modal sentimental semantics among heterogeneous image features has not been fully explored. To alleviate these problems, we propose a novel model called multidimensional extra evidence mining (ME2M) for image sentiment analysis, which involves sample refinement and cross-modal sentimental semantics mining. A new soft voting-based sample-refinement strategy is designed to address the former problem, while the state-of-the-art discriminant correlation analysis (DCA) model is used to fully mine the cross-modal sentimental semantics among diverse image features. Image sentiment analysis is then conducted on the mined cross-modal sentimental semantics with a general classifier. The experimental results verify that the ME2M model is effective and robust and that it outperforms the most competitive baselines on two well-known datasets. Furthermore, its flexible structure makes it versatile.
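
The abstract names two concrete components, soft voting-based sample refinement and DCA-based cross-modal fusion, but does not spell out their exact formulations. The Python sketch below is therefore only illustrative and is not the authors' implementation: it filters ambiguous training images with scikit-learn's soft-voting ensemble and substitutes canonical correlation analysis (CCA) for DCA as the cross-modal fusion step; the feature views, confidence threshold, component count, and classifier choices are all assumptions made for the example.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def refine_samples(feats, labels, keep_threshold=0.6):
    # Drop ambiguous training samples whose soft-voting confidence for the
    # annotated label falls below `keep_threshold` (threshold is assumed,
    # not taken from the paper). Labels are assumed to be 0..K-1 integers.
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100)),
            ("svc", SVC(probability=True)),
        ],
        voting="soft",
    )
    ensemble.fit(feats, labels)
    proba = ensemble.predict_proba(feats)
    conf_for_label = proba[np.arange(len(labels)), labels]
    keep = conf_for_label >= keep_threshold
    return feats[keep], labels[keep]

def fuse_features(view_a, view_b, n_components=16):
    # Project two heterogeneous feature views into a shared correlated
    # subspace and concatenate them; CCA stands in for DCA here.
    cca = CCA(n_components=n_components)
    za, zb = cca.fit_transform(view_a, view_b)
    return np.hstack([za, zb]), cca

# Toy usage with random stand-ins for "deep" and "hand-crafted" feature views.
rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(200, 128))
color_feats = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)

fused, _ = fuse_features(deep_feats, color_feats)
fused_clean, labels_clean = refine_samples(fused, labels)
clf = SVC(probability=True).fit(fused_clean, labels_clean)

The ordering mirrors the pipeline described in the abstract: fuse the heterogeneous views first, refine away low-confidence samples, and then train a general classifier on what remains.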

Citation (APA)

Zhang, H., Wu, J., Shi, H., Jiang, Z., Ji, D., Yuan, T., & Li, G. (2020). Multidimensional Extra Evidence Mining for Image Sentiment Analysis. IEEE Access, 8, 103619–103634. https://doi.org/10.1109/ACCESS.2020.2999128
