From image to text in sentiment analysis via regression and deep learning

11 citations · 63 Mendeley readers

Abstract

Images and text are often used together to convey user emotions in online social networks, and this content is usually associated with a sentiment category. In this paper, we investigate an approach for mapping images to text for three sentiment categories: positive, neutral, and negative. The mapping from images to text is performed with a Kernel Ridge Regression model. We consider two types of image features: i) RGB pixel-value features, and ii) features extracted with a deep learning approach. The experimental evaluation was performed on a Twitter data set containing text, images, and their associated sentiment labels. The results show a difference in performance across sentiment categories: the proposed mapping performs better for the positive category than for the neutral and negative ones. Furthermore, the more complex deep learning features outperform the RGB pixel-value features for all sentiment categories and for larger training sets.
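The core idea of the abstract, regressing from image feature vectors to a text representation, can be sketched with scikit-learn's Kernel Ridge Regression. This is a minimal illustration, not the paper's implementation: the random data, feature dimensions, kernel choice, and hyperparameters are all assumptions for demonstration purposes.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical toy data: 100 images represented as feature vectors
# (e.g. flattened RGB pixel values, the paper's first feature type),
# mapped to vector-space representations of their associated text.
rng = np.random.default_rng(0)
X_train = rng.random((100, 768))   # image features
Y_train = rng.random((100, 50))    # target text-space vectors

# Kernel Ridge Regression with an RBF kernel; the kernel and its
# parameters here are illustrative, not the settings from the paper.
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
krr.fit(X_train, Y_train)          # KernelRidge handles multi-output Y

X_test = rng.random((5, 768))
Y_pred = krr.predict(X_test)       # predicted text-space vectors
print(Y_pred.shape)                # (5, 50)
```

In this setup each test image is mapped to a point in the text representation space; a sentiment label could then be derived from the predicted text vector, which is the kind of image-to-text sentiment pipeline the abstract describes.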

Citation (APA)

Onita, D., Dinu, L. P., & Birlutiu, A. (2019). From image to text in sentiment analysis via regression and deep learning. In International Conference Recent Advances in Natural Language Processing, RANLP (Vol. 2019-September, pp. 862–868). Incoma Ltd. https://doi.org/10.26615/978-954-452-056-4_100
