Associating textual features with visual ones to improve affective image classification

Abstract

Many images carry a strong emotional meaning. In recent years, several studies have attempted to automatically identify the emotions that images may induce in viewers, based on low-level image properties. Since such features can only capture the overall atmosphere of an image, they may fail when the emotional meaning is carried by objects. Additional information is therefore needed, and in this paper we propose to exploit textual information describing the image, such as tags. We have developed two textual features to capture the emotional meaning of the text: one is based on a semantic distance matrix between the text and an emotional dictionary, and the other carries the valence and arousal meanings of words. Experiments were conducted on two datasets to evaluate the visual and textual features and their fusion. The results show that our textual features can improve the classification accuracy of affective images. © 2011 Springer-Verlag.
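The paper itself does not include code, but a minimal sketch of the two textual features described above might look like the following. The emotional dictionary, the valence-arousal lexicon values, and the aggregation choices (maximum WordNet similarity, mean valence/arousal) are all illustrative assumptions, not the authors' actual resources or formulas.

```python
# Sketch of the two tag-based textual features (illustrative, not the authors' code).
# Requires the WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

# Feature 1: semantic distances between image tags and an emotional dictionary.
# This six-word dictionary is a stand-in for the one used in the paper.
EMOTION_DICTIONARY = ["joy", "anger", "fear", "sadness", "surprise", "disgust"]

def semantic_distance_features(tags):
    """For each emotion word, keep the maximum WordNet path similarity
    over all tags (higher similarity = smaller semantic distance)."""
    features = []
    for emotion in EMOTION_DICTIONARY:
        emotion_synsets = wn.synsets(emotion)
        best = 0.0
        for tag in tags:
            for tag_synset in wn.synsets(tag):
                for emotion_synset in emotion_synsets:
                    sim = tag_synset.path_similarity(emotion_synset)
                    if sim is not None and sim > best:
                        best = sim
        features.append(best)
    return features

# Feature 2: mean valence and arousal of the tags, ANEW-style.
# The numeric ratings below are made up for illustration (1-9 scale).
VALENCE_AROUSAL = {
    "sunset": (7.2, 4.1),
    "spider": (3.3, 6.0),
    "beach":  (8.0, 5.0),
}

def valence_arousal_features(tags):
    """Average the valence and arousal ratings of the tags found in the lexicon."""
    scored = [VALENCE_AROUSAL[t] for t in tags if t in VALENCE_AROUSAL]
    if not scored:
        return [5.0, 5.0]  # neutral midpoint when no tag is covered
    valence = sum(v for v, _ in scored) / len(scored)
    arousal = sum(a for _, a in scored) / len(scored)
    return [valence, arousal]

# The concatenated vector could then be fused with visual features in a classifier.
tags = ["sunset", "beach"]
print(semantic_distance_features(tags) + valence_arousal_features(tags))
```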

Citation (APA)

Liu, N., Dellandréa, E., Tellez, B., & Chen, L. (2011). Associating textual features with visual ones to improve affective image classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6974 LNCS, pp. 195–204). Springer Verlag. https://doi.org/10.1007/978-3-642-24600-5_23
