SentiNet: Mining visual sentiment from scratch

Abstract

An image is worth a thousand words for sentiment expression, but the semantic gap between low-level pixels and high-level sentiment makes visual sentiment analysis difficult. Our work bridges this gap from two directions: (1) learning high-level abstract features for visual sentiment content, and (2) exploiting a large-scale unlabeled dataset. We propose a hierarchical structure for the automatic discovery of visual sentiment features, which we call SentiNet and which employs a ConvNet architecture. To cope with the scarcity of labeled data, we leverage sentiment-related signals to pre-annotate unlabeled samples from different source domains. In particular, we propose a hierarchy-stack fine-tuning strategy to train SentiNet. We show how this pipeline can be applied to visual sentiment analysis on social media. Our experiments on real-world data covering half a million unlabeled images and two thousand labeled images show that our method outperforms state-of-the-art visual methods and demonstrate the importance of large-scale data and a hierarchical architecture for visual sentiment analysis.
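The abstract gives no implementation details, so the following is only a minimal sketch of the described pipeline, assuming a PyTorch implementation. The network shape, hyperparameters, and the two-stage routine (pretraining on weakly pre-annotated images, then unfreezing blocks top-down on the small labeled set as a stand-in for the hierarchy-stack fine-tuning strategy) are all assumptions for illustration, not the authors' code.

import torch
import torch.nn as nn

class SentiNetSketch(nn.Module):
    """Hypothetical ConvNet for binary visual sentiment (positive/negative)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Hierarchical feature extractor: stacked conv blocks.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def hierarchy_stack_finetune(model, weak_loader, labeled_loader, epochs=1):
    """Two-stage training sketch: (1) pretrain on weakly pre-annotated
    images, (2) fine-tune on the small labeled set, unfreezing blocks
    from the classifier downward (illustrative schedule only)."""
    loss_fn = nn.CrossEntropyLoss()
    # Stage 1: all parameters trained on the weak (pre-annotated) labels.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    for _ in range(epochs):
        for x, y in weak_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Stage 2: freeze everything, then unfreeze one block per pass,
    # top-down; the optimizer is rebuilt each pass (momentum is reset).
    for p in model.parameters():
        p.requires_grad = False
    blocks = [model.classifier] + list(model.features)[::-1]
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = True
        opt = torch.optim.SGD(
            (p for p in model.parameters() if p.requires_grad),
            lr=1e-3, momentum=0.9)
        for x, y in labeled_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

Blocks without parameters (ReLU, pooling) make the unfreeze step a no-op for that pass, so the schedule effectively steps through the conv layers one at a time.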

Citation (APA)

Li, L., Li, S., Cao, D., & Lin, D. (2017). SentiNet: Mining visual sentiment from scratch. In Advances in Intelligent Systems and Computing (Vol. 513, pp. 309–317). Springer Verlag. https://doi.org/10.1007/978-3-319-46562-3_20
