Capturing dependencies among labels and features for multiple emotion tagging of multimedia data

Abstract

In this paper, we tackle the problem of emotion tagging of multimedia data by modeling the dependencies among multiple emotions in both the feature and label spaces. These dependencies, which carry crucial top-down and bottom-up evidence for improving multimedia affective content analysis, have not yet been thoroughly exploited. To this end, we propose two hierarchical models that independently and dependently learn the shared features and global semantic relationships among emotion labels to jointly tag multiple emotion labels of multimedia data. Efficient learning and inference algorithms for the proposed models are also developed. Experiments on three benchmark emotion databases demonstrate the superior performance of our methods compared with existing methods.
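To make the abstract's two sources of evidence concrete, the following is a minimal illustrative sketch (not the paper's hierarchical models): per-label classifiers on shared features supply bottom-up evidence, and empirical label co-occurrence statistics supply a top-down correction. All names and the toy data are hypothetical.

```python
import numpy as np

# Toy data: 200 clips, 8 shared features, 3 emotion labels whose
# ground truth is correlated (label 2 is made to co-occur with label 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=(8, 3))
Y = (X @ w_true > 0).astype(float)
Y[:, 2] = np.clip(Y[:, 2] + Y[:, 0], 0, 1)  # inject a label dependency

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bottom-up: independent per-label logistic regressions on the shared
# feature space, trained by plain gradient descent.
W = np.zeros((8, 3))
for _ in range(300):
    P = sigmoid(X @ W)
    W -= 0.1 * X.T @ (P - Y) / len(X)

# Top-down: empirical label co-occurrence approximates the global
# semantic relationships among labels; use it to smooth predictions.
C = (Y.T @ Y) / len(Y)          # joint frequency P(label_i, label_j)
prior = np.diag(C)              # marginal frequency P(label_i)
cond = C / prior[:, None]       # cond[i, j] ~ P(label_j | label_i)

P_ind = sigmoid(X @ W)                       # independent predictions
P_joint = 0.7 * P_ind + 0.3 * (P_ind @ cond.T) / 3  # toy smoothing

pred = (P_joint > 0.5).astype(float)
accuracy = (pred == Y).mean()
```

The 0.7/0.3 mixing and the normalization by the number of labels are arbitrary choices for this sketch; the actual models in the paper learn the feature sharing and label relationships jointly rather than in two separate stages.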

Citation (APA)

Wu, S., Wang, S., & Ji, Q. (2017). Capturing dependencies among labels and features for multiple emotion tagging of multimedia data. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 1026–1032). AAAI press. https://doi.org/10.1609/aaai.v31i1.10629
