Video affective content representation and recognition using video affective tree and hidden Markov models

37 Citations
26 Readers (Mendeley)
Abstract

A video affective content representation and recognition framework based on the Video Affective Tree (VAT) and Hidden Markov Models (HMMs) is presented. Video affective content units at different granularities are first located using excitement intensity curves, and the selected units are then used to construct the VAT. According to the excitement intensity curve, the affective intensity of each affective content unit at each level of the VAT can also be quantified into several levels, from weak to strong. A set of mid-level audio and visual affective features, designed to capture emotional characteristics, is extracted to construct observation vectors. HMM-based video affective content recognizers are trained and tested on these observation-vector sequences to recognize the basic emotional events of the audience (joy, anger, sadness, and fear). The experimental results show that the proposed framework is not only suitable for a broad range of video affective understanding applications, but is also capable of representing affective semantics at different granularities. © Springer-Verlag Berlin Heidelberg 2007.
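The paper does not publish code, so the sketch below is only an illustration of the pipeline the abstract describes: locating affective content units from an excitement intensity curve and training one HMM per basic emotion on sequences of mid-level audio-visual observation vectors. The threshold value, feature dimensionality, number of hidden states, and the use of the hmmlearn library are all assumptions made for demonstration, not the authors' implementation.

# Illustrative sketch only; names, thresholds and library choice are assumptions.
import numpy as np
from hmmlearn import hmm

EMOTIONS = ["joy", "anger", "sadness", "fear"]

def locate_affective_units(excitement, threshold=0.6):
    """Return (start, end) index pairs of contiguous runs where the
    excitement intensity curve exceeds a hypothetical threshold."""
    above = excitement >= threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(excitement)]
    return list(zip(starts, ends))

def train_recognizers(training_data, n_states=4):
    """Train one Gaussian HMM per basic emotion. training_data maps each
    emotion to a list of (T_i, D) observation-vector sequences, one per
    affective content unit."""
    recognizers = {}
    for emotion in EMOTIONS:
        sequences = training_data[emotion]
        X = np.vstack(sequences)                      # stacked observations
        lengths = [len(seq) for seq in sequences]     # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        recognizers[emotion] = model
    return recognizers

def recognize(recognizers, observation_sequence):
    """Label an affective unit with the emotion whose HMM assigns the
    highest log-likelihood to its observation-vector sequence."""
    scores = {e: m.score(observation_sequence) for e, m in recognizers.items()}
    return max(scores, key=scores.get)

In this reading, recognition reduces to a maximum-likelihood choice among the four emotion-specific HMMs; how the units are mapped onto VAT levels and how intensity is quantified are left out of the sketch.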

Citation (APA)

Sun, K., & Yu, J. (2007). Video affective content representation and recognition using video affective tree and hidden Markov models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4738 LNCS, pp. 594–605). Springer Verlag. https://doi.org/10.1007/978-3-540-74889-2_52
