Multimodal subjectivity analysis of multiparty conversation


Abstract

We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two modalities, transcribed words and acoustics, and we compare the performance of three different textual representations: words, characters, and phonemes. Our experiments show that character-level features outperform word-level features for these tasks, and that a careful fusion of all features yields the best performance. © 2008 Association for Computational Linguistics.
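The character-level representation the abstract refers to can be illustrated with a minimal sketch (not the paper's implementation; the padding symbol and n-gram range are assumptions for illustration): extracting character n-grams within word boundaries, the kind of textual feature reported to outperform word-level features here.

```python
# Hedged sketch: character n-gram feature extraction for an utterance.
# Word boundaries are marked with '#' so grams do not cross words
# (an assumed convention, not necessarily the paper's exact setup).
def char_ngrams(text, n_min=2, n_max=3):
    """Return character n-grams of length n_min..n_max per word."""
    grams = []
    for word in text.lower().split():
        padded = "#" + word + "#"  # mark start/end of word
        for n in range(n_min, n_max + 1):
            for i in range(len(padded) - n + 1):
                grams.append(padded[i:i + n])
    return grams

print(char_ngrams("not bad"))
```

Such grams would then feed a standard classifier alongside the acoustic features, with fusion across modalities handled separately.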

Citation (APA)

Raaijmakers, S., Truong, K., & Wilson, T. (2008). Multimodal subjectivity analysis of multiparty conversation. In EMNLP 2008 - 2008 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference: A Meeting of SIGDAT, a Special Interest Group of the ACL (pp. 466–474). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1613715.1613774
