We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two modalities, transcribed words and acoustics, and we compare the performance of three different textual representations: words, characters, and phonemes. Our experiments show that character-level features outperform word-level features for these tasks, and that a careful fusion of all features yields the best performance. © 2008 Association for Computational Linguistics.
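The character-level text representation the abstract compares against word-level features can be sketched as overlapping character n-gram extraction. The function below is a minimal illustration of that idea, not the authors' implementation; the boundary-padding convention and function name are our own assumptions:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count overlapping character n-grams, marking word boundaries with '_'.

    Padding with '_' lets n-grams at word edges (prefixes/suffixes) form
    distinct features from word-internal n-grams.
    """
    padded = "_" + text.replace(" ", "_") + "_"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

# Character n-grams capture sub-word cues (stems, affixes) that whole-word
# features miss, e.g. "bad" appears inside both "bad" and "badly".
print(char_ngrams("not bad"))
```

Such counts would then feed a standard classifier; the fusion the abstract describes combines these textual features with acoustic ones.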
Citation:
Raaijmakers, S., Truong, K., & Wilson, T. (2008). Multimodal subjectivity analysis of multiparty conversation. In EMNLP 2008 - 2008 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference: A Meeting of SIGDAT, a Special Interest Group of the ACL (pp. 466–474). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1613715.1613774