Sentiment classification via a response recalibration framework


Abstract

Probabilistic learning models can be calibrated to improve performance on tasks such as sentiment classification. In this paper, we introduce a sentiment classification framework that recalibrates a classifier when related, context-bearing documents are available. We investigate probabilistic thresholding and document-similarity-based recalibration methods to improve classifier performance. We evaluate the proposed recalibration methods on a dataset of online clinical reviews from the patient feedback domain, each accompanied by a management response that carries sentiment-bearing information. Experimental results show that the proposed recalibration methods outperform uncalibrated supervised machine learning models trained for sentiment analysis and yield significant improvements over a robust baseline.
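The abstract names two concrete mechanisms: probabilistic thresholding and document-similarity-based recalibration. The sketch below illustrates one plausible way the two could combine, assuming a base classifier that outputs a positive-class probability for both a review and its management response. The function name, threshold value, and combination rule here are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# A minimal sketch of threshold-gated, similarity-weighted recalibration.
# All names (recalibrate, THRESHOLD, ...) are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

THRESHOLD = 0.6  # assumed confidence cut-off below which recalibration kicks in

def recalibrate(review, response, p_review, p_response, vectorizer):
    """Blend a review's positive-class probability with that of its
    adjoining management response, weighted by their textual similarity,
    but only when the base classifier is uncertain about the review."""
    if max(p_review, 1.0 - p_review) >= THRESHOLD:
        return p_review  # confident prediction: leave it uncalibrated

    # Cosine similarity between review and response in TF-IDF space.
    vecs = vectorizer.transform([review, response])
    sim = cosine_similarity(vecs[0:1], vecs[1:2])[0, 0]

    # Shift the review's probability toward the response's sentiment
    # in proportion to how related the two documents are.
    return (1.0 - sim) * p_review + sim * p_response

# Example wiring with hypothetical data: fit the vectorizer on the corpus
# and obtain p_review / p_response from any probabilistic classifier.
review = "the staff were rude and the wait was far too long"
response = "we are sorry to hear about your experience and will address the delays"
vectorizer = TfidfVectorizer().fit([review, response])
adjusted = recalibrate(review, response,
                       p_review=0.55,    # uncertain base prediction for the review
                       p_response=0.20,  # base prediction for the response
                       vectorizer=vectorizer)
print(round(adjusted, 3))
```

In this sketch the threshold gates recalibration so confident predictions pass through untouched, and the similarity score decides how strongly the response's sentiment pulls on the review's probability; the paper's actual combination rule may differ.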

Citation (APA)

Smith, P., & Lee, M. (2015). Sentiment classification via a response recalibration framework. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA 2015) (pp. 175–180). Association for Computational Linguistics. https://doi.org/10.18653/v1/w15-2925
