CORT: A New Baseline for Comparative Opinion Classification by Dual Prompts

Abstract

Comparative opinion is a common linguistic phenomenon. The opinion is expressed by comparing multiple targets on a shared aspect, e.g., “camera A is better than camera B in picture quality”. Among the various subtasks in opinion mining, comparative opinion classification is relatively less studied. Current solutions use rules or classifiers to identify opinions, i.e., better, worse, or same, through feature engineering. Because the features are derived directly from the input sentence, these solutions are sensitive to the order in which the targets are mentioned. For example, “camera A is better than camera B” means the same as “camera B is worse than camera A”, yet the features of these two sentences are completely different. In this paper, we approach comparative opinion classification through prompt learning, taking advantage of the knowledge embedded in pre-trained language models. We design a twin framework with dual prompts, named CORT. This extremely simple model delivers robust, state-of-the-art performance on all benchmark datasets for comparative opinion classification. We believe CORT serves well as a new baseline for comparative opinion classification.
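
The abstract does not spell out CORT's templates, verbalizer, or training procedure, but the dual-prompt idea can be illustrated with a short sketch: build one prompt with the targets in their original order and a twin prompt with the targets swapped, flip the better/worse axis of the swapped prediction, and combine the two distributions so the classifier is insensitive to target order. The template wording, the verbalizer words, the averaging step, and the zero-shot use of bert-base-uncased below are all illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch of dual-prompt comparative opinion classification with a
# masked language model. Template, verbalizer, and combination strategy are
# assumptions for illustration; CORT's actual design may differ.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: one answer word per class (assumed single-token words).
LABELS = ["better", "worse", "same"]
# Swapping the two targets flips "better"/"worse" but leaves "same" intact.
FLIP = {"better": "worse", "worse": "better", "same": "same"}

def label_probs(sentence: str, target_a: str, target_b: str) -> torch.Tensor:
    """Score the verbalizer words at the [MASK] position of one prompt."""
    prompt = (f"{sentence} Compared with {target_b}, "
              f"{target_a} is {tokenizer.mask_token} .")
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    ids = [tokenizer.convert_tokens_to_ids(w) for w in LABELS]
    return torch.softmax(logits[ids], dim=-1)

def classify(sentence: str, target_a: str, target_b: str) -> str:
    # Prompt 1: original target order.
    p1 = label_probs(sentence, target_a, target_b)
    # Prompt 2 (the twin): swapped order. Flip its label axis before
    # combining, so both distributions describe target_a.
    p2 = label_probs(sentence, target_b, target_a)
    flipped = torch.stack([p2[LABELS.index(FLIP[l])] for l in LABELS])
    avg = (p1 + flipped) / 2
    return LABELS[int(avg.argmax())]

print(classify("Camera A is better than camera B in picture quality.",
               "camera A", "camera B"))
```

Because the swapped prompt contributes a prediction that is flipped back onto the original target order, sentences such as “camera A is better than camera B” and “camera B is worse than camera A” yield the same combined distribution, which is the order-robustness the abstract motivates.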

Cite

APA

Wang, Y., Zhang, H., Sun, A., & Meng, X. (2022). CORT: A New Baseline for Comparative Opinion Classification by Dual Prompts. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 7093–7104). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.524
