Exploiting Position Bias for Robust Aspect Sentiment Classification


Abstract

Aspect sentiment classification (ASC) aims at determining the sentiment expressed towards each aspect in a sentence. While state-of-the-art ASC models have achieved remarkable performance, they have recently been shown to suffer from a lack of robustness. In two common scenarios in particular, namely when the domains of the test and training data differ (the out-of-domain scenario) and when the test data is adversarially perturbed (the adversarial scenario), ASC models may attend to irrelevant words and neglect the opinion expressions that truly describe the aspects. To tackle this challenge, we hypothesize that position bias (i.e., the assumption that words closer to the aspect in question carry a higher degree of importance) is crucial for building more robust ASC models, as it reduces the probability of attending to the wrong words. Accordingly, we propose two mechanisms for capturing position bias, namely position-biased weight and position-biased dropout, which can be flexibly injected into existing models to enhance representations for classification. Experiments conducted on out-of-domain and adversarial datasets demonstrate that our proposed approaches largely improve the robustness and effectiveness of current models.
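The intuition behind the two mechanisms can be illustrated with a minimal sketch (hypothetical helper names and a simple linear decay; the paper's actual mechanisms operate inside the model, not on raw token lists):

```python
import random

def position_weights(n_tokens, aspect_start, aspect_end):
    """Position-biased weight (sketch): assign each token a weight of 1.0
    inside the aspect span, decaying linearly with distance from the span,
    so nearby opinion words count more than distant ones."""
    weights = []
    for i in range(n_tokens):
        if aspect_start <= i < aspect_end:
            d = 0                          # token is part of the aspect
        elif i < aspect_start:
            d = aspect_start - i           # distance to the left edge
        else:
            d = i - aspect_end + 1         # distance to the right edge
        weights.append(1.0 - d / n_tokens)
    return weights

def position_dropout(tokens, aspect_start, aspect_end, seed=0):
    """Position-biased dropout (sketch): keep each token with probability
    equal to its position weight, so words far from the aspect are
    dropped more often during training."""
    rng = random.Random(seed)
    w = position_weights(len(tokens), aspect_start, aspect_end)
    return [t for t, p in zip(tokens, w) if rng.random() < p]
```

For example, with the aspect "food" in "the food was great but service slow", tokens near "food" receive weights close to 1.0 and survive dropout more often, while distant tokens are down-weighted or dropped.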

Citation (APA)

Ma, F., Zhang, C., & Song, D. (2021). Exploiting Position Bias for Robust Aspect Sentiment Classification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1352–1358). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.116
