On the Strength of Sequence Labeling and Generative Models for Aspect Sentiment Triplet Extraction


Abstract

Generative models have achieved great success in the aspect sentiment triplet extraction task. However, existing methods ignore the mutually informative clues between aspect and opinion terms and may generate falsely paired triplets. Furthermore, the inherent limitations of generative models, i.e., token-by-token decoding and the simple structured prompt, prevent them from handling complex structures, especially multi-word terms and multi-triplet sentences. To address these issues, we propose a sequence labeling enhanced generative model. First, we encode the dependency between aspect and opinion terms into two bidirectional templates to avoid falsely paired triplets. Second, we introduce a marker-oriented sequence labeling module to improve the generative model's ability to tackle complex structures. Specifically, this module enables the generative model to capture the boundary information of aspect/opinion spans and provides hints for decoding multiple triplets with the shared marker. Experimental results on four datasets show that our model achieves new state-of-the-art performance. Our code and data are available at https://github.com/NLPWM-WHU/SLGM.
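For readers unfamiliar with template-based generative ASTE, the sketch below illustrates how the two bidirectional templates mentioned in the abstract might be constructed from gold triplets. The template wording, the "[SSEP]" separator marker, and the function name are illustrative assumptions, not the authors' released implementation (see https://github.com/NLPWM-WHU/SLGM for the actual code).

```python
# A minimal, hypothetical sketch of the bidirectional-template idea: each gold
# triplet (aspect, opinion, sentiment) is rendered into two target sequences,
# one aspect-first and one opinion-first, so the generative model observes the
# aspect-opinion dependency in both directions. All template phrasing and the
# [SSEP] marker are assumptions made for illustration only.

from typing import List, Tuple

Triplet = Tuple[str, str, str]  # (aspect term, opinion term, sentiment polarity)

def build_bidirectional_templates(triplets: List[Triplet]) -> Tuple[str, str]:
    """Render gold triplets into aspect-first and opinion-first target strings.

    Multiple triplets in one sentence are joined with a shared separator marker,
    which gives the decoder an explicit hint about where one triplet ends and
    the next begins.
    """
    a2o_parts, o2a_parts = [], []
    for aspect, opinion, sentiment in triplets:
        a2o_parts.append(f"{aspect} is {sentiment} because of {opinion}")
        o2a_parts.append(f"{opinion} describes {aspect} as {sentiment}")
    return " [SSEP] ".join(a2o_parts), " [SSEP] ".join(o2a_parts)

if __name__ == "__main__":
    # Example sentence: "The battery life is great but the screen is dim."
    gold = [("battery life", "great", "positive"), ("screen", "dim", "negative")]
    aspect_first, opinion_first = build_bidirectional_templates(gold)
    print(aspect_first)   # battery life is positive because of great [SSEP] screen is negative because of dim
    print(opinion_first)  # great describes battery life as positive [SSEP] dim describes screen as negative
```

In such a setup, the two target strings would serve as alternative decoding views during training, and the shared separator marker is what the abstract's marker-oriented sequence labeling module could hook into when decoding multi-triplet sentences.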

Cite

APA

Zhou, S., & Qian, T. (2023). On the Strength of Sequence Labeling and Generative Models for Aspect Sentiment Triplet Extraction. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 12038–12050). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.762
