Syntax-aware representation for aspect term extraction


Abstract

Aspect Term Extraction (ATE) plays an important role in aspect-based sentiment analysis. Syntax-based neural models that learn rich linguistic knowledge have proven effective for ATE. However, previous approaches mainly focus on modeling syntactic structure while neglecting the rich interactions along dependency arcs. Moreover, these methods rely heavily on the output of dependency parsing and are sensitive to parsing noise. In this work, we introduce a syntax-directed attention network and a contextual gating mechanism to address these issues. Specifically, a graph neural network models interactions along dependency arcs; with the help of syntax-directed self-attention, it can operate directly on the syntactic graph and capture structural information. We further introduce a gating mechanism that synthesizes syntactic information with structure-free features, reducing the effect of parsing noise. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on three widely used benchmark datasets.
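The two components named in the abstract can be sketched concretely: self-attention restricted to dependency arcs, and a gate that blends syntax-aware features with structure-free ones. The PyTorch sketch below is one plausible reading of those ideas; the module names, shapes, and the exact gate formulation are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxDirectedAttention(nn.Module):
    """Self-attention restricted to dependency arcs via an adjacency mask
    (a hypothetical rendering of the paper's syntax-directed self-attention)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, h, adj):
        # h:   (batch, seq, dim) token representations
        # adj: (batch, seq, seq) float dependency adjacency matrix (1.0 = arc)
        eye = torch.eye(adj.size(-1), device=adj.device).unsqueeze(0)
        adj = (adj + eye).clamp(max=1)  # self-loops keep every softmax row defined
        scores = self.q(h) @ self.k(h).transpose(-2, -1) * self.scale
        # Block attention between token pairs not linked by a dependency arc.
        scores = scores.masked_fill(adj == 0, float('-inf'))
        return F.softmax(scores, dim=-1) @ self.v(h)

class ContextualGate(nn.Module):
    """Gate blending syntactic and structure-free features; when the parse is
    noisy, the learned gate can down-weight the syntactic path."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_syn, h_ctx):
        g = torch.sigmoid(self.gate(torch.cat([h_syn, h_ctx], dim=-1)))
        return g * h_syn + (1 - g) * h_ctx

# Usage with random inputs: batch of 2 sentences, 5 tokens, 64-dim states.
h = torch.randn(2, 5, 64)
adj = torch.randint(0, 2, (2, 5, 5)).float()
syn = SyntaxDirectedAttention(64)(h, adj)
out = ContextualGate(64)(syn, h)  # h stands in for structure-free features

The key design point is that the mask makes attention weights nonzero only along parsed arcs, while the sigmoid gate gives the model a structure-free fallback rather than forcing it to trust the parser.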

Citation (APA)

Zhang, J., Xu, G., Wang, X., Sun, X., & Huang, T. (2019). Syntax-aware representation for aspect term extraction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11439 LNAI, pp. 123–134). Springer Verlag. https://doi.org/10.1007/978-3-030-16148-4_10
