Syntax-aware opinion role labeling with dependency graph convolutional networks

27 citations · 116 Mendeley readers

Abstract

Opinion role labeling (ORL) is a fine-grained opinion analysis task that aims to answer "who expressed what kind of sentiment towards what?". Due to the scarcity of labeled data, ORL remains challenging for data-driven methods. In this work, we try to enhance neural ORL models with syntactic knowledge by comparing and integrating different representations. We also propose dependency graph convolutional networks (DEPGCN) to encode parser information at different processing levels. To compensate for parser inaccuracy and reduce error propagation, we introduce multi-task learning (MTL) to train the parser and the ORL model simultaneously. We verify our methods on the benchmark MPQA corpus. The experimental results show that syntactic information is highly valuable for ORL, and our final MTL model effectively boosts the F1 score by 9.29 points over the syntax-agnostic baseline. In addition, we find that the contributions from syntactic knowledge do not fully overlap with those of contextualized word representations (BERT). Our best model achieves a 4.34-point higher F1 score than the current state of the art.
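To illustrate the core idea behind a dependency graph convolutional network, the sketch below implements a single generic GCN layer over a dependency tree in NumPy. This is a minimal, hypothetical illustration of the general technique, not the authors' exact DEPGCN architecture: the adjacency construction (symmetric edges with self-loops) and the row-normalized aggregation are common choices, and all names and dimensions here are assumptions for the example.

```python
import numpy as np

def dep_gcn_layer(H, heads, W, b):
    """One dependency-GCN layer (generic sketch, not the paper's exact model).

    H:     (n, d) token representations (e.g. BiLSTM outputs).
    heads: head index for each token from a dependency parse; -1 marks the root.
    W, b:  layer parameters, W of shape (d, d), b of shape (d,).

    Builds a symmetric adjacency matrix with self-loops from the parse tree,
    row-normalizes it, and aggregates neighbor features: ReLU(A_hat @ H @ W + b).
    """
    n = H.shape[0]
    A = np.eye(n)                                  # self-loops keep each token's own features
    for dep, head in enumerate(heads):
        if head >= 0:                              # skip the virtual root attachment
            A[dep, head] = A[head, dep] = 1.0      # undirected dependency edge
    A_hat = A / A.sum(axis=1, keepdims=True)       # row-normalize so degrees are comparable
    return np.maximum(0.0, A_hat @ H @ W + b)      # ReLU activation

# Toy usage: 4 tokens, hidden size 8, with a hypothetical parse
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)
heads = [1, -1, 1, 2]                              # token 1 is the root
out = dep_gcn_layer(H, heads, W, b)
print(out.shape)                                   # (4, 8)
```

Stacking several such layers lets each token's representation incorporate information from syntactic neighbors several hops away, which is what makes parser output useful for locating opinion holders and targets.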

Citation (APA)

Zhang, B., Zhang, Y., Wang, R., Li, Z., & Zhang, M. (2020). Syntax-aware opinion role labeling with dependency graph convolutional networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3249–3258). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.297
