Multi-grain Representation Learning for Implicit Discourse Relation Recognition

Abstract

In this paper, we analyze narrative text spans (also called arguments) and focus on recognizing the semantic relations between them. Because larger-grain linguistic units (such as phrases and chunks) are inherently cohesive in semantics, they generally contribute more than individual words to the representation of sentence-level text spans. Motivated by this, we propose a multi-grain representation learning method that uses convolution filters of different sizes to form larger-grain linguistic units. Methodologically, a Bi-LSTM based attention mechanism strengthens the representation at the most suitable grain, which is then concatenated with the word-level representation to form a multi-grain representation. In addition, we employ a bidirectional interactive attention mechanism to focus on the key information in the arguments. Experimental results on the Penn Discourse TreeBank show that the proposed method is effective.
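The core idea of the abstract — forming larger-grain units with convolution filters of different widths, pooling each grain with attention, and concatenating the results with the word-level representation — can be illustrated with a toy NumPy sketch. This is a simplification, not the paper's model: it uses a simple mean-query attention pooling in place of the Bi-LSTM based and bidirectional interactive attention mechanisms, and all dimensions, filter widths, and function names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution over the sequence axis, forming larger-grain units.
    x: (seq_len, dim) token embeddings; w: (width, dim, dim) filter weights."""
    width = w.shape[0]
    return np.stack([
        np.tanh(sum(x[i + k] @ w[k] for k in range(width)))
        for i in range(x.shape[0] - width + 1)
    ])  # (seq_len - width + 1, dim)

def attention_pool(h):
    """Soft attention over time steps: softmax-weighted sum of the rows of h.
    (Stands in for the paper's Bi-LSTM based attention.)"""
    scores = h @ h.mean(axis=0)           # relevance of each step to the mean
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ h                    # (dim,)

def multi_grain(x, filters):
    """Concatenate the pooled word-level representation with pooled
    larger-grain (phrase/chunk-level) representations."""
    grains = [attention_pool(x)]          # word-level grain
    for w in filters:                     # one grain per filter width
        grains.append(attention_pool(conv1d(x, w)))
    return np.concatenate(grains)

# Toy argument: 10 tokens, 8-dim embeddings; filters of width 2 and 3
# approximate bigram- and trigram-level linguistic units.
seq_len, dim = 10, 8
x = rng.standard_normal((seq_len, dim))
filters = [rng.standard_normal((w, dim, dim)) * 0.1 for w in (2, 3)]
rep = multi_grain(x, filters)
print(rep.shape)  # (24,): word + bigram + trigram grains, dim each
```

In the actual model, two such representations (one per argument) would additionally attend to each other before classification; the sketch only covers the single-argument multi-grain encoding.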

Citation (APA)

Sun, Y., Ruan, H., Hong, Y., Wu, C., Zhang, M., & Zhou, G. (2019). Multi-grain Representation Learning for Implicit Discourse Relation Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11838 LNAI, pp. 725–736). Springer. https://doi.org/10.1007/978-3-030-32233-5_56
