In this paper we analyze narrative text spans (also called arguments) and focus solely on recognizing the semantic relations between them. Because larger-grain linguistic units (such as phrases and chunks) are inherently cohesive in meaning, they generally contribute more than individual words to the representation of sentence-level text spans. Building on this observation, we propose a multi-grain representation learning method that uses convolution filters of different widths to form larger-grain linguistic units. A Bi-LSTM-based attention mechanism then strengthens the representation at the most suitable grain, which is concatenated with the word-level representation to form the multi-grain representation. In addition, we employ a bidirectional interactive attention mechanism to focus on the key information in each argument. Experimental results on the Penn Discourse TreeBank show that the proposed method is effective.
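The pipeline the abstract describes can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the filter weights, the mean-pooled query (standing in for the Bi-LSTM state), and the embedding sizes are all hypothetical, and the attention here is plain dot-product scoring.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding size (hypothetical)

def conv_grain(x, width, W):
    # Slide a width-`width` filter over the word sequence to form
    # larger-grain units (phrase-like spans), projected back to d dims.
    return np.tanh(np.stack([x[i:i + width].reshape(-1)
                             for i in range(len(x) - width + 1)]) @ W)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def multi_grain_repr(words, filters):
    # Attend over larger-grain units, then concatenate the attended grain
    # summary with the word-level summary to get the multi-grain vector.
    grains = np.concatenate([conv_grain(words, w, W) for w, W in filters], axis=0)
    q = words.mean(axis=0)                    # query (stand-in for a Bi-LSTM state)
    grain_vec = softmax(grains @ q) @ grains  # attention-weighted grain summary
    return np.concatenate([q, grain_vec])     # (2d,) multi-grain representation

# hypothetical filter weights for bigram and trigram grains
filters = [(w, rng.normal(size=(w * d, d)) / np.sqrt(w * d)) for w in (2, 3)]

arg1 = rng.normal(size=(7, d))   # word embeddings of the two arguments
arg2 = rng.normal(size=(9, d))

# bidirectional interactive attention: each argument attends to the other's words
a12 = softmax(arg1.mean(axis=0) @ arg2.T) @ arg2   # arg1's view of arg2
a21 = softmax(arg2.mean(axis=0) @ arg1.T) @ arg1   # arg2's view of arg1

pair = np.concatenate([multi_grain_repr(arg1, filters),
                       multi_grain_repr(arg2, filters), a12, a21])
print(pair.shape)   # → (96,): a fixed-size feature for relation classification
```

The resulting pair vector would feed a standard classifier over the discourse relation labels; in the paper this role is played by the full trained model rather than the random projections used above.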
Citation
Sun, Y., Ruan, H., Hong, Y., Wu, C., Zhang, M., & Zhou, G. (2019). Multi-grain Representation Learning for Implicit Discourse Relation Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11838 LNAI, pp. 725–736). Springer. https://doi.org/10.1007/978-3-030-32233-5_56