Abstract
Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) higher-order parsers, which jointly predict the tree structure of the discourse and the relations it encodes; and (b) linear-time parsers, which are efficient but mostly rely on local features. In this work, we propose a linear-time parser with a novel neural-network-based representation of discourse constituents that takes global contextual information into account and can capture long-distance dependencies. Experimental results show that our parser obtains state-of-the-art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering.
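To make the two ideas in the abstract concrete, below is a minimal sketch (in PyTorch, which the paper does not prescribe) of (1) contextually informed constituent representations, here obtained from a bidirectional LSTM over EDU encodings, and (2) a linear-time shift-reduce loop that builds a binary discourse tree. All class names, dimensions, the averaging composition, and the greedy decoding are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a linear-time neural discourse parser.
# Assumptions (not from the paper): PyTorch, mean-pooled EDU encodings,
# a Bi-LSTM for document context, and greedy shift-reduce decoding.
import torch
import torch.nn as nn

class DiscourseParser(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, hidden_dim=64, num_relations=18):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM over the EDU sequence injects global document context
        # into each constituent's representation.
        self.context = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Scores the two transitions (shift / reduce) from the top two
        # stack items and the next queue item.
        self.action = nn.Linear(6 * hidden_dim, 2)
        # Labels a reduce with a discourse relation.
        self.relation = nn.Linear(4 * hidden_dim, num_relations)

    def encode_edus(self, edu_token_ids):
        # edu_token_ids: list of LongTensors, one per EDU.
        # Average word embeddings per EDU (a crude stand-in for a learned
        # composition), then contextualise with the Bi-LSTM.
        edu_vecs = torch.stack([self.embed(t).mean(0) for t in edu_token_ids])
        ctx, _ = self.context(edu_vecs.unsqueeze(0))
        return ctx.squeeze(0)  # (num_edus, 2 * hidden_dim)

    def parse(self, edu_token_ids):
        # Greedy shift-reduce: each EDU is shifted exactly once and each
        # reduce removes one stack item, so decoding is linear-time.
        reps = self.encode_edus(edu_token_ids)
        stack, tree = [], []
        queue = [(i, reps[i]) for i in range(len(reps))]
        while queue or len(stack) > 1:
            if len(stack) < 2:
                act = 0  # must shift
            elif not queue:
                act = 1  # must reduce
            else:
                feats = torch.cat([stack[-2][1], stack[-1][1], queue[0][1]])
                act = self.action(feats).argmax().item()
            if act == 0:
                stack.append(queue.pop(0))
            else:
                right, left = stack.pop(), stack.pop()
                rel = self.relation(torch.cat([left[1], right[1]])).argmax().item()
                tree.append((left[0], right[0], rel))
                # Parent representation: mean of the children (a learned
                # composition would go here; this is a placeholder).
                stack.append(((left[0], right[0]), (left[1] + right[1]) / 2))
        return tree
```

Greedy transition decoding is one standard way to obtain the linear-time behaviour the abstract describes: a shift-reduce derivation over n EDUs takes exactly 2n - 1 actions (n shifts and n - 1 reduces).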
Citation
Liu, Y., & Lapata, M. (2017). Learning contextually informed representations for linear-time discourse parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 1289–1298). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1133