How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT

54 citations · 117 readers (Mendeley)

Abstract

This article evaluates purported progress in RST discourse parsing over the past few years. Several studies report relative error reductions of 24% to 51% on all metrics, which their authors attribute to the introduction of distributed representations of discourse units. We replicate the standard evaluation of nine parsers, five of which use distributed representations, from eight studies published between 2013 and 2017, using their predictions on the test set of the RST-DT. Our main finding is that most recently reported increases in RST discourse parser performance are an artefact of differences in implementations of the evaluation procedure. We evaluate all these parsers with the standard Parseval procedure to provide a more accurate picture of their actual performance in standard evaluation settings. Under this more stringent procedure, the gains attributable to distributed representations amount to at most a 16% relative error reduction on fully-labelled structures.
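The Parseval procedure mentioned above scores a predicted discourse tree against the gold tree by reducing each tree to its set of labelled constituent spans and computing precision, recall, and F1 over the matched spans. A minimal sketch, not the authors' actual evaluation code; the span representation `(start, end, label)` and all names here are illustrative assumptions:

```python
def parseval_f1(gold_spans, pred_spans):
    """Micro-averaged Parseval F1: a span counts as correct only if its
    boundaries AND its label exactly match a gold span."""
    gold = set(gold_spans)
    pred = set(pred_spans)
    matched = len(gold & pred)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: spans as (start EDU, end EDU, relation label).
# Two of three predicted spans match the gold tree exactly.
gold = [(0, 1, "Elaboration"), (0, 3, "Contrast"), (2, 3, "Attribution")]
pred = [(0, 1, "Elaboration"), (0, 3, "Background"), (2, 3, "Attribution")]
print(round(parseval_f1(gold, pred), 3))  # 0.667
```

Scoring fully-labelled structures this way is stricter than evaluating spans, nuclearity, and relations separately, which is why the reported error reductions shrink under this procedure.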

Citation (APA)

Morey, M., Muller, P., & Asher, N. (2017). How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1319–1324). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1136
