A cross-domain transferable neural coherence model


Abstract

Coherence is an important aspect of text quality and is crucial for ensuring its readability. One important limitation of existing coherence models is that training on one domain does not easily generalize to unseen categories of text. Previous work (Li and Jurafsky, 2017) advocates generative models for cross-domain generalization, because for discriminative models the space of incoherent sentence orderings to discriminate against during training is prohibitively large. In this work, we propose a local discriminative neural model with a much smaller negative sampling space that can efficiently learn against incorrect orderings. The proposed coherence model is simple in structure, yet it significantly outperforms previous state-of-the-art methods on a standard benchmark based on the Wall Street Journal corpus, as well as in multiple new, challenging settings of transfer to unseen categories of discourse on Wikipedia articles.
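The key efficiency argument in the abstract is that a local model only needs to discriminate between adjacent sentence pairs and corrupted ones, rather than between full document orderings. A minimal sketch of how such training pairs might be constructed (illustrative only, not the authors' code): positives are consecutive sentence pairs from the original order, and negatives replace the second sentence with one sampled from elsewhere in the document, so the candidate space per pair grows linearly with document length instead of factorially over whole orderings.

```python
import random

def make_training_pairs(sentences, num_negatives=1, seed=0):
    """Build (first, second, label) pairs for a local coherence model.

    Positives (label 1): consecutive sentence pairs in the original order.
    Negatives (label 0): the second sentence is replaced by a sentence
    sampled from elsewhere in the document, giving an O(n) negative
    space per pair instead of the O(n!) space of full orderings.
    """
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        pairs.append((sentences[i], sentences[i + 1], 1))  # coherent pair
        # Candidate negatives: any sentence other than the pair itself.
        candidates = [s for j, s in enumerate(sentences) if j not in (i, i + 1)]
        for _ in range(num_negatives):
            pairs.append((sentences[i], rng.choice(candidates), 0))
    return pairs

doc = ["A", "B", "C", "D"]
pairs = make_training_pairs(doc)
```

A discriminative scorer would then be trained to rank each positive pair above its sampled negatives; document-level coherence can be read off by aggregating the local pair scores.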

Citation (APA)

Xu, P., Saghir, H., Kang, J. S., Long, T., Bose, A. J., Cao, Y., & Cheung, J. C. K. (2020). A cross-domain transferable neural coherence model. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 678–687). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1067
