Rethinking coherence modeling: Synthetic vs. downstream tasks

Abstract

Although coherence modeling has come a long way in terms of developing novel models, the evaluation of these models on the downstream applications for which they are purportedly developed has largely been neglected. With the advances made by neural approaches in applications such as machine translation (MT), summarization, and dialog systems, the need for coherence evaluation of these tasks is now more crucial than ever. However, coherence models are typically evaluated only on synthetic tasks, which may not be representative of their performance in downstream applications. To investigate how representative the synthetic tasks are of downstream use cases, we benchmark well-known traditional and neural coherence models on synthetic sentence-ordering tasks and contrast this with their performance on three downstream applications: coherence evaluation for MT and summarization, and next-utterance prediction in retrieval-based dialog. Our results demonstrate a weak correlation between model performance on the synthetic tasks and on the downstream applications, motivating alternate training and evaluation methods for coherence models.
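To make the two evaluation settings concrete, here is a minimal Python sketch, not the authors' code, of the standard sentence-ordering discrimination test (a coherence model should score an original document above a shuffled copy) and of the rank correlation one might compute between synthetic-task and downstream model scores. The `score_coherence` callable is an assumed placeholder for any coherence model's document scorer; the permutation count and the use of `scipy.stats.spearmanr` are illustrative choices.

```python
# Minimal sketch (assumed names, not the paper's code) of the two measurements
# the abstract contrasts: synthetic sentence-ordering discrimination accuracy,
# and the rank correlation between synthetic and downstream model scores.
import random
from typing import Callable, List

from scipy.stats import spearmanr


def discrimination_accuracy(
    documents: List[List[str]],
    score_coherence: Callable[[List[str]], float],  # assumed model interface
    permutations_per_doc: int = 20,
    seed: int = 0,
) -> float:
    """Fraction of (original, shuffled) pairs where the original scores higher."""
    rng = random.Random(seed)
    wins, total = 0, 0
    for sentences in documents:
        original_score = score_coherence(sentences)
        for _ in range(permutations_per_doc):
            shuffled = sentences[:]
            rng.shuffle(shuffled)
            if shuffled == sentences:
                continue  # skip identity permutations of very short documents
            total += 1
            if original_score > score_coherence(shuffled):
                wins += 1
    return wins / total if total else 0.0


def synthetic_vs_downstream_correlation(
    synthetic_scores: List[float],   # one score per coherence model
    downstream_scores: List[float],  # same models, in the same order
) -> float:
    """Spearman rank correlation between the two model rankings."""
    rho, _pvalue = spearmanr(synthetic_scores, downstream_scores)
    return rho
```

A weak or near-zero correlation from the second function is the kind of result the abstract reports: a model's ranking on the shuffled-sentence test says little about its ranking on MT, summarization, or dialog evaluation.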

Citation (APA)

Mohiuddin, T., Jwalapuram, P., Lin, X., & Joty, S. (2021). Rethinking coherence modeling: Synthetic vs. downstream tasks. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3528–3539). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.308
