Incremental Neural Lexical Coherence Modeling

Abstract

Pretrained language models, neural models pretrained on massive amounts of data, have established the state of the art in a range of NLP tasks. They are based on a modern machine-learning technique, the Transformer, which relates all items in a sequence simultaneously to capture semantic relations. However, this differs from how humans read: humans process sentences one by one, incrementally. Can neural models benefit from interpreting texts incrementally, as humans do? We investigate this question in coherence modeling. We propose a coherence model that interprets sentences incrementally to capture lexical relations between them. On two downstream tasks, we compare our model with the state of the art in each task and with simple neural models relying on a pretrained language model. Our findings suggest that interpreting texts incrementally, as humans do, could be useful for designing more advanced models.
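
To make the idea concrete, below is a minimal, illustrative sketch in Python, not the authors' model: it reads sentences one by one and relates each new sentence lexically (here via a simple bag-of-words cosine similarity) to the context accumulated so far, then averages these per-sentence scores into a toy coherence score. All function names (cosine, incremental_coherence) are hypothetical and chosen only for this example.

# Illustrative sketch only: a toy incremental lexical coherence scorer.
# It mirrors the high-level idea in the abstract (reading sentences one by
# one and relating each new sentence lexically to the text seen so far);
# it is not the model proposed in the paper.

from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)


def incremental_coherence(sentences: list[str]) -> float:
    """Score a text by relating each sentence to the context read so far."""
    context = Counter()          # lexical context accumulated incrementally
    scores = []
    for sent in sentences:
        words = Counter(sent.lower().split())
        if context:              # the first sentence has no preceding context
            scores.append(cosine(words, context))
        context.update(words)    # extend the context sentence by sentence
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    text = [
        "Pretrained language models achieve strong results on many NLP tasks.",
        "These models are based on the Transformer architecture.",
        "The Transformer relates all items in a sequence simultaneously.",
    ]
    print(f"toy coherence score: {incremental_coherence(text):.3f}")

A full model could replace the bag-of-words representation with contextual encodings from a pretrained language model; the point illustrated here is only the incremental, sentence-by-sentence loop.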

Cite (APA)

Jeon, S., & Strube, M. (2020). Incremental Neural Lexical Coherence Modeling. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 6752–6758). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.594
