Incremental processing in the age of non-incremental encoders: An empirical assessment of bidirectional models for incremental NLU

16 citations · 84 Mendeley readers

Abstract

While humans process language incrementally, the best language encoders currently used in NLP do not. Both bidirectional LSTMs and Transformers assume that the sequence that is to be encoded is available in full, to be processed either forwards and backwards (BiLSTMs) or as a whole (Transformers). We investigate how they behave under incremental interfaces, when partial output must be provided based on partial input seen up to a certain time step, which may happen in interactive systems. We test five models on various NLU datasets and compare their performance using three incremental evaluation metrics. The results support the possibility of using bidirectional encoders in incremental mode while retaining most of their non-incremental quality. The “omni-directional” BERT model, which achieves better non-incremental performance, is impacted more by the incremental access. This can be alleviated by adapting the training regime (truncated training), or the testing procedure, by delaying the output until some right context is available or by incorporating hypothetical right contexts generated by a language model like GPT-2.
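
To make the setup concrete, here is a minimal sketch (in Python, not code from the paper) of such an incremental interface under restart-incrementality: at every time step the full-sequence encoder is simply re-run on the input prefix seen so far, and a delay parameter mirrors the delayed-output strategy by withholding labels for the newest tokens until some right context has arrived. The names used here (incremental_interface, toy_tagger, delay) are illustrative assumptions, and the toy tagger stands in for a real BiLSTM or Transformer tagger.

from typing import Callable, List, Sequence

def incremental_interface(
    tokens: Sequence[str],
    encode: Callable[[Sequence[str]], List[str]],  # full-sequence tagger
    delay: int = 0,  # tokens of right context to wait for before committing
) -> List[List[str]]:
    """Feed tokens one at a time, re-encoding the prefix at every step."""
    outputs_per_step: List[List[str]] = []
    for t in range(1, len(tokens) + 1):
        prefix = tokens[:t]
        labels = encode(prefix)  # bidirectional pass over the partial input
        if t == len(tokens):
            committed = labels  # sequence complete: release all labels
        else:
            committed = labels[: max(0, t - delay)]  # hold back newest labels
        outputs_per_step.append(committed)
    return outputs_per_step

def toy_tagger(prefix: Sequence[str]) -> List[str]:
    # Stand-in for a BiLSTM/Transformer tagger, for illustration only.
    return [f"TAG({w})" for w in prefix]

if __name__ == "__main__":
    steps = incremental_interface(["the", "flight", "to", "boston"], toy_tagger, delay=1)
    for t, out in enumerate(steps, start=1):
        print(f"step {t}: {out}")

With delay=1, each token's label is committed only after at least one token of right context has been observed; how the partial outputs change from step to step is what incremental evaluation metrics of the kind used in the paper quantify.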

Citation (APA)

Madureira, B., & Schlangen, D. (2020). Incremental processing in the age of non-incremental encoders: An empirical assessment of bidirectional models for incremental NLU. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 357–374). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.26
