Abstract
Transformer-based language models (LMs) pretrained on large text collections have been shown to store a wealth of semantic knowledge. However, 1) they are not effective as sentence encoders when used off-the-shelf, and 2) they thus typically lag behind conversationally pretrained encoders (e.g., those trained via response selection) on conversational tasks such as intent detection (ID). In this work, we propose CONVFIT, a simple and efficient two-stage procedure which turns any pretrained LM into a universal conversational encoder (after Stage 1 CONVFIT-ing) and a task-specialised sentence encoder (after Stage 2). We demonstrate that 1) full-blown conversational pretraining is not required: LMs can be quickly transformed into effective conversational encoders with much smaller amounts of unannotated data; and 2) pretrained LMs can be fine-tuned into task-specialised sentence encoders, optimised for the fine-grained semantics of a particular task. Such specialised sentence encoders allow ID to be treated as a simple semantic similarity task, based on interpretable nearest-neighbour retrieval. We validate the robustness and versatility of the CONVFIT framework with such similarity-based inference on standard ID evaluation sets: CONVFIT-ed LMs achieve state-of-the-art ID performance across the board, with particular gains in the most challenging, few-shot setups.
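The abstract describes ID inference as nearest-neighbour retrieval over sentence encodings: each query utterance is assigned the intent of its most similar labelled example. The sketch below is a rough illustration of that idea under stated assumptions, not the paper's released code; it assumes the sentence-transformers library and a generic off-the-shelf encoder (a Stage 2 CONVFIT-ed model would simply take its place), and the model name, utterances, and intent labels are all illustrative.

```python
# Illustrative sketch of similarity-based intent detection via
# nearest-neighbour retrieval over sentence encodings.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed library choice

# Placeholder encoder; a task-specialised (CONVFIT-ed) model would replace it.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# A few labelled examples per intent (the few-shot support set); toy data.
support_utterances = [
    "I want to book a table for two",
    "Cancel my reservation please",
    "What time do you open?",
]
support_intents = ["book_table", "cancel_booking", "opening_hours"]

# Encode and L2-normalise so dot products equal cosine similarities.
support_emb = encoder.encode(support_utterances, normalize_embeddings=True)

def predict_intent(query: str) -> str:
    """Return the intent of the nearest labelled neighbour of the query."""
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    sims = support_emb @ query_emb  # cosine similarity to each support example
    return support_intents[int(np.argmax(sims))]

print(predict_intent("Could I reserve a table tonight?"))  # -> book_table
```

Because the prediction is just the label of the most similar stored utterance, the retrieved neighbour itself serves as an interpretable explanation of each decision.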
Citation
Vulić, I., Su, P.-H., Coope, S., Gerz, D., Budzianowski, P., Casanueva, I., … Wen, T.-H. (2021). ConvFiT: Conversational Fine-Tuning of Pretrained Language Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), pp. 1151–1168. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.88