Abstract
When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we establish a suite of eight diverse tasks across different domains (social media, science papers, news, and reviews) and periods of time (spanning five years or more) to quantify the effects of temporal misalignment. Our study focuses on the ubiquitous setting where a pretrained model is optionally adapted through continued domain-specific pretraining, followed by task-specific finetuning. We find stronger effects of temporal misalignment on task performance than have been previously reported. We also find that, while temporal adaptation through continued pretraining can help, these gains are small compared to task-specific finetuning on data from the target time period. Our findings motivate continued research to improve the temporal robustness of NLP models.
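To make the setting studied in the abstract concrete, the sketch below shows the two-stage pipeline it describes: continued domain-specific pretraining (masked language modeling) on text from a target time period, followed by task-specific finetuning. This is a minimal illustration using the Hugging Face Transformers and Datasets libraries, not the paper's actual code; the base checkpoint (roberta-base), file names, and hyperparameters are placeholder assumptions.

```python
# Hedged sketch of the two-stage setup the abstract describes:
# (1) continued pretraining on period-specific text, (2) task finetuning.
# File names and the base checkpoint are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Stage 1: continued pretraining on unlabeled text from the target period.
# "target_period_corpus.txt" is a hypothetical file of period-specific text.
corpus = load_dataset("text", data_files="target_period_corpus.txt")["train"]
corpus = corpus.map(tokenize, batched=True, remove_columns=["text"])

mlm_trainer = Trainer(
    model=AutoModelForMaskedLM.from_pretrained("roberta-base"),
    args=TrainingArguments(output_dir="adapted-lm", num_train_epochs=1),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("adapted-lm")  # reuse the adapted encoder below

# Stage 2: task-specific finetuning on labeled data from the target period
# (the intervention the abstract reports as most effective).
# "task_train.csv" is a hypothetical file with "text" and "label" columns.
task = load_dataset("csv", data_files="task_train.csv")["train"]
task = task.map(tokenize, batched=True)

clf_trainer = Trainer(
    model=AutoModelForSequenceClassification.from_pretrained("adapted-lm", num_labels=2),
    args=TrainingArguments(output_dir="finetuned-task", num_train_epochs=3),
    train_dataset=task,
    tokenizer=tokenizer,  # enables padded batching of variable-length inputs
)
clf_trainer.train()
```

Under this framing, comparing the finetuned model against one finetuned directly from roberta-base (skipping Stage 1) isolates the contribution of temporal adaptation through continued pretraining.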
Citation
Luu, K., Khashabi, D., Gururangan, S., Mandyam, K., & Smith, N. A. (2022). Time Waits for No One! Analysis and Challenges of Temporal Misalignment. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 5944–5958). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.435