Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study

Abstract

A deployed question answering (QA) model can easily fail when the test data exhibits a distribution shift relative to the training data. Robustness tuning (RT) methods have been widely studied to enhance model robustness against distribution shifts before deployment. However, can we improve a model after deployment? To answer this question, we evaluate test-time adaptation (TTA). We first introduce COLDQA, a unified evaluation benchmark for robust QA against text corruption and changes in language and domain. We then evaluate previous TTA methods on COLDQA and compare them to RT methods. We also propose a novel TTA method called online imitation learning (OIL). Through extensive experiments, we find that TTA is comparable to RT methods, and that applying TTA after RT significantly boosts performance on COLDQA. Our proposed OIL makes TTA more robust to variation in hyper-parameters and to test distributions that change over time.
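The abstract does not spell out the mechanics of test-time adaptation, but the generic idea evaluated in such studies can be sketched in a few lines: adapt model parameters on unlabeled test inputs by minimizing prediction entropy (a Tent-style objective), with no access to training labels. The toy binary classifier below is a hypothetical illustration only; it is not the paper's OIL method, and all names (`predict`, `tta_entropy_minimization`) are assumptions made for this sketch.

```python
import math

def predict(w, x):
    """Toy binary classifier: sigmoid(w * x)."""
    return 1.0 / (1.0 + math.exp(-w * x))

def entropy(p):
    """Binary entropy of a predicted probability, clipped for stability."""
    p = min(max(p, 1e-12), 1.0 - 1e-12)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def tta_entropy_minimization(w, test_xs, lr=0.5, steps=20):
    """Adapt the single weight w on unlabeled test inputs by
    gradient descent on prediction entropy (numeric gradient)."""
    for _ in range(steps):
        for x in test_xs:
            eps = 1e-6
            grad = (entropy(predict(w + eps, x))
                    - entropy(predict(w - eps, x))) / (2 * eps)
            w -= lr * grad
    return w

w0 = 0.1                         # weakly confident "deployed" model
test_inputs = [1.0, 2.0, -1.5]   # unlabeled test-time data
w_adapted = tta_entropy_minimization(w0, test_inputs)

# After adaptation, predictions on the test inputs are more confident,
# i.e., their total entropy has dropped.
assert (sum(entropy(predict(w_adapted, x)) for x in test_inputs)
        < sum(entropy(predict(w0, x)) for x in test_inputs))
```

The key property this illustrates is that adaptation happens after deployment, using only the unlabeled test stream; nothing here requires the original training data or labels.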

Citation (APA)

Ye, H., Ding, Y., Li, J., & Ng, H. T. (2022). Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 6208–6221). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.417
