Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains

Abstract

Past work investigating the out-of-domain performance of QA systems has focused mainly on broad domains (e.g., news, Wikipedia), underestimating the importance of subdomains defined by the internal characteristics of QA datasets. In this paper, we extend the scope of “out-of-domain” by splitting QA examples into different subdomains according to their internal characteristics, including question type, text length, and answer position. We then examine the performance of QA systems trained on data from different subdomains. Experimental results show that the performance of QA systems can be significantly reduced when the training data and test data come from different subdomains. These results call into question the generalizability of current QA systems across subdomains, suggesting the need to combat the bias introduced by the internal characteristics of QA datasets.
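To make the splitting procedure concrete, the sketch below illustrates how SQuAD-style QA examples might be bucketed into subdomains by question type, context length, and answer position. This is a minimal illustration, not the authors' released code: the field names ("question", "context", "answers" with "answer_start" offsets) assume the Hugging Face SQuAD format, and all thresholds are illustrative assumptions.

```python
# Sketch of subdomain splitting by internal characteristics of QA examples.
# Assumes SQuAD-style dicts: {"question": str, "context": str,
# "answers": {"text": [...], "answer_start": [...]}}.

QUESTION_WORDS = ("what", "who", "when", "where", "why", "how", "which")

def question_type(example):
    """Subdomain by the leading wh-word of the question."""
    words = example["question"].strip().lower().split()
    first = words[0] if words else ""
    return first if first in QUESTION_WORDS else "other"

def length_bucket(example, short=150, long=300):
    """Subdomain by context length in whitespace tokens (thresholds are illustrative)."""
    n = len(example["context"].split())
    if n <= short:
        return "short"
    return "medium" if n <= long else "long"

def answer_position(example):
    """Subdomain by where the first answer span starts, as a fraction of the context."""
    start = example["answers"]["answer_start"][0]
    frac = start / max(len(example["context"]), 1)
    if frac < 1 / 3:
        return "begin"
    return "middle" if frac < 2 / 3 else "end"

def split_by(examples, key_fn):
    """Group examples into subdomains under a characteristic function."""
    buckets = {}
    for ex in examples:
        buckets.setdefault(key_fn(ex), []).append(ex)
    return buckets
```

Training a QA model on one bucket returned by split_by and evaluating it on another would then surface the cross-subdomain performance drops the paper reports.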

Cite (APA)

Lyu, C., Foster, J., & Graham, Y. (2022). Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains. In Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop (pp. 24–37). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.insights-1.4
