Abstract
Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.