Data and Model Distillation as a Solution for Domain-transferable Fact Verification

Abstract

While neural networks produce state-of-the-art performance on several NLP tasks, they generally depend heavily on lexicalized information, which transfers poorly across domains. We present a combination of two strategies to mitigate this dependence on lexicalized information in fact verification tasks. We propose a data distillation technique for delexicalization, which we then combine with a model distillation method to prevent aggressive data distillation. We show that with our solution, the performance of an existing state-of-the-art model not only remains on par with that of the same model trained on fully lexicalized data, but also surpasses it when tested out of domain. We further show that our technique encourages models to extract transferable facts from a given fact verification dataset.
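
As an illustration of the kind of delexicalization the abstract refers to, the sketch below replaces named entities with indexed entity-type tags. This is a hypothetical rendering, assuming spaCy NER and a PERSON-1 / GPE-1 style placeholder scheme; the paper's exact delexicalization procedure may differ.

import spacy

# Hypothetical delexicalization sketch: swap each named entity for an
# indexed entity-type tag so the model cannot rely on lexical identity.
nlp = spacy.load("en_core_web_sm")

def delexicalize(text: str) -> str:
    doc = nlp(text)
    counters = {}          # per-entity-type counters (PERSON, GPE, ...)
    pieces, last = [], 0
    for ent in doc.ents:
        counters[ent.label_] = counters.get(ent.label_, 0) + 1
        pieces.append(text[last:ent.start_char])                # text before the entity
        pieces.append(f"{ent.label_}-{counters[ent.label_]}")   # placeholder tag
        last = ent.end_char
    pieces.append(text[last:])                                  # trailing text
    return "".join(pieces)

print(delexicalize("Barack Obama was born in Hawaii."))
# -> "PERSON-1 was born in GPE-1."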

Citation (APA)

Mithun, M. P., Suntwal, S., & Surdeanu, M. (2021). Data and Model Distillation as a Solution for Domain-transferable Fact Verification. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 4546–4552). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.37
