Adapting open domain fact extraction and verification to COVID-FACT through in-domain language modeling

Citations: 12
Readers (Mendeley): 73

Abstract

With the COVID-19 epidemic, verifying scientifically false online information, such as fake news and maliciously fabricated statements, has become crucial. However, the lack of training data in the scientific domain limits the performance of fact verification models. This paper proposes an in-domain language modeling method for fact extraction and verification systems. We develop SciKGAT, which combines the advantages of open-domain literature search, state-of-the-art fact verification systems, and in-domain medical knowledge acquired through language modeling. Our experiments on SCIFACT, a dataset of expert-written scientific fact verification, show that SciKGAT achieves a 30% absolute improvement in precision. Our analyses show that this improvement stems from the in-domain language model, which retrieves more related evidence pieces and verifies facts more accurately. Our code and data are released on GitHub.
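The abstract's key idea is continuing language-model pretraining on in-domain scientific text before using the encoder for evidence retrieval and claim verification. As a rough illustration only (not the paper's released code), the sketch below continues masked-LM training of a general-purpose BERT encoder on a hypothetical corpus of COVID-related abstracts using the Hugging Face transformers library; the corpus file name, base checkpoint, and hyperparameters are assumptions for the example.

```python
# Minimal sketch of in-domain language modeling: continue masked-LM pretraining
# of a general encoder on scientific/COVID abstracts, then reuse the adapted
# encoder downstream. Corpus path and hyperparameters are illustrative.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-uncased"   # base encoder; the paper's exact checkpoint may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical corpus of in-domain abstracts, one document per line.
corpus = load_dataset("text", data_files={"train": "cord19_abstracts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% token masking for the masked-LM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="indomain-mlm",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
model.save_pretrained("indomain-mlm")  # adapted encoder for retrieval/verification
```

The adapted checkpoint would then replace the general-domain encoder in the retrieval and verification stages, which is where the abstract attributes the gains in evidence selection and verification accuracy.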

Cite

Citation style: APA

Liu, Z., Xiong, C., Dai, Z., Sun, S., Sun, M., & Liu, Z. (2020). Adapting open domain fact extraction and verification to COVID-FACT through in-domain language modeling. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 2395–2400). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.216
