MULTIVERS: Improving scientific claim verification with weak supervision and full-document context


Abstract

The scientific claim verification task requires an NLP system to label scientific documents which SUPPORT or REFUTE an input claim, and to select evidentiary sentences (or rationales) justifying each predicted label. In this work, we present MULTIVERS, which predicts a fact-checking label and identifies rationales in a multitask fashion based on a shared encoding of the claim and full document context. This approach accomplishes two key modeling goals. First, it ensures that all relevant contextual information is incorporated into each labeling decision. Second, it enables the model to learn from instances annotated with a document-level fact-checking label, but lacking sentence-level rationales. This allows MULTIVERS to perform weakly-supervised domain adaptation by training on scientific documents labeled using high-precision heuristics. Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero/few-shot domain adaptation experiments. Our code and data are available at https://github.com/dwadden/multivers.
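The abstract describes the core modeling idea concretely enough to sketch: one shared encoding of the claim plus the full document feeds two heads, one predicting the fact-checking label and one scoring each sentence as a rationale, and weakly-labeled instances simply contribute no rationale loss. The snippet below is a minimal illustration of that structure, not the released implementation; the choice of a Longformer encoder, the head shapes, the input layout, and the loss masking are all assumptions here (the linked repository is the authoritative source).

```python
import torch
import torch.nn as nn
from transformers import LongformerModel


class MultitaskVerifier(nn.Module):
    """Sketch of multitask claim verification: one shared encoding
    of (claim, full document), two prediction heads."""

    def __init__(self, encoder_name="allenai/longformer-base-4096", num_labels=3):
        super().__init__()
        # Assumption: a Longformer-style long-context encoder.
        self.encoder = LongformerModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.label_head = nn.Linear(hidden, num_labels)  # SUPPORT / REFUTE / NEI
        self.rationale_head = nn.Linear(hidden, 1)       # one score per sentence

    def forward(self, input_ids, attention_mask, global_attention_mask,
                sentence_positions):
        # One pass over "<s> claim </s> sent_1 </s> sent_2 ..." so every
        # labeling decision sees the full document context (batch size 1
        # for simplicity in this sketch).
        hidden_states = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            global_attention_mask=global_attention_mask,
        ).last_hidden_state                               # (1, seq_len, hidden)

        label_logits = self.label_head(hidden_states[:, 0])   # from <s> token
        sent_reprs = hidden_states[0, sentence_positions]     # (n_sents, hidden)
        rationale_logits = self.rationale_head(sent_reprs).squeeze(-1)
        return label_logits, rationale_logits


def multitask_loss(label_logits, label_target,
                   rationale_logits=None, rationale_targets=None):
    # Weakly supervised instances carry a document-level label but no
    # sentence-level rationales, so the rationale term is skipped for them.
    loss = nn.functional.cross_entropy(label_logits, label_target)
    if rationale_targets is not None:
        loss = loss + nn.functional.binary_cross_entropy_with_logits(
            rationale_logits, rationale_targets.float())
    return loss
```

Sharing a single encoder pass per (claim, document) pair is what keeps the two decisions consistent: the label head cannot commit to a verdict while ignoring the context that drives the rationale scores, and label-only training examples still update the shared encoder.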

Cite

APA:

Wadden, D., Lo, K., Wang, L. L., Cohan, A., Beltagy, I., & Hajishirzi, H. (2022). MULTIVERS: Improving scientific claim verification with weak supervision and full-document context. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 61–76). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.6
