Scientific claim verification is a unique challenge that is attracting increasing interest. The SCIVER shared task offers a benchmark scenario to test and compare claim verification approaches by participating teams and consists of three steps: relevant abstract selection, rationale selection and label prediction. In this paper, we present team QMUL-SDS's participation in the shared task. We propose an approach that performs scientific claim verification through a series of binary classifications, step by step. We trained a BioBERT-large classifier to select abstracts based on pairwise (claim, title) relevance assessments, and then continued training it to select rationales from each retrieved abstract based on (claim, sentence) pairs. We then propose a two-step setting for label prediction: first predicting "NOT_ENOUGH_INFO" or "ENOUGH_INFO", then labeling those marked as "ENOUGH_INFO" as either "SUPPORT" or "CONTRADICT". Compared to the baseline system, we achieve substantial improvements on the dev set. As a result, our team ranked 4th on the leaderboard.
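The step-by-step pipeline the abstract describes can be sketched as follows. This is a minimal illustration of the control flow only, not the authors' implementation: the real system uses a BioBERT-large pair classifier at each stage, whereas the classifier functions below are hypothetical word-overlap stand-ins so the sketch is self-contained and runnable.

```python
# Hypothetical sketch of the step-by-step binary classification pipeline.
# Each is_* / has_* / supports function stands in for a trained BioBERT
# pair classifier in the actual system.

def is_relevant(claim: str, title: str) -> bool:
    # Stand-in for the (claim, title) abstract-relevance classifier.
    return any(w in title.lower() for w in claim.lower().split())

def is_rationale(claim: str, sentence: str) -> bool:
    # Stand-in for the (claim, sentence) rationale-selection classifier.
    return any(w in sentence.lower() for w in claim.lower().split())

def has_enough_info(claim: str, rationales: list) -> bool:
    # Step 1 of label prediction: ENOUGH_INFO vs NOT_ENOUGH_INFO.
    return len(rationales) > 0

def supports(claim: str, rationales: list) -> bool:
    # Step 2 of label prediction: SUPPORT vs CONTRADICT (stub).
    return True

def verify(claim: str, corpus: dict) -> dict:
    """Run the three stages for one claim over a corpus of abstracts."""
    results = {}
    for doc_id, doc in corpus.items():
        # Stage 1: relevant abstract selection via (claim, title) pairs.
        if not is_relevant(claim, doc["title"]):
            continue
        # Stage 2: rationale selection via (claim, sentence) pairs.
        rationales = [s for s in doc["abstract"] if is_rationale(claim, s)]
        # Stage 3: two-step label prediction.
        if not has_enough_info(claim, rationales):
            results[doc_id] = ("NOT_ENOUGH_INFO", rationales)
        else:
            label = "SUPPORT" if supports(claim, rationales) else "CONTRADICT"
            results[doc_id] = (label, rationales)
    return results
```

Framing each stage as a binary decision lets a single pair classifier be reused: the same encoder scores (claim, title) pairs for retrieval and (claim, sentence) pairs for rationale selection.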
Zeng, X., & Zubiaga, A. (2021). QMUL-SDS at SCIVER: Step-by-Step Binary Classification for Scientific Claim Verification. In 2nd Workshop on Scholarly Document Processing, SDP 2021 - Proceedings of the Workshop (pp. 116–123). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.sdp-1.15