A benchmark for fact checking algorithms built on knowledge bases


Abstract

Fact checking is the task of determining whether a given claim holds. Several algorithms have been developed to check claims against reference information in the form of facts in a knowledge base. While such algorithms have been experimentally evaluated in the past, we provide the first comprehensive and publicly available benchmark infrastructure for evaluating methods across a wide range of assumptions about the claims and the reference information. We show how changing the popularity, transparency, homogeneity, and functionality properties of the facts in an experiment can significantly influence the performance of fact checking algorithms. We introduce a benchmark that systematically enforces such properties in training and testing datasets, with fine-grained control over each property. We then use our benchmark to compare fact checking algorithms with one another, as well as with methods that solve the link prediction task in knowledge bases. Our evaluation shows the impact of the four data properties on the qualitative performance of the fact checking solutions and reveals a number of new insights concerning their applicability and performance.
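To make the setting concrete, here is a minimal Python sketch, assuming a knowledge base modeled as a set of (subject, predicate, object) triples. The example triples, the naive lookup-based checker, and the functionality score below are illustrative assumptions, not the benchmark's data or any algorithm evaluated in the paper; in particular, the functionality definition is an assumed reading of that property, not necessarily the authors' exact formulation.

```python
from collections import defaultdict

# Assumption: a knowledge base as a set of (subject, predicate, object) triples.
KnowledgeBase = set[tuple[str, str, str]]

kb: KnowledgeBase = {
    ("Paris", "capitalOf", "France"),
    ("Lyon", "locatedIn", "France"),
    ("France", "locatedIn", "Europe"),
}

def check_claim(claim: tuple[str, str, str], kb: KnowledgeBase) -> bool:
    """Accept a claim only if it appears verbatim in the KB.

    Real fact checking algorithms instead score claims that are absent
    from the KB, e.g. via paths or embeddings over the KB graph."""
    return claim in kb

def functionality(predicate: str, kb: KnowledgeBase) -> float:
    """Fraction of subjects of `predicate` that have exactly one object.

    This is an assumed, simplified reading of the 'functionality' data
    property named in the abstract."""
    objects: dict[str, set[str]] = defaultdict(set)
    for s, p, o in kb:
        if p == predicate:
            objects[s].add(o)
    if not objects:
        return 0.0
    return sum(1 for objs in objects.values() if len(objs) == 1) / len(objects)

print(check_claim(("Paris", "capitalOf", "France"), kb))   # True
print(check_claim(("Paris", "capitalOf", "Germany"), kb))  # False
print(functionality("locatedIn", kb))                      # 1.0
```

The exact-lookup checker only illustrates the task interface; the abstract's point is that methods scoring unseen claims are sensitive to properties of the reference facts such as the one sketched by `functionality`.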

Citation (APA)

Huynh, V. P., & Papotti, P. (2019). A benchmark for fact checking algorithms built on knowledge bases. In International Conference on Information and Knowledge Management, Proceedings (pp. 689–698). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358036
