Computational fact checking through query perturbations

Abstract

Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims; for example, is a claim "cherry-picking"? This article proposes a framework that models claims based on structured data as parameterized queries. Intuitively, with its choice of parameter setting, a claim presents a particular (and potentially biased) view of the underlying data. A key insight is that we can learn a lot about a claim by "perturbing" its parameters and seeing how its conclusion changes. For example, a claim is not robust if small perturbations to its parameters can change its conclusion significantly. This framework allows us to formulate practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Along with the modeling framework, we develop an algorithmic framework that enables efficient instantiations of "meta" algorithms by supplying appropriate algorithmic building blocks. We present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
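The abstract describes claims as parameterized queries whose conclusions can be stress-tested by perturbing their parameters. The Python sketch below illustrates only the general idea: the claim form (an average over the last `window` values), the perturbation range, and the toy data are assumptions made for illustration, not the paper's actual model or algorithms.

# Minimal sketch of parameter perturbation for a claim modeled as a
# parameterized query q(data, window) -- illustrative only.

from statistics import mean

def claim_query(series, window):
    # Hypothetical claim: "the average of the last `window` values".
    return mean(series[-window:])

def perturb_claim(series, window, delta=3):
    # Evaluate the claim at nearby parameter settings. A claim whose
    # value swings widely under small perturbations of `window` is a
    # candidate for cherry-picking.
    results = {}
    for w in range(max(1, window - delta), window + delta + 1):
        results[w] = claim_query(series, w)
    return results

if __name__ == "__main__":
    # Toy monthly figures, fabricated for illustration only.
    data = [10, 12, 11, 9, 30, 8, 7, 9, 10, 28, 9, 8]
    original = claim_query(data, 3)        # the claim as stated
    neighborhood = perturb_claim(data, 3)  # nearby parameter choices
    print(f"claimed value (window=3): {original:.1f}")
    for w, v in sorted(neighborhood.items()):
        print(f"  window={w}: {v:.1f}")

Comparing the claimed value against its neighborhood gives a rough robustness check; the paper formalizes and generalizes this intuition and supplies efficient algorithms for it.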

Citation (APA)

Wu, Y., Agarwal, P. K., Li, C., Yang, J., & Yu, C. (2017). Computational fact checking through query perturbations. ACM Transactions on Database Systems, 42(1). https://doi.org/10.1145/2996453
