Generating fact checking briefs

Citations: 34
Readers (Mendeley): 132

Abstract

Fact checking at scale is difficult: while the number of active fact checking websites is growing, it remains too small for the needs of the contemporary media ecosystem. However, despite good intentions, contributions from volunteers are often error-prone, and thus in practice restricted to claim detection. We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check, in the form of natural language briefs. We investigate passage-based briefs, containing a relevant passage from Wikipedia; entity-centric briefs, consisting of Wikipedia pages of mentioned entities; and Question-Answering Briefs (QABriefs), with questions decomposing the claim, and their answers. To produce QABriefs, we develop QABRIEFER, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers. To train its components, we introduce QABRIEFDATASET, which we collected via crowdsourcing. We show that fact checking with briefs, in particular QABriefs, increases the accuracy of crowdworkers by 10% while slightly decreasing the time taken. For volunteer (unpaid) fact checkers, QABriefs slightly increase accuracy and reduce the time required by around 20%.
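The abstract describes QABRIEFER as a three-stage pipeline: generate questions conditioned on the claim, search the web for evidence, and generate answers. The sketch below illustrates one way such a pipeline could be wired together using generic seq2seq models from the Hugging Face transformers library; the model checkpoints, the search_web helper, and the "question: ... context: ..." prompt format are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a QABrief-style pipeline. Checkpoints and the search step
# are placeholders; in the paper, the components are trained on QABRIEFDATASET.
from transformers import pipeline

# Hypothetical fine-tuned seq2seq checkpoints (placeholders, not real models).
question_generator = pipeline("text2text-generation", model="my-org/claim2question")
answer_generator = pipeline("text2text-generation", model="my-org/evidence2answer")


def search_web(query: str, k: int = 3) -> list[str]:
    """Placeholder for the web-evidence retrieval step (e.g., a search API or
    a local retriever); should return the top-k evidence snippets."""
    raise NotImplementedError


def generate_qabrief(claim: str, num_questions: int = 3) -> list[dict]:
    """Decompose a claim into questions, retrieve evidence, and answer each one."""
    # Generate several questions conditioned on the claim via beam search.
    questions = question_generator(
        claim, num_return_sequences=num_questions, num_beams=num_questions
    )
    brief = []
    for q in questions:
        question = q["generated_text"]
        # Retrieve evidence for the question and concatenate it as context.
        evidence = " ".join(search_web(question))
        # Generate an answer from question plus evidence (T5-style prompt,
        # used here only for illustration).
        answer = answer_generator(f"question: {question} context: {evidence}")[0][
            "generated_text"
        ]
        brief.append({"question": question, "evidence": evidence, "answer": answer})
    return brief
```

The resulting list of question-evidence-answer triples corresponds to a QABrief that a fact checker would read before labelling the claim.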




Citation (APA)

Fan, A., Piktus, A., Petroni, F., Wenzek, G., Saeidi, M., Vlachos, A., … Riedel, S. (2020). Generating fact checking briefs. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 7147–7161). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.580

Readers over time

[Chart: Mendeley readers per year, 2020 to 2025; not reproduced]

Readers' Seniority

PhD / Post grad / Masters / Doc: 43 (68%)
Researcher: 13 (21%)
Lecturer / Post doc: 5 (8%)
Professor / Associate Prof.: 2 (3%)

Readers' Discipline

Computer Science: 59 (84%)
Linguistics: 5 (7%)
Engineering: 3 (4%)
Business, Management and Accounting: 3 (4%)
