Information on the internet suffers from noise and corrupted knowledge arising from human and mechanical errors. To further exacerbate this problem, an ever-increasing amount of fake news on social media, and on the internet in general, has made it harder to draw correct information from the web. This huge sea of data makes it difficult for human fact-checkers and journalists to assess all the information manually. In recent years, Automated Fact-Checking has emerged as a branch of natural language processing devoted to this task. In this work, we give an overview of recent approaches, emphasizing the key challenges faced during the development of such frameworks. We evaluate existing solutions for claim classification on simple claims and introduce a new model dubbed SimpleLSTM, which outperforms baselines by 11%, 10.2% and 18.7% on the FEVER-Support, FEVER-Reject and 3-Class datasets, respectively. The data, metadata and code are released as open source and will be available at https://github.com/DeFacto/SimpleLSTM.
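The abstract does not spell out the SimpleLSTM architecture, but the general recipe it names (an LSTM over claim tokens followed by a classification layer) can be sketched as follows. All dimensions, weight shapes, and the single-layer design here are illustrative assumptions, not the paper's actual hyperparameters; a minimal NumPy forward pass, assuming a 3-way Support/Reject/Not-Enough-Info output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's real hyperparameters are not
# given in this abstract.
VOCAB, EMBED, HIDDEN, CLASSES = 100, 16, 32, 3  # 3-Class setting

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Sketch of an LSTM claim classifier: embed the claim's tokens,
    run one LSTM layer, and classify the final hidden state."""

    def __init__(self):
        s = 0.1
        self.emb = rng.normal(0, s, (VOCAB, EMBED))
        # Stacked gate weights for the input, forget, cell and output gates.
        self.W = rng.normal(0, s, (EMBED + HIDDEN, 4 * HIDDEN))
        self.b = np.zeros(4 * HIDDEN)
        self.Wo = rng.normal(0, s, (HIDDEN, CLASSES))

    def forward(self, token_ids):
        h = np.zeros(HIDDEN)
        c = np.zeros(HIDDEN)
        for t in token_ids:
            x = np.concatenate([self.emb[t], h])
            z = x @ self.W + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)   # update cell state
            h = o * np.tanh(c)           # update hidden state
        logits = h @ self.Wo
        e = np.exp(logits - logits.max())
        return e / e.sum()               # softmax class probabilities

# Toy claim encoded as token ids (illustrative only).
probs = TinyLSTMClassifier().forward([3, 17, 42, 8])
```

In practice the weights would be trained with cross-entropy loss over labeled claims; the sketch only shows the forward computation.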
Chawla, P., Esteves, D., Pujar, K., & Lehmann, J. (2019). SimpleLSTM: A Deep-Learning Approach to Simple-Claims Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11805 LNAI, pp. 244–255). Springer Verlag. https://doi.org/10.1007/978-3-030-30244-3_21