Check-worthiness detection aims to predict which sentences should be prioritized for fact-checking. A typical use is to rank sentences in political debates and speeches according to their degree of check-worthiness. We present the first direct optimization of sentence ranking for check-worthiness; in contrast, all previous work has relied solely on standard classification-based loss functions. We propose a recurrent neural network model that learns a sentence encoding, from which a check-worthiness score is predicted. The model is trained by jointly optimizing a binary cross-entropy loss and a ranking-based pairwise hinge loss. We obtain sentence pairs for training through contrastive sampling, where for each sentence we find the most semantically similar sentences with the opposite label. Through a comparison to existing state-of-the-art check-worthiness methods, we find that our approach improves the MAP score by 11%.
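The joint objective and the contrastive sampling step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the mixing weight `alpha`, the margin of 1.0, and the choice of cosine similarity over precomputed embeddings are assumptions for the sake of the example, and the actual model scores sentences with a learned recurrent encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(scores, labels):
    # Binary cross-entropy on predicted check-worthiness probabilities.
    p = sigmoid(scores)
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

def pairwise_hinge_loss(pos_scores, neg_scores, margin=1.0):
    # Pairwise hinge: a check-worthy sentence should outscore its paired
    # non-check-worthy sentence by at least `margin`.
    return np.mean(np.maximum(0.0, margin - (pos_scores - neg_scores)))

def joint_loss(scores, labels, pos_scores, neg_scores, alpha=0.5, margin=1.0):
    # Weighted combination of the two objectives; `alpha` is an assumed
    # mixing weight, not a value taken from the paper.
    return (alpha * bce_loss(scores, labels)
            + (1 - alpha) * pairwise_hinge_loss(pos_scores, neg_scores, margin))

def contrastive_pairs(embeddings, labels, k=1):
    # Contrastive sampling: for each sentence, pick the k sentences with the
    # opposite label that are most similar by cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    pairs = []
    for i, label in enumerate(labels):
        opposite = np.where(labels != label)[0]
        top = opposite[np.argsort(-sims[i, opposite])[:k]]
        pairs.extend((i, j) for j in top)
    return pairs
```

The hinge term only penalizes pairs whose score gap falls below the margin, so training focuses on hard contrastive pairs, while the cross-entropy term keeps the scores calibrated as probabilities.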
CITATION STYLE
Hansen, C., Hansen, C., Simonsen, J. G., & Lioma, C. (2020). Fact Check-Worthiness Detection with Contrastive Ranking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12260 LNCS, pp. 124–130). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58219-7_11