YNU-HPCC at SemEval-2019 Task 8: Using A LSTM-Attention Model for Fact-Checking in Community Forums

Citations: 0 · Mendeley readers: 72

Abstract

The objective of the task, Fact-Checking in Community Forums, is to determine whether an answer to a factual question is true, false, or does not even constitute a proper answer. In this paper, we propose a system that uses a long short-term memory model with an attention mechanism (LSTM-Attention) to complete the task. The LSTM-Attention model uses two LSTMs (Long Short-Term Memory networks) to extract features from the question and the answer, respectively. Each feature sequence is then composed by the attention mechanism, and the two resulting vectors are concatenated into one. Finally, the concatenated vector serves as input to an MLP (Multi-Layer Perceptron), whose output layer uses the softmax function to classify the provided answers into three categories. The model extracts the features of the question-answer pair well, and the results show that the proposed system outperforms the baseline algorithm.
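To make the described pipeline concrete, below is a minimal Keras sketch of the architecture: two LSTM encoders (one per branch of the question-answer pair), an attention layer that pools each hidden-state sequence into a single vector, concatenation of the two vectors, and an MLP head with a three-way softmax. The layer sizes, vocabulary size, sequence length, and the simple additive attention variant are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

class AttentionPooling(layers.Layer):
    """Additive attention that collapses a sequence of LSTM hidden
    states into one weighted-average vector (illustrative variant)."""
    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.w = self.add_weight(name="w", shape=(dim, 1),
                                 initializer="glorot_uniform")
        super().build(input_shape)

    def call(self, h):
        # h: (batch, time, dim) -> unnormalized scores: (batch, time, 1)
        scores = tf.matmul(tf.tanh(h), self.w)
        alpha = tf.nn.softmax(scores, axis=1)    # attention weights over time
        return tf.reduce_sum(alpha * h, axis=1)  # pooled vector: (batch, dim)

# Assumed hyperparameters, chosen only for illustration
VOCAB, EMB, HID, MAX_LEN = 20000, 100, 128, 100

def encoder(name):
    """One branch: embedding -> LSTM over the full sequence -> attention."""
    inp = layers.Input(shape=(MAX_LEN,), name=name)
    x = layers.Embedding(VOCAB, EMB)(inp)
    x = layers.LSTM(HID, return_sequences=True)(x)  # keep all time steps
    return inp, AttentionPooling()(x)

q_in, q_vec = encoder("question")
a_in, a_vec = encoder("answer")
merged = layers.concatenate([q_vec, a_vec])  # join the two pooled vectors

# MLP head with a 3-way softmax over the task's answer categories
x = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(3, activation="softmax")(x)

model = Model(inputs=[q_in, a_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In the task's setup the three output classes correspond to the labels for a true answer, a false answer, and a non-answer; training would feed padded question and answer token sequences with one-hot labels, e.g. `model.fit([q_tokens, a_tokens], labels)`.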

Cite

CITATION STYLE

APA

Liu, P., Wang, J., & Zhang, X. (2019). YNU-HPCC at SemEval-2019 task 8: Using A LSTM-attention model for fact-checking in community forums. In NAACL HLT 2019 - International Workshop on Semantic Evaluation, SemEval 2019, Proceedings of the 13th Workshop (pp. 1180–1184). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s19-2207
