Improved compare-aggregate model for Chinese document-based question answering

Abstract

Document-based question answering (DBQA) is a sub-task of question answering. It aims to measure the matching relation between questions and answers, which can be regarded as a sentence matching problem. In this paper, we introduce a Compare-Aggregate architecture to handle word-level comparison and aggregation. To deal with the noisy information introduced by the traditional attention mechanism, a k-top attention mechanism is proposed to filter out irrelevant words. We then propose a combined model that merges the matching relation learned by the Compare-Aggregate model with shallow features to generate the final matching score. We evaluate our model on the Chinese Document-based Question Answering (DBQA) task. The experimental results show the effectiveness of the proposed improvements, and our final combined model achieves the second-place result on the DBQA task of the NLPCC-ICCPOL 2017 Shared Task. The paper provides the technical details of the proposed algorithm.
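
The abstract only names the k-top attention mechanism, so the following is a minimal sketch of the general idea: keep only the k largest attention weights for each question word, discard the rest, and renormalize before aggregating. The dot-product scoring function, the value of k, and all identifiers are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical k-top attention sketch (NumPy); not the authors' implementation.
import numpy as np

def k_top_attention(question, answer, k=3):
    """question: (m, d) word embeddings; answer: (n, d) word embeddings.
    Returns one attended answer representation per question word."""
    scores = question @ answer.T                        # (m, n) dot-product scores (assumed)
    if k < scores.shape[1]:
        # value of the k-th largest score in each row
        kth = np.partition(scores, -k, axis=1)[:, -k][:, None]
        # mask out everything below the k-th largest score
        scores = np.where(scores >= kth, scores, -np.inf)
    # softmax over the surviving (at most k) answer words per question word
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ answer                             # (m, d) attended vectors

# Toy usage: 4 question words, 10 answer words, 8-dimensional embeddings.
q = np.random.randn(4, 8)
a = np.random.randn(10, 8)
print(k_top_attention(q, a, k=3).shape)                 # (4, 8)
```

The filtering step is what distinguishes this from standard soft attention: words whose scores fall outside the top k receive exactly zero weight instead of a small positive one, which is the noise-reduction effect the abstract describes.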

Citation (APA)

Wang, Z., Bian, W., Li, S., Chen, G., & Lin, Z. (2018). Improved compare-aggregate model for Chinese document-based question answering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10619 LNAI, pp. 712–720). Springer Verlag. https://doi.org/10.1007/978-3-319-73618-1_61
