A neural network for factoid question answering over paragraphs

262 citations · 642 readers (Mendeley)

Abstract

Text classification methods for tasks like factoid question answering typically use manually defined string-matching rules or bag-of-words representations. These methods are ineffective when the question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (RNN) model that can reason over such input by modeling textual compositionality. We apply our model, QANTA, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous RNN models, QANTA learns word- and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information-retrieval methods, rivals the best human players.
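To make the abstract's idea of "modeling textual compositionality" concrete, here is a minimal numpy sketch of a dependency-tree recursive composition in the spirit the abstract describes: each node's hidden vector is built from its own word embedding plus relation-specific transformations of its children's vectors. The vocabulary, relations, dimensions, and the `compose` helper are all hypothetical toy choices for illustration, not the paper's actual implementation; training against answer embeddings is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding / hidden dimension (illustrative, not from the paper)

# Hypothetical toy vocabulary and dependency-relation inventory.
vocab = {"this": 0, "author": 1, "wrote": 2, "ulysses": 3}
relations = {"det": 0, "nsubj": 1, "dobj": 2}

We = rng.normal(0, 0.1, (len(vocab), d))          # word embeddings
Wv = rng.normal(0, 0.1, (d, d))                   # transform for a node's own word
Wr = rng.normal(0, 0.1, (len(relations), d, d))   # one matrix per dependency relation
b = np.zeros(d)

def compose(word, children=()):
    """Hidden vector for one dependency-tree node.

    h = tanh(Wv @ x_word + sum_k W_{rel_k} @ h_child_k + b),
    where children is an iterable of (relation, child_hidden_vector) pairs.
    """
    h = Wv @ We[vocab[word]] + b
    for rel, child_h in children:
        h = h + Wr[relations[rel]] @ child_h
    return np.tanh(h)

# "this author wrote ulysses": compose bottom-up to a single root vector,
# which could then be compared against learned answer-entity embeddings.
h_author = compose("author", [("det", compose("this"))])
h_root = compose("wrote", [("nsubj", h_author), ("dobj", compose("ulysses"))])
```

The root vector summarizes the whole clause, so sentences with no overlapping surface words can still land near the same answer entity in vector space, which is the failure mode of string matching that the abstract highlights.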

Citation (APA)

Iyyer, M., Boyd-Graber, J., Claudino, L., Socher, R., & Daumé, H. (2014). A neural network for factoid question answering over paragraphs. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 633–644). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1070
