Abstract
Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineffective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players.
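The abstract describes a recursive neural network that builds phrase-level representations by composing word vectors over a parse tree. A minimal sketch of that kind of recursive composition is shown below; the dimension, weight names, and toy tree are illustrative assumptions, not the authors' actual model or parameters.

```python
# Minimal sketch (not the paper's code) of recursive tree composition:
# each node's hidden vector is a nonlinearity applied to a transform of
# its own word embedding plus transforms of its children's hidden vectors.
# All dimensions and weight matrices here are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden/embedding dimension (assumed)

# Shared parameters: one matrix for the node's word embedding,
# one for child hidden states, plus a bias vector.
W_word = rng.standard_normal((d, d)) * 0.1
W_child = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)

def compose(word_vec, children):
    """Hidden vector for a node, given its word embedding and the
    hidden vectors of its already-composed children."""
    total = W_word @ word_vec + b
    for child in children:
        total = total + W_child @ child
    return np.tanh(total)

# Toy parse tree: two leaf words combined under a root word.
leaf1 = compose(rng.standard_normal(d), [])
leaf2 = compose(rng.standard_normal(d), [])
root = compose(rng.standard_normal(d), [leaf1, leaf2])
print(root.shape)  # a single d-dimensional phrase vector
```

Because the same `compose` function is reused at every node, representations for words and multi-word phrases live in the same vector space, which is what lets sentence-level vectors be compared against answer entities.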
Citation
Iyyer, M., Boyd-Graber, J., Claudino, L., Socher, R., & Daumé, H. (2014). A neural network for factoid question answering over paragraphs. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 633–644). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1070