No need to pay attention: Simple recurrent neural networks work! (for answering “simple” questions)

arXiv: 1606.05029
Citations: 35
Mendeley readers: 177

Abstract

First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve only 65%–76% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results, including those from neural networks with much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are especially valuable in an industry setting. Indeed, we present a preliminary analysis of our model's performance on real queries from Comcast's X1 entertainment platform, which serves millions of users every day.
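
A minimal illustration of the two-model decomposition described above. This sketch is not the authors' exact architecture: the framework (PyTorch), the layer choices (a bidirectional LSTM for token-level entity detection, a GRU for relation classification), and every dimension below are assumptions made for the example.

import torch
import torch.nn as nn

class EntityDetector(nn.Module):
    # Tags each question token as part of an entity mention (1) or not (0).
    # The bidirectional LSTM is an assumed layer choice, not from the paper.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        states, _ = self.rnn(self.embed(token_ids))
        return self.out(states)                   # per-token tag logits

class RelationClassifier(nn.Module):
    # Encodes the whole question and classifies it as one KB relation type.
    def __init__(self, vocab_size, num_relations, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_relations)

    def forward(self, token_ids):
        _, h_last = self.rnn(self.embed(token_ids))  # final hidden state
        return self.out(h_last.squeeze(0))           # one logit per relation

# Toy usage with made-up sizes: a batch of 4 questions, 12 tokens each.
questions = torch.randint(0, 5000, (4, 12))
tag_logits = EntityDetector(vocab_size=5000)(questions)            # (4, 12, 2)
rel_logits = RelationClassifier(5000, num_relations=1000)(questions)  # (4, 1000)

At answer time, the predicted entity span selects a KB node and the predicted relation selects the fact to return; each model can be trained and updated independently, which is the modularity the abstract highlights.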

Citation (APA)

Ture, F., & Jojic, O. (2017). No need to pay attention: Simple recurrent neural networks work! (for answering “simple” questions). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) (pp. 2866–2872). Association for Computational Linguistics.
