Comparative Analysis of Neural QA models on SQuAD


Abstract

The task of Question Answering has gained prominence in the past few decades as a test of a machine's ability to understand natural language. Large machine reading datasets have led to the development of neural models that support deeper language understanding than traditional information retrieval tasks require. Different components in these neural architectures are intended to tackle different challenges. As a first step towards achieving generalization across multiple domains, we attempt to understand and compare the peculiarities of existing end-to-end neural models on the Stanford Question Answering Dataset (SQuAD) by performing quantitative as well as qualitative analysis of the results attained by each of them. We observe that prediction errors reflect certain model-specific biases, which we discuss further in this paper.
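The quantitative comparison referred to above follows SQuAD's standard span-level evaluation. As an illustration only (not the authors' code), a minimal sketch of the usual exact-match and token-level F1 scoring, assuming whitespace tokenization and SQuAD-style answer normalization, might look like this:

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop articles and punctuation, collapse whitespace (SQuAD-style normalization)."""
    s = s.lower()
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """1.0 if the normalized answer strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between the normalized predicted and ground-truth spans."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: article removal makes these an exact match; a verbose answer gets partial F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0
print(f1_score("in the city of Paris", "Paris"))        # 0.4
```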

Citation (APA)

Wadhwa, S., Chandu, K. R., & Nyberg, E. (2018). Comparative Analysis of Neural QA models on SQuAD. In Proceedings of the Workshop on Machine Reading for Question Answering (pp. 89–97). Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-2610
