A parallel-hierarchical model for machine comprehension on sparse data

Citations: 29 | Readers: 185 (Mendeley users who have this article in their library)

Abstract

Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15 percentage points).
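To make the "trainable perspectives" idea more concrete, the sketch below illustrates the general flavor of comparing a passage, question, and candidate answer at two granularities (a sentence-level bag-of-embeddings view and a word-level matching view). This is a minimal illustration under assumptions, not the authors' implementation: the random embeddings, the embed() helper, the two example perspectives, and the fixed averaging of their scores are all hypothetical stand-ins; in the actual model the perspectives and their combination are learned.

    # Illustrative sketch only: random embeddings stand in for pretrained
    # word vectors, and the two perspectives below are simplified examples.
    import numpy as np

    rng = np.random.default_rng(0)
    EMB_DIM = 50
    vocab = {}  # hypothetical vocabulary of random word embeddings

    def embed(tokens):
        """Look up (or lazily create) an embedding for each token."""
        vecs = []
        for tok in tokens:
            if tok not in vocab:
                vocab[tok] = rng.normal(size=EMB_DIM)
            vecs.append(vocab[tok])
        return np.stack(vecs)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def semantic_perspective(sentence, question, answer):
        """Coarse perspective: compare bag-of-embeddings summaries of a
        passage sentence and the question+answer hypothesis."""
        s = embed(sentence).sum(axis=0)
        qa = embed(question + answer).sum(axis=0)
        return cosine(s, qa)

    def word_perspective(sentence, question, answer):
        """Finer perspective: score each hypothesis word by its best
        embedding match among the sentence's words."""
        s = embed(sentence)
        qa = embed(question + answer)
        return float(np.mean([max(cosine(w, sw) for sw in s) for w in qa]))

    def score_answer(passage, question, answer):
        """Combine perspectives per sentence and take the best sentence.
        The real model learns this combination; here it is a fixed average."""
        per_sentence = [
            0.5 * semantic_perspective(sent, question, answer)
            + 0.5 * word_perspective(sent, question, answer)
            for sent in passage
        ]
        return max(per_sentence)

    # Toy usage: choose the higher-scoring candidate answer.
    passage = [["james", "went", "to", "the", "park"],
               ["he", "played", "with", "a", "red", "ball"]]
    question = ["what", "did", "james", "play", "with"]
    candidates = [["a", "red", "ball"], ["a", "blue", "kite"]]
    best = max(candidates, key=lambda a: score_answer(passage, question, a))
    print(best)

With trained (rather than random) embeddings and learned combination weights, the word-level and sentence-level perspectives can disagree in informative ways, which is what the parallel hierarchy is designed to exploit.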

Cite

APA

Trischler, A., Ye, Z., Yuan, X., He, J., Bachman, P., & Suleman, K. (2016). A parallel-hierarchical model for machine comprehension on sparse data. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers (Vol. 1, pp. 432–441). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-1041
