LSTMs Exploit Linguistic Attributes of Data

8 citations · 113 Mendeley readers

Abstract

While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM's ability to learn a nonlinguistic task: recalling elements from its input. We find that models trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language sequential data. Furthermore, we show that the LSTM learns to solve the memorization task by explicitly using a subset of its neurons to count timesteps in the input. We hypothesize that the patterns and structure in natural language data enable LSTMs to learn by providing approximate ways of reducing loss, but understanding the effect of different training data on the learnability of LSTMs remains an open question.
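The recall task described in the abstract can be sketched in code. The following PyTorch snippet is an illustrative reconstruction, not the authors' exact setup: the vocabulary size, sequence length, hyperparameters, and the choice of the middle token as the recall target are all assumptions, and the uniform-random sequences here stand in only for the non-language training condition that the paper contrasts with natural language text.

# Illustrative sketch (not the authors' exact setup): an LSTM trained to
# recall the token at the midpoint of a random input sequence.
import torch
import torch.nn as nn

VOCAB_SIZE = 50   # assumed toy vocabulary size
SEQ_LEN = 20      # assumed sequence length
EMBED_DIM = 32
HIDDEN_DIM = 64

class RecallLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -> logits over the vocabulary
        embedded = self.embed(tokens)
        _, (hidden, _) = self.lstm(embedded)
        return self.out(hidden[-1])  # predict from the final hidden state

def make_batch(batch_size=64):
    # Uniform-random sequences stand in for the "non-language" data;
    # the target is the token at the middle timestep.
    seqs = torch.randint(0, VOCAB_SIZE, (batch_size, SEQ_LEN))
    targets = seqs[:, SEQ_LEN // 2]
    return seqs, targets

model = RecallLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    seqs, targets = make_batch()
    loss = loss_fn(model(seqs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Swapping the random sequences for tokenized natural language text (while keeping the same recall objective) is the kind of comparison the paper draws; its finding is that the language-trained models recall tokens over much longer distances, and that some hidden units come to act as timestep counters.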

Citation (APA)

Liu, N. F., Levy, O., Schwartz, R., Tan, C., & Smith, N. A. (2018). LSTMs exploit linguistic attributes of data. In Proceedings of the Third Workshop on Representation Learning for NLP (pp. 180–186). Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-3024
