Abstract
In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art.
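To make the abstract's central idea concrete, the sketch below illustrates, in plain NumPy, what "a memory network in place of a single memory cell" can look like: at each recurrence step, attention is computed over a tape of all previous hidden and cell states, and the attention-pooled summaries stand in for the single previous state inside otherwise standard LSTM gates, which is how soft relations among tokens are weakly induced. This is a simplified illustration based only on the abstract's description, not the paper's exact equations; the function names, parameter shapes, and gating details are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstmn_step(x_t, h_tape, c_tape, params):
    """One recurrence step: attend over the tape of previous hidden/cell states,
    then apply LSTM-style gating to the attention-pooled summaries.
    (Illustrative simplification; not the authors' exact formulation.)"""
    W_att, W_g, b_g = params["W_att"], params["W_g"], params["b_g"]
    # Score each past position against the current input token.
    scores = h_tape @ (W_att @ x_t)            # shape (t,)
    alpha = softmax(scores)                    # soft links to previous tokens
    h_tilde = alpha @ h_tape                   # adaptive hidden-state summary
    c_tilde = alpha @ c_tape                   # adaptive memory summary
    # Standard LSTM gates, computed from the summaries instead of a single h_{t-1}.
    z = W_g @ np.concatenate([h_tilde, x_t]) + b_g
    d = len(h_tilde)
    i, f = sigmoid(z[:d]), sigmoid(z[d:2*d])
    o, g = sigmoid(z[2*d:3*d]), np.tanh(z[3*d:])
    c_t = f * c_tilde + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t, alpha

# Tiny usage example with random parameters (dimensions are arbitrary).
rng = np.random.default_rng(0)
d_h, d_x, T = 4, 3, 5
params = {
    "W_att": rng.normal(size=(d_h, d_x)) * 0.1,
    "W_g":   rng.normal(size=(4 * d_h, d_h + d_x)) * 0.1,
    "b_g":   np.zeros(4 * d_h),
}
h_tape = np.zeros((1, d_h))   # start each tape with a null slot
c_tape = np.zeros((1, d_h))
for t in range(T):
    x_t = rng.normal(size=d_x)
    h_t, c_t, alpha = lstmn_step(x_t, h_tape, c_tape, params)
    h_tape = np.vstack([h_tape, h_t])
    c_tape = np.vstack([c_tape, c_t])
```

In this reading, the growing tapes `h_tape` and `c_tape` play the role of the memory network, and `alpha` is the attention distribution that decides how much each previous token contributes at the current step.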
Citation
Cheng, J., Dong, L., & Lapata, M. (2016). Long short-term memory-networks for machine reading. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 551–561). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1053