Many problems in NLP require aggregating information from multiple mentions of the same entity that may be far apart in the text. Existing Recurrent Neural Network (RNN) layers are biased towards short-term dependencies and are hence not suited to such tasks. We present a recurrent layer that is instead biased towards coreferent dependencies. The layer uses coreference annotations extracted from an external system to connect entity mentions belonging to the same cluster. Incorporating this layer into a state-of-the-art reading comprehension model improves performance on three datasets (WikiHop, LAMBADA, and the bAbI AI tasks), with large gains when training data is scarce.
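To make the core idea concrete, below is a minimal sketch, not the authors' released implementation, of a GRU-style layer whose recurrent input is a learned mixture of the sequential previous hidden state and the hidden state at the most recent earlier mention of the same coreference cluster. The class name, the scalar gate parameterization, and the `coref_prev` index array are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CorefGRUSketch(nn.Module):
    """Illustrative coreference-biased recurrent layer (assumption-laden
    simplification of Dhingra et al.'s Coref-GRU, not their code).

    At position t, the state fed to the GRU cell is a gated mixture of
    (a) the usual previous hidden state h_{t-1} and (b) the hidden state
    at the previous mention in t's coreference cluster, if one exists.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Hypothetical scalar gate weighing the sequential state against
        # the coreferent state, conditioned on the current input and both states.
        self.gate = nn.Linear(input_size + 2 * hidden_size, 1)
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor, coref_prev: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, input_size), a single sequence for clarity.
        # coref_prev: (seq_len,) long tensor; coref_prev[t] is the index of
        # the previous mention in t's cluster, or -1 if there is none.
        seq_len = x.size(0)
        h = x.new_zeros(self.hidden_size)
        states = []
        for t in range(seq_len):
            j = int(coref_prev[t])
            h_coref = states[j] if j >= 0 else h  # fall back to h_{t-1}
            a = torch.sigmoid(self.gate(torch.cat([x[t], h, h_coref])))
            mixed = a * h + (1 - a) * h_coref  # coreference-biased recurrence
            h = self.cell(x[t].unsqueeze(0), mixed.unsqueeze(0)).squeeze(0)
            states.append(h)
        return torch.stack(states)  # (seq_len, hidden_size)
```

Mixing the two states before the cell update keeps the standard GRU machinery intact while letting gradients flow directly along coreference edges, which is one plausible way to shorten the effective dependency length between distant mentions of the same entity.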
CITATION STYLE
Dhingra, B., Jin, Q., Yang, Z., Cohen, W. W., & Salakhutdinov, R. (2018). Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2018), Volume 2 (Short Papers) (pp. 42–48). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2007