Word vector specialisation (also known as retrofitting) is a portable, light-weight approach to fine-tuning arbitrary distributional word vector spaces by injecting external knowledge from rich lexical resources such as WordNet. By design, these post-processing methods update only the vectors of words occurring in external lexicons, leaving the representations of all unseen words intact. In this paper, we show that constraint-driven vector space specialisation can be extended to unseen words. We propose a novel post-specialisation method that a) preserves the useful linguistic knowledge for seen words, and b) propagates this external signal to unseen words in order to improve their vector representations as well. Our post-specialisation approach explicitly models a non-linear specialisation function in the form of a deep neural network, learning to predict specialised vectors from their original distributional counterparts. The learned function is then used to specialise the vectors of unseen words. This approach, applicable to any post-processing model, yields considerable gains over the initial specialisation models both in intrinsic word similarity tasks and in two downstream tasks: dialogue state tracking and lexical text simplification. The positive effects persist across three languages, demonstrating the importance of specialising the full vocabulary of distributional word vector spaces.
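The core idea admits a compact illustration. The sketch below is a minimal mock-up, not the authors' implementation: it trains a small feed-forward network to map the original distributional vectors of seen words onto their specialised counterparts, then applies the learned mapping to words absent from the lexical resource. The random tensors stand in for real embeddings, the plain MSE objective is a simplification of the paper's actual training loss, and the architecture and hyperparameters are chosen purely for illustration.

# Post-specialisation sketch: learn a non-linear mapping from original
# distributional vectors to specialised vectors on words covered by the
# lexical resource, then propagate it to unseen words.
# Hypothetical setup with synthetic data; objective and network sizes
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

dim = 300          # word vector dimensionality (assumed)
n_seen = 10_000    # words covered by the lexical resource (synthetic)
n_unseen = 5_000   # words absent from the resource (synthetic)

# Stand-ins for real embeddings: X_orig are distributional vectors,
# X_spec are their specialised versions produced by a retrofitting model.
X_orig = torch.randn(n_seen, dim)
X_spec = torch.randn(n_seen, dim)
X_unseen = torch.randn(n_unseen, dim)

# Deep feed-forward network approximating the specialisation function.
model = nn.Sequential(
    nn.Linear(dim, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, dim),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # simplification of the paper's training objective

# Fit the mapping on (distributional, specialised) pairs for seen words.
for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(X_orig), X_spec)
    loss.backward()
    optimiser.step()

# Propagate the learned specialisation to words unseen in the resource.
with torch.no_grad():
    X_unseen_specialised = model(X_unseen)

Because only the mapping is learned, the seen words' specialised vectors are left untouched, and the procedure can be attached to the output of any specialisation model.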
Vulić, I., Glavaš, G., Mrkšić, N., & Korhonen, A. (2018). Post-specialisation: Retrofitting vectors of words unseen in lexical resources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2018) (Vol. 1, pp. 516–527). Association for Computational Linguistics. https://doi.org/10.18653/v1/n18-1048