We propose a method to study the variation between word embedding models trained with different parameters. We explore the variation between models that differ in a single parameter by observing changes in words' distributional neighbors, and show how changing only one parameter can have a massive impact on a given semantic space. We show that this variation does not affect all words of the semantic space equally. Variation is influenced by parameter settings, for instance setting a parameter to its minimum or maximum value, but it also depends on intrinsic features of the corpus, such as a word's frequency. We identify semantic classes of words that remain stable across the trained models, as well as specific words that exhibit high variation.
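The neighbor-variation measure described above can be sketched as follows: for each word, compare its top-k cosine nearest neighbors in two embedding models and score the share of neighbors that differ. This is a minimal illustration with toy random vectors; the function names, the value of k, and the perturbation used to simulate a retrained model are all assumptions for demonstration, not the paper's exact setup.

```python
import numpy as np

def top_k_neighbors(vectors, k):
    """Return the indices of each row's k nearest cosine neighbors."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude the word itself
    return np.argsort(-sims, axis=1)[:, :k]

def neighbor_variation(vectors_a, vectors_b, k=25):
    """Per-word variation score between two models sharing a vocabulary:
    the fraction of top-k neighbors NOT common to both models.
    0.0 = identical neighborhoods, 1.0 = fully disjoint neighborhoods."""
    nn_a = top_k_neighbors(vectors_a, k)
    nn_b = top_k_neighbors(vectors_b, k)
    return np.array([
        1.0 - len(set(a) & set(b)) / k
        for a, b in zip(nn_a, nn_b)
    ])

# Toy demo: a 100-word, 50-dimensional "model" and a noisy retraining of it.
rng = np.random.default_rng(0)
model_a = rng.normal(size=(100, 50))
model_b = model_a + rng.normal(scale=0.5, size=(100, 50))
scores = neighbor_variation(model_a, model_b, k=10)
print(f"mean variation: {scores.mean():.3f}")
```

Averaging the per-word scores gives a single model-to-model distance, while the per-word distribution exposes which words are stable and which vary, as the abstract describes.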
Pierrejean, B., & Tanguy, L. (2018). Towards qualitative word embeddings evaluation: Measuring neighbors variation. In Proceedings of the NAACL HLT 2018 Student Research Workshop (pp. 32–39). Association for Computational Linguistics. https://doi.org/10.18653/v1/n18-4005