In natural language processing, combining multiple pre-trained word embeddings has become a viable approach to improving word representations. However, why such improvements arise remains poorly understood. In this paper, we investigate this question by first proposing a novel word meta-embedding method that disentangles common and individual information from different word embeddings and learns representations for both. Based on the proposed method, we then carry out a systematic evaluation of how common and individual information contribute to different tasks. Our intrinsic evaluation results suggest that common information is critical for word-level representations in terms of word similarity and relatedness. In contrast, our extrinsic evaluation on natural language inference shows that common and individual information play different roles and can complement each other. Further, both intrinsic and extrinsic evaluations reveal that explicitly separating common and individual information is beneficial for learning word meta-embeddings.
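The abstract does not spell out the architecture, so the following is only a minimal sketch of the general idea, assuming an autoencoder-style setup: each source embedding is encoded into a shared "common" part and a source-specific "individual" part, reconstruction keeps information, and an agreement term pulls the common encodings of different sources together. All names (`DisentangledMetaEmbedding`, `loss_fn`), dimensions, and loss choices here are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch of disentangling common vs. individual information
# from multiple pre-trained embedding sets; NOT the paper's implementation.
import torch
import torch.nn as nn

class DisentangledMetaEmbedding(nn.Module):
    """Encode each source embedding into a 'common' and an 'individual'
    part, then reconstruct the source from their concatenation."""
    def __init__(self, dims, d_common=100, d_indiv=50):
        super().__init__()
        self.common_enc = nn.ModuleList(nn.Linear(d, d_common) for d in dims)
        self.indiv_enc = nn.ModuleList(nn.Linear(d, d_indiv) for d in dims)
        self.dec = nn.ModuleList(nn.Linear(d_common + d_indiv, d) for d in dims)

    def forward(self, embs):
        # embs: list of (batch, dim_i) tensors, one per embedding source
        commons = [enc(e) for enc, e in zip(self.common_enc, embs)]
        indivs = [enc(e) for enc, e in zip(self.indiv_enc, embs)]
        recons = [dec(torch.cat([c, i], dim=-1))
                  for dec, c, i in zip(self.dec, commons, indivs)]
        return commons, indivs, recons

def loss_fn(embs, commons, recons, lam=1.0):
    # Reconstruction preserves each source's information; the agreement
    # term pushes the common encodings of the two sources toward each other.
    recon = sum(nn.functional.mse_loss(r, e) for r, e in zip(recons, embs))
    agree = nn.functional.mse_loss(commons[0], commons[1])
    return recon + lam * agree

# Toy usage with random stand-ins for two 300-d embedding sets
# (e.g. GloVe and word2vec vectors for the same vocabulary).
torch.manual_seed(0)
model = DisentangledMetaEmbedding(dims=[300, 300])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
e1, e2 = torch.randn(64, 300), torch.randn(64, 300)
for _ in range(5):
    commons, indivs, recons = model([e1, e2])
    loss = loss_fn([e1, e2], commons, recons)
    opt.zero_grad(); loss.backward(); opt.step()

# One plausible meta-embedding: the shared common part concatenated
# with each source's individual part.
with torch.no_grad():
    commons, indivs, _ = model([e1, e2])
    meta = torch.cat([commons[0], indivs[0], indivs[1]], dim=-1)
```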
Chen, W., Sheng, M., Mao, J., & Sheng, W. (2020). Investigating Word Meta-Embeddings by Disentangling Common and Individual Information. IEEE Access, 8, 11692–11699. https://doi.org/10.1109/ACCESS.2020.2965719