How Do BERT Embeddings Organize Linguistic Knowledge?

14 citations · 48 Mendeley readers

Abstract

Several studies have investigated the linguistic information implicitly encoded in Neural Language Models. Most of these works have focused on quantifying the amount and type of information available within their internal representations and across their layers. In line with this scenario, we propose a different study, based on Lasso regression, aimed at understanding how the information encoded in BERT's sentence-level representations is arranged within its hidden units. Using a suite of probing tasks, we show the existence of a relationship between the implicit knowledge learned by the model and the number of individual units involved in encoding this competence. Moreover, we find that it is possible to identify groups of hidden units that are more relevant for specific linguistic properties.
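To illustrate the kind of analysis the abstract describes, the sketch below fits a sparse (Lasso) linear probe on BERT sentence-level embeddings and reads off which hidden units receive non-zero weights. This is a minimal illustration, not the authors' code: the model checkpoint, the toy sentences, the mean-pooled sentence representation, the sentence-length target, and the regularization strength are all assumptions; the paper instead uses a full suite of linguistic probing features.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# probe BERT sentence embeddings with Lasso regression and inspect
# which hidden units get non-zero coefficients.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Lasso

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy data: a handful of sentences and a simple numeric property
# (sentence length); real experiments use many sentences and a
# suite of linguistic features.
sentences = [
    "The cat sat on the mat.",
    "Colorless green ideas sleep furiously.",
    "She read the report before the meeting started.",
    "Rain fell all night.",
]
targets = np.array([len(s.split()) for s in sentences], dtype=float)

# Sentence-level representations: mean of the last layer's token vectors.
embeddings = []
with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt", truncation=True)
        hidden = model(**inputs).last_hidden_state        # (1, seq_len, 768)
        embeddings.append(hidden.mean(dim=1).squeeze(0).numpy())
X = np.stack(embeddings)                                  # (n_sentences, 768)

# Sparse linear probe: units with non-zero coefficients are the ones
# this probe relies on to predict the property.
probe = Lasso(alpha=0.01)
probe.fit(X, targets)
relevant_units = np.nonzero(probe.coef_)[0]
print(f"{len(relevant_units)} of {X.shape[1]} hidden units selected:",
      relevant_units[:10])
```

Counting and comparing the selected units across different probing tasks is, in spirit, how one can relate a linguistic property to the number and identity of hidden units involved in encoding it.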

Citation (APA)

Puccetti, G., Miaschi, A., & Dell’Orletta, F. (2021). How Do BERT Embeddings Organize Linguistic Knowledge? In Deep Learning Inside Out: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO 2021 - Proceedings, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2021 (pp. 48–57). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.deelio-1.6
