Grammatical information in BERT sentence embeddings as two-dimensional arrays

Abstract

Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information. Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model based on smaller amounts of simpler training data, which performs well on more complex test data. This indicates that current sentence embeddings contain regularly distributed information that can be captured when the embeddings are reshaped into higher-dimensional arrays. Our results shed light on the representations produced by language models and constitute a step towards few-shot learning approaches.
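As an illustrative sketch only (the abstract does not specify the configuration), the reshaping step described above can be pictured as follows: a BERT-base sentence embedding has 768 dimensions, and reshaping it into a two-dimensional array, here an assumed 24 x 32 grid, lets a pattern-detecting architecture such as a small convolutional network operate on it. The grid shape, the CNN, and all names below are hypothetical choices, not code from the paper.

# Hypothetical sketch: reshape a 1-D sentence embedding into a 2-D array
# and classify it for subject-verb agreement. The 24 x 32 grid and the
# small CNN are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

EMB_DIM = 768      # BERT-base sentence embedding size
GRID = (24, 32)    # assumed 2-D reshape; 24 * 32 == 768

class Agreement2DClassifier(nn.Module):
    """Binary classifier (agreement vs. violation) over reshaped embeddings."""

    def __init__(self) -> None:
        super().__init__()
        self.conv = nn.Sequential(
            # convolution scans for local patterns in the 2-D grid
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 8)),
        )
        self.head = nn.Linear(16 * 6 * 8, 2)

    def forward(self, sent_emb: torch.Tensor) -> torch.Tensor:
        # (batch, 768) -> (batch, 1, 24, 32): the one-dimensional vector
        # becomes a single-channel two-dimensional "image".
        x = sent_emb.view(-1, 1, *GRID)
        x = self.conv(x)
        return self.head(x.flatten(1))

# Stand-in usage; in practice sent_emb would come from BERT, e.g. the
# [CLS] vector or mean-pooled token states.
model = Agreement2DClassifier()
logits = model(torch.randn(4, EMB_DIM))  # shape (4, 2)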

Cite (APA)

Nastase, V., & Merlo, P. (2023). Grammatical information in BERT sentence embeddings as two-dimensional arrays. In Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023) (pp. 22–39). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.repl4nlp-1.3
