Abstract
In this work we analyze the named entity representations learned by Transformer-based language models. We investigate the role entities play in two tasks: a masked language modeling task and a sequence classification task. For this purpose we collect RefNews-12, a novel news topic classification dataset with 12 topics. We perform two complementary analyses. First, we use diagnostic models to quantify to what degree entity information is present in the hidden representations. Second, we perform entity mention substitution to measure how substitute entities with different properties affect model performance. By controlling for model uncertainty we are able to show that entities are identified and, depending on the task, play a measurable role in the model's predictions. Additionally, we show that the entities' types alone are not enough to account for this. Finally, we find that the frequency with which entities occur is important for the masked language modeling task, and that the entities' distributions over topics are important for topic classification.
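As a rough illustration of the first analysis method, the sketch below trains a linear diagnostic classifier (probe) on mention representations extracted from a Transformer's hidden states; its held-out accuracy indicates how linearly recoverable entity information is. The checkpoint (bert-base-cased), the mean-pooling of sub-token states, and the toy type-labelled examples are assumptions for illustration, not the paper's actual setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Assumed checkpoint; the paper's exact model and probed layers are not specified here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

def mention_representation(sentence: str, mention: str) -> torch.Tensor:
    """Mean-pool the final-layer hidden states of the tokens in `mention`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    mention_ids = tokenizer(mention, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the mention's sub-token span (first occurrence).
    for start in range(len(ids) - len(mention_ids) + 1):
        if ids[start:start + len(mention_ids)] == mention_ids:
            return hidden[start:start + len(mention_ids)].mean(dim=0)
    raise ValueError(f"Mention {mention!r} not found in {sentence!r}")

# Toy probe data: does the representation encode the entity's type?
examples = [
    ("Angela Merkel visited Paris on Monday.", "Angela Merkel", "PER"),
    ("Angela Merkel visited Paris on Monday.", "Paris", "LOC"),
    ("Microsoft announced a new product in Seattle.", "Microsoft", "ORG"),
    ("Microsoft announced a new product in Seattle.", "Seattle", "LOC"),
]
X = torch.stack([mention_representation(s, m) for s, m, _ in examples]).numpy()
y = [label for _, _, label in examples]

# A linear diagnostic classifier; in practice it would be evaluated on held-out mentions.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))
```

The second method, entity mention substitution, follows the same extraction step but replaces the mention string with a substitute entity of a chosen frequency or type before re-running the model and comparing predictions.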
Cite
Schouten, S. F., Bloem, P., & Vossen, P. (2022). Probing the representations of named entities in Transformer-based Language Models. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 384–393). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.blackboxnlp-1.32