EXBERT: A visual analysis tool to explore learned representations in transformer models


Abstract

Large Transformer-based language models can route and reshape complex information via their multi-headed attention mechanism. Although the attention never receives explicit supervision, it can exhibit recognizable patterns that follow linguistic or positional information. Analyzing the learned representations and attentions is paramount to furthering our understanding of the inner workings of these models. However, analyses must keep pace with the rapid release of new models and the growing diversity of investigation techniques. To support analysis for a wide variety of models, we introduce EXBERT, a tool that helps humans conduct flexible, interactive investigations and formulate hypotheses about the model-internal reasoning process. EXBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets. By aggregating the annotations of the matched contexts, EXBERT can quickly replicate findings from the literature and extend them to previously unanalyzed models.
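The corpus-matching idea described above can be sketched as follows: embed a query token, retrieve its nearest neighbors among pre-annotated corpus embeddings, and aggregate the neighbors' linguistic annotations into a histogram. The names, dimensions, and toy corpus below are illustrative assumptions, not EXBERT's actual implementation.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy "annotated corpus": each row is a contextual embedding paired
# with a part-of-speech tag from some annotation pipeline (hypothetical data).
corpus_embeddings = rng.normal(size=(100, 8))
corpus_tags = ["NOUN" if i % 2 == 0 else "VERB" for i in range(100)]

def top_k_matches(query, embeddings, k=5):
    """Return indices of the k corpus rows most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q
    return np.argsort(-sims)[:k]

def aggregate_annotations(query, embeddings, tags, k=5):
    """Histogram of annotations over the query's nearest corpus contexts."""
    idx = top_k_matches(query, embeddings, k)
    return Counter(tags[i] for i in idx)

# Query: a slightly perturbed copy of a corpus embedding.
query = corpus_embeddings[0] + 0.01 * rng.normal(size=8)
print(aggregate_annotations(query, corpus_embeddings, corpus_tags))
```

Aggregating tags over matched contexts (rather than inspecting a single neighbor) is what makes the summary robust to individual annotation noise.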



Citation (APA)

Hoover, B., Strobelt, H., & Gehrmann, S. (2020). EXBERT: A visual analysis tool to explore learned representations in transformer models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 187–196). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-demos.22

