Extracting Operator Trees from Model Embeddings

Abstract

Transformer-based language models are able to capture linguistic properties, including hierarchical structures such as dependency and constituency trees. Whether similar structures for mathematics can be extracted from language models has not yet been explored. This work probes current state-of-the-art models for the extractability of Operator Trees from their contextualized embeddings, using the structural probe designed by Hewitt and Manning (2019). We release our code and data set for future analyses.
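The probe referenced in the abstract is the structural probe of Hewitt and Manning (2019), which learns a linear transformation of the model's token embeddings such that squared L2 distances between transformed embeddings approximate pairwise distances in a gold tree, here the operator tree of a formula. The sketch below is a minimal illustrative reconstruction of that idea, not the authors' released code; the class names, the L1 training objective, and the padding convention are assumptions.

# Minimal sketch of a Hewitt-and-Manning-style structural probe, framed for
# operator-tree distances. Illustrative only; not the paper's released code.
import torch
import torch.nn as nn


class StructuralProbe(nn.Module):
    """Learns a linear map so that squared L2 distances between transformed
    token embeddings approximate pairwise distances in the (operator) tree."""

    def __init__(self, model_dim: int, probe_rank: int):
        super().__init__()
        # Low-rank projection matrix B (model_dim x probe_rank).
        self.proj = nn.Parameter(torch.randn(model_dim, probe_rank) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, model_dim) contextualized token vectors.
        transformed = embeddings @ self.proj                # (batch, seq_len, rank)
        diffs = transformed.unsqueeze(2) - transformed.unsqueeze(1)
        return (diffs ** 2).sum(dim=-1)                     # (batch, seq_len, seq_len)


def probe_loss(pred_dists: torch.Tensor, tree_dists: torch.Tensor,
               lengths: torch.Tensor) -> torch.Tensor:
    """L1 loss between predicted squared distances and gold tree distances,
    normalized per formula by its squared length (assumed convention)."""
    mask = tree_dists >= 0                                  # negative marks padded pairs
    per_pair = (pred_dists - tree_dists).abs() * mask
    per_formula = per_pair.sum(dim=(1, 2)) / lengths.float() ** 2
    return per_formula.mean()


if __name__ == "__main__":
    # Toy example: batch of 2 "formulas", 5 tokens each, 16-dim embeddings,
    # with random gold tree distances standing in for real operator trees.
    emb = torch.randn(2, 5, 16)
    gold = torch.randint(0, 5, (2, 5, 5)).float()
    probe = StructuralProbe(model_dim=16, probe_rank=8)
    loss = probe_loss(probe(emb), gold, lengths=torch.tensor([5, 5]))
    loss.backward()
    print(float(loss))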

Citation (APA)

Reusch, A., & Lehner, W. (2022). Extracting Operator Trees from Model Embeddings. In MathNLP 2022 - 1st Workshop on Mathematical Natural Language Processing, Proceedings of the Workshop (pp. 40-50). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.mathnlp-1.6
