SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings

3 citations · 57 Mendeley readers

Abstract

Contextualised embeddings such as BERT have become de facto state-of-the-art references in many NLP applications thanks to their impressive performance. However, their opaqueness makes their behaviour hard to interpret. We present SLICE, a hybrid model that combines supersense labels with contextual embeddings. We introduce a weakly supervised method to learn interpretable embeddings from raw corpora and small lists of seed words. Our model represents both a word and its context as embeddings in the same compact space, whose dimensions correspond to interpretable supersenses. We assess the model on a supersense tagging task for French nouns. The small amount of supervision required makes it particularly well suited to low-resource scenarios. Thanks to its interpretability, we perform linguistic analyses of the predicted supersenses in terms of input word and context representations.
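To make the idea concrete, here is a minimal sketch of how embeddings whose dimensions are supersenses could be used for tagging. Everything below is an assumption for illustration: the five-label inventory is a toy subset, and the element-wise-sum combination rule and the `predict_supersense` helper are hypothetical, not the paper's actual architecture.

```python
import numpy as np

# Toy supersense inventory (assumed subset; the paper uses a French
# noun supersense inventory whose exact labels may differ).
SUPERSENSES = ["Animal", "Artifact", "Food", "Person", "Location"]

def predict_supersense(word_vec: np.ndarray, context_vec: np.ndarray) -> str:
    """Combine word and context embeddings, which live in the same
    K-dimensional space (one dimension per supersense), and return
    the highest-scoring supersense label."""
    scores = word_vec + context_vec  # assumed combination: element-wise sum
    return SUPERSENSES[int(np.argmax(scores))]

# Toy example: French "avocat" is ambiguous (lawyer vs. avocado).
# Out of context, its word vector hesitates between Person and Food.
word_avocat = np.array([0.0, 0.1, 0.8, 0.9, 0.0])
# A food-related context ("j'ai mangé un avocat") adds mass to Food.
context_food = np.array([0.1, 0.0, 1.2, 0.1, 0.0])

print(predict_supersense(word_avocat, context_food))  # -> "Food"
```

Because each dimension is itself a supersense, the score vector is directly readable: one can inspect how much the word representation and the context representation each contributed to the winning label, which is the kind of linguistic analysis the interpretability of the model enables.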

Citation (APA)

Aloui, C., Ramisch, C., Nasr, A., & Barque, L. (2020). SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 3357–3370). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.298
