Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers


Abstract

We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. Because transformers contain multiple layers of multi-head self-attention, information in the sentence is distributed across many tokens, which degrades zero-shot token-level performance. We find that a soft attention module that explicitly encourages sharpness of the attention weights can significantly outperform existing methods.
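To make the general idea concrete, the sketch below shows a minimal soft attention pooling head over transformer token representations: it is trained only with the sentence-level objective, while the per-token attention weights can be read off at inference time as zero-shot token scores. The entropy-based sharpness penalty, module names, and hyperparameters here are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn as nn

    class SoftAttentionHead(nn.Module):
        """Sentence classifier head that pools token vectors with soft attention.

        The attention weights receive no token-level supervision, but can be
        interpreted as token importance scores for zero-shot sequence labeling.
        """

        def __init__(self, hidden_size: int, num_labels: int):
            super().__init__()
            self.attn_scorer = nn.Linear(hidden_size, 1)      # unnormalized token score e_i
            self.classifier = nn.Linear(hidden_size, num_labels)

        def forward(self, token_states, attention_mask):
            # token_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
            scores = self.attn_scorer(token_states).squeeze(-1)            # (batch, seq_len)
            scores = scores.masked_fill(attention_mask == 0, float("-inf"))
            weights = torch.softmax(scores, dim=-1)                        # soft attention a_i

            # Attention-weighted sentence representation for the sentence-level task.
            sentence_vec = torch.bmm(weights.unsqueeze(1), token_states).squeeze(1)
            logits = self.classifier(sentence_vec)

            # Sharpness regularizer (an assumed form): penalize high-entropy attention
            # so that the sentence-level evidence concentrates on a few tokens.
            entropy = -(weights.clamp_min(1e-9).log() * weights).sum(dim=-1).mean()
            return logits, weights, entropy

During training, the entropy term would be added to the sentence-level classification loss with a small coefficient; at test time, thresholding the returned weights yields token-level predictions without any token-level labels.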

Citation (APA)
Bujel, K., Yannakoudakis, H., & Rei, M. (2021). Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers. In RepL4NLP 2021 - 6th Workshop on Representation Learning for NLP, Proceedings of the Workshop (pp. 195–205). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.repl4nlp-1.20
