We investigate how sentence-level transformers can be modified into effective token-level sequence labelers without any direct token-level supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures: because transformers contain multiple layers of multi-head self-attention, information from the sentence becomes distributed across many tokens, which degrades zero-shot token-level performance. We find that a soft attention module that explicitly encourages sharp attention weights can significantly outperform existing methods.
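To make the general idea concrete, the following is a minimal sketch (not the paper's exact formulation) of a soft attention head placed on top of a pretrained transformer encoder: the model is trained only on sentence-level labels, the per-token attention weights are read off as zero-shot token-level scores, and a hypothetical entropy penalty stands in for the sharpness-encouraging term. Module and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SoftAttentionLabeler(nn.Module):
    """Sketch: soft attention over transformer token states.

    Trained with sentence-level supervision only; the attention weight
    assigned to each token doubles as a zero-shot token-level score.
    """

    def __init__(self, hidden_dim: int, num_classes: int = 2):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)                 # per-token attention logit
        self.classifier = nn.Linear(hidden_dim, num_classes)  # sentence-level head

    def forward(self, token_states, attention_mask):
        # token_states: (batch, seq_len, hidden_dim) from a pretrained encoder
        # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
        logits = self.score(token_states).squeeze(-1)            # (batch, seq_len)
        logits = logits.masked_fill(attention_mask == 0, -1e9)   # ignore padding
        weights = torch.softmax(logits, dim=-1)                  # token-level scores

        # Attention-weighted pooling for the sentence-level prediction.
        pooled = torch.bmm(weights.unsqueeze(1), token_states).squeeze(1)
        sentence_logits = self.classifier(pooled)

        # Hypothetical sharpness term: lower entropy -> more peaked attention.
        entropy = -(weights * torch.log(weights + 1e-9)).sum(dim=-1).mean()
        return sentence_logits, weights, entropy
```

In this sketch, the entropy term would be added to the sentence-level classification loss so that training pushes the attention distribution toward a few confident tokens, which is the property the abstract argues is needed for good zero-shot token-level performance.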
Bujel, K., Yannakoudakis, H., & Rei, M. (2021). Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers. In RepL4NLP 2021 - 6th Workshop on Representation Learning for NLP, Proceedings of the Workshop (pp. 195–205). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.repl4nlp-1.20