Augmented natural language for generative sequence labeling


Abstract

We propose a generative framework for joint sequence labeling and sentence-level classification. Our model performs multiple sequence labeling tasks at once using a single, shared natural language output space. Unlike prior discriminative methods, our model naturally incorporates label semantics and shares knowledge across tasks. Our framework is general purpose, performing well on few-shot, low-resource, and high-resource tasks. We demonstrate these advantages on popular named entity recognition, slot labeling, and intent classification benchmarks. We set a new state-of-the-art for few-shot slot labeling, improving substantially upon the previous 5-shot (75.0% → 90.9%) and 1-shot (70.4% → 81.0%) state-of-the-art results. Furthermore, our model generates large improvements (46.27% → 63.83%) in low-resource slot labeling over a BERT baseline by incorporating label semantics. We also maintain competitive results on high-resource tasks, performing within two points of the state-of-the-art on all tasks and setting a new state-of-the-art on the SNIPS dataset.
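To make the idea of a shared natural language output space concrete, the sketch below shows one hypothetical way to serialize a BIO-tagged utterance into a label-annotated target string that a sequence-to-sequence model could generate. The bracketing format here is illustrative only, not the exact serialization used in the paper; the point is that slot labels appear as readable words, letting the model exploit their semantics.

```python
# Illustrative sketch (hypothetical format, not the paper's exact one):
# serialize a slot-labeled utterance into a natural-language target string
# so a generative model can emit the labels as ordinary text.

def to_augmented_nl(tokens, bio_tags):
    """Convert BIO-tagged tokens into a bracketed natural-language string.

    Tokens tagged 'O' are copied verbatim; each labeled span becomes
    "[ span text | label ]", exposing the label's surface form so the
    model can share knowledge across tasks via label semantics.
    """
    out, span, label = [], [], None

    def flush():
        nonlocal span, label
        if span:
            out.append(f"[ {' '.join(span)} | {label} ]")
            span, label = [], None

    for tok, tag in zip(tokens, bio_tags):
        if tag == "O":
            flush()
            out.append(tok)
        elif tag.startswith("B-"):  # beginning of a labeled span
            flush()
            span, label = [tok], tag[2:].replace("_", " ")
        else:  # "I-" continuation of the current span
            span.append(tok)
    flush()
    return " ".join(out)

tokens = ["wake", "me", "at", "six", "am"]
tags = ["O", "O", "O", "B-time", "I-time"]
print(to_augmented_nl(tokens, tags))
# -> wake me at [ six am | time ]
```

Because the target is plain text, the same decoder vocabulary serves every labeling task, and a sentence-level intent can be appended to the same output string rather than predicted by a separate classification head.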

Citation (APA)

Athiwaratkun, B., dos Santos, C. N., Krone, J., & Xiang, B. (2020). Augmented natural language for generative sequence labeling. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 375–385). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.27
