Unleashing the True Potential of Sequence-to-Sequence Models for Sequence Tagging and Structure Parsing

Abstract

Sequence-to-Sequence (S2S) models have achieved remarkable success on various text generation tasks. However, learning complex structures with S2S models remains challenging, as external neural modules and additional lexicons are often supplemented to predict non-textual outputs. We present a systematic study of S2S modeling using constrained decoding on four core tasks: part-of-speech tagging, named entity recognition, constituency, and dependency parsing, to develop efficient exploitation methods costing zero extra parameters. In particular, 3 lexically diverse linearization schemas and corresponding constrained decoding methods are designed and evaluated. Experiments show that although more lexicalized schemas yield longer output sequences that require heavier training, their sequences being closer to natural language makes them easier to learn. Moreover, S2S models using our constrained decoding outperform other S2S approaches using external resources. Our best models perform better than or comparably to the state-of-the-art for all 4 tasks, highlighting the promise of S2S models for generating non-sequential structures.
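To make the constrained-decoding idea concrete, here is a minimal Python sketch, assuming a toy word/tag alternation schema for part-of-speech tagging (one plausible lexicalized linearization; the paper's actual schemas, tag inventory, and model differ, and every name below is illustrative). Note that the constraint never adds parameters: it only restricts which candidates the decoder may score at each step.

# A minimal sketch of schema-constrained greedy decoding for POS tagging.
# Assumptions (not from the paper): the output alternates word copies and
# tags, i.e. "w1 t1 w2 t2 ...", and `score` stands in for the S2S model's
# next-token log-probability.

VOCAB_TAGS = {"DT", "NN", "VBZ"}

def valid_next_tokens(prefix, source_words):
    """Return the set of tokens the schema allows at this step."""
    if len(prefix) % 2 == 1:            # odd positions hold tags
        return VOCAB_TAGS               # any tag is structurally valid
    n_words = len(prefix) // 2          # words emitted so far
    if n_words < len(source_words):
        return {source_words[n_words]}  # forced copy of the next word
    return {"</s>"}                     # source consumed: stop

def constrained_greedy_decode(score, source_words):
    """Greedy decoding restricted to schema-valid candidates only."""
    prefix = []
    while True:
        candidates = valid_next_tokens(prefix, source_words)
        best = max(candidates, key=lambda c: score(prefix, c))
        if best == "</s>":
            return prefix
        prefix.append(best)

# Dummy scorer that prefers the "right" tag after each word, for demonstration.
prefs = {"the": "DT", "dog": "NN", "barks": "VBZ"}
score = lambda prefix, c: 1.0 if prefix and prefs.get(prefix[-1]) == c else 0.0

print(constrained_greedy_decode(score, ["the", "dog", "barks"]))
# -> ['the', 'DT', 'dog', 'NN', 'barks', 'VBZ']

Because structurally invalid continuations are never scored, the output is guaranteed to be a well-formed linearization regardless of how the underlying model distributes probability mass.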

Citation (APA)

He, H., & Choi, J. D. (2023). Unleashing the True Potential of Sequence-to-Sequence Models for Sequence Tagging and Structure Parsing. Transactions of the Association for Computational Linguistics, 11, 582–599. https://doi.org/10.1162/tacl_a_00557
