A Sequence-to-Sequence Model for Semantic Role Labeling

15 citations · 86 Mendeley readers

Abstract

We explore a novel approach to Semantic Role Labeling (SRL) by casting it as a sequence-to-sequence process. We employ an attention-based model enriched with a copying mechanism to ensure faithful regeneration of the input sequence while enabling interleaved generation of argument role labels. Here, we apply this model in a monolingual setting, performing PropBank SRL on English-language data. The constrained sequence generation set-up enforced by the copying mechanism allows us to analyze the performance and special properties of the model on manually labeled data and to benchmark it against state-of-the-art sequence labeling models. We show that our model is able to solve the SRL argument labeling task on English data, yet further structural decoding constraints will need to be added to make the model truly competitive. Our work represents a first step towards more advanced, generative SRL labeling setups.
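The abstract describes target sequences that copy every input token while interleaving argument role labels. The paper's exact linearization scheme is not reproduced here; the sketch below shows one plausible way such interleaved targets could be built from BIO-tagged PropBank annotations (the bracket notation and the function name are our assumptions, not the authors' format):

```python
def interleave_labels(tokens, bio_tags):
    """Linearize (token, BIO tag) pairs into a target sequence that
    copies every input token and brackets argument spans with role
    labels, e.g. "(A0 The cat A0) (V sat V)"."""
    out = []
    open_role = None  # role label of the span currently being emitted
    for tok, tag in zip(tokens, bio_tags):
        if tag.startswith("B-"):
            if open_role is not None:  # close the previous span first
                out.append(f"{open_role})")
            open_role = tag[2:]
            out.append(f"({open_role}")
        elif tag == "O" and open_role is not None:
            out.append(f"{open_role})")
            open_role = None
        out.append(tok)  # every input token is copied verbatim
    if open_role is not None:  # close a span that ends the sentence
        out.append(f"{open_role})")
    return " ".join(out)
```

A decoder constrained this way may only emit the next input token or a label symbol at each step, which is how the copying mechanism keeps the regenerated sentence faithful to the input.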

Citation (APA)

Daza, A., & Frank, A. (2018). A Sequence-to-Sequence Model for Semantic Role Labeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 207–216). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-3027
