Towards robust semantic role labeling


Abstract

Most semantic role labeling (SRL) research has focused on training and evaluating on the same corpus. This strategy, although appropriate for initiating research, can lead to overtraining to the particular corpus. This article describes the operation of ASSERT, a state-of-the-art SRL system, and analyzes the robustness of the system when trained on one genre of data and used to label a different genre. As a starting point, results are first presented for training and testing the system on the PropBank corpus, which is annotated Wall Street Journal (WSJ) data. Experiments are then presented to evaluate the portability of the system to another source of data. These experiments are based on comparisons of performance using PropBanked WSJ data and PropBanked Brown Corpus data. The results indicate that whereas syntactic parses and argument identification transfer relatively well to a new corpus, argument classification does not. An analysis of the reasons for this is presented; these generally point to the more lexical/semantic features that dominate the classification task, in contrast to the more general structural features that dominate the argument identification task. © 2008 Association for Computational Linguistics.

Citation (APA)

Pradhan, S. S., Ward, W., & Martin, J. H. (2008). Towards robust semantic role labeling. Computational Linguistics, 34(2), 289–310. https://doi.org/10.1162/coli.2008.34.2.289
