Regularized structured perceptron: A case study on Chinese word segmentation, POS tagging and parsing

Abstract

The structured perceptron has become popular for NLP tasks such as tagging and parsing, yet practical NLP studies have paid little attention to its regularization. In this paper, we study three simple but effective task-independent regularization methods: (1) averaging the weights of separately trained models to reduce the bias caused by the specific order of the training examples; (2) adding a penalty term to the loss function; and (3) randomly corrupting the data flow during training, known as dropout in neural networks. Experiments are conducted on three NLP tasks, namely Chinese word segmentation, part-of-speech tagging, and dependency parsing. With proper regularization methods or combinations of them, error reductions relative to the averaged perceptron reach up to 10% on some of these tasks. © 2014 Association for Computational Linguistics.
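The abstract gives no implementation details, so the following Python sketch is only an illustration of how the three regularizers can be combined in one online training loop; it is not the paper's code. It assumes a toy setting where decoding is an argmax over a small finite candidate set (a real system would use Viterbi or beam search), and every name in it (train, feats, p_drop, and so on) is a hypothetical placeholder.

```python
import random
from collections import defaultdict

def dot(w, phi):
    """Score a sparse feature vector {name: value} under weights w."""
    return sum(w.get(f, 0.0) * v for f, v in phi.items())

def train(examples, labels, feats, epochs=5, decay=1e-4, p_drop=0.1, seed=0):
    """Online structured perceptron with an L2-style penalty (method 2,
    applied as weight decay) and feature dropout (method 3).
    `examples` is a list of (x, y) pairs, `labels` a finite candidate set
    standing in for a real decoder, and `feats(x, y)` returns a sparse
    feature dict; all of these are hypothetical placeholders."""
    rng = random.Random(seed)
    w = defaultdict(float)      # current weights
    w_sum = defaultdict(float)  # running sum for parameter averaging
    t = 0
    for _ in range(epochs):
        rng.shuffle(examples)   # a fresh example order each epoch
        for x, y in examples:
            t += 1
            # (3) Dropout: sample one mask per example and apply it to
            # every candidate, so gold and predicted feature vectors are
            # corrupted consistently.
            cands = {y2: feats(x, y2) for y2 in labels}
            dropped = {f for phi in cands.values() for f in phi
                       if rng.random() < p_drop}
            cands = {y2: {f: v for f, v in phi.items() if f not in dropped}
                     for y2, phi in cands.items()}
            # Decode: highest-scoring candidate under the current weights.
            y_hat = max(labels, key=lambda y2: dot(w, cands[y2]))
            if y_hat != y:  # standard perceptron update on a mistake
                for f, v in cands[y].items():
                    w[f] += v
                for f, v in cands[y_hat].items():
                    w[f] -= v
            # (2) Penalty: shrink all weights toward zero each step,
            # the online analogue of an L2 term in the loss.
            for f in w:
                w[f] *= 1.0 - decay
            # Accumulate the weight vector after every example; the
            # running average is the averaged-perceptron baseline.
            for f, v in w.items():
                w_sum[f] += v
    return {f: v / t for f, v in w_sum.items()}

def train_shuffled_ensemble(examples, labels, feats, n_models=4, **kw):
    """(1) Average the weights of models trained on different example
    orders, as the abstract describes; each run shuffles independently
    because it gets its own seed."""
    models = [train(list(examples), labels, feats, seed=s, **kw)
              for s in range(n_models)]
    keys = {f for m in models for f in m}
    return {f: sum(m.get(f, 0.0) for m in models) / n_models for f in keys}
```

The within-run running average in train corresponds to the averaged-perceptron baseline the paper measures against; train_shuffled_ensemble sketches method (1), while the decay and p_drop knobs stand in for methods (2) and (3). How the paper actually parameterizes or combines these methods is not stated in the abstract.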

Cite (APA)

Zhang, K., Su, J., & Zhou, C. (2014). Regularized structured perceptron: A case study on Chinese word segmentation, POS tagging and parsing. In 14th Conference of the European Chapter of the Association for Computational Linguistics 2014, EACL 2014 (pp. 164–173). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/e14-1018
