Improving recurrent neural networks with predictive propagation for sequence labelling

Abstract

Recurrent neural networks (RNNs) are a useful tool for sequence labelling tasks in natural language processing. Although in practice RNNs suffer from the vanishing/exploding gradient problem, their compactness still offers efficiency and makes them less prone to overfitting. In this paper we show that by propagating the predictions of previous labels we can improve the performance of RNNs while keeping the number of parameters unchanged and adding only one more step to inference. As a result, the models remain more compact and efficient than models with complex memory gates. In our experiments, we evaluate the idea on optical character recognition and chunking, achieving promising results.
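The abstract describes feeding the previous label's prediction back into the recurrent computation without introducing new parameters. Below is a minimal NumPy sketch of one way such predictive propagation could look, assuming the existing output matrix is reused (transposed) to project the previous softmax prediction into the hidden state. This is an illustrative interpretation, not the authors' exact formulation; all names (Wxh, Whh, Why, forward) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
D, H, K = 8, 16, 5          # input dim, hidden dim, number of labels

# Standard vanilla-RNN parameters: no extra gates, no extra matrices.
Wxh = rng.normal(scale=0.1, size=(H, D))
Whh = rng.normal(scale=0.1, size=(H, H))
Why = rng.normal(scale=0.1, size=(K, H))
bh = np.zeros(H)
by = np.zeros(K)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(xs):
    """Label a sequence xs (list of D-dim vectors), propagating the
    previous step's label prediction into the current hidden state."""
    h = np.zeros(H)
    y_prev = np.zeros(K)    # no prediction exists before the first step
    labels = []
    for x in xs:
        # Reuse Why (transposed) to mix the previous prediction into the
        # hidden state: one extra matrix-vector product per step, but no
        # new trainable parameters (an assumption of this sketch).
        h = np.tanh(Wxh @ x + Whh @ h + Why.T @ y_prev + bh)
        y_prev = softmax(Why @ h + by)
        labels.append(int(y_prev.argmax()))
    return labels

xs = [rng.normal(size=D) for _ in range(6)]
print(forward(xs))          # prints the predicted label indices

Note how the recurrence matches the abstract's claims: the parameter count is identical to a plain RNN, and inference costs only one additional matrix-vector product per time step.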

Citation (APA)

Tran, S. N., Zhang, Q., Nguyen, A., Vu, X. S., & Ngo, S. (2018). Improving recurrent neural networks with predictive propagation for sequence labelling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 452–462). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_41
