Training bidirectional recurrent neural network architectures with the scaled conjugate gradient algorithm


Abstract

Prediction on sequential data, where both upstream and downstream information is important, is a challenging task. The Bidirectional Recurrent Neural Network (BRNN) architecture was designed to deal with this class of problems. In this paper, we present the development and implementation of the Scaled Conjugate Gradient (SCG) learning algorithm for BRNN architectures. The model has been tested on the Protein Secondary Structure Prediction (PSSP) and Transmembrane Protein Topology Prediction (TMPTP) problems. Our method currently achieves preliminary results close to 73% correct predictions for the PSSP problem and close to 79% for the TMPTP problem, which are expected to improve with larger datasets, external rules, ensemble methods and filtering techniques. Importantly, the SCG algorithm trains the BRNN architecture approximately three times faster than the Backpropagation Through Time (BPTT) algorithm.
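
The paper itself does not include code, but the setting is easy to sketch. Below is a minimal, hypothetical illustration in PyTorch of a BRNN for per-residue secondary structure prediction (the PSSP setting, with three classes: helix, strand, coil). All layer sizes, data shapes and names are assumptions for illustration only; since common deep-learning libraries do not ship a Scaled Conjugate Gradient optimizer, plain gradient descent stands in where the paper's SCG update would go.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a BRNN for per-residue secondary structure
# prediction (PSSP): 20-dim amino-acid encodings in, 3 classes out
# (helix / strand / coil). The paper trains such a network with SCG;
# standard libraries lack an SCG optimizer, so SGD stands in here.

class BRNN(nn.Module):
    def __init__(self, n_in=20, n_hidden=32, n_classes=3):
        super().__init__()
        # bidirectional=True creates the forward and backward state
        # chains that let each position see both upstream and
        # downstream context
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_classes)

    def forward(self, x):        # x: (batch, seq_len, n_in)
        h, _ = self.rnn(x)       # h: (batch, seq_len, 2 * n_hidden)
        return self.out(h)       # per-position class scores

model = BRNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # SCG stand-in
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 4 random "protein" sequences of length 50
x = torch.randn(4, 50, 20)
y = torch.randint(0, 3, (4, 50))

for step in range(100):
    opt.zero_grad()
    logits = model(x)            # (4, 50, 3)
    loss = loss_fn(logits.reshape(-1, 3), y.reshape(-1))
    loss.backward()
    opt.step()
```

The key design point is the bidirectional recurrence: each position's prediction is conditioned on both the forward (upstream) and backward (downstream) hidden states, which is why BRNNs suit problems such as PSSP and TMPTP where context on both sides of a residue matters.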

Citation (APA)

Agathocleous, M., Christodoulou, C., Promponas, V., Kountouris, P., & Vassiliades, V. (2016). Training bidirectional recurrent neural network architectures with the scaled conjugate gradient algorithm. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9886 LNCS, pp. 123–131). Springer Verlag. https://doi.org/10.1007/978-3-319-44778-0_15
