On the gradient-based sequential tuning of the echo state network reservoir parameters


Abstract

In this paper, the derivatives of the Echo State Network reservoir state with respect to its input scaling and spectral radius parameters are derived. This is achieved by rewriting the reservoir state update equation in terms of template matrices whose eigenvalues can be pre-calculated, so that the two parameters enter the update equation as simple scalar multipliers, which are differentiable. The paper then derives the derivatives and discusses why applying them directly in gradient descent to optimize the reservoir sequentially would be ineffective, owing to the nature of the error surface and the large eigenvalue spread of the reservoir state matrix. Finally, it is suggested how the derivatives obtained here can be applied to jointly optimize the reservoir and readout at the same time.
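To illustrate the idea described above, the following is a minimal sketch (not the paper's actual derivation) of an ESN state update written with frozen template matrices, so that the input scaling `a` and spectral radius `rho` appear as plain scalar multipliers. All names and values here are illustrative assumptions; the finite-difference check at the end merely confirms that the update is differentiable in `rho` via the chain rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 50, 1

# Template matrices: generated once, then frozen.
W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))   # input template
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # reservoir template
W /= np.max(np.abs(np.linalg.eigvals(W)))      # normalize to unit spectral radius

def update(x, u, a, rho):
    """One reservoir step: x' = tanh(a * W_in @ u + rho * W @ x).

    Because a and rho multiply fixed matrices, the update is a simple
    differentiable function of both parameters, e.g.
        dx'/drho = (1 - x'**2) * (W @ x).
    """
    return np.tanh(a * (W_in @ u) + rho * (W @ x))

u = np.array([0.7])
x0 = np.tanh(W_in @ u)           # some nonzero starting state
x1 = update(x0, u, a=0.5, rho=0.9)

# Analytic derivative w.r.t. rho vs. central finite difference.
analytic = (1.0 - x1**2) * (W @ x0)
eps = 1e-6
numeric = (update(x0, u, 0.5, 0.9 + eps) - update(x0, u, 0.5, 0.9 - eps)) / (2 * eps)
```

With this parameterization, the spectral radius of `rho * W` is exactly `rho` (since the template was normalized once), which is what lets the parameter be tuned by scalar multiplication rather than by rescaling and re-analyzing the weight matrix at every step.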

Citation (APA)

Yuenyong, S. (2016). On the gradient-based sequential tuning of the echo state network reservoir parameters. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9810 LNCS, pp. 651–660). Springer Verlag. https://doi.org/10.1007/978-3-319-42911-3_54
