In this paper, the derivatives of the Echo State Network reservoir with respect to its input scaling and spectral radius parameters are derived. This is achieved by rewriting the reservoir state update equation in terms of template matrices whose eigenvalues can be pre-computed, so that the two parameters enter the update equation as simple multiplicative factors, which are differentiable. The paper then derives the two derivatives and discusses why applying them directly in gradient descent to optimize the reservoir in a sequential manner would be ineffective, owing to the nature of the error surface and the large eigenvalue spread of the reservoir state matrix. Finally, it suggests how the derivatives obtained here can be applied to jointly optimize the reservoir and the readout.
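A minimal sketch of the idea, under stated assumptions (not the paper's exact construction): the template matrices `W0` and `W_in0` are fixed random matrices, with `W0` rescaled to unit spectral radius so that `rho * W0` has spectral radius exactly `rho` and `s * W_in0` carries the input scaling. The state update then reads x(t+1) = tanh(rho·W0·x(t) + s·W_in0·u(t+1)), and the derivatives of the state with respect to rho and s follow from the chain rule, carried recursively through time:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 20, 1

# Hypothetical template matrices: W0 rescaled to unit spectral radius so
# that rho * W0 has spectral radius exactly rho; W_in0 is the unit-scale
# input weight template multiplied by the input scaling s at update time.
W0 = rng.standard_normal((n_res, n_res))
W0 /= np.max(np.abs(np.linalg.eigvals(W0)))
W_in0 = rng.uniform(-1.0, 1.0, (n_res, n_in))

def step(x, dx_drho, dx_ds, u, rho, s):
    """One update x(t+1) = tanh(rho*W0 x(t) + s*W_in0 u(t+1)), carrying
    the recursive derivatives of the state w.r.t. rho and s."""
    a = rho * (W0 @ x) + s * (W_in0 @ u)
    x_new = np.tanh(a)
    g = 1.0 - x_new ** 2                              # tanh'(a)
    dx_drho_new = g * (W0 @ x + rho * (W0 @ dx_drho))  # chain rule + recurrence
    dx_ds_new = g * (W_in0 @ u + rho * (W0 @ dx_ds))
    return x_new, dx_drho_new, dx_ds_new

def run(u_seq, rho, s):
    """Drive the reservoir with an input sequence; return the final state
    and its derivatives w.r.t. rho and s."""
    x = np.zeros(n_res)
    dr = np.zeros(n_res)
    ds_ = np.zeros(n_res)
    for u in u_seq:
        x, dr, ds_ = step(x, dr, ds_, u, rho, s)
    return x, dr, ds_
```

Because rho and s appear only as scalar multipliers of fixed templates, these derivatives can be checked against central finite differences of the final state; in gradient descent they would be combined with the derivative of the readout error with respect to the state.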
CITATION STYLE
Yuenyong, S. (2016). On the gradient-based sequential tuning of the echo state network reservoir parameters. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9810 LNCS, pp. 651–660). Springer Verlag. https://doi.org/10.1007/978-3-319-42911-3_54