Evolving deep recurrent neural networks using ant colony optimization

Abstract

This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have previously been used to train the weights of neural networks, to the authors’ knowledge they have not been used to design the networks themselves. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63% for airspeed, 97% for altitude, and 120% for pitch over the previous best published results, while not requiring additional input neurons for residual values. The strategy also has many benefits for neuroevolution: it is easily parallelizable and scalable, and it can operate with any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.
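
The abstract describes the approach only at a high level. As a rough illustration of the general ACO idea (not the paper's actual algorithm), the Python sketch below maintains pheromone values over candidate feed-forward and recurrent connections, lets each ant sample a sparse recurrent structure, and reinforces the connections used by the best-scoring structure. All names and parameters are hypothetical, and the fitness function is a stand-in; a real implementation would train each sampled network (e.g. with backpropagation through time) and score it by its prediction error on the flight data.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_hidden = 4, 8        # candidate input and hidden/recurrent neurons
    n_ants, n_iters = 10, 50     # colony size and number of ACO iterations
    evaporation, deposit, tau0 = 0.1, 1.0, 1.0

    # One pheromone value per candidate connection: feed-forward (input -> hidden)
    # and recurrent (hidden -> hidden) edges.
    pher_in = np.ones((n_in, n_hidden))
    pher_rec = np.ones((n_hidden, n_hidden))

    def sample_mask(pher):
        # An ant keeps each candidate edge with probability pher / (pher + tau0).
        return rng.random(pher.shape) < pher / (pher + tau0)

    def fitness(mask_in, mask_rec):
        # Stand-in for training the sampled recurrent network and scoring it by
        # validation error; here it simply rewards sparsity (purely illustrative).
        return -(mask_in.sum() + mask_rec.sum()) + rng.normal(scale=0.5)

    best_score, best_masks = -np.inf, None
    for _ in range(n_iters):
        # Each ant samples a candidate sparse structure; score all of them.
        colony = [(sample_mask(pher_in), sample_mask(pher_rec)) for _ in range(n_ants)]
        scored = sorted(((fitness(mi, mr), mi, mr) for mi, mr in colony),
                        key=lambda s: s[0], reverse=True)
        if scored[0][0] > best_score:
            best_score, best_masks = scored[0][0], scored[0][1:]
        # Evaporate all pheromone, then reinforce the edges used by the best ant.
        pher_in *= 1.0 - evaporation
        pher_rec *= 1.0 - evaporation
        pher_in += deposit * scored[0][1]
        pher_rec += deposit * scored[0][2]

    print("best (stand-in) fitness:", round(best_score, 3))
    print("input->hidden edges kept:", int(best_masks[0].sum()))
    print("recurrent edges kept:", int(best_masks[1].sum()))

Because each ant's sampled network can be trained and evaluated independently, the scoring loop is straightforward to parallelize, which is consistent with the scalability benefit the abstract claims.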

Citation (APA)

Desell, T., Clachar, S., Higgins, J., & Wild, B. (2015). Evolving deep recurrent neural networks using ant colony optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9026, pp. 86–98). Springer Verlag. https://doi.org/10.1007/978-3-319-16468-7_8
