Abstract
A recurring challenge in applying feed-forward neural networks to a new dataset is the need to manually tune the neural network topology. If one’s attention is restricted to fully-connected three-layer networks, then only the number of neurons in the single hidden layer needs to be tuned manually. In this paper, we present a novel Ant Colony Optimization (ACO) algorithm that optimizes neural network topology for a given dataset. Our algorithm is not restricted to three-layer networks: it can produce topologies that contain multiple hidden layers, and topologies that do not have full connectivity between successive layers. Our algorithm uses Backward Error Propagation (BP) as a subroutine, but in general any neural network learning algorithm could be used within our ACO approach instead. We describe all the elements necessary to tackle our learning problem using ACO, and experimentally compare the classification performance of the optimized topologies produced by our ACO algorithm with the standard fully-connected three-layer network topology most commonly used in the literature.
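The abstract's core idea, ants stochastically constructing candidate topologies guided by pheromone and reinforcing good choices, can be sketched as follows. This is an illustrative sketch only, not the authors' algorithm: the choice set, parameters, and fitness function are all hypothetical, and a synthetic fitness stands in for the paper's actual step of training each candidate network with BP and measuring its classification performance.

```python
import random

# Hypothetical choice set: possible hidden-layer sizes; 0 means "stop adding layers".
LAYER_CHOICES = [0, 2, 4, 8, 16]
MAX_LAYERS = 3          # assumed cap on hidden-layer depth
EVAPORATION = 0.1       # assumed pheromone evaporation rate

def build_topology(pheromone, rng):
    """One ant constructs a list of hidden-layer sizes,
    sampling each choice in proportion to its pheromone level."""
    topology = []
    for depth in range(MAX_LAYERS):
        size = rng.choices(LAYER_CHOICES, weights=pheromone[depth])[0]
        if size == 0:
            break
        topology.append(size)
    return topology

def synthetic_fitness(topology):
    """Stand-in for 'train the candidate network with BP and score it'.
    Arbitrarily rewards topologies near two hidden layers of size 8."""
    target = [8, 8]
    padded = topology + [0, 0]
    diff = sum(abs(a - b) for a, b in zip(padded, target + [0, 0]))
    return 1.0 / (1.0 + diff)

def aco_search(n_ants=10, n_iterations=30, seed=0):
    """Minimal ACO loop: construct, evaluate, evaporate, reinforce."""
    rng = random.Random(seed)
    pheromone = [[1.0] * len(LAYER_CHOICES) for _ in range(MAX_LAYERS)]
    best, best_fit = None, -1.0
    for _ in range(n_iterations):
        for _ in range(n_ants):
            topo = build_topology(pheromone, rng)
            fit = synthetic_fitness(topo)
            if fit > best_fit:
                best, best_fit = topo, fit
        # Evaporate all trails, then deposit on the best-so-far ant's choices.
        for depth in range(MAX_LAYERS):
            for i in range(len(LAYER_CHOICES)):
                pheromone[depth][i] *= (1.0 - EVAPORATION)
        for depth, size in enumerate(best):
            pheromone[depth][LAYER_CHOICES.index(size)] += best_fit
    return best, best_fit
```

In the paper, the construction graph and pheromone model are richer (allowing, e.g., partial connectivity between layers), but the same construct/evaluate/update cycle applies.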
CITATION STYLE
Salama, K., & Abdelbar, A. M. (2014). A Novel Ant Colony Algorithm for Building Neural Network Topologies. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8667. https://doi.org/10.1007/978-3-319-09952-1_1