Artificial neural networks achieve fast processing through massive parallelism among non-linear computational elements. Most neural network models adapt to a problem by changing the strength of the interconnections between these elements according to a given learning algorithm; a fixed interconnection structure, however, may limit this adaptability. Field-programmable hardware devices make it possible to implement neural networks whose structure adapts in-circuit. This paper describes an FPGA implementation of the FAST (Flexible Adaptable-Size Topology) architecture, a neural network that dynamically changes its size. Since initial experiments showed good performance on pattern-clustering tasks, we applied our dynamic-structure FAST neural network to an image segmentation and recognition problem.
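The paper itself specifies the FAST update rules; to illustrate the size-adaptation idea behind it, the following Python sketch grows a clustering network by one neuron whenever an input falls outside every neuron's sensitivity region and prunes neurons whose regions collapse. The class name, parameter names, and thresholds are illustrative assumptions, not the published algorithm or its FPGA realization.

```python
import numpy as np

class AdaptableSizeClusterer:
    """Grow-when-needed clustering sketch (not the authors' exact FAST rules)."""

    def __init__(self, initial_radius=0.3, learning_rate=0.1,
                 shrink_rate=0.995, min_radius=0.05):
        # All hyperparameters here are assumed values for illustration.
        self.initial_radius = initial_radius
        self.learning_rate = learning_rate
        self.shrink_rate = shrink_rate
        self.min_radius = min_radius
        self.centers = []  # prototype vector of each neuron
        self.radii = []    # sensitivity radius of each neuron

    def present(self, x):
        """Present one input; adapt, grow, or prune; return network size."""
        x = np.asarray(x, dtype=float)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            winner = int(np.argmin(dists))
            if dists[winner] <= self.radii[winner]:
                # Covered input: pull the winning prototype toward it and
                # tighten its sensitivity region.
                self.centers[winner] += self.learning_rate * (x - self.centers[winner])
                self.radii[winner] *= self.shrink_rate
            else:
                # Uncovered input: grow the network by one neuron.
                self.centers.append(x.copy())
                self.radii.append(self.initial_radius)
        else:
            self.centers.append(x.copy())
            self.radii.append(self.initial_radius)
        # Prune neurons whose regions have shrunk below the floor.
        keep = [i for i, r in enumerate(self.radii) if r >= self.min_radius]
        self.centers = [self.centers[i] for i in keep]
        self.radii = [self.radii[i] for i in keep]
        return len(self.centers)

# Usage: two well-separated Gaussian blobs; the network size should
# settle near the number of underlying clusters.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])
net = AdaptableSizeClusterer()
for x in rng.permutation(data):
    size = net.present(x)
print(size)
```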
Pérez-Uribe, A., & Sanchez, E. (1996). FPGA implementation of an Adaptable-Size neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1112 LNCS, pp. 383–388). Springer Verlag. https://doi.org/10.1007/3-540-61510-5_67