In this chapter we review the basic aspects of Neural Networks (NNs) and investigate techniques for reducing the time necessary to build NN models. With this goal in mind, we present the details of a GPU parallel implementation of the Back-Propagation (BP) and Multiple Back-Propagation (MBP) algorithms. The CUDA implementation of both algorithms includes the adaptive step size and robustness techniques, which together improve the algorithms' stability and training speed. The training process is then decomposed into three sequential phases: forward propagation, robust learning, and back-propagation. For each phase we detail the design of efficient kernels and provide the corresponding execution models. Beyond the speedup afforded by the GPU, we present the Autonomous Training System (ATS), an automatic generator of network topologies. The ATS mimics the heuristics usually employed for model selection, performing a step-by-step constructive search guided by error evaluation, and thereby significantly reduces the effort required to build NN models. A final experimental section supports the effectiveness of the proposed systems; the software configuration parameters, together with results and discussion on benchmarks and real-world case studies, are presented there.
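As a rough illustration of the adaptive step size and robustness techniques mentioned above, one common scheme grows the step size while the error keeps decreasing and, when the error deteriorates, discards the update (rolling back to the best weights seen so far) and shrinks the step size. The constants and function names below are illustrative assumptions, not the chapter's exact formulation:

```python
import numpy as np

def adaptive_step_training(weights, grad_fn, error_fn, epochs=100,
                           up=1.1, down=0.5, tolerance=1.001):
    """Sketch of adaptive step size with a robustness rollback.

    grad_fn(w) and error_fn(w) are assumed callbacks supplied by the
    caller; up, down and tolerance are illustrative constants.
    """
    step = 0.01
    w = np.asarray(weights, dtype=float)
    best_w, best_err = w.copy(), error_fn(w)
    for _ in range(epochs):
        w_new = w - step * grad_fn(w)
        err = error_fn(w_new)
        if err <= best_err * tolerance:
            # error acceptable: keep the update and grow the step size
            w, step = w_new, step * up
            if err < best_err:
                best_w, best_err = w_new.copy(), err
        else:
            # robustness: discard the update, restore the best weights
            # seen so far, and shrink the step size
            w, step = best_w.copy(), step * down
    return best_w, best_err
```

On a simple quadratic error surface this self-regulates: the step size grows geometrically until an update overshoots, at which point the rollback halves it and training continues from the best point found.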
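The step-by-step constructive search that the ATS performs can be caricatured as follows: train candidate networks of increasing hidden-layer size, keep the topology with the lowest validation error, and stop after a few consecutive non-improving additions. The function names, the growth rule, and the stopping criterion are assumptions for illustration, not the chapter's exact algorithm:

```python
def constructive_topology_search(train_and_score, max_hidden=50, patience=2):
    """Grow the hidden layer one unit at a time, keeping the topology
    with the lowest validation error; stop after `patience` consecutive
    non-improving additions.

    train_and_score(n) is an assumed callback that trains a network with
    n hidden units and returns its validation error.
    """
    best_n, best_err = None, float("inf")
    stale = 0
    for n in range(1, max_hidden + 1):
        err = train_and_score(n)
        if err < best_err:
            best_n, best_err, stale = n, err, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_n, best_err
```

This mirrors the manual heuristic of adding capacity only while it pays off, which is what makes an automated search a practical substitute for hand-tuned model selection.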
Citation:
Lopes, N., & Ribeiro, B. (2015). Neural Networks. In Studies in Big Data (Vol. 7, pp. 39–69). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-06938-8_3