On training efficiency and computational costs of a feed forward neural network: A review

Abstract

This paper presents a comprehensive review of the problem of choosing a suitable activation function for the hidden layer of a feed forward neural network. Since the nonlinear component of a neural network is the main contributor to the network's mapping capabilities, the different choices that may lead to enhanced performance, in terms of training, generalization, or computational cost, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities is presented.
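
As an illustration of this kind of conversion, the sketch below uses the standard identity tanh(z) = 2*sigmoid(2z) - 1 to map a single-hidden-layer tanh network onto an exactly equivalent logistic-sigmoid network by rescaling its weights and biases. This is a generic example of the technique, not necessarily the specific procedure developed by the authors; the layer sizes and the NumPy-based toy network are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Original single-hidden-layer network with tanh hidden units:
    #   y = W2 @ tanh(W1 @ x + b1) + b2
    n_in, n_hidden, n_out = 3, 5, 2          # illustrative sizes
    W1 = rng.normal(size=(n_hidden, n_in))
    b1 = rng.normal(size=n_hidden)
    W2 = rng.normal(size=(n_out, n_hidden))
    b2 = rng.normal(size=n_out)

    # Equivalent network with logistic-sigmoid hidden units, from
    # tanh(z) = 2*sigmoid(2z) - 1:
    #   y = (2*W2) @ sigmoid(2*W1 @ x + 2*b1) + (b2 - W2 @ ones)
    W1_s, b1_s = 2.0 * W1, 2.0 * b1
    W2_s, b2_s = 2.0 * W2, b2 - W2 @ np.ones(n_hidden)

    # Both networks produce identical outputs for any input.
    x = rng.normal(size=n_in)
    y_tanh = W2 @ np.tanh(W1 @ x + b1) + b2
    y_sig  = W2_s @ sigmoid(W1_s @ x + b1_s) + b2_s
    print(np.allclose(y_tanh, y_sig))        # True: same mapping

The rescaling leaves the input-output mapping untouched while changing the hidden-layer nonlinearity, which is the sense in which a network configuration can be converted between activation functions.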

Citation (APA)

Laudani, A., Lozito, G. M., Fulginei, F. R., & Salvini, A. (2015). On training efficiency and computational costs of a feed forward neural network: A review. Computational Intelligence and Neuroscience. Hindawi Publishing Corporation. https://doi.org/10.1155/2015/818243
