Evaluation of Parameter Settings for Training Neural Networks Using Backpropagation Algorithms: A Study With Clinical Datasets

Abstract

Artificial neural networks (ANNs) are widely used for classification, and the backpropagation (BP) algorithm is the training algorithm most commonly used. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term. The objective of this work is to investigate the performance of 12 different BP algorithms under variations in these network parameter values. The algorithms were evaluated with different training and testing samples drawn from three benchmark clinical datasets, namely Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC), obtained from the University of California Irvine (UCI) machine learning repository.
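The parameters the abstract enumerates can be made concrete with a minimal sketch of backpropagation training with a momentum term. This is an illustrative assumption, not the study's setup: the one-hidden-layer network, the XOR toy data (standing in for a clinical dataset), and every hyperparameter value below are placeholders chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (XOR); illustrative stand-in for a dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network parameters discussed in the text (values are illustrative).
n_hidden = 8      # neurons in the single hidden layer
lr = 0.1          # learning rate
momentum = 0.9    # momentum term
epochs = 5000     # number of training epochs

def sigmoid(z):
    # Activation function (another of the tunable parameters).
    return 1.0 / (1.0 + np.exp(-z))

# Initial weights and biases: small random values.
W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
# Velocity terms for the momentum update.
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

# Mean squared error with the initial weights, for comparison after training.
loss0 = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

for _ in range(epochs):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error terms via sigmoid derivatives (MSE objective).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates with momentum.
    vW2 = momentum * vW2 - lr * (h.T @ d_out); W2 += vW2
    vb2 = momentum * vb2 - lr * d_out.sum(axis=0); b2 += vb2
    vW1 = momentum * vW1 - lr * (X.T @ d_h); W1 += vW1
    vb1 = momentum * vb1 - lr * d_h.sum(axis=0); b1 += vb1

loss_final = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"MSE before training: {loss0:.4f}, after: {loss_final:.4f}")
```

Each parameter above (hidden-layer size, learning rate, momentum, epoch count, weight initialization, activation function) can be varied independently, which is what makes the tuning problem the abstract describes combinatorially hard.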

CITATION (APA)

Leema, N., Nehemiah, K. H., Christo, E. V. R., & Kannan, A. (2020). Evaluation of Parameter Settings for Training Neural Networks Using Backpropagation Algorithms: A Study With Clinical Datasets. International Journal of Operations Research and Information Systems, 11(4), 62–85. https://doi.org/10.4018/IJORIS.2020100104
