Abstract
Deep neural networks (DNNs) have been used to solve many pattern recognition tasks, particularly the classification of images, sounds and texts. This is due to the ability of a DNN model to extract high-level feature representations. However, a DNN is defined by a set of hyperparameters that must be tuned to obtain the highest performance at the lowest computational cost. In this paper, the hyperparameters tuned are the number of hidden layers and the number of neurons in each layer. The challenge lies in choosing a suitable tuning method for this model. The aim of this study is to evaluate and compare a conventional grid search (GS) method with a population-based search method, the genetic algorithm (GA). The comparison is based on the performance of the DNN model in classifying MNIST handwritten digits, in terms of classification accuracy and the time taken to complete the search. The MNIST dataset is divided into three sets: 54,000 training images, 6,000 validation images and 10,000 testing images. The results show that the GA and GS methods achieved comparable classification accuracies of 98.23% and 98.27%, respectively. However, the GA method took roughly half the time of the GS method to find the optimized combination over the same search space: 4.19 hours compared to 8.59 hours.
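The abstract describes the tuning procedure only at a high level. The sketch below illustrates, in plain Python, how a grid search and a simple generational GA could each explore the same (hidden layers, neurons per layer) space. The train_and_evaluate placeholder, the candidate ranges, the population size and the GA operators are illustrative assumptions, not the authors' actual settings; in the study, the fitness of each combination would be the validation accuracy of a DNN trained on the MNIST split described above.

import itertools
import random
import time

def train_and_evaluate(n_hidden_layers, n_neurons):
    # Hypothetical stand-in for training a DNN with this architecture on MNIST
    # (54,000 training / 6,000 validation images) and returning validation
    # accuracy. Here it returns a deterministic fake score so the search
    # logic itself is runnable without any deep learning framework.
    rng = random.Random(n_hidden_layers * 10_000 + n_neurons)
    return 0.95 + 0.03 * rng.random()

# Assumed search space; the exact ranges used in the paper are not given in the abstract.
LAYER_CHOICES = [1, 2, 3, 4, 5]
NEURON_CHOICES = [64, 128, 256, 512, 1024]

def grid_search():
    # Exhaustively evaluate every (layers, neurons) combination.
    return max(itertools.product(LAYER_CHOICES, NEURON_CHOICES),
               key=lambda c: train_and_evaluate(*c))

def genetic_search(pop_size=6, generations=5, mutation_rate=0.2):
    # Evolve a small population of (layers, neurons) combinations.
    population = [(random.choice(LAYER_CHOICES), random.choice(NEURON_CHOICES))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (validation accuracy) and keep the better half as parents.
        population.sort(key=lambda c: train_and_evaluate(*c), reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = (p1[0], p2[1])                    # one-point crossover
            if random.random() < mutation_rate:       # random-reset mutation on one gene
                child = (random.choice(LAYER_CHOICES), child[1])
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: train_and_evaluate(*c))

if __name__ == "__main__":
    for name, search in [("grid search", grid_search), ("genetic algorithm", genetic_search)]:
        start = time.time()
        layers, neurons = search()
        print(f"{name}: {layers} hidden layers x {neurons} neurons "
              f"({time.time() - start:.2f}s)")

The time advantage reported in the abstract comes from the fact that the GA evaluates only pop_size x generations candidates (at most), whereas grid search trains a network for every point in the grid.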
Hui, A. N. I., Huddin, A. B., Ibrahim, M. F., Hashim, F. H., & Samad, S. A. (2019). GA-deep neural network optimization for image classification. International Journal of Advanced Trends in Computer Science and Engineering, 8(1.6 Special Issue), 238–245. https://doi.org/10.30534/ijatcse/2019/3681.62019