Simulated Annealing Algorithm for Deep Learning

174 citations · 257 Mendeley readers

This article is free to access.

Abstract

Deep learning (DL) is a new area of research in machine learning whose objective is to move us closer to the goal of artificial intelligence. It can learn many levels of abstraction and representation to make sense of data such as text, sound, and images. Although DL is useful for a variety of tasks, it is hard to train. Several methods for training deep networks toward an optimum have been proposed, including Stochastic Gradient Descent, Conjugate Gradient, Hessian-free optimization, and Krylov Subspace Descent. In this paper, we propose Simulated Annealing (SA) to improve the performance of Convolutional Neural Networks (CNN), as an alternative approach to optimizing DL with a modern optimization technique, i.e. a metaheuristic algorithm. The MNIST dataset is used to assess the accuracy and efficiency of the proposed method. Moreover, we also compare our proposed method with the original CNN. Although computation time increases, the experimental results show that the proposed method can improve the performance of the original CNN.
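The abstract does not specify how SA is coupled to the CNN's weights; as a rough illustrative sketch only, the generic SA loop it builds on looks like this (the `cost` and `neighbor` functions, cooling rate, and step count here are hypothetical placeholders, not the paper's actual settings):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000):
    """Generic simulated annealing: minimize cost(x) by accepting all
    downhill moves and uphill moves with Boltzmann probability exp(-dE/t)."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)           # propose a perturbed candidate
        fy = cost(y)
        # Accept downhill moves always; uphill moves with probability exp((fx-fy)/t).
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                # geometric cooling schedule
    return best, fbest

# Toy usage: minimize a 1-D quadratic with Gaussian perturbations.
random.seed(0)
best, fbest = simulated_annealing(
    cost=lambda v: (v - 3.0) ** 2,
    neighbor=lambda v: v + random.gauss(0.0, 0.5),
    x0=10.0,
)
```

In a CNN setting, `x` would be the network's weight vector and `cost` its training loss, which is where the extra computation time noted in the abstract comes from: each candidate requires a full loss evaluation.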

Citation (APA)

Rere, L. M. R., Fanany, M. I., & Arymurthy, A. M. (2015). Simulated Annealing Algorithm for Deep Learning. In Procedia Computer Science (Vol. 72, pp. 137–144). Elsevier. https://doi.org/10.1016/j.procs.2015.12.114
