The q-G method

  • Soterroni A
  • Galski R
  • Scarabello M
  • Ramos F

Abstract

In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and is implemented so that the search process gradually shifts from global exploration at the beginning to local exploitation at the end. We evaluated the q-G method on 34 test functions and compared its performance with that of 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has great potential for solving multimodal optimization problems.
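For context, the quantities mentioned in the abstract can be sketched with the standard q-calculus definitions; the notation below is assumed rather than taken verbatim from the paper. Jackson's derivative of f with respect to the i-th coordinate, at a point with x_i \neq 0 and q \neq 1, is

\[
D_{q,x_i} f(\mathbf{x}) \;=\; \frac{f(x_1,\ldots,q\,x_i,\ldots,x_n) - f(x_1,\ldots,x_i,\ldots,x_n)}{(q-1)\,x_i}.
\]

Collecting the n Jackson derivatives gives the q-gradient \nabla_q f(\mathbf{x}), and, as described above, the q-G iteration moves along its negative,

\[
\mathbf{x}^{k+1} \;=\; \mathbf{x}^{k} - \alpha_k \,\nabla_q f(\mathbf{x}^{k}),
\]

which recovers the classical Steepest Descent update as q tends to 1. How q, the step length \alpha_k, and the method's remaining free parameters are scheduled over the iterations is specified in the paper itself and is not reproduced in this sketch.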

Cite

APA

Soterroni, A. C., Galski, R. L., Scarabello, M. C., & Ramos, F. M. (2015). The q-G method. SpringerPlus, 4(1). https://doi.org/10.1186/s40064-015-1434-4
