Natural gradient approach for linearly constrained continuous optimization

Abstract

When the feasible set of an optimization problem is a proper subset of a multidimensional real space and the optimum lies on or near the boundary of the feasible set, most evolutionary algorithms require constraint handling machinery to generate better candidate solutions within the feasible set. However, some standard constraint handling techniques, such as the resampling strategy, affect the distribution of the candidate solutions: the distribution is truncated to the feasible set, and the statistical meaning of the update of the distribution parameters changes. To construct the parameter update rule for the covariance matrix adaptation evolution strategy (CMA-ES) from the same principle as in the unconstrained case, namely the natural gradient principle, we derive the natural gradient of the log-likelihood of the Gaussian distribution truncated to a linearly constrained feasible set. We analyze the resulting parameter update on the minimization of a spherical function with a linear constraint.
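The sketch below illustrates the setting the abstract describes, not the paper's derived update: a Gaussian search distribution is adapted on a sphere function with one linear constraint, using the resampling strategy for constraint handling and a plain natural-evolution-strategy-style update of the mean and step size. All names, learning rates, and the specific constraint are illustrative assumptions; because the accepted samples actually follow the truncated Gaussian, this naive update is biased near the boundary, which is precisely the issue the paper's truncated-Gaussian natural gradient addresses.

```python
# Baseline sketch (assumed setup, not the authors' method): resampling-based
# constraint handling with a simple natural-gradient-style Gaussian update
# on a sphere function under one linear constraint a^T x <= b.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.dot(x, x))

def feasible(x, a, b):
    # Linear constraint a^T x <= b defines the feasible half-space.
    return float(np.dot(a, x)) <= b

def sample_feasible(mean, sigma, a, b, max_tries=100):
    # Resampling strategy: redraw until the candidate is feasible.
    # The accepted samples follow the *truncated* Gaussian, which is
    # exactly what changes the statistical meaning of the update.
    for _ in range(max_tries):
        x = mean + sigma * rng.standard_normal(mean.size)
        if feasible(x, a, b):
            return x
    return mean  # fallback; the mean stays feasible in this sketch

dim, lam = 5, 20                 # dimension and population size (assumed)
a, b = np.ones(dim), -1.0        # constraint sum(x) <= -1: optimum on boundary
mean, sigma = np.full(dim, -2.0), 1.0
eta_m, eta_s = 1.0, 0.1          # learning rates (assumed)
# Rank-based utility weights, as commonly used in NES/CMA-ES variants.
w = np.log(lam / 2 + 1) - np.log(np.arange(1, lam + 1))
w = np.where(w > 0, w, 0.0)
w /= w.sum()

for gen in range(200):
    xs = [sample_feasible(mean, sigma, a, b) for _ in range(lam)]
    xs = [xs[i] for i in np.argsort([sphere(x) for x in xs])]  # best first
    zs = [(x - mean) / sigma for x in xs]
    # Untruncated natural-gradient-style update of mean and step size;
    # under truncation this estimator is biased, motivating the paper.
    mean = mean + eta_m * sigma * sum(wi * zi for wi, zi in zip(w, zs))
    sigma *= np.exp(eta_s * sum(wi * (np.dot(zi, zi) / dim - 1.0)
                                for wi, zi in zip(w, zs)))

print("final mean:", mean, "f:", sphere(mean))
```

With this constraint the constrained optimum of the sphere function lies on the boundary (each coordinate equal to -1/dim), so roughly half of the samples near convergence are rejected and resampled, making the truncation effect visible.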

Citation (APA)

Akimoto, Y., & Shirakawa, S. (2014). Natural gradient approach for linearly constrained continuous optimization. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8672, 252–261. https://doi.org/10.1007/978-3-319-10762-2_25
