A New Kind of Hopfield Networks for Finding Global Optimum

Abstract

The Hopfield network has been applied to solving optimization problems for decades. However, it still has many limitations in accomplishing this task, most of them inherited from the optimization algorithms it implements. The computation of a Hopfield network, defined by a set of difference equations, can easily become trapped in one local optimum or another; it is sensitive to initial conditions, perturbations, and the order in which neurons are updated; and it offers no guarantee on how long convergence will take or whether the final solution is a global optimum. In this paper, we present a Hopfield network with a new set of difference equations that fixes these problems. The difference equations directly implement a new, powerful optimization algorithm.
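
The abstract refers to the conventional discrete Hopfield dynamics, which the paper's new difference equations are meant to replace. For context, the sketch below illustrates those standard dynamics (asynchronous sign updates that monotonically decrease a quadratic energy and stop at a local minimum); it is an illustration of the classical model only, not the paper's new algorithm, and the names `run_hopfield`, `hopfield_energy`, `W`, and `theta`, as well as the Hebbian weights in the demo, are our own illustrative choices.

```python
import numpy as np

def hopfield_energy(W, theta, s):
    """Energy E(s) = -1/2 s^T W s + theta^T s of a discrete Hopfield network."""
    return -0.5 * s @ W @ s + theta @ s

def run_hopfield(W, theta, s0, max_sweeps=100, rng=None):
    """Asynchronous update s_i <- sign(sum_j W_ij s_j - theta_i).

    Each full sweep visits the neurons in a random order; the dynamics stop
    at a fixed point, which is a *local* minimum of the energy and depends
    on the initial state and the update order.
    """
    rng = np.random.default_rng() if rng is None else rng
    s = s0.copy()
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):           # random neuron update order
            h = W[i] @ s - theta[i]            # local field at neuron i
            new_si = 1 if h >= 0 else -1
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:                        # fixed point reached
            break
    return s

if __name__ == "__main__":
    # Tiny illustration: symmetric weights from the Hebbian rule for two patterns.
    patterns = np.array([[1, -1,  1, -1],
                         [1,  1, -1, -1]])
    W = patterns.T @ patterns / len(patterns)
    np.fill_diagonal(W, 0.0)
    theta = np.zeros(4)

    s0 = np.array([1, -1, -1, -1])
    s = run_hopfield(W, theta, s0)
    print("initial energy:", hopfield_energy(W, theta, s0))
    print("final energy:  ", hopfield_energy(W, theta, s))
    print("final state:   ", s)
```

Running the demo shows the energy decreasing from the initial state to a fixed point; starting from a different `s0` or a different update order can land in a different local minimum, which is exactly the limitation the abstract describes.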

Authors

  • Xiaofei Huang
