A one-layer recurrent neural network for non-smooth convex optimization subject to linear equality constraints

23 citations · 6 Mendeley readers
Abstract

In this paper, a one-layer recurrent neural network is proposed for solving non-smooth convex optimization problems with linear equality constraints. Compared with existing neural networks, the proposed network has a simpler architecture: the number of neurons equals the number of decision variables in the optimization problem. Global convergence of the neural network is guaranteed when the non-smooth objective function is convex. Simulation results show that the state trajectories of the network converge to optimal solutions of non-smooth convex optimization problems and demonstrate the performance of the proposed neural network. © 2009 Springer Berlin Heidelberg.
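The abstract describes a continuous-time dynamical system whose state converges to a minimizer of a non-smooth convex function on the affine set {x : Ax = b}. The exact network equations are not given here, so the sketch below is only an illustrative stand-in: a forward-Euler simulation of a generic projected-subgradient dynamics of the form dx/dt = -P g(x) - A⁺(Ax - b), where g(x) ∈ ∂f(x), P projects onto null(A), and A⁺ = Aᵀ(AAᵀ)⁻¹. The example problem (minimizing ‖x‖₁ subject to two equality constraints) and all matrices are made up for illustration; this is not the network proposed in the paper.

```python
import numpy as np

# Hypothetical example problem: minimize ||x||_1 subject to A x = b.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([2.0, 0.0])

# Right pseudoinverse of A (A has full row rank here) and the
# orthogonal projector onto null(A).
pinv = A.T @ np.linalg.inv(A @ A.T)
P = np.eye(A.shape[1]) - pinv @ A

def subgrad_l1(x):
    """One subgradient of the non-smooth objective ||x||_1."""
    return np.sign(x)

# Forward-Euler simulation of the continuous-time dynamics
#   dx/dt = -P g(x) - pinv (A x - b),   g(x) in the subdifferential of f.
# The second term drives A x -> b; the first descends f within null(A).
x = np.zeros(3)
dt = 1e-3
for _ in range(200_000):
    x = x + dt * (-P @ subgrad_l1(x) - pinv @ (A @ x - b))

print(np.round(x, 3))         # state after simulation, e.g. [0.667 0.667 0.667]
print(np.round(A @ x - b, 6)) # equality-constraint residual, close to zero
```

Note the one-layer flavor of such dynamics: the state vector has exactly as many components as the problem has decision variables (three here), matching the architecture claim in the abstract.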

CITATION STYLE

APA

Liu, Q., & Wang, J. (2009). A one-layer recurrent neural network for non-smooth convex optimization subject to linear equality constraints. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5507 LNCS, pp. 1003–1010). https://doi.org/10.1007/978-3-642-03040-6_122
