Artificial Intelligence (AI) structures are sometimes "created" in order to solve specific problems of science and engineering. They may be viewed as dedicated signal processors with a dedicated, in particular repetitive, structure. In this paper such structures of Neural Network (NN)-like devices are considered, taking problems of Mathematical Physics as the starting point. Both the reasoning that leads to such structures and its outcomes may be quite diverse; one of the paper's aims is to illustrate this assertion. Furthermore, ensuring global stability and convergence properties in the presence of several equilibria is a common requirement in the field. The general discussion of the "emergence" of AI devices with NN structure is followed by a presentation of the elements of global behavior for systems with several equilibria. The approach is illustrated on the M-lattice; in this application, the role of high gain is pointed out in ensuring gradient-like behavior combined with the binary outputs required, e.g., in image processing. © 2013 Springer-Verlag Berlin Heidelberg.
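The interplay of gradient-like behavior and high gain mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper itself). The example below uses a classical Hopfield-type continuous system with a symmetric coupling matrix, for which trajectories converge to one of several equilibria; the function name, matrix, and gain value are all illustrative assumptions, and the M-lattice of the paper is a more structured model than this toy system.

```python
import numpy as np

def simulate_hopfield(W, b, x0, gain=50.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + W @ tanh(gain * x) + b.

    For symmetric W this system is gradient-like: every trajectory
    converges to an equilibrium, and there are in general several of them.
    A high gain saturates tanh, so steady-state outputs are nearly binary.
    (Illustrative sketch only; parameters are assumptions.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = np.tanh(gain * x)          # high gain pushes outputs toward +/-1
        x = x + dt * (-x + W @ y + b)  # explicit Euler step
    return x, np.tanh(gain * x)

# Symmetric coupling => gradient-like behavior with two stable equilibria
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.zeros(2)
x_final, y_final = simulate_hopfield(W, b, np.array([0.3, -0.2]))
# With high gain the steady-state outputs y_final are nearly binary
```

Lowering `gain` toward 1 keeps the convergence but produces graded, non-binary outputs, which is why the high-gain regime matters for applications such as image processing that require binary-valued results.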
Rǎsvan, V. (2013). Reflections on neural networks as repetitive structures with several equilibria and stable behavior. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7903 LNCS, pp. 375–385). https://doi.org/10.1007/978-3-642-38682-4_40