Structural complexity and neural networks

Abstract

We survey some relationships between computational complexity and neural network theory; only networks of binary threshold neurons are considered. We begin by presenting some contributions of neural networks to structural complexity theory. In parallel complexity, we consider the class TC^0_k of problems solvable by feed-forward networks with k levels and a polynomial number of neurons. Separation results are recalled, and the relation between TC^0 = ∪_k TC^0_k and NC^1 is analyzed. In particular, under the conjecture TC^0 ≠ NC^1, we characterize the class of regular languages accepted by feed-forward networks with a constant number of levels and a polynomial number of neurons. We also discuss the use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks. We consider the PAC model of learning, emphasizing some negative results based on complexity-theoretic assumptions. Finally, we discuss some results in the realm of neural networks related to a probabilistic characterization of NP. © 2002 Springer-Verlag Berlin Heidelberg.

Citation (APA):
Bertoni, A., & Palano, B. (2002). Structural complexity and neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2486 LNCS, pp. 190–216). Springer Verlag. https://doi.org/10.1007/3-540-45808-5_21
