Learning the Architectural Features That Predict Functional Similarity of Neural Networks

Abstract

The mapping of the wiring diagrams of neural circuits promises to allow us to link the structure and function of neural networks. Current approaches to analyzing such connectomes rely mainly on graph-theoretical tools, but these may downplay the complex nonlinear dynamics of single neurons and the way networks respond to their inputs. Here, we measure the functional similarity of simulated networks of neurons, by quantifying the similitude of their spiking patterns in response to the same stimuli. We find that common graph-theory metrics convey little information about the similarity of networks' responses. Instead, we learn a functional metric between networks based on their synaptic differences and show that it accurately predicts the similarity of novel networks, for a wide range of stimuli. We then show that a sparse set of architectural features - the sum of synaptic inputs that each neuron receives and the sum of each neuron's synaptic outputs - predicts the functional similarity of networks of up to 1000 neurons, with high accuracy. We thus suggest new architectural design principles that shape the function of neural networks. These architectural features conform with experimental evidence of homeostatic synaptic mechanisms.
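The architectural features highlighted above, the sum of synaptic inputs each neuron receives and the sum of its synaptic outputs, can be read directly off a network's synaptic weight matrix as row and column sums. The sketch below is not the authors' code; it is a minimal illustration, under the assumption that W[i, j] denotes the weight of the synapse from neuron j onto neuron i, and it uses a plain Euclidean distance between feature vectors only as a stand-in for the learned functional metric described in the paper.

# Minimal sketch (illustrative, not the authors' implementation) of the
# per-neuron input/output architectural features and a simple distance
# between two networks' feature vectors.
import numpy as np

def in_out_features(W: np.ndarray) -> np.ndarray:
    """Return per-neuron (total input, total output) features.

    Assumes W[i, j] is the synaptic weight from neuron j onto neuron i,
    so row sums give each neuron's summed synaptic input and column sums
    give its summed synaptic output.
    """
    total_input = W.sum(axis=1)   # sum of synaptic inputs each neuron receives
    total_output = W.sum(axis=0)  # sum of each neuron's synaptic outputs
    return np.concatenate([total_input, total_output])

def feature_distance(W_a: np.ndarray, W_b: np.ndarray) -> float:
    """Euclidean distance between two networks' feature vectors.

    A stand-in for the learned functional metric; the paper instead fits a
    metric that predicts similarity of the networks' spiking responses.
    """
    return float(np.linalg.norm(in_out_features(W_a) - in_out_features(W_b)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100  # illustrative network size
    W1 = rng.normal(size=(n, n))
    W2 = W1 + 0.1 * rng.normal(size=(n, n))  # small synaptic perturbation of W1
    print(feature_distance(W1, W2))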

Citation (APA)

Haber, A., & Schneidman, E. (2022). Learning the Architectural Features That Predict Functional Similarity of Neural Networks. Physical Review X, 12(2). https://doi.org/10.1103/PhysRevX.12.021051
