The mapping of the wiring diagrams of neural circuits promises to allow us to link the structure and function of neural networks. Current approaches to analyzing such connectomes rely mainly on graph-theoretical tools, but these may downplay the complex nonlinear dynamics of single neurons and the way networks respond to their inputs. Here, we measure the functional similarity of simulated networks of neurons, by quantifying the similarity of their spiking patterns in response to the same stimuli. We find that common graph-theory metrics convey little information about the similarity of networks' responses. Instead, we learn a functional metric between networks based on their synaptic differences and show that it accurately predicts the similarity of novel networks, for a wide range of stimuli. We then show that a sparse set of architectural features - the sum of synaptic inputs that each neuron receives and the sum of each neuron's synaptic outputs - predicts the functional similarity of networks of up to 1000 neurons, with high accuracy. We thus suggest new architectural design principles that shape the function of neural networks. These architectural features are consistent with experimental evidence of homeostatic synaptic mechanisms.
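A minimal sketch of the idea described above, not the authors' method: it assumes a simple disagreement rate between binned binary spike rasters as a stand-in for the paper's spike-pattern comparison, uses each neuron's total synaptic input and output as the architectural features, and leaves the weighting coefficients (`coeffs`) as placeholders for values that would be learned by fitting feature differences to measured functional dissimilarity. All function and variable names here are illustrative.

```python
import numpy as np

def spiking_dissimilarity(spikes_a, spikes_b):
    """Fraction of (neuron, time-bin) entries where two binary rasters
    disagree; a simple stand-in for a spike-pattern distance."""
    return np.mean(spikes_a != spikes_b)

def architectural_features(weights):
    """Per-neuron summed synaptic input and output of weight matrix W,
    with W[i, j] the synapse from neuron j onto neuron i."""
    total_input = weights.sum(axis=1)   # sum of inputs each neuron receives
    total_output = weights.sum(axis=0)  # sum of each neuron's outputs
    return np.concatenate([total_input, total_output])

def feature_distance(weights_a, weights_b, coeffs=None):
    """Weighted Euclidean distance between the networks' feature vectors.
    `coeffs` is a placeholder for coefficients learned from data."""
    diff = architectural_features(weights_a) - architectural_features(weights_b)
    if coeffs is None:
        coeffs = np.ones_like(diff)
    return np.sqrt(np.sum(coeffs * diff ** 2))

# Toy example: two networks of 50 neurons differing by a small synaptic
# perturbation, with placeholder rasters. A real comparison would simulate
# both networks on the same stimuli and compare the resulting spike trains.
rng = np.random.default_rng(0)
n = 50
W1 = rng.normal(scale=0.1, size=(n, n))
W2 = W1 + rng.normal(scale=0.01, size=(n, n))
spikes1 = rng.random((n, 200)) < 0.1
spikes2 = rng.random((n, 200)) < 0.1

print("functional dissimilarity:", spiking_dissimilarity(spikes1, spikes2))
print("feature-based distance:  ", feature_distance(W1, W2))
```

In this sketch the feature-based distance plays the role of the learned predictor: networks whose per-neuron input and output sums are close should, under the paper's claim, produce similar spiking responses to the same stimuli.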
Haber, A., & Schneidman, E. (2022). Learning the Architectural Features That Predict Functional Similarity of Neural Networks. Physical Review X, 12(2). https://doi.org/10.1103/PhysRevX.12.021051