Structures of neural network effective theories

Abstract

We develop a diagrammatic approach to effective field theories (EFTs) corresponding to deep neural networks at initialization, which dramatically simplifies computations of finite-width corrections to neuron statistics. The structures of EFT calculations make it transparent that a single condition governs criticality of all connected correlators of neuron preactivations. Understanding of such EFTs may facilitate progress in both deep learning and field theory simulations.
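The abstract's central claim — that finite-width corrections make neuron preactivations non-Gaussian, with connected correlators suppressed by powers of the width — can be checked numerically without the diagrammatic machinery. The sketch below is not the paper's method; it is a hypothetical Monte-Carlo estimate of the normalized connected four-point correlator (excess kurtosis) of a second-layer preactivation in a random two-layer tanh network, which should shrink roughly as 1/width. The function name `connected_4pt` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def connected_4pt(width, n_samples=50_000, d=8, cw=1.0, seed=0):
    """Estimate the normalized connected 4-point correlator (excess
    kurtosis) of a second-layer preactivation at initialization.

    Illustrative sketch only: two-layer tanh network with weights drawn
    i.i.d. as N(0, cw / fan_in), evaluated on a fixed input. In the
    infinite-width limit the preactivation is Gaussian, so the connected
    4-point function vanishes; at finite width it is O(1/width).
    """
    rng = np.random.default_rng(seed)
    x = np.ones(d)                          # fixed input with |x|^2 = d
    # First layer: z1 = W1 x, then activations h = tanh(z1).
    W1 = rng.normal(0.0, np.sqrt(cw / d), size=(n_samples, width, d))
    h = np.tanh(W1 @ x)                     # shape (n_samples, width)
    # Second layer: one output preactivation z2 = W2 . h per sample.
    W2 = rng.normal(0.0, np.sqrt(cw / width), size=(n_samples, width))
    z2 = np.sum(W2 * h, axis=1)
    m2 = np.mean(z2**2)
    m4 = np.mean(z2**4)
    # Connected 4-point correlator, normalized by the 2-point squared.
    return (m4 - 3.0 * m2**2) / m2**2

# Non-Gaussianity is visible at small width and suppressed at large width.
print(f"width  4: {connected_4pt(width=4):+.3f}")
print(f"width 64: {connected_4pt(width=64):+.3f}")
```

Under these assumptions the narrow network shows an O(1) excess kurtosis while the wide one is close to Gaussian, consistent with the 1/width scaling the EFT organizes.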


Citation (APA)
Banta, I., Cai, T., Craig, N., & Zhang, Z. (2024). Structures of neural network effective theories. Physical Review D, 109(10). https://doi.org/10.1103/PhysRevD.109.105007

