Random synaptic feedback weights support error backpropagation for deep learning

Citations of this article: 528
Readers (Mendeley users who have this article in their library): 836

This article is free to access.

Abstract

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
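The mechanism described in the abstract is often referred to as feedback alignment. Below is a minimal NumPy sketch of the idea, assuming a toy two-layer network trained on a random linear-regression task; the layer sizes, nonlinearity, learning rate, and task are illustrative assumptions, not the paper's experimental setup. The key difference from backpropagation is the line computing `delta_h`: the output error is carried backward through a fixed random matrix `B` rather than through the transpose of the forward weights `W2`.

```python
# Minimal sketch of learning with fixed random feedback weights
# ("feedback alignment"). Sizes, nonlinearity, and learning rate
# are illustrative choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 30, 20, 10
W1 = rng.normal(0, 0.1, (n_hidden, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hidden))  # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hidden, n_out))  # fixed random feedback weights
                                            # (backprop would use W2.T here)

# Toy target: a fixed random linear map the network must learn.
T = rng.normal(0, 1.0, (n_out, n_in))

lr = 0.01
for step in range(5000):
    x = rng.normal(0, 1.0, (n_in, 1))
    h = np.tanh(W1 @ x)          # hidden-layer activity
    y = W2 @ h                   # linear output
    e = y - T @ x                # output error

    # Error is sent backward through the fixed random matrix B,
    # not through the transpose of the forward weights.
    delta_h = (B @ e) * (1 - h ** 2)

    W2 -= lr * e @ h.T
    W1 -= lr * delta_h @ x.T

# Evaluate on a fresh batch: the error should be far smaller than
# at initialization, even though B is random and never updated.
X = rng.normal(0, 1.0, (n_in, 100))
Y = W2 @ np.tanh(W1 @ X)
print("mean squared error:", float(np.mean((Y - T @ X) ** 2)))
```

In the paper's analysis, this works because the forward weights come to align with the fixed feedback weights over the course of training, so the error signals delivered through the random backward path end up pointing in a useful teaching direction.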

Citation (APA)

Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7. https://doi.org/10.1038/ncomms13276

Readers' Seniority

PhD / Post grad / Masters / Doc: 388 (71%)
Researcher: 112 (20%)
Professor / Associate Prof.: 41 (7%)
Lecturer / Post doc: 7 (1%)

Readers' Discipline

Computer Science: 169 (35%)
Neuroscience: 154 (32%)
Engineering: 105 (22%)
Agricultural and Biological Sciences: 56 (12%)

Article Metrics

Mentions
Blog mentions: 2
News mentions: 3

Social media
Shares, likes & comments: 82
