Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece

  • Henha Eyono, R.
  • Boven, E.
  • Ghosh, A.
  • et al.

Abstract

This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020. We brought together a number of experts from the field of theoretical neuroscience to debate emerging issues in our understanding of how learning is implemented in biological recurrent neural networks. Here, we will give a brief review of the common assumptions about biological learning and the corresponding findings from experimental neuroscience and contrast them with the efficiency of gradient-based learning in recurrent neural networks commonly used in artificial intelligence. We will then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, theory-experiment divide, and objective functions. Finally, we conclude with recommendations for both theoretical and experimental neuroscientists when designing new studies that could help to bring clarity to these issues.

Citation (APA)
Henha Eyono, R., Boven, E., Ghosh, A., Pemberton, J., Scherr, F., Clopath, C., … Prince, L. Y. (2022). Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece. Neurons, Behavior, Data Analysis, and Theory, 1. https://doi.org/10.51628/001c.35302
