Vygotsky meets backpropagation: artificial neural models and the development of higher forms of thought

Citations: 5
Readers (Mendeley): 7

Abstract

In this paper we revisit Vygotsky’s developmental model of concept formation and use it to discuss learning in artificial neural networks. We study learning in neural networks from a learning-science point of view, asking whether it is possible to construct systems whose developmental patterns align with empirical studies on concept formation. We put the state-of-the-art Inception-v3 image recognition architecture in an experimental setting that highlights differences and similarities between algorithmic and human cognitive processes. The Vygotskian model of cognitive development reveals important limitations in currently popular neural algorithms and places neural AI in the context of the post-behavioristic science of learning. At the same time, the Vygotskian model of the development of thought suggests new architectural principles for developing AI, machine learning, and systems that support human learning. In this context we can ask what it would take for machines to learn, and what they could learn from research on learning.

Citation (APA)

Tuomi, I. (2018). Vygotsky meets backpropagation: artificial neural models and the development of higher forms of thought. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10947 LNAI, pp. 570–583). Springer Verlag. https://doi.org/10.1007/978-3-319-93843-1_42
