Learning and Representation: From Compressive Sampling to the ‘Symbol Learning Problem’

  • Lőrincz, A.

Abstract

In this paper a novel approach to neurocognitive modeling is proposed in which the central constraints are provided by the theory of reinforcement learning. In this formulation, learning (1) exploits the statistical properties of the system's environment, (2) is constrained by biologically inspired Hebbian interactions, and (3) is based only on algorithms that are consistent and stable. In the resulting model, some of the most enigmatic problems of artificial intelligence have to be addressed. In particular, considerations of combinatorial explosion lead to constraints on the concepts of state-action pairs: these concepts have the peculiar flavor of determinism in a partially observed and thus highly uncertain world. We will argue that these concepts of factored reinforcement learning result in an intriguing learning task that we call the symbol learning problem. For this task we sketch an information-theoretic framework and point towards a possible resolution.
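
To make the second constraint concrete, the sketch below illustrates, in the most generic terms, what a reward-modulated Hebbian update looks like. This is not the chapter's model; it is a minimal toy example assuming a single linear unit, random binary inputs, and a hidden target direction that defines the reward, intended only to show how a Hebbian interaction can be coupled to a reinforcement signal.

import numpy as np

# Minimal sketch (not the chapter's algorithm): a reward-modulated Hebbian rule.
# Assumptions: one linear unit y = w.x, random binary inputs, and a scalar
# reward of +1 when the unit's output sign matches that of a hidden target.

rng = np.random.default_rng(0)

n_inputs = 8
eta = 0.05                              # learning rate
w = rng.normal(scale=0.1, size=n_inputs)
target = rng.normal(size=n_inputs)      # hidden "correct" weight direction

for step in range(1000):
    x = rng.integers(0, 2, size=n_inputs).astype(float)   # presynaptic activity
    y = float(w @ x)                                       # postsynaptic activity
    reward = 1.0 if np.sign(y) == np.sign(target @ x) else -1.0

    # Hebbian core (pre * post), gated by the reward: strengthen co-active
    # pairs when rewarded, weaken them when punished.
    w += eta * reward * x * y
    w /= max(np.linalg.norm(w), 1e-8)   # normalization keeps weights bounded

print("cosine similarity to hidden target:",
      float(w @ target / (np.linalg.norm(w) * np.linalg.norm(target))))

Convergence is not the point of this toy; it only shows the structure of the update (a local pre/post product scaled by a global reward signal), which is one standard way of reading "biologically inspired Hebbian interactions" in a reinforcement-learning setting.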

Citation (APA)

Lőrincz, A. (2008). Learning and Representation: From Compressive Sampling to the ‘Symbol Learning Problem’ (pp. 445–488). https://doi.org/10.1007/978-3-540-69395-6_11
