Incremental learning of relational action models in noisy environments

Abstract

In the Relational Reinforcement Learning framework, we propose an algorithm that learns an action model (an approximation of the transition function) in order to predict the state resulting from an action in a given situation. The algorithm incrementally learns a set of first-order rules in a noisy environment, following a data-driven loop: each time a new example contradicts the current action model, the model is revised (by generalization and/or specialization). In contrast to a previous version of the algorithm, which operates in a noise-free context, we introduce here a number of indicators attached to each rule that make it possible to decide whether a revision should take place immediately or be delayed. We provide an empirical evaluation on standard RRL benchmarks. © 2011 Springer-Verlag Berlin Heidelberg.
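To make the delayed-revision idea concrete, here is a minimal propositional sketch in Python. Everything in it is an illustrative assumption rather than the authors' method: the paper's algorithm manipulates first-order rules and revises them by generalization and/or specialization, whereas this sketch uses flat sets of ground facts, a made-up `patience` threshold standing in for the per-rule indicators, and a crude replace-by-most-specific-rule step in place of real revision.

```python
# Hypothetical sketch of a data-driven revision loop with per-rule noise
# indicators. Rule, process_example, and patience are all illustrative names;
# the actual algorithm operates on first-order rules, not ground fact sets.

from dataclasses import dataclass


@dataclass
class Rule:
    precondition: frozenset   # facts that must hold before the action
    effect: frozenset         # facts predicted to hold afterwards
    confirmations: int = 0    # per-rule indicator: correct predictions
    contradictions: int = 0   # per-rule indicator: contradicting examples

    def covers(self, state: frozenset) -> bool:
        return self.precondition <= state


def process_example(rules, state, next_state, patience=3):
    """One step of the loop: score each covering rule, and revise a rule
    only once its contradictions exhaust the noise budget."""
    covered = False
    for rule in list(rules):
        if not rule.covers(state):
            continue
        covered = True
        if rule.effect <= next_state:
            rule.confirmations += 1      # example confirms the rule
        else:
            rule.contradictions += 1     # may be noise: delay revision
            if rule.contradictions > rule.confirmations + patience:
                # Placeholder revision: replace the rule by a maximally
                # specific one built from the contradicting example (the
                # paper generalizes/specializes first-order rules instead).
                rules.remove(rule)
                rules.append(Rule(frozenset(state), frozenset(next_state)))
    if not covered:
        # No rule applies: add a maximally specific rule for this example.
        rules.append(Rule(frozenset(state), frozenset(next_state)))
    return rules


# Tiny blocks-world-style usage with one noisy observation.
rules: list[Rule] = []
s1 = frozenset({"on(a,b)", "clear(a)"})
s2 = frozenset({"on(a,table)", "clear(a)", "clear(b)"})
process_example(rules, s1, s2)                     # learns a specific rule
process_example(rules, s1, s2)                     # confirms it
process_example(rules, s1, frozenset({"on(a,b)"})) # noisy example, tolerated
print(len(rules), rules[0].contradictions)         # -> 1 1
```

The point of the indicators is visible in the last call: a single contradicting example only increments a counter, and the rule is revised only if contradictions keep accumulating relative to confirmations, which is what distinguishes this setting from the earlier noise-free version, where any contradiction triggered an immediate revision.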

Citation (APA)

Rodrigues, C., Gérard, P., & Rouveirol, C. (2011). Incremental learning of relational action models in noisy environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6489 LNAI, pp. 206–213). https://doi.org/10.1007/978-3-642-21295-6_24
