On overfitting and asymptotic bias in batch reinforcement learning with partial observability

ISSN: 1045-0823

Abstract

When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias (suboptimality with unlimited data) and a term due to overfitting (additional suboptimality due to limited data). In the context of reinforcement learning with partial observability, this paper provides an analysis of the tradeoff between these two sources of error. In particular, our theoretical analysis formally characterizes how a smaller state representation increases the asymptotic bias while decreasing the risk of overfitting.
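The decomposition described in the abstract can be sketched as follows; the notation here is an assumption for illustration (the symbols $\phi$, $D$, and the value functions are not taken from the paper itself). With $\phi$ a state representation (e.g. a mapping from observation histories to features), $\pi^*$ the optimal policy, $\pi^*_\phi$ the best policy expressible under $\phi$, and $\pi_{D,\phi}$ the policy learned from a finite batch $D$, the suboptimality at a state $s$ splits into the two terms:

$$
\underbrace{V^{\pi^*}(s) - V^{\pi_{D,\phi}}(s)}_{\text{total suboptimality}}
= \underbrace{\left(V^{\pi^*}(s) - V^{\pi^*_\phi}(s)\right)}_{\text{asymptotic bias}}
+ \underbrace{\left(V^{\pi^*_\phi}(s) - V^{\pi_{D,\phi}}(s)\right)}_{\text{overfitting}}
$$

The first term persists even with unlimited data, since a coarser $\phi$ cannot represent the optimal policy; the second vanishes as $|D| \to \infty$ but grows when a rich $\phi$ is fit to few samples. This is the tradeoff the paper analyzes.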

Citation (APA)

François-Lavet, V., Rabusseau, G., Pineau, J., Ernst, D., & Fonteneau, R. (2020). On overfitting and asymptotic bias in batch reinforcement learning with partial observability. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 5055–5059). International Joint Conferences on Artificial Intelligence.
