The convergence of TD(λ) for general λ

  • Dayan, P.
Citations: N/A
Readers: 88 (Mendeley users who have this article in their library)

Abstract

The method of temporal differences (TD) is one way of making consistent predictions about the future. This paper uses some analysis of Watkins (1989) to extend a convergence theorem due to Sutton (1988) from the case which only uses information from adjacent time steps to that involving information from arbitrary ones. It also considers how this version of TD behaves in the face of linearly dependent representations for states—demonstrating that it still converges, but to a different answer from the least mean squares algorithm. Finally it adapts Watkins' theorem that Q-learning, his closely related prediction and action learning method, converges with probability one, to demonstrate this strong form of convergence for a slightly modified version of TD.
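The paper concerns TD(λ): the family of temporal-difference prediction rules in which an eligibility trace, decayed by a factor λ on each step, spreads each one-step prediction error back over earlier states, so that information from arbitrarily distant time steps is used. As a rough illustration only (not code from the paper; the tabular setting, the episode format, and all parameter names are assumptions), a minimal sketch of tabular TD(λ) with accumulating traces might look like:

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) prediction with accumulating eligibility traces.

    `episodes` is a list of trajectories, each a list of (state, reward)
    pairs, where the reward is the one received on entering that state
    and the final pair is the terminal state (whose value stays zero).
    """
    V = np.zeros(n_states)              # value estimates, one per state
    for trajectory in episodes:
        e = np.zeros(n_states)          # eligibility traces, reset each episode
        for (s, _), (s_next, r) in zip(trajectory[:-1], trajectory[1:]):
            delta = r + gamma * V[s_next] - V[s]   # one-step TD error
            e[s] += 1.0                            # accumulate trace for s
            V += alpha * delta * e                 # credit all recently visited states
            e *= gamma * lam                       # decay all traces
    return V
```

Setting lam=0 recovers the one-step TD(0) rule covered by Sutton's (1988) theorem; larger λ blends in information from more distant time steps, which is the general case whose convergence the paper establishes.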

Cite

APA

Dayan, P. (1992). The convergence of TD(λ) for general λ. Machine Learning, 8(3–4), 341–362. https://doi.org/10.1007/bf00992701
