Incremental Model-based Learners With Formal Learning-Time Guarantees

Abstract

Model-based learning algorithms have been shown to use experience efficiently when learning to solve Markov Decision Processes (MDPs) with finite state and action spaces. However, their high computational cost due to repeatedly solving an internal model inhibits their use in large-scale problems. We propose a method based on real-time dynamic programming (RTDP) to speed up two model-based algorithms, RMAX and MBIE (model-based interval estimation), resulting in computationally much faster algorithms with little loss compared to existing bounds. Specifically, our two new learning algorithms, RTDP-RMAX and RTDP-IE, have considerably smaller computational demands than RMAX and MBIE. We develop a general theoretical framework that allows us to prove that both are efficient learners in a PAC (probably approximately correct) sense. We also present an experimental evaluation of these new algorithms that helps quantify the tradeoff between computational and experience demands.
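To make the mechanism concrete, below is a minimal tabular sketch of the RTDP-style speedup the abstract describes, written as an illustration rather than the authors' reference implementation. The class name, the m-visit "known" threshold, and the update interface are all assumptions; only the core idea, a single Bellman backup at the just-visited state-action pair (with under-visited pairs held at an optimistic RMAX value) instead of re-solving the full internal model, comes from the abstract.

    from collections import defaultdict
    import random

    class RTDPRMaxAgent:
        """Tabular sketch: RMAX-style optimism plus one real-time Bellman
        backup per step, instead of fully re-solving the internal model."""

        def __init__(self, n_states, n_actions, gamma=0.95, m=10, r_max=1.0):
            self.n_actions = n_actions
            self.gamma = gamma
            self.m = m                          # visits before (s, a) counts as "known" (assumed interface)
            self.v_max = r_max / (1.0 - gamma)  # optimistic upper bound on any value
            self.count = defaultdict(int)       # (s, a) -> visit count
            self.r_sum = defaultdict(float)     # (s, a) -> summed observed reward
            self.next_count = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s2: n}
            # Optimistic initialization drives systematic exploration, as in RMAX.
            self.Q = [[self.v_max] * n_actions for _ in range(n_states)]

        def act(self, s):
            # Greedy in the optimistic Q; under-visited pairs keep v_max,
            # so they are tried until they become known.
            best = max(self.Q[s])
            return random.choice([a for a in range(self.n_actions) if self.Q[s][a] == best])

        def update(self, s, a, r, s2):
            # Record the experience in the empirical model.
            self.count[(s, a)] += 1
            self.r_sum[(s, a)] += r
            self.next_count[(s, a)][s2] += 1
            # RTDP-style speedup: a single Bellman backup at the visited pair
            # only, not value iteration over the whole model after each step.
            n = self.count[(s, a)]
            if n >= self.m:
                r_hat = self.r_sum[(s, a)] / n
                future = sum((c / n) * max(self.Q[t])
                             for t, c in self.next_count[(s, a)].items())
                self.Q[s][a] = r_hat + self.gamma * future

Under the same assumptions, an interval-estimation variant in the spirit of RTDP-IE would replace the hard known/unknown threshold with an exploration bonus that shrinks with the visit count, e.g. adding a term proportional to 1/sqrt(n(s, a)) to the backed-up value, so exploration fades gradually rather than switching off after m visits.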

Find this document

  • ISBN: 0-9749039-2-2
  • SGR: 34548745051
  • SCOPUS: 2-s2.0-34548745051
  • PUI: 362629461

Authors

  • Alexander L. Strehl
  • Lihong Li
  • Michael L. Littman
