SVRG for policy evaluation with fewer gradient evaluations

3 citations · 17 Mendeley readers

Abstract

Stochastic variance-reduced gradient (SVRG) is an optimization method originally designed for machine learning problems with a finite-sum structure. SVRG was later shown to work for policy evaluation, a problem in reinforcement learning in which one aims to estimate the value function of a given policy. SVRG makes use of gradient estimates at two scales: at the slower scale, it computes a full gradient over the whole dataset, which can incur prohibitive computational costs. In this work, we show that two variants of SVRG for policy evaluation can significantly reduce the number of gradient computations while preserving a linear convergence rate. More importantly, our theoretical result implies that one does not need to use the entire dataset in every epoch of SVRG when it is applied to policy evaluation with linear function approximation. Our experiments demonstrate the large computational savings provided by the proposed methods.
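To make the two-scale structure concrete, below is a minimal sketch of SVRG with an optionally subsampled snapshot gradient, the source of the computational saving the abstract describes. It is an illustration only: the paper's actual policy-evaluation formulation is a saddle-point objective (the MSPBE with linear function approximation), whereas this sketch uses a plain least-squares finite sum as a stand-in; the function name svrg_least_squares, the snapshot_size parameter, and the step sizes are assumptions made for illustration.

import numpy as np

def svrg_least_squares(A, b, step=0.01, epochs=20, snapshot_size=None, seed=0):
    # SVRG for the finite-sum objective f(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2.
    # If snapshot_size is given, the per-epoch "full" gradient is estimated from
    # a random subsample of that many points instead of all n points
    # (an illustrative stand-in for the paper's reduced-gradient variants).
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Slow scale: take a snapshot and compute its (possibly subsampled) batch gradient.
        w_snap = w.copy()
        if snapshot_size is None or snapshot_size >= n:
            batch = np.arange(n)  # classic SVRG: full gradient over the dataset
        else:
            batch = rng.choice(n, size=snapshot_size, replace=False)
        mu = A[batch].T @ (A[batch] @ w_snap - b[batch]) / len(batch)
        # Fast scale: variance-reduced stochastic gradient steps.
        for _ in range(n):
            i = rng.integers(n)
            g_w = A[i] * (A[i] @ w - b[i])          # gradient of f_i at the current iterate
            g_snap = A[i] * (A[i] @ w_snap - b[i])  # gradient of f_i at the snapshot
            w = w - step * (g_w - g_snap + mu)      # control-variate update
    return w

As a quick check on synthetic data (A of shape (500, 10), b = A @ w_true, snapshot_size=100), the iterate converges to w_true while each epoch touches only a fifth of the dataset at the slow scale; whether a given subsample size preserves the linear rate is precisely the question the paper's theory addresses.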

Cite

APA

Peng, Z., Touati, A., Vincent, P., & Precup, D. (2020). SVRG for policy evaluation with fewer gradient evaluations. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 2697–2703). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/374
