Sample Complexity of Policy Gradient Finding Second-Order Stationary Points

Abstract

Policy-based reinforcement learning (RL) can be cast as maximization of its objective. However, due to the inherent non-concavity of this objective, convergence of the policy gradient method to a first-order stationary point (FOSP) does not guarantee a maximum: a FOSP can be a minimum or even a saddle point, which is undesirable for RL. It has been shown that if all saddle points are strict, the second-order stationary points (SOSP) are exactly the local maxima. Instead of FOSP, we therefore take SOSP as the convergence criterion to characterize the sample complexity of policy gradient. Our result shows that policy gradient converges to an (ε, √(εχ))-SOSP with probability at least 1 − Õ(δ) at a total cost of O(ε^{-9/2} / ((1 − γ)√χ) · log(1/δ)) = Õ(ε^{-9/2}), where γ ∈ (0, 1). This significantly improves the state-of-the-art cost of Õ(ε^{-9}). Our analysis rests on a key idea that decomposes the parameter space ℝ^p into three disjoint regions: a non-stationary-point region, a saddle-point region, and a locally optimal region, and then makes a local improvement of the RL objective in each region. This technique can potentially be generalized to a broad class of policy gradient methods. For the complete proof, please refer to https://arxiv.org/pdf/2012.01491.pdf.
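
For context, here is a minimal sketch of what an (ε, √(εχ))-SOSP means, following the standard definition used in the non-convex optimization literature (in the spirit of Nesterov–Polyak); treating χ as the Hessian-smoothness-related constant is an assumption of this sketch, not something stated in the abstract. In LaTeX notation, for the maximized objective J(θ):

    \|\nabla_\theta J(\theta)\| \le \epsilon
    \quad\text{and}\quad
    \lambda_{\max}\!\left(\nabla^2_\theta J(\theta)\right) \le \sqrt{\epsilon \chi}

Under this reading, the three regions of the analysis roughly correspond to: parameters where the gradient is still large (non-stationary-point region), near-stationary parameters with a curvature direction exceeding √(εχ) along which the objective can still be increased (saddle-point region), and near-stationary parameters with no such escape direction (locally optimal region).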

Cite

APA

Yang, L., Zheng, Q., & Pan, G. (2021). Sample Complexity of Policy Gradient Finding Second-Order Stationary Points. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12A, pp. 10630–10638). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17271
