Policy gradient methods are a family of reinforcement learning techniques that optimize parameterized policies with respect to the expected return (long-term cumulative reward) by gradient ascent. They avoid many of the problems that have plagued traditional reinforcement learning approaches, such as the lack of guarantees of an accurate value function, the intractability arising from uncertain state information, and the complexity of continuous state and action spaces. In this chapter, we introduce a series of popular policy gradient methods. Starting with the basic policy gradient method REINFORCE, we then introduce the actor-critic method, distributed versions of actor-critic, and trust region policy optimization together with its approximate versions, each method improving on its predecessor. Every method in this chapter is accompanied by pseudo-code, and the chapter ends with a concrete implementation example.
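To make the core idea concrete, below is a minimal sketch of REINFORCE, the basic policy gradient method named above. The task (a hypothetical two-armed bandit, not an example from the chapter) and all names in the snippet are illustrative assumptions: a softmax policy over two actions is improved by stochastic gradient ascent on the expected return, using the score-function estimate of the gradient.

```python
import numpy as np

# Toy two-armed bandit (an assumed example): arm 0 always pays 1, arm 1 pays 0.
rng = np.random.default_rng(0)
theta = np.zeros(2)  # policy parameters, one logit per action

def policy(theta):
    """Softmax action probabilities for the current parameters."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

lr = 0.1
for episode in range(2000):
    probs = policy(theta)
    a = rng.choice(2, p=probs)   # sample an action from pi_theta
    G = 1.0 if a == 0 else 0.0   # episode return
    # Gradient of log pi(a | theta) for a softmax policy: one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * G * grad_log_pi  # REINFORCE update: ascend G * grad log pi

print(policy(theta))  # probability mass concentrates on the rewarding arm
```

After training, nearly all probability mass sits on arm 0. The same update rule carries over to sequential tasks, where `G` becomes the sampled cumulative reward of an episode.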
Huang, R., Yu, T., Ding, Z., & Zhang, S. (2020). Policy gradient. In Deep Reinforcement Learning: Fundamentals, Research and Applications (pp. 161–212). Springer Singapore. https://doi.org/10.1007/978-981-15-4095-0_5