Addressing Action Oscillations through Learning Policy Inertia


Abstract

Deep reinforcement learning (DRL) algorithms have proven effective across a wide range of challenging decision-making and control tasks. However, these methods typically suffer from severe action oscillations, particularly in discrete action settings: agents select different actions on consecutive steps even though the states differ only slightly. This issue is often overlooked because a policy is usually evaluated only by its cumulative reward. Action oscillation degrades the user experience and can even pose serious safety risks in real-world, safety-critical domains such as autonomous driving. To this end, we introduce the Policy Inertia Controller (PIC), a generic plug-in framework for off-the-shelf DRL algorithms that enables an adaptive trade-off between the optimality and the smoothness of the learned policy in a principled way. We propose Nested Policy Iteration as a general training algorithm for PIC-augmented policies, which guarantees monotonically non-decreasing updates under mild conditions. We further derive a practical DRL algorithm, Nested Soft Actor-Critic. Experiments on a collection of autonomous driving tasks and several Atari games show that our approach substantially reduces oscillation compared to a range of commonly adopted baselines, with almost no performance degradation.
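To make the oscillation problem concrete, the sketch below shows a minimal, illustrative inertia rule for a discrete-action agent: keep the previous action unless the greedy candidate's value exceeds it by a switching margin. This is only a hand-written heuristic to convey the intuition; the paper's PIC framework learns this trade-off adaptively rather than using a fixed margin, and the function and parameter names here (`smooth_action_selection`, `switch_margin`) are our own.

```python
import numpy as np

def smooth_action_selection(q_values, prev_action, switch_margin=0.1):
    """Greedy action selection with inertia: stick with the previous
    action unless the greedy candidate's Q-value beats it by at least
    `switch_margin`. Illustrative only; not the paper's learned PIC."""
    greedy = int(np.argmax(q_values))
    if prev_action is None:
        return greedy  # no history on the first step
    # Switch only when the improvement clearly outweighs the margin.
    if q_values[greedy] - q_values[prev_action] >= switch_margin:
        return greedy
    return prev_action

# When two actions have nearly equal values, small state changes flip
# the plain argmax back and forth; the margin suppresses such flips.
q_noisy = np.array([1.00, 1.03])   # nearly tied Q-values
assert int(np.argmax(q_noisy)) == 1            # plain greedy would switch
assert smooth_action_selection(q_noisy, prev_action=0) == 0  # inertia holds
q_clear = np.array([1.00, 1.30])   # a clearly better action
assert smooth_action_selection(q_clear, prev_action=0) == 1  # worth switching
```

A fixed margin trades smoothness against optimality bluntly, which is exactly why the paper frames the trade-off as something to be learned and optimized jointly with the policy.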

Citation (APA)

Chen, C., Tang, H., Hao, J., Liu, W., & Meng, Z. (2021). Addressing Action Oscillations through Learning Policy Inertia. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8B, pp. 7020–7027). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16864
