Joint Optimization of Concave Scalarized Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm

Abstract

Many engineering problems involve multiple objectives, and the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives. A policy-gradient-based model-free algorithm is proposed for the problem. To compute an estimate of the gradient, an asymptotically biased estimator is proposed. The proposed algorithm is shown to converge to within ε of the global optimum after sampling (Formula Presented) trajectories, where γ is the discount factor and M is the number of agents, thus achieving the same dependence on ε as the policy-gradient algorithm for standard reinforcement learning.
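The gradient estimator described above can be illustrated with a minimal sketch: by the chain rule, ∇θ f(J₁, …, J_M) = Σᵢ (∂f/∂Jᵢ) ∇θ Jᵢ, where each ∇θ Jᵢ admits a REINFORCE-style score-function estimate and ∂f/∂Jᵢ is evaluated at a plug-in estimate Ĵ of the objective values, which is what makes the estimator biased for finite samples. The toy problem below (a two-arm bandit with M = 2 objectives and a sum-of-logs scalarization) is purely illustrative and not from the paper; the reward matrix, scalarization, and hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-arm bandit with M = 2 objectives (illustrative only):
# arm 0 favors objective 1, arm 1 favors objective 2.
R = np.array([[1.0, 0.2],
              [0.2, 1.0]])  # R[a, i] = reward of arm a on objective i
M = R.shape[1]

def grad_f(J):
    # f(J) = sum_i log(J_i) is concave; it rewards balance across objectives.
    return 1.0 / (J + 1e-8)

theta = np.zeros(2)  # softmax policy parameters
lr = 0.1

for step in range(2000):
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()

    # Sample a batch of actions and collect per-objective rewards.
    batch = 64
    actions = rng.choice(2, size=batch, p=pi)
    rewards = R[actions]              # shape (batch, M)
    J_hat = rewards.mean(axis=0)      # plug-in estimate of each J_i

    # Chain rule: grad f = sum_i (df/dJ_i) grad J_i, with df/dJ evaluated
    # at J_hat (biased for finite batches, consistent as the batch grows).
    score = np.eye(2)[actions] - pi   # grad log pi(a) for a softmax policy
    w = grad_f(J_hat)                 # (M,)
    g = (score * (rewards @ w)[:, None]).mean(axis=0)
    theta += lr * g

pi = np.exp(theta - theta.max())
pi /= pi.sum()
print(pi)  # by symmetry, the balanced policy (~[0.5, 0.5]) maximizes f
```

Because the scalarization is concave and the toy problem is symmetric, the iterates drift toward the balanced policy rather than collapsing onto a single objective, which is the qualitative behavior the scalarized formulation is designed to produce.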

Citation (APA)
Bai, Q., Agarwal, M., & Aggarwal, V. (2022). Joint Optimization of Concave Scalarized Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm. Journal of Artificial Intelligence Research, 74, 1565–1597. https://doi.org/10.1613/JAIR.1.13981
