Joint Optimization of Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm
Qinbo Bai, Mridul Agarwal, Vaneet Aggarwal
Abstract
Many engineering problems have multiple objectives, and the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives. A policy-gradient based model-free algorithm is proposed for the problem. To compute an estimate of the gradient, a biased estimator is proposed. The proposed algorithm is shown to achieve convergence to within an ε of the global optimum after sampling O(M^4 σ^2 / ((1-γ)^8 ε^4)) trajectories, where γ is the discount factor and M is the number of agents, thus achieving the same dependence on ε as the policy gradient algorithm for standard reinforcement learning.
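The core idea the abstract describes — a policy-gradient step for a concave function f of several long-term objectives, using a biased gradient estimator — can be illustrated with a minimal sketch. The example below is an assumption-laden toy (a hypothetical 3-arm bandit with 2-dimensional rewards and f(J) = Σ_i log J_i as the concave scalarization; names like `grad_step` and the constants are illustrative, not from the paper). The bias arises exactly as in the paper's setting: ∇f is evaluated at the empirical estimate of the objectives rather than at their true values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-objective bandit: each of 3 arms yields a vector reward.
MEAN_REWARDS = np.array([[1.0, 0.2],
                         [0.5, 0.5],
                         [0.2, 1.0]])  # arms x objectives

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sample_batch(theta, n=512):
    """Sample actions from the softmax policy and observe vector rewards."""
    p = softmax(theta)
    arms = rng.choice(len(p), size=n, p=p)
    rewards = MEAN_REWARDS[arms] + 0.05 * rng.standard_normal((n, 2))
    return arms, rewards, p

def grad_step(theta, lr=0.5):
    arms, rewards, p = sample_batch(theta)
    J_hat = rewards.mean(axis=0)            # empirical per-objective returns
    df = 1.0 / J_hat                        # grad of f(J) = sum_i log J_i (concave)
    # Score-function (REINFORCE) estimate of the gradient of each objective:
    onehot = np.eye(len(theta))[arms]
    score = onehot - p                      # grad_theta log pi(a) for softmax
    gradJ = score.T @ rewards / len(arms)   # (num_arms x num_objectives)
    # Chain rule: plugging J_hat into df makes this estimator biased,
    # which mirrors the biased gradient estimator discussed in the abstract.
    return theta + lr * gradJ @ df, J_hat

theta = np.zeros(3)
for _ in range(200):
    theta, J_hat = grad_step(theta)
```

Because f is strictly concave, the optimal policy here mixes the two extreme arms rather than committing to any single arm — exactly the kind of solution a linear scalarization of the objectives cannot express.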