Finite-Time Analysis of Fully Decentralized Single-Timescale Actor-Critic
Qijun Luo, Xiao Li
Abstract
Decentralized Actor-Critic (AC) algorithms have been widely utilized for multi-agent reinforcement learning (MARL) and have achieved remarkable success. Despite this empirical success, the theoretical convergence properties of decentralized AC algorithms remain largely unexplored. Most existing finite-time convergence results are derived based on either a double-loop update or a two-timescale step-size rule, and this is the case even for the centralized AC algorithm in the single-agent setting. In practice, the single-timescale update is widely utilized, where the actor and critic are updated in an alternating manner with step sizes of the same order. In this work, we study a decentralized single-timescale AC algorithm. Theoretically, using linear approximation for value and reward estimation, we show that the algorithm has a sample complexity of Õ(ε^{-2}) under Markovian sampling, which matches the optimal complexity of a double-loop implementation (here, Õ hides a logarithmic factor). When reduced to the single-agent setting, our result yields a new sample complexity for centralized AC with a single-timescale update scheme. Central to establishing our complexity results is the hidden smoothness of the optimal critic variable that we reveal. We also provide a local-action-privacy-preserving version of our algorithm and its analysis. Finally, we conduct experiments to show the superiority of our algorithm over existing decentralized AC algorithms.
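To make the single-timescale scheme concrete, below is a minimal single-agent sketch in which the actor and critic are updated in an alternating manner with step sizes of the same order. The toy MDP, the softmax actor, the TD(0) linear critic, and all names and constants here are illustrative assumptions for exposition only; this is not the paper's decentralized algorithm or its analysis.

```python
import numpy as np

# Illustrative toy MDP (2 states, 2 actions); dynamics are random, not from the paper.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution over next states
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # bounded rewards in [0, 1]
phi = np.eye(n_states)                                            # linear critic features, V(s) ≈ phi(s)·w

theta = np.zeros((n_states, n_actions))  # actor parameters (tabular softmax policy)
w = np.zeros(n_states)                   # critic parameters (linear value approximation)

def policy(s):
    """Softmax policy over actions in state s."""
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

gamma = 0.95
alpha = beta = 0.01  # single-timescale: actor and critic step sizes of the same order

s = 0
avg_r = 0.0
for t in range(20000):
    probs = policy(s)
    a = rng.choice(n_actions, p=probs)
    r = R[s, a]
    s_next = rng.choice(n_states, p=P[s, a])

    # Critic step: TD(0) update of the linear value estimate.
    delta = r + gamma * phi[s_next] @ w - phi[s] @ w
    w += beta * delta * phi[s]

    # Actor step: policy-gradient update using the TD error as an advantage estimate.
    grad_log = -probs
    grad_log[a] += 1.0
    theta[s] += alpha * delta * grad_log

    avg_r += (r - avg_r) / (t + 1)
    s = s_next
```

The alternation above is the defining feature of the single-timescale scheme: one critic step and one actor step per sample, with alpha and beta of the same order, rather than an inner loop or vanishing step-size ratio driving the critic to convergence between actor updates.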