Conditioning of Reinforcement Learning Agents and its Policy Regularization Application

2019-06-13

Arip Asadulaev, Igor Kuznetsov, Gideon Stein, Andrey Filchenkov

Abstract

The effect of regularizing the singular values of the Jacobian has been studied for supervised learning problems, and Jacobian conditioning regularization has also been shown to help avoid the ``mode-collapse'' problem in Generative Adversarial Networks. In this paper, we try to answer the following question: can information about policy conditioning help to shape a more stable and more general policy for reinforcement learning agents? To answer this question, we study the behavior of Jacobian conditioning during policy optimization. To the best of our knowledge, this is the first work that investigates the condition number in reinforcement learning agents. We propose a conditioning regularization algorithm and test its performance on a range of continuous control tasks. Finally, we compare algorithms on the CoinRun environment with separate train and test levels to analyze how conditioning regularization contributes to agents' generalization.
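The quantity the abstract refers to, the condition number of the policy Jacobian, is the ratio of its largest to smallest singular value. As a minimal sketch (not the paper's implementation), the snippet below computes it for a hypothetical two-layer tanh policy; the names `W1`, `W2`, and `policy_jacobian` are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hypothetical tiny policy: action = W2 @ tanh(W1 @ state).
# Weight shapes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # hidden x state
W2 = rng.standard_normal((2, 8))   # action x hidden

def policy_jacobian(state):
    """Analytic Jacobian d(action)/d(state) of the two-layer tanh policy."""
    h = W1 @ state
    # Elementwise derivative of tanh, placed on the diagonal.
    D = np.diag(1.0 - np.tanh(h) ** 2)
    return W2 @ D @ W1  # shape (2, 4)

state = rng.standard_normal(4)
J = policy_jacobian(state)
# Condition number = ratio of extreme singular values of the Jacobian.
sigma = np.linalg.svd(J, compute_uv=False)
cond = sigma.max() / sigma.min()
print(cond)
```

A conditioning regularizer along these lines would add a penalty that shrinks this ratio (or the spread of the singular values) to the policy-optimization loss.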
