
A framework for reinforcement learning with autocorrelated actions

2020-09-10

Marcin Szulc, Jakub Łyskawa, Paweł Wawrzyński


Abstract

The subject of this paper is reinforcement learning. Policies are considered that produce actions based on states and on random elements that are autocorrelated across subsequent time instants. Consequently, an agent learns from experiments that extend over time and potentially give better clues for policy improvement. Physical implementation of such policies, e.g. in robotics, is also less problematic, since it avoids making robots shake. This is in contrast to most RL algorithms, which add white noise to the control signal, causing unwanted shaking of the robot. An algorithm is introduced that approximately optimizes the aforementioned policies. Its efficiency is verified on four simulated learning control problems (Ant, HalfCheetah, Hopper, and Walker2D) against three other methods (PPO, SAC, ACER). The algorithm outperforms the others on three of these problems.
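The contrast drawn in the abstract, between white exploration noise and noise autocorrelated across time steps, can be illustrated with a first-order autoregressive (AR(1)) process. This is only a sketch of the general idea: the paper's actual noise model and coefficients are not stated in the abstract, so `alpha` and `sigma` below are illustrative assumptions, scaled so both processes have the same stationary variance.

```python
import numpy as np

def white_noise(steps, dim, sigma=0.3, seed=0):
    """Independent Gaussian noise at each time step (the usual RL exploration)."""
    rng = np.random.default_rng(seed)
    return sigma * rng.standard_normal((steps, dim))

def autocorrelated_noise(steps, dim, sigma=0.3, alpha=0.9, seed=0):
    """AR(1) noise: eps_t = alpha * eps_{t-1} + sqrt(1 - alpha^2) * sigma * xi_t.

    The sqrt(1 - alpha^2) factor keeps the stationary standard deviation
    equal to sigma, so the two processes explore with the same magnitude
    but the AR(1) process changes smoothly between consecutive steps.
    alpha and sigma here are illustrative, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    eps = np.zeros((steps, dim))
    for t in range(1, steps):
        eps[t] = alpha * eps[t - 1] + np.sqrt(1 - alpha**2) * sigma * rng.standard_normal(dim)
    return eps
```

Because consecutive AR(1) samples are close to each other, the resulting control signal varies smoothly, which is the property the abstract credits with avoiding robot shaking; white noise of the same variance produces much larger step-to-step jumps.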
