
Policy Gradient with Kernel Quadrature

2023-10-23

Satoshi Hayakawa, Tetsuro Morimura

Abstract

Reward evaluation of episodes becomes a bottleneck in a broad range of reinforcement learning tasks. Our aim in this paper is to select a small but representative subset from a large batch of episodes and compute rewards only on that subset, for more efficient policy gradient iterations. We build a Gaussian process model of discounted returns or rewards to derive a positive definite kernel on the space of episodes, run an "episodic" kernel quadrature method to compress the information of the sample episodes, and pass the reduced episodes to the policy network for gradient updates. We present the theoretical background of this procedure as well as numerical illustrations in MuJoCo tasks.
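The selection step described in the abstract can be sketched as follows. This is not the authors' implementation: the episode featurization, the RBF kernel, and the herding-style greedy selection are illustrative assumptions standing in for the paper's GP-derived episodic kernel. The idea is the same, though: pick a small subset of episodes whose (weighted) kernel mean embedding approximates that of the full batch, so rewards need only be computed on the subset.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between episode feature vectors.
    # In the paper the kernel comes from a GP model of returns; here
    # we assume a generic RBF kernel on fixed-length episode features.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_quadrature(X, m, sigma=1.0):
    """Pick m representative episodes out of the n in X and compute
    quadrature weights, via greedy kernel herding (an illustrative
    stand-in for the paper's kernel quadrature method)."""
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    mu = K.mean(axis=1)  # empirical mean embedding evaluated at each episode
    selected = []
    for t in range(m):
        # Herding score: favour episodes close to the batch mean embedding
        # and far from the episodes already selected.
        score = mu - K[:, selected].sum(axis=1) / (t + 1)
        score[selected] = -np.inf  # never pick the same episode twice
        selected.append(int(np.argmax(score)))
    S = np.array(selected)
    # Quadrature weights w solve K_SS w = mu_S, so the weighted subset
    # matches the full batch's mean embedding on the selected points.
    w = np.linalg.solve(K[np.ix_(S, S)] + 1e-8 * np.eye(m), mu[S])
    return S, w
```

In the policy-gradient step, the returned weights would then rescale each selected episode's log-probability gradient, so the reduced batch stands in for the full one.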
