
A Short Note on Soft-max and Policy Gradients in Bandits Problems

2020-07-20

Neil Walton


Abstract

This is a short communication on a Lyapunov function argument for soft-max in bandit problems. A number of excellent recent papers use differential equations to analyze policy gradient algorithms in reinforcement learning [Agarwal et al., 2019; Bhandari and Russo, 2019; Mei et al., 2020]. We give a short argument that yields a regret bound for the soft-max ordinary differential equation in bandit problems. We then derive a similar result for a different policy gradient algorithm, again for bandit problems; for this second algorithm, regret bounds can be proved in the stochastic case [Denisov and Walton, 2020]. At the end, we summarize some ideas and issues around deriving stochastic regret bounds for policy gradients.
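As an illustrative sketch (not code from the note itself), a stochastic soft-max policy gradient on a Bernoulli bandit can be written in a few lines of Python; the arm means, step size alpha, and horizon T below are assumptions chosen for the example. In expectation, the score-function update drifts along the soft-max gradient flow dθ_a/dt = π_a(θ)(μ_a − ⟨π(θ), μ⟩), which is the kind of ODE the note's Lyapunov argument addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed Bernoulli bandit; the means are illustrative.
means = np.array([0.3, 0.5, 0.8])
K = len(means)

theta = np.zeros(K)  # soft-max logits
alpha = 0.1          # step size (noisy Euler step of the ODE)
T = 5000
regret = 0.0

for t in range(T):
    # Soft-max policy over the arms (shifted for numerical stability).
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()

    a = rng.choice(K, p=pi)             # sample an arm from the policy
    r = float(rng.random() < means[a])  # Bernoulli reward

    # Score-function (REINFORCE) gradient estimate: its expectation is
    # pi_a * (mu_a - <pi, mu>), the soft-max ODE drift.
    theta += alpha * r * (np.eye(K)[a] - pi)

    regret += means.max() - means[a]

# Recompute the policy after the final update before reporting it.
pi = np.exp(theta - theta.max())
pi /= pi.sum()
print(f"pseudo-regret after {T} steps: {regret:.1f}")
print("final policy:", np.round(pi, 3))
```

Since the gradient estimate is unbiased, the iteration above is a noisy discretization of the soft-max ODE; the gap between a regret bound for the ODE and one for this stochastic iteration is exactly the issue the abstract flags.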
