
Reward Tweaking: Maximizing the Total Reward While Planning for Short Horizons

2020-02-09

Chen Tessler, Shie Mannor


Abstract

In reinforcement learning, the discount factor controls the agent's effective planning horizon. Traditionally, this parameter was considered part of the MDP; however, as deep reinforcement learning algorithms tend to become unstable when the effective planning horizon is long, recent works treat it as a hyper-parameter -- thus changing the underlying MDP and potentially leading the agent towards sub-optimal behavior on the original task. In this work, we introduce reward tweaking. Reward tweaking learns a surrogate reward function r for the discounted setting that induces optimal behavior on the original finite-horizon total-reward task. Theoretically, we show that there exists a surrogate reward that leads to optimality in the original task, and we discuss the robustness of our approach. Additionally, we perform experiments in high-dimensional continuous control tasks and show that reward tweaking guides the agent towards better long-horizon returns even though it plans over short horizons.
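The intuition behind the abstract can be illustrated with a toy example (a hand-constructed sketch, not the paper's learned surrogate): with a small discount factor, a myopic agent prefers small immediate rewards over larger delayed ones, while a reward that shifts the delayed payoff earlier restores the ordering of the undiscounted returns.

```python
# Illustrative sketch (assumed toy example, not the paper's algorithm):
# a small discount factor makes a myopic agent prefer the wrong action;
# a hand-tweaked surrogate reward realigns the discounted ordering with
# the total-reward (undiscounted) ordering.

def discounted_return(rewards, gamma):
    """Sum of rewards discounted by gamma**t."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

gamma = 0.5
# Action A: small immediate reward; Action B: larger but delayed reward.
rewards_a = [1.0, 0.0, 0.0]
rewards_b = [0.0, 0.0, 2.0]

# The total-reward objective prefers B (2.0 > 1.0)...
assert sum(rewards_b) > sum(rewards_a)
# ...but the discounted objective with gamma=0.5 prefers A (1.0 > 0.5).
assert discounted_return(rewards_a, gamma) > discounted_return(rewards_b, gamma)

# A surrogate reward that moves B's delayed payoff earlier (one possible
# "tweak"; the paper *learns* such a function) makes the short-horizon
# agent agree with the long-horizon objective.
tweaked_b = [2.0, 0.0, 0.0]
assert discounted_return(tweaked_b, gamma) > discounted_return(rewards_a, gamma)
```

The paper's contribution is learning such a surrogate reward automatically so that optimizing it under a short effective horizon yields optimal behavior on the original finite-horizon task.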
