
Comparing Observation and Action Representations for Deep Reinforcement Learning in μRTS

2019-10-26

Shengyi Huang, Santiago Ontañón


Abstract

This paper presents a preliminary study comparing different observation and action space representations for Deep Reinforcement Learning (DRL) in the context of Real-time Strategy (RTS) games. Specifically, we compare two representations: (1) a global representation, where the observation represents the whole game state and the RL agent must choose both which unit to issue actions to and which actions to execute; and (2) a local representation, where the observation is represented from the point of view of an individual unit and the RL agent picks actions for each unit independently. We evaluate these representations in μRTS, showing that the local representation seems to outperform the global representation when training agents on the task of harvesting resources.
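The contrast between the two representations can be sketched in a toy grid world. Everything here is an illustrative assumption, not the paper's implementation: the feature encoding, window size, and function names are invented for exposition only.

```python
# Hypothetical sketch of global vs. local observation/action representations.
# The grid encoding (0 empty, 1 unit, 2 resource) is an assumption for
# illustration, not the encoding used in the paper.

GRID = 4  # 4x4 toy map

# Toy game state: our units' coordinates and one resource tile to harvest.
units = [(0, 0), (3, 3)]
resource = (1, 2)

def global_observation(units, resource, grid=GRID):
    """Global representation: one feature plane over the whole map.
    A single agent sees this and must pick BOTH a unit and an action."""
    obs = [[0] * grid for _ in range(grid)]
    for (x, y) in units:
        obs[y][x] = 1
    rx, ry = resource
    obs[ry][rx] = 2
    return obs

def global_action_space(units, n_actions=4):
    """Global representation: the joint action space is the product
    (which unit) x (which primitive action), so it grows with unit count."""
    return [(u, a) for u in range(len(units)) for a in range(n_actions)]

def local_observation(unit, units, resource, window=1, grid=GRID):
    """Local representation: an egocentric (2*window+1)^2 patch around one
    unit; the agent picks an action for each unit independently."""
    ux, uy = unit
    patch = []
    for dy in range(-window, window + 1):
        row = []
        for dx in range(-window, window + 1):
            x, y = ux + dx, uy + dy
            if not (0 <= x < grid and 0 <= y < grid):
                row.append(-1)          # out-of-bounds sentinel
            elif (x, y) == resource:
                row.append(2)
            elif (x, y) in units:
                row.append(1)
            else:
                row.append(0)
        patch.append(row)
    return patch

obs_g = global_observation(units, resource)       # one observation, one policy
acts_g = global_action_space(units)               # 2 units x 4 actions = 8 pairs
obs_l = [local_observation(u, units, resource)    # one observation per unit
         for u in units]
```

In the global representation the policy's action space couples unit selection and action choice, while in the local representation each unit queries the policy with its own egocentric patch, keeping the per-decision action space fixed as the number of units changes.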
