
Attend Before you Act: Leveraging human visual attention for continual learning

2018-07-25 · Code Available

Khimya Khetarpal, Doina Precup


Abstract

When humans perform a task, such as playing a game, they selectively pay attention to certain parts of the visual input, gathering relevant information and sequentially combining it to build a representation from the sensory data. In this work, we explore leveraging where humans look in an image as an implicit indication of what is salient for decision making. We build on top of the UNREAL architecture in DeepMind Lab's 3D navigation maze environment. We train the agent both with original images and foveated images, which were generated by overlaying the original images with saliency maps generated using a real-time spectral residual technique. We investigate the effectiveness of this approach in transfer learning by measuring performance in the context of noise in the environment.
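The saliency maps described in the abstract are produced with the spectral residual technique (Hou & Zhang, 2007), which detects salient regions from the irregularities in an image's log-amplitude spectrum. A minimal NumPy sketch of that technique is below; the paper's actual pipeline, filter sizes, and overlay scheme are not specified here, so the kernel sizes and the multiplicative foveation step are illustrative assumptions.

```python
import numpy as np

def box_blur(x, k=3):
    # k x k mean filter with edge padding (simple stand-in for the
    # local average filter used in the spectral residual method)
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros(x.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def spectral_residual_saliency(gray):
    """Saliency map of a 2-D grayscale image via the spectral residual.

    Steps: FFT -> log-amplitude -> subtract local average (the
    'residual') -> inverse FFT with original phase -> smooth, normalize.
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box_blur(log_amp, 3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = box_blur(sal, 5)  # light smoothing of the raw map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Illustrative foveation: weight the input by its saliency map,
# analogous to overlaying the original image with the saliency map.
def foveate(gray):
    return gray * spectral_residual_saliency(gray)
```

Because the method only needs two FFTs and two small blurs per frame, it runs in real time, which is what makes it practical for generating foveated observations during agent training.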
