Imitating Unknown Policies via Exploration
Nathan Gavenski, Juarez Monteiro, Roger Granada, Felipe Meneguzzi, Rodrigo C. Barros
Code
- github.com/NathanGavenski/IUPE (official, PyTorch)
- github.com/nathangavenski/il-datasets (PyTorch)
Abstract
Behavioral cloning is an imitation learning technique that teaches an agent how to behave via expert demonstrations. Recent approaches use self-supervision on fully-observable, unlabeled snapshots of the states to decode state pairs into actions. However, the iterative learning scheme employed by these techniques is prone to getting stuck in bad local minima. We address these limitations by incorporating into the original framework a two-phase model that learns from unlabeled observations via exploration, substantially improving upon traditional behavioral cloning by exploiting (i) a sampling mechanism to prevent bad local minima, (ii) a sampling mechanism to improve exploration, and (iii) self-attention modules to capture global features. The resulting technique outperforms the previous state-of-the-art in four different environments by a large margin.
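The two-phase idea described in the abstract — first gather self-labeled transitions through exploration, then use an inverse dynamics model to infer actions for the expert's unlabeled state pairs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gridworld dynamics, the 1-nearest-neighbour stand-in for the learned inverse dynamics network, and all names (`ACTIONS`, `inverse_dynamics`, `step`) are hypothetical simplifications.

```python
import numpy as np

# Toy deterministic environment: the state is a 2-D position and each
# discrete action shifts it by a fixed unit vector.
rng = np.random.default_rng(0)
ACTIONS = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def step(state, action_idx):
    """Apply a discrete action to a 2-D state."""
    return state + ACTIONS[action_idx]

# Phase 1 (exploration): random rollouts produce self-labeled
# (state, action, next_state) transitions — no expert labels needed.
explore_s, explore_a, explore_s2 = [], [], []
for _ in range(200):
    s = rng.uniform(-5, 5, size=2)
    a = int(rng.integers(len(ACTIONS)))
    explore_s.append(s)
    explore_a.append(a)
    explore_s2.append(step(s, a))
explore_delta = np.array(explore_s2) - np.array(explore_s)
explore_a = np.array(explore_a)

def inverse_dynamics(s, s2):
    """Infer the action linking a state pair by matching the observed
    state change against the nearest stored exploration transition
    (a 1-NN stand-in for a learned inverse dynamics network)."""
    dists = np.linalg.norm(explore_delta - (s2 - s), axis=1)
    return int(explore_a[np.argmin(dists)])

# Phase 2 (imitation): label the expert's unlabeled state pairs with the
# inferred actions; these labels would then supervise a cloned policy.
expert_pairs = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0])),  # moved right
    (np.array([1.0, 0.0]), np.array([1.0, 1.0])),  # moved up
]
labels = [inverse_dynamics(s, s2) for s, s2 in expert_pairs]
print(labels)
```

In this toy setting the inferred labels recover the actions that produced each expert transition; the paper's contribution lies in how the exploration and sampling phases are interleaved so the iterative labeling does not collapse into a bad local minimum.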