SOTAVerified

Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout

2019-04-08 · NAACL 2019 · Code Available

Hao Tan, Licheng Yu, Mohit Bansal


Abstract

A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment. One key challenge here is to learn to navigate in new environments that are unseen during training. Most existing approaches perform dramatically worse in unseen environments than in seen ones. In this paper, we present a generalizable navigational agent. Our agent is trained in two stages. The first stage is training via mixed imitation and reinforcement learning, combining the benefits of both off-policy and on-policy optimization. The second stage is fine-tuning via newly-introduced 'unseen' triplets (environment, path, instruction). To generate these unseen triplets, we propose a simple but effective 'environmental dropout' method to mimic unseen environments, which overcomes the problem of limited seen environment variability. Next, we apply semi-supervised learning (via back-translation) on these dropped-out environments to generate new paths and instructions. Empirically, we show that our agent generalizes substantially better when fine-tuned with these triplets, outperforming the state-of-the-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.
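The core of the 'environmental dropout' idea described above is to drop the same feature channels across every view of an environment, so the agent sees a consistently perturbed "new" environment rather than independent per-view noise. Below is a minimal sketch of that idea; the function name, array shapes, and scaling convention are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def environmental_dropout(feats, p=0.5, rng=None):
    """Hypothetical sketch of environmental dropout.

    feats: array of shape (num_views, feat_dim), visual features from one
    environment. A single dropout mask over feature channels is sampled and
    shared across all views, so the whole environment is perturbed
    consistently, mimicking an unseen environment.
    """
    rng = rng or np.random.default_rng(0)
    # One mask per environment (over feature channels), with the usual
    # inverted-dropout rescaling by 1 / (1 - p).
    mask = (rng.random(feats.shape[-1]) > p).astype(feats.dtype) / (1.0 - p)
    return feats * mask  # the same channels are dropped in every view
```

In a training loop, one would apply this mask to the environment's features before running back-translation to generate new (path, instruction) pairs for that perturbed environment.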

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Room2Room | R2R+EnvDrop | spl | 0.61 | | Unverified |
| VLN Challenge | null | success | 0.56 | | Unverified |
| VLN Challenge | null | success | 0.69 | | Unverified |
| VLN Challenge | Back Translation with Environmental Dropout (no beam search) | success | 0.51 | | Unverified |

Reproductions