
Cross-Lingual Vision-Language Navigation

2019-10-24

An Yan, Xin Eric Wang, Jiangtao Feng, Lei Li, William Yang Wang


Abstract

Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics. However, prior studies on vision-language navigation (VLN) have focused almost exclusively on English instructions. To go beyond English and serve speakers of other languages, we collect a bilingual Room-to-Room (BL-R2R) dataset that extends the original benchmark with new Chinese instructions. Based on this newly introduced dataset, we study how an agent trained on existing English instructions can navigate effectively with another language in a zero-shot setting. Without any training data in the target language, our model achieves results competitive with a model that has full access to target-language training data. Moreover, we investigate our model's transfer ability when given a certain amount of target-language training data.
