ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox
Code
- github.com/askforalfred/alfred (official, PyTorch, ★ 0)
- github.com/alfworld/alfworld (PyTorch, ★ 688)
- github.com/alexpashevich/E.T. (PyTorch, ★ 93)
- github.com/gistvision/moca (PyTorch, ★ 40)
- github.com/snumprlab/cl-alfred (PyTorch, ★ 31)
- github.com/facebookresearch/EgoTV (PyTorch, ★ 27)
- github.com/snumprlab/capeam (PyTorch, ★ 16)
- github.com/yonseivnl/mcr-agent (PyTorch, ★ 7)
- github.com/yenchehsiao/autonomousllmagentwithadaptingplanning (PyTorch, ★ 3)
- github.com/caisarl76/alfred (PyTorch, ★ 0)
Abstract
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
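To make the structure described above concrete, the sketch below shows one way an ALFRED-style annotated demonstration could be represented in code: a high-level goal directive, an aligned list of low-level step instructions, and the expert action sequence grounded in egocentric frames. The class and field names are illustrative assumptions for exposition, not the dataset's actual schema; the official repository (github.com/askforalfred/alfred) documents the real trajectory format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, simplified record types for an ALFRED-style demonstration.
# Field names are illustrative assumptions, not the dataset's actual schema.

@dataclass
class LowLevelAction:
    api_action: str                      # e.g. "MoveAhead", "PickupObject"
    target_object: Optional[str] = None  # object class for interaction actions
    frame_path: Optional[str] = None     # path to the egocentric RGB frame

@dataclass
class Demonstration:
    goal_directive: str                  # high-level goal, e.g. "Rinse off a mug..."
    step_instructions: List[str]         # low-level instructions, one per sub-goal
    actions: List[LowLevelAction] = field(default_factory=list)

    def action_sequence(self) -> List[str]:
        """Return the flat expert action sequence a model must reproduce."""
        return [a.api_action for a in self.actions]

# Toy example using the directive styles quoted in the abstract.
demo = Demonstration(
    goal_directive="Rinse off a mug and place it in the coffee maker.",
    step_instructions=[
        "Walk to the coffee maker on the right.",
        "Pick up the mug next to it.",
    ],
    actions=[
        LowLevelAction("MoveAhead"),
        LowLevelAction("PickupObject", target_object="Mug"),
    ],
)
print(demo.action_sequence())  # ['MoveAhead', 'PickupObject']
```

In this framing, a model receives the goal directive, the step instructions, and the stream of egocentric frames, and must output the action sequence (including object interactions) that accomplishes the task.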