
FLAME: Learning to Navigate with Multimodal LLM in Urban Environments

2024-08-20 · Code Available

Yunzhe Xu, Yiyuan Pan, Zhe Liu, Hesheng Wang

Abstract

Large Language Models (LLMs) have demonstrated potential in Vision-and-Language Navigation (VLN) tasks, yet current applications face challenges. While LLMs excel in general conversation scenarios, they struggle with specialized navigation tasks, yielding suboptimal performance compared to specialized VLN models. We introduce FLAME (FLAMingo-Architected Embodied Agent), a novel Multimodal LLM-based agent and architecture designed for urban VLN tasks that efficiently handles multiple observations. Our approach implements a three-phase tuning technique for effective adaptation to navigation tasks: single perception tuning for street view description, multiple perception tuning for route summarization, and end-to-end training on VLN datasets. The augmented datasets are synthesized automatically. Experimental results demonstrate FLAME's superiority over existing methods, surpassing the state of the art by a 7.3% increase in task completion on the Touchdown dataset. This work showcases the potential of Multimodal LLMs (MLLMs) in complex navigation tasks and represents an advancement toward applying MLLMs in embodied intelligence.
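The abstract describes a sequential three-phase tuning curriculum. A minimal sketch of that ordering is shown below; the phase names follow the abstract, but the `train_phase` stub and the dataset descriptions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of FLAME's three-phase tuning curriculum.
# Assumption: train_phase and the dataset descriptions are placeholders;
# only the phase ordering is taken from the paper's abstract.

PHASES = [
    ("single_perception_tuning", "street-view description pairs"),
    ("multiple_perception_tuning", "route summarization sequences"),
    ("end_to_end_vln_training", "VLN trajectories (e.g., Touchdown, map2seq)"),
]

def train_phase(state, phase_name, data_desc):
    """Placeholder training step: records which phase was applied and on what data."""
    state["history"].append(phase_name)
    state["data_seen"].append(data_desc)
    return state

def run_curriculum():
    """Apply the three tuning phases in order, threading model state through each."""
    state = {"history": [], "data_seen": []}
    for name, data in PHASES:
        state = train_phase(state, name, data)
    return state
```

The key property captured here is that each phase builds on the previous one (description, then summarization, then full navigation), rather than training all objectives jointly.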

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| map2seq | FLAME | Task Completion (TC) | 52.44 | — | Unverified |
| Touchdown Dataset | FLAME | Task Completion (TC) | 40.2 | — | Unverified |

Reproductions