
NaviDriveVLM: Decoupling High-Level Reasoning and Motion Planning for Autonomous Driving

2026-03-09 · Code Available

Ximeng Tao, Pardis Taghavi, Dimitar Filev, Reza Langari, Gaurav Pandey


Abstract

Vision-language models (VLMs) have emerged as a promising direction for end-to-end autonomous driving (AD) by jointly modeling visual observations, driving context, and language-based reasoning. However, existing VLM-based systems face a trade-off between high-level reasoning and motion planning: large models offer strong semantic understanding but are costly to adapt for precise control, whereas small VLMs can be fine-tuned efficiently but often exhibit weaker reasoning. We propose NaviDriveVLM, a decoupled framework that separates reasoning from action generation using a large-scale Navigator and a lightweight trainable Driver. This design preserves reasoning ability, reduces training cost, and provides an explicit, interpretable intermediate representation for downstream planning. Experiments on the nuScenes benchmark show that NaviDriveVLM outperforms large VLM baselines in end-to-end motion planning.
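The abstract describes only the high-level split, not an implementation. As a minimal sketch of what such a decoupling could look like, the code below pairs a frozen large-model Navigator that emits a structured, human-readable plan with a small trainable Driver that regresses waypoints from that plan. All names (`Navigator`, `Driver`, `IntermediatePlan`) and the concrete plan fields are assumptions for illustration, not the paper's API.

```python
# Hypothetical sketch of the Navigator/Driver decoupling from the abstract.
# Class names and plan fields are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import List

import torch
import torch.nn as nn


@dataclass
class IntermediatePlan:
    """Explicit, interpretable representation passed from Navigator to Driver.

    The abstract only says this representation is explicit and interpretable;
    the concrete fields below are assumptions.
    """
    maneuver: str            # e.g. "lane_keep", "turn_left", "yield"
    rationale: str           # free-text reasoning from the large VLM
    target_speed_mps: float  # coarse speed target


class Navigator:
    """Frozen large-scale VLM: high-level reasoning only, never fine-tuned."""

    def plan(self, images: List[torch.Tensor], instruction: str) -> IntermediatePlan:
        # A real system would prompt a large VLM with camera views and driving
        # context; this placeholder just returns a fixed plan.
        return IntermediatePlan(
            maneuver="lane_keep",
            rationale="Clear lane ahead; no agents require yielding.",
            target_speed_mps=8.0,
        )


class Driver(nn.Module):
    """Lightweight trainable head: maps the plan + ego state to waypoints."""

    MANEUVERS = ["lane_keep", "turn_left", "turn_right", "yield", "stop"]

    def __init__(self, ego_dim: int = 4, horizon: int = 6):
        super().__init__()
        self.horizon = horizon
        in_dim = len(self.MANEUVERS) + 1 + ego_dim  # one-hot + speed + ego state
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),  # (x, y) per future step
        )

    def forward(self, plan: IntermediatePlan, ego_state: torch.Tensor) -> torch.Tensor:
        one_hot = torch.zeros(len(self.MANEUVERS))
        one_hot[self.MANEUVERS.index(plan.maneuver)] = 1.0
        feats = torch.cat([one_hot, torch.tensor([plan.target_speed_mps]), ego_state])
        return self.mlp(feats).view(self.horizon, 2)


# Only the small Driver receives gradients; the Navigator stays frozen,
# which is where the claimed training-cost reduction would come from.
navigator, driver = Navigator(), Driver()
plan = navigator.plan(images=[], instruction="follow the route")
waypoints = driver(plan, ego_state=torch.zeros(4))
print(plan.maneuver, waypoints.shape)  # lane_keep torch.Size([6, 2])
```

Because the intermediate plan is plain text and discrete maneuvers rather than latent features, it doubles as the "explicit interpretable intermediate representation" the abstract mentions: a human can inspect exactly what the Navigator told the Driver to do.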
