SOTAVerified

Vision and Language Navigation

Papers

Showing 1–10 of 223 papers

| Title | Status | Hype |
|---|---|---|
| Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities | — | 0 |
| NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments | Code | 2 |
| Grounded Vision-Language Navigation for UAVs with Open-Vocabulary Goal Understanding | — | 0 |
| A Navigation Framework Utilizing Vision-Language Models | Code | 0 |
| Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion | — | 0 |
| Cross from Left to Right Brain: Adaptive Text Dreamer for Vision-and-Language Navigation | Code | 1 |
| FlightGPT: Towards Generalizable and Interpretable UAV Vision-and-Language Navigation with Vision-Language Models | Code | 2 |
| Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation | Code | 2 |
| CityNavAgent: Aerial Vision-and-Language Navigation with Hierarchical Semantic Planning and Global Memory | Code | 1 |
| MetaScenes: Towards Automated Replica Creation for Real-world 3D Scans | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | R2R + EnvDrop | SPL | 0.61 | — | Unverified |
| 2 | RCM + SIL | SPL | 0.59 | — | Unverified |
| 3 | Tactical Rewind (short) | SPL | 0.41 | — | Unverified |
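The SPL metric reported above (Success weighted by Path Length, Anderson et al., 2018) can be sketched as follows. This is a minimal illustration with made-up episode data, not results from the benchmark itself; the `spl` function and its input format are assumptions for this example.

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_path_len, taken_path_len) tuples.
    Each successful episode contributes shortest / max(taken, shortest);
    failed episodes contribute 0. The result is the mean over all episodes.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Hypothetical episodes: two successes and one failure.
episodes = [
    (True, 10.0, 12.0),   # succeeded, path slightly longer than optimal
    (True, 8.0, 8.0),     # succeeded with the optimal path
    (False, 5.0, 20.0),   # failed: contributes 0 regardless of length
]
print(round(spl(episodes), 3))  # → 0.611
```

SPL penalizes inefficient successes, so an agent that reaches the goal via a long detour scores less than one that follows the shortest path, which is why it is the headline metric on R2R-style leaderboards.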