SOTAVerified

Vision and Language Navigation

Papers

Showing 101–150 of 223 papers

| Title | Status | Hype |
|-------|--------|------|
| DAP: Domain-aware Prompt Learning for Vision-and-Language Navigation | | 0 |
| Diagnosing Vision-and-Language Navigation: What Really Matters | | 0 |
| Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion | | 0 |
| Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions? | | 0 |
| DOPE: Dual Object Perception-Enhancement Network for Vision-and-Language Navigation | | 0 |
| Do Visual Imaginations Improve Vision-and-Language Navigation Agents? | | 0 |
| Endowing Embodied Agents with Spatial Reasoning Capabilities for Vision-and-Language Navigation | | 0 |
| Evaluating Explanation Methods for Vision-and-Language Navigation | | 0 |
| Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation | | 0 |
| Explicit Object Relation Alignment for Vision and Language Navigation | | 0 |
| Explore the Potential Performance of Vision-and-Language Navigation Model: a Snapshot Ensemble Method | | 0 |
| Extended Abstract: Improving Vision-and-Language Navigation with Image-Text Pairs from the Web | | 0 |
| Fine-Grained Alignment in Vision-and-Language Navigation through Bayesian Optimization | | 0 |
| FlexVLN: Flexible Adaptation for Diverse Vision-and-Language Navigation Tasks | | 0 |
| Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule | | 0 |
| Graph based Environment Representation for Vision-and-Language Navigation in Continuous Environments | | 0 |
| Grounded Vision-Language Navigation for UAVs with Open-Vocabulary Goal Understanding | | 0 |
| Ground-level Viewpoint Vision-and-Language Navigation in Continuous Environments | | 0 |
| HA-VLN: A Benchmark for Human-Aware Navigation in Discrete-Continuous Environments with Dynamic Multi-Human Interactions, Real-World Validation, and an Open Leaderboard | | 0 |
| Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks | | 0 |
| HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation | | 0 |
| I2EDL: Interactive Instruction Error Detection and Localization | | 0 |
| Improving Vision-and-Language Navigation by Generating Future-View Image Semantics | | 0 |
| Iterative Vision-and-Language Navigation | | 0 |
| IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation | | 0 |
| Just Ask: An Interactive Learning Framework for Vision and Language Navigation | | 0 |
| LangNav: Language as a Perceptual Representation for Navigation | | 0 |
| Language-Aligned Waypoint (LAW) Supervision for Vision-and-Language Navigation in Continuous Environments | | 0 |
| Language and Planning in Robotic Navigation: A Multilingual Evaluation of State-of-the-Art Models | | 0 |
| Language-guided Navigation via Cross-Modal Grounding and Alternate Adversarial Learning | | 0 |
| Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation | | 0 |
| Loc4Plan: Locating Before Planning for Outdoor Vision and Language Navigation | | 0 |
| MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation | | 0 |
| Masked Path Modeling for Vision-and-Language Navigation | | 0 |
| MC-GPT: Empowering Vision-and-Language Navigation with Memory Map and Reasoning Chains | | 0 |
| Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding | | 0 |
| MetaScenes: Towards Automated Replica Creation for Real-world 3D Scans | | 0 |
| Mind the Error! Detection and Localization of Instruction Errors in Vision-and-Language Navigation | | 0 |
| Mind the Gap: Improving Success Rate of Vision-and-Language Navigation by Revisiting Oracle Success Routes | | 0 |
| MiniVLN: Efficient Vision-and-Language Navigation by Progressive Knowledge Distillation | | 0 |
| CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation | | 0 |
| Multi-modal Discriminative Model for Vision-and-Language Navigation | | 0 |
| Multi-View Learning for Vision-and-Language Navigation | | 0 |
| NavAgent: Multi-scale Urban Street View Fusion For UAV Embodied Vision-and-Language Navigation | | 0 |
| NAVCON: A Cognitively Inspired and Linguistically Grounded Corpus for Vision and Language Navigation | | 0 |
| NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation | | 0 |
| Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning | | 0 |
| NaVILA: Legged Robot Vision-Language-Action Model for Navigation | | 0 |
| Object-and-Action Aware Model for Visual Language Navigation | | 0 |
Page 3 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | human | success | 0.86 | | Unverified |
| 2 | Lily | success | 0.79 | | Unverified |
| 3 | Airbert | success | 0.78 | | Unverified |
| 4 | Global Normalization | success | 0.74 | | Unverified |
| 5 | explore@40 beam-search | success | 0.74 | | Unverified |
| 6 | BEVBert | success | 0.73 | | Unverified |
| 7 | GMap | success | 0.73 | | Unverified |
| 8 | VLN-Bert | success | 0.73 | | Unverified |
| 9 | Global Normalization pre-explore | success | 0.73 | | Unverified |
| 10 | FOAM-Beam Search | success | 0.72 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | FLAME | Task Completion (TC) | 40.2 | | Unverified |
| 2 | ORAR + junction type + heading delta | Task Completion (TC) | 29.1 | | Unverified |
| 3 | ORAR | Task Completion (TC) | 24.2 | | Unverified |
| 4 | ARC + L2STOP | Task Completion (TC) | 16.68 | | Unverified |
| 5 | VLN Transformer +M-50 +style | Task Completion (TC) | 16.2 | | Unverified |
| 6 | VLN Transformer | Task Completion (TC) | 14.9 | | Unverified |
| 7 | ARC | Task Completion (TC) | 14.13 | | Unverified |
| 8 | Retouch-RConcat | Task Completion (TC) | 12.8 | | Unverified |
| 9 | Gated Attention (GA) | Task Completion (TC) | 11.9 | | Unverified |
| 10 | RConcat | Task Completion (TC) | 11.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MARVAL | nDTW | 66.76 | | Unverified |
| 2 | EnvEdit-PT | nDTW | 64.61 | | Unverified |
| 3 | HAMT | nDTW | 59.94 | | Unverified |
| 4 | CLEAR-CLIP | nDTW | 53.69 | | Unverified |
| 5 | Monolingual Baseline | nDTW | 41.05 | | Unverified |
| 6 | Multilingual Baseline | nDTW | 36.81 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | FLAME | Task Completion (TC) | 52.44 | | Unverified |
| 2 | ORAR + junction type + heading delta | Task Completion (TC) | 46.7 | | Unverified |
| 3 | ORAR | Task Completion (TC) | 45.1 | | Unverified |
| 4 | Gated Attention | Task Completion (TC) | 17 | | Unverified |
| 5 | RConcat | Task Completion (TC) | 14.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | R2R+EnvDrop | SPL | 0.61 | | Unverified |
| 2 | RCM + SIL | SPL | 0.59 | | Unverified |
| 3 | Tactical Rewind - short | SPL | 0.41 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Hierarchical Cross-Modal Agent | SPL (Success weighted by Path Length) | 0.4 | | Unverified |