SOTAVerified

Vision and Language Navigation

Papers

Showing 101–150 of 223 papers

Title | Status | Hype
Behavioral Analysis of Vision-and-Language Navigation Agents | Code | 0
VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View | Code | 1
CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation | — | 0
PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation | — | 0
GeoVLN: Learning Geometry-Enhanced Visual Representation with Slot Attention for Vision-and-Language Navigation | Code | 0
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models | Code | 2
Masked Path Modeling for Vision-and-Language Navigation | — | 0
PASTS: Progress-Aware Spatio-Temporal Transformer Speaker For Vision-and-Language Navigation | — | 0
A Dual Semantic-Aware Recurrent Global-Adaptive Network For Vision-and-Language Navigation | Code | 1
Improving Vision-and-Language Navigation by Generating Future-View Image Semantics | — | 0
KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation | Code | 1
HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation | — | 0
Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding | — | 0
MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation | Code | 0
ESceme: Vision-and-Language Navigation with Episodic Scene Memory | Code | 1
VLN-Trans: Translator for the Vision and Language Navigation Agent | Code | 1
Graph based Environment Representation for Vision-and-Language Navigation in Continuous Environments | — | 0
BEVBert: Multimodal Map Pre-training for Language-guided Navigation | Code | 2
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation | — | 0
Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning | — | 0
Structure-Encoding Auxiliary Tasks for Improved Visual Representation in Vision-and-Language Navigation | — | 0
DOROTHIE: Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents | Code | 1
ULN: Towards Underspecified Vision-and-Language Navigation | Code | 0
Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation | Code | 1
Iterative Vision-and-Language Navigation | — | 0
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | — | 0
LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation | Code | 0
Ground then Navigate: Language-guided Navigation in Dynamic Scenes | Code | 0
Anticipating the Unseen Discrepancy for Vision and Language Navigation | — | 0
Learning from Unlabeled 3D Environments for Vision-and-Language Navigation | Code | 1
A Priority Map for Vision-and-Language Navigation with Trajectory Plans and Feature-Location Cues | Code | 0
CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations | Code | 0
1st Place Solutions for RxR-Habitat Vision-and-Language Navigation Competition (CVPR 2022) | Code | 2
Local Slot Attention for Vision-and-Language Navigation | Code | 0
FOAM: A Follower-aware Speaker Model For Vision-and-Language Navigation | Code | 0
Explicit Object Relation Alignment for Vision and Language Navigation | Code | 0
Sim-2-Sim Transfer for Vision-and-Language Navigation in Continuous Environments | — | 0
Reinforced Structured State-Evolution for Vision-Language Navigation | Code | 1
Simple and Effective Synthesis of Indoor 3D Scenes | Code | 1
EnvEdit: Environment Editing for Vision-and-Language Navigation | Code | 1
FedVLN: Privacy-preserving Federated Vision-and-Language Navigation | Code | 1
Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas | Code | 1
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions | Code | 2
HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation | Code | 1
Cross-modal Map Learning for Vision and Language Navigation | Code | 1
Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation | Code | 1
Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation | Code | 2
One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones | Code | 1
Self-supervised 3D Semantic Representation Learning for Vision-and-Language Navigation | — | 0
Explore the Potential Performance of Vision-and-Language Navigation Model: a Snapshot Ensemble Method | — | 0
Page 3 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | success | 0.86 | — | Unverified
2 | Lily | success | 0.79 | — | Unverified
3 | Airbert | success | 0.78 | — | Unverified
4 | Global Normalization | success | 0.74 | — | Unverified
5 | explore@40 beam-search | success | 0.74 | — | Unverified
6 | BEVBert | success | 0.73 | — | Unverified
7 | GMap | success | 0.73 | — | Unverified
8 | VLN-Bert | success | 0.73 | — | Unverified
9 | Global Normalization pre-explore | success | 0.73 | — | Unverified
10 | FOAM-Beam Search | success | 0.72 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FLAME | Task Completion (TC) | 40.2 | — | Unverified
2 | ORAR + junction type + heading delta | Task Completion (TC) | 29.1 | — | Unverified
3 | ORAR | Task Completion (TC) | 24.2 | — | Unverified
4 | ARC + L2STOP | Task Completion (TC) | 16.68 | — | Unverified
5 | VLN Transformer +M-50 +style | Task Completion (TC) | 16.2 | — | Unverified
6 | VLN Transformer | Task Completion (TC) | 14.9 | — | Unverified
7 | ARC | Task Completion (TC) | 14.13 | — | Unverified
8 | Retouch-RConcat | Task Completion (TC) | 12.8 | — | Unverified
9 | Gated Attention (GA) | Task Completion (TC) | 11.9 | — | Unverified
10 | RConcat | Task Completion (TC) | 11.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MARVAL | nDTW | 66.76 | — | Unverified
2 | EnvEdit-PT | nDTW | 64.61 | — | Unverified
3 | HAMT | nDTW | 59.94 | — | Unverified
4 | CLEAR-CLIP | nDTW | 53.69 | — | Unverified
5 | Monolingual Baseline | nDTW | 41.05 | — | Unverified
6 | Multilingual Baseline | nDTW | 36.81 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FLAME | Task Completion (TC) | 52.44 | — | Unverified
2 | ORAR + junction type + heading delta | Task Completion (TC) | 46.7 | — | Unverified
3 | ORAR | Task Completion (TC) | 45.1 | — | Unverified
4 | Gated Attention | Task Completion (TC) | 17 | — | Unverified
5 | RConcat | Task Completion (TC) | 14.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | R2R+EnvDrop | SPL | 0.61 | — | Unverified
2 | RCM + SIL | SPL | 0.59 | — | Unverified
3 | Tactical Rewind - short | SPL | 0.41 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Hierarchical Cross-Modal Agent | SPL (Success weighted by Path Length) | 0.4 | — | Unverified