SOTAVerified

Visual Navigation

Visual Navigation is the problem of steering an agent, e.g. a mobile robot, through an environment using camera input alone. The agent is given a target image (the view it would see from the target position) and must reach that position from its current one by applying a sequence of actions, guided only by its camera observations.
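The observe-act loop described above can be sketched as follows. This is a minimal illustration using a toy 1-D world in place of a real simulator; the `ToyEnv` and `greedy_policy` names are hypothetical, not from any specific navigation framework.

```python
class ToyEnv:
    """1-D corridor stand-in for a simulator: the 'observation' is
    just the agent's position, and the 'target image' is the goal cell."""

    def __init__(self, start=0, goal=5):
        self.pos, self.goal = start, goal

    def reset(self):
        return self.pos, self.goal          # (observation, target)

    def step(self, action):
        self.pos += {"forward": 1, "backward": -1}[action]
        return self.pos                     # new observation

    def reached_target(self):
        return self.pos == self.goal


def greedy_policy(obs, target):
    """Trivial policy: head toward the target, stop on arrival."""
    if obs == target:
        return "stop"
    return "forward" if obs < target else "backward"


def navigate(env, policy, max_steps=100):
    """One episode: repeatedly map (observation, target) to an action
    until the policy emits 'stop' or the step budget runs out."""
    obs, target = env.reset()
    for _ in range(max_steps):
        action = policy(obs, target)
        if action == "stop":
            break
        obs = env.step(action)
    return env.reached_target()
```

In a real image-goal setup, `obs` and `target` would be RGB frames and `policy` a learned network over discrete actions (move forward, turn left/right, stop); the loop structure is the same.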

Source: Vision-based Navigation Using Deep Reinforcement Learning

Papers

Showing 151–200 of 316 papers

- Value Explicit Pretraining for Learning Transferable Representations
- ReCoRe: Regularized Contrastive Representation Learning of World Model
- Building Category Graphs Representation with Spatial and Temporal Attention for Visual Navigation
- Visual Hindsight Self-Imitation Learning for Interactive Navigation
- Deep Learning for Visual Navigation of Underwater Robots
- Bird's Eye View Based Pretrained World model for Visual Navigation
- Invariance is Key to Generalization: Examining the Role of Representation in Sim-to-Real Transfer for Visual Navigation
- What you see is what you get: Experience ranking with deep neural dataset-to-dataset similarity for topological localisation [Code]
- Zero-Shot Object Goal Visual Navigation With Class-Independent Relationship Network [Code]
- Multimodal Large Language Model for Visual Navigation
- A Decentralized Cooperative Navigation Approach for Visual Homing Networks
- STERLING: Self-Supervised Terrain Representation Learning from Unconstrained Robot Experience
- Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference Aligned Path Planning
- Omnidirectional Information Gathering for Knowledge Transfer-based Audio-Visual Navigation
- VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation [Code]
- Multi-goal Audio-visual Navigation using Sound Direction Map
- CAVEN: An Embodied Conversational Agent for Efficient Audio-Visual Navigation in Noisy Environments
- SACSoN: Scalable Autonomous Control for Social Navigation
- L-SA: Learning Under-Explored Targets in Multi-Target Reinforcement Learning
- Fast Traversability Estimation for Wild Visual Navigation
- Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics
- Filter-Aware Model-Predictive Control
- Improving Vision-and-Language Navigation by Generating Future-View Image Semantics
- OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav
- DRISHTI: Visual Navigation Assistant for Visually Impaired
- Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding
- Robustness of Utilizing Feedback in Embodied Visual Navigation
- ELBA: Learning by Asking for Embodied Visual Navigation and Task Completion
- Embodied Agents for Efficient Exploration and Smart Scene Description
- Object-Goal Visual Navigation via Effective Exploration of Relations Among Historical Navigation States
- Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation
- Navigating to Objects in the Real World
- Instance-Specific Image Goal Navigation: Training Embodied Agents to Find Object Instances
- MoDA: Map style transfer for self-supervised Domain Adaptation of embodied agents
- Predicting Topological Maps for Visual Navigation in Unexplored Environments
- NaRPA: Navigation and Rendering Pipeline for Astronautics
- Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation
- AVLEN: Audio-Visual-Language Embodied Navigation in 3D Environments
- Retrospectives on the Embodied AI Workshop
- Pay Self-Attention to Audio-Visual Navigation
- Autonomous Visual Navigation A Biologically Inspired Approach
- Towards self-attention based visual navigation in the real world
- UAS Navigation in the Real World Using Visual Observation
- MemoNav: Selecting Informative Memories for Visual Navigation
- See What the Robot Can't See: Learning Cooperative Perception for Visual Navigation [Code]
- RCA: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning
- Visual Pre-training for Navigation: What Can We Learn from Noise? [Code]
- Good Time to Ask: A Learning Framework for Asking for Help in Embodied Visual Navigation [Code]
- Integrating Symmetry into Differentiable Planning with Steerable Convolutions
- SAMPLE-HD: Simultaneous Action and Motion Planning Learning Environment

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NaviLLM | dist_to_end_reduction | 7.9 | – | Unverified |
| 2 | VLN-PETL | dist_to_end_reduction | 6.13 | – | Unverified |
| 3 | early to bed | dist_to_end_reduction | 6.03 | – | Unverified |
| 4 | HAMT | dist_to_end_reduction | 5.58 | – | Unverified |
| 5 | s-agent (NDH-Full) | dist_to_end_reduction | 5.27 | – | Unverified |
| 6 | BabyWalk (r2r-pretrain) | dist_to_end_reduction | 4.46 | – | Unverified |
| 7 | Environment-agnostic Multitask Learning | dist_to_end_reduction | 3.91 | – | Unverified |
| 8 | BabyWalk | dist_to_end_reduction | 3.65 | – | Unverified |
| 9 | Test2-NDH | dist_to_end_reduction | 3.44 | – | Unverified |
| 10 | SCoA | dist_to_end_reduction | 3.37 | – | Unverified |
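The `dist_to_end_reduction` metric above (often called goal progress, in meters) measures how much closer to the goal the agent ends up: the distance from the start to the goal minus the distance from the agent's final position to the goal. A minimal sketch, assuming geodesic distances are supplied by the simulator:

```python
def dist_to_end_reduction(d_start_to_goal, d_end_to_goal):
    """Goal progress in meters: positive when the agent finished
    closer to the goal than it started, negative when farther away."""
    return d_start_to_goal - d_end_to_goal
```

For example, an agent that starts 10.0 m from the goal and stops 2.1 m away scores 7.9.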
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SUSA | spl | 0.64 | – | Unverified |
| 2 | Meta-Explore | spl | 0.61 | – | Unverified |
| 3 | NaviLLM | spl | 0.6 | – | Unverified |
| 4 | BEV-BERT | spl | 0.6 | – | Unverified |
| 5 | HOP | spl | 0.59 | – | Unverified |
| 6 | DUET | spl | 0.58 | – | Unverified |
| 7 | VLN-PETL | spl | 0.58 | – | Unverified |
| 8 | VLN-BERT | spl | 0.57 | – | Unverified |
| 9 | Prevalent | spl | 0.51 | – | Unverified |
| 10 | RCM+SIL (no early exploration) | spl | 0.38 | – | Unverified |
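SPL (Success weighted by Path Length) scores each episode as success weighted by the ratio of the shortest-path length to the length the agent actually traveled, averaged over episodes; failed episodes score zero. A direct transcription of the standard definition:

```python
def spl(episodes):
    """Success weighted by Path Length, averaged over episodes.

    Each episode is a tuple (success, shortest, taken), where
    `shortest` is the geodesic shortest-path length from start to
    goal and `taken` is the length of the path the agent walked.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            # An agent that takes exactly the shortest path scores 1.0;
            # longer paths are penalized proportionally.
            total += shortest / max(taken, shortest)
    return total / len(episodes)
```

So a perfectly efficient success, a success that walked twice the optimal distance, and a failure average to (1 + 0.5 + 0) / 3 = 0.5.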
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | AutoVLN | Nav-SPL | 27.83 | – | Unverified |
| 2 | NaviLLM | Nav-SPL | 26.26 | – | Unverified |
| 3 | Meta-Explore | Nav-SPL | 25.8 | – | Unverified |
| 4 | SUSA | Nav-SPL | 25.47 | – | Unverified |
| 5 | DUET | Nav-SPL | 21.42 | – | Unverified |
| 6 | GBE | Nav-SPL | 13.3 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MVV-IN | SPL (All) | 17.27 | – | Unverified |
| 2 | SAVN | SPL (All) | 16.15 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PopArt-IMPALA | Medium Human-Normalized Score | 72.8 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Prevalent | spl | 28.72 | – | Unverified |