SOTAVerified

Visual Odometry

Visual odometry is an important area of information fusion whose central aim is to estimate the pose of a robot incrementally from data collected by visual sensors such as cameras.

Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
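At its core, any odometry pipeline chains frame-to-frame relative motion estimates into a global pose. As a minimal illustration (not tied to any specific paper listed below; the function names `compose` and `integrate` are hypothetical), here is a pure-Python sketch that accumulates relative SE(2) motions, expressed in the current body frame, into a trajectory of global poses:

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose with a relative motion.

    pose and delta are (x, y, theta) tuples; delta is expressed in
    the current body frame, as a frame-to-frame VO estimate would be.
    """
    x, y, th = pose
    dx, dy, dth = delta
    return (
        x + dx * math.cos(th) - dy * math.sin(th),
        y + dx * math.sin(th) + dy * math.cos(th),
        th + dth,
    )

def integrate(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain relative motions into a list of global poses."""
    trajectory = [start]
    for delta in relative_motions:
        trajectory.append(compose(trajectory[-1], delta))
    return trajectory

# Four identical "move 1 m forward, then turn 90 degrees left"
# motions trace out a square and return to the starting position.
steps = [(1.0, 0.0, math.pi / 2)] * 4
traj = integrate(steps)
```

Because each estimate is composed onto the previous pose, small per-frame errors accumulate over time; this drift is exactly what the robustness and uncertainty-modeling methods below aim to reduce.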

Papers

Showing 31–40 of 408 papers

Title | Status | Hype
LGU-SLAM: Learnable Gaussian Uncertainty Matching with Deformable Correlation Sampling for Deep Visual SLAM | Code | 1
LiVisSfM: Accurate and Robust Structure-from-Motion with LiDAR and Visual Cues | — | 0
ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras | Code | 2
IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera | Code | 2
ORB-SfMLearner: ORB-Guided Self-supervised Visual Odometry with Selective Online Adaptation | — | 0
MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry | — | 0
Robust Vehicle Localization and Tracking in Rain using Street Maps | Code | 0
Efficient Camera Exposure Control for Visual Odometry via Deep Reinforcement Learning | Code | 1
Creating a Segmented Pointcloud of Grapevines by Combining Multiple Viewpoints Through Visual Odometry | — | 0
Single-Photon 3D Imaging with Equi-Depth Photon Histograms | — | 0
Page 4 of 41

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | CIVO | Relative Position Error, Translation [cm] | 1.36 | — | Unverified