SOTAVerified

Visual Odometry

Visual odometry is an important area of information fusion whose central aim is to estimate the pose of a robot from data collected by visual sensors such as monocular, stereo, or RGB-D cameras.

Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
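At its core, a visual odometry pipeline estimates the relative camera motion between consecutive frames and chains those relative transforms into an absolute trajectory. A minimal sketch of that chaining step, assuming poses are represented as 4x4 homogeneous SE(3) matrices (the helper names `se3` and `chain_poses` are illustrative, not from any particular VO system):

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_poses(relative_poses):
    """Compose frame-to-frame relative transforms into absolute camera poses,
    starting from the identity (the first camera frame)."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T_rel in relative_poses:
        pose = pose @ T_rel   # accumulate motion: world-from-current = world-from-prev @ prev-from-current
        trajectory.append(pose.copy())
    return trajectory

# Example: three identical forward steps of 0.5 m along the camera z-axis.
step = se3(np.eye(3), np.array([0.0, 0.0, 0.5]))
traj = chain_poses([step, step, step])
print(traj[-1][:3, 3])  # final position: [0. 0. 1.5]
```

In a real system the per-frame relative transforms would come from feature matching and geometric estimation (or a learned model); the composition step shown here is common to both classical and learning-based pipelines.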

Papers

Showing 31–40 of 408 papers

| Title | Status | Hype |
| --- | --- | --- |
| Converting Depth Images and Point Clouds for Feature-based Pose Estimation | Code | 1 |
| Deep Visual Odometry with Events and Frames | Code | 1 |
| S2LD: Sparse-to-Local-Dense Matching for Geometry-Guided Correspondence Estimation | Code | 1 |
| Transformer-Based Model for Monocular Visual Odometry: A Video Understanding Approach | Code | 1 |
| Modality-invariant Visual Odometry for Embodied Vision | Code | 1 |
| FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking Datasets | Code | 1 |
| Dense Prediction Transformer for Scale Estimation in Monocular Visual Odometry | Code | 1 |
| SF2SE3: Clustering Scene Flow into SE(3)-Motions via Proposal and Selection | Code | 1 |
| ALTO: A Large-Scale Dataset for UAV Visual Place Recognition and Localization | Code | 1 |
| JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes | Code | 1 |
Page 4 of 41

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CIVO | Relative Position Error, Translation [cm] | 1.36 | — | Unverified |
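The benchmark metric above, translational relative pose error, compares the relative motion between pose pairs in the estimated trajectory against the same pairs in the ground truth, which makes it insensitive to global drift accumulated before each pair. A minimal sketch of the common RMSE formulation, assuming 4x4 homogeneous pose matrices and a fixed frame spacing `delta` (the function name and trajectory helper are illustrative, not CIVO's actual evaluation code):

```python
import numpy as np

def relative_position_error(gt, est, delta=1):
    """RMSE of the translational relative pose error over all pose pairs
    spaced `delta` frames apart. gt and est are lists of 4x4 matrices."""
    errors = []
    for i in range(len(gt) - delta):
        gt_rel = np.linalg.inv(gt[i]) @ gt[i + delta]     # ground-truth relative motion
        est_rel = np.linalg.inv(est[i]) @ est[i + delta]  # estimated relative motion
        err = np.linalg.inv(gt_rel) @ est_rel             # residual transform between the two
        errors.append(np.linalg.norm(err[:3, 3]))         # translational part only
    return float(np.sqrt(np.mean(np.square(errors))))

def translate_z(n, step):
    """Illustrative straight-line trajectory: n poses stepping along z."""
    poses, T = [], np.eye(4)
    for _ in range(n):
        poses.append(T.copy())
        T = T.copy()
        T[2, 3] += step
    return poses

gt = translate_z(5, 1.0)    # ground truth: 1 m per frame
est = translate_z(5, 1.01)  # estimate overshoots by 1 cm per frame
print(relative_position_error(gt, est))  # ≈ 0.01
```

Note that the table reports the metric in centimetres; a result in metres, as in this sketch, would be scaled accordingly before comparison.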