
Monocular Depth Estimation

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a network powerful enough to directly regress the depth map, or splitting the prediction into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error.
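The two metrics named above are straightforward to compute from per-pixel depth maps. A minimal sketch is shown below; the function name and the masking convention (ignoring pixels with non-positive ground-truth depth, as is common practice on KITTI and NYUv2) are illustrative assumptions, not any specific benchmark's official evaluation code:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute RMSE and absolute relative error for depth maps.

    pred, gt: NumPy arrays of per-pixel depths (e.g. in meters).
    Pixels with gt <= 0 are treated as invalid and excluded,
    a common convention on KITTI/NYUv2 (illustrative assumption).
    """
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))   # root mean squared error
    abs_rel = np.mean(np.abs(pred - gt) / gt)   # absolute relative error
    return rmse, abs_rel
```

RMSE penalizes large absolute errors (which grow with distance), while absolute relative error normalizes by ground-truth depth, so the two metrics emphasize far and near regions differently.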

Source: Defocus Deblurring Using Dual-Pixel Data

Papers

Showing 161-170 of 876 papers

| Title | Status | Hype |
| --- | --- | --- |
| Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions | Code | 2 |
| Mono-ViFI: A Unified Learning Framework for Self-supervised Single- and Multi-frame Monocular Depth Estimation | Code | 2 |
| IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation | Code | 2 |
| ProDepth: Boosting Self-Supervised Multi-Frame Monocular Depth with Probabilistic Fusion | Code | 1 |
| ScaleDepth: Decomposing Metric Depth Estimation into Scale Prediction and Relative Depth Estimation | Code | 0 |
| SCIPaD: Incorporating Spatial Clues into Unsupervised Pose-Depth Joint Learning | Code | 1 |
| Uni-DVPS: Unified Model for Depth-Aware Video Panoptic Segmentation | Code | 1 |
| Deep Learning-based Depth Estimation Methods from Monocular Image and Videos: A Comprehensive Survey | | 0 |
| Dense Monocular Motion Segmentation Using Optical Flow and Pseudo Depth Map: A Zero-Shot Approach | | 0 |
| WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation | Code | 0 |
Page 17 of 88

No leaderboard results yet.