
Monocular Depth Estimation

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and augmented reality (AR). State-of-the-art methods usually fall into one of two categories: designing a complex network that is powerful enough to directly regress the depth map, or splitting the input into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error.

Source: Defocus Deblurring Using Dual-Pixel Data
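As a sketch of how the two evaluation metrics mentioned above are typically computed (assuming NumPy; the convention of masking out zero-valued pixels to handle sparse ground truth, e.g. LiDAR on KITTI, is an assumption of this example, not something stated on this page):

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Compute RMSE and absolute relative error (AbsRel) between a
    predicted and a ground-truth depth map, in the style of common
    KITTI/NYUv2 evaluations. `pred` and `gt` hold per-pixel depth in
    metres; `mask` selects the pixels with valid ground truth."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if mask is None:
        # Assumed convention: depth 0 marks missing ground truth.
        mask = gt > 0
    p, g = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((p - g) ** 2))       # root-mean-square error
    abs_rel = np.mean(np.abs(p - g) / g)        # mean |error| / gt depth
    return rmse, abs_rel

# Toy example: a 2x2 "depth map" with one invalid ground-truth pixel.
gt = np.array([[2.0, 4.0], [0.0, 8.0]])   # 0.0 = no ground truth there
pred = np.array([[2.5, 3.0], [1.0, 8.0]])
rmse, abs_rel = depth_metrics(pred, gt)
```

AbsRel divides each error by the true depth, so it penalises mistakes on nearby objects more heavily than RMSE does, which is why the two metrics are usually reported together.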

Papers

Showing 201–210 of 876 papers

Title | Status | Hype
Image Masking for Robust Self-Supervised Monocular Depth Estimation | Code | 1
DiPE: Deeper into Photometric Errors for Unsupervised Learning of Depth and Ego-motion from Monocular Videos | Code | 1
Improving 360 Monocular Depth Estimation via Non-local Dense Prediction Transformer and Joint Supervised and Self-supervised Learning | Code | 1
OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion | Code | 1
DCDepth: Progressive Monocular Depth Estimation in Discrete Cosine Domain | Code | 1
Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation | Code | 1
Digging Into Uncertainty-based Pseudo-label for Robust Stereo Matching | Code | 1
InSpaceType: Dataset and Benchmark for Reconsidering Cross-Space Type Performance in Indoor Monocular Depth | Code | 1
Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation | Code | 1
LightDepth: A Resource Efficient Depth Estimation Approach for Dealing with Ground Truth Sparsity via Curriculum Learning | Code | 1
Page 21 of 88

No leaderboard results yet.