SOTAVerified

Monocular Depth Estimation

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a network powerful enough to regress the depth map directly, or discretizing the output into depth bins (or processing the input in windows) to reduce the complexity of the problem. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using root mean squared error (RMSE) or absolute relative error (AbsRel).
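
The two standard metrics mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the official KITTI/NYUv2 evaluation code; real benchmarks additionally mask out invalid ground-truth pixels and may cap the depth range.

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared error between predicted and ground-truth depth maps."""
    return np.sqrt(np.mean((pred - gt) ** 2))

def abs_rel(pred, gt):
    """Absolute relative error: mean of |pred - gt| / gt over all pixels."""
    return np.mean(np.abs(pred - gt) / gt)

# Toy 2x2 depth maps in metres (real maps are full-resolution and masked).
gt = np.array([[2.0, 4.0], [5.0, 10.0]])
pred = np.array([[2.2, 3.8], [5.5, 9.0]])

print(round(float(rmse(pred, gt)), 3))     # 0.577
print(round(float(abs_rel(pred, gt)), 3))  # 0.088
```

Lower is better for both metrics; AbsRel normalizes each pixel's error by its true depth, so it penalizes errors on nearby objects more heavily than RMSE does.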

Source: Defocus Deblurring Using Dual-Pixel Data

Papers

Showing 381–390 of 876 papers

Title | Status | Hype
MetricGold: Leveraging Text-To-Image Latent Diffusion Models for Metric Depth Estimation | Code | 0
Mono2Stereo: Monocular Knowledge Transfer for Enhanced Stereo Matching | No code | 0
D^3epth: Self-Supervised Depth Estimation with Dynamic Mask in Dynamic Scenes | Code | 0
Enhancing Bronchoscopy Depth Estimation through Synthetic-to-Real Domain Adaptation | No code | 0
PMPNet: Pixel Movement Prediction Network for Monocular Depth Estimation in Dynamic Scenes | No code | 0
Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training | No code | 0
Optical Lens Attack on Monocular Depth Estimation for Autonomous Driving | No code | 0
Enhanced Encoder-Decoder Architecture for Accurate Monocular Depth Estimation | Code | 0
Structure-Centric Robust Monocular Depth Estimation via Knowledge Distillation | No code | 0
Surgical Depth Anything: Depth Estimation for Surgical Scenes using Foundation Models | No code | 0
Page 39 of 88

No leaderboard results yet.