SOTAVerified

Monocular Depth Estimation

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and augmented reality (AR). State-of-the-art methods usually fall into one of two categories: designing a complex network powerful enough to regress the depth map directly, or splitting the input into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using the root mean squared error (RMSE) or the absolute relative error (AbsRel).
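The two evaluation metrics mentioned above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the code of any specific benchmark toolkit; the function name `depth_metrics` and the convention that a ground-truth value of 0 marks a missing depth reading are assumptions for this example.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """Illustrative RMSE and absolute relative error for depth maps.

    pred, gt : arrays of predicted / ground-truth depths (same shape).
    Pixels with gt == 0 are treated as missing (a common convention
    for sparse LiDAR ground truth) and excluded from both metrics.
    """
    mask = gt > 0                      # evaluate only valid pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    abs_rel = np.mean(np.abs(pred - gt) / np.maximum(gt, eps))
    return rmse, abs_rel

# Toy 2x2 example: one pixel (gt == 0) is excluded from the average.
gt = np.array([[2.0, 4.0], [0.0, 8.0]])
pred = np.array([[2.5, 3.0], [1.0, 8.0]])
rmse, abs_rel = depth_metrics(pred, gt)
```

AbsRel divides each error by the true depth, so it weights near-range mistakes more heavily than RMSE does, which is why benchmarks usually report both.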

Source: Defocus Deblurring Using Dual-Pixel Data

Papers

Showing 781–790 of 876 papers

Title | Status | Hype
Conf-Net: Toward High-Confidence Dense 3D Point-Cloud with Error-Map Prediction | Code | 0
Learning Depth from Monocular Videos Using Synthetic Data: A Temporally-Consistent Domain Adaptation Approach | | 0
Structure-Aware Residual Pyramid Network for Monocular Depth Estimation | Code | 0
Self-supervised Learning with Geometric Constraints in Monocular Video: Connecting Flow, Depth, and Camera | | 0
SLAM Endoscopy enhanced by adversarial depth prediction | | 0
Generating and Exploiting Probabilistic Monocular Depth Estimates | Code | 0
Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics | | 0
Pattern-Affinitive Propagation across Depth, Surface Normal and Semantic Segmentation | | 0
Multimodal End-to-End Autonomous Driving | | 0
Towards Scene Understanding: Unsupervised Monocular Depth Estimation With Semantic-Aware Representation | | 0

No leaderboard results yet.