SOTAVerified

Depth Estimation

Depth estimation is the task of predicting, for each pixel, its distance from the camera. Depth is estimated from either monocular (single-image) or stereo (multiple views of a scene) inputs. Traditional methods use multi-view geometry to recover the relationship between the views. Newer methods estimate depth directly by minimizing a regression loss, or by learning to synthesize a novel view from a sequence. The most popular benchmarks are KITTI and NYUv2. Models are typically evaluated with the root mean squared error (RMSE) metric.

Source: DIODE: A Dense Indoor and Outdoor DEpth Dataset
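As a concrete illustration of the RMSE evaluation mentioned above, here is a minimal sketch in NumPy. The function name `depth_rmse` and the masking convention (ground-truth value 0 marking missing measurements) are assumptions for the example, not part of any particular benchmark's official evaluation code:

```python
import numpy as np

def depth_rmse(pred, gt, valid_mask=None):
    """Root mean squared error between predicted and ground-truth depth maps.

    Pixels with no ground-truth measurement are excluded; here we assume
    they are encoded as 0, a common (but not universal) convention.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid_mask is None:
        valid_mask = gt > 0
    diff = pred[valid_mask] - gt[valid_mask]
    return np.sqrt(np.mean(diff ** 2))

# Toy 2x2 depth maps in metres; the bottom-right pixel has no ground truth.
gt = np.array([[2.0, 4.0], [6.0, 0.0]])
pred = np.array([[2.5, 3.5], [6.0, 1.0]])
print(depth_rmse(pred, gt))  # errors 0.5, -0.5, 0.0 -> sqrt(1/6) ~ 0.408
```

Real benchmark toolkits typically also cap depths to a valid range and report additional metrics (absolute relative error, delta thresholds) alongside RMSE.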

Papers

Showing 1–10 of 2454 papers

Title | Status | Hype
π^3: Scalable Permutation-Equivariant Visual Geometry Learning | — | 0
S^2M^2: Scalable Stereo Matching Model for Reliable Depth Estimation | — | 0
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios | — | 0
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation | Code | 0
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network | Code | 1
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation | — | 0
Cameras as Relative Positional Encoding | — | 0
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way | — | 0
LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures | — | 0
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LFattNet | BadPix(0.01) | 17.23 | — | Unverified
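The BadPix(t) metric listed in the table reports the percentage of pixels whose disparity error exceeds a threshold t (here t = 0.01), so lower is better. A minimal sketch of how such a score could be computed; the function name `badpix` and the toy disparity values are illustrative assumptions, not the benchmark's official implementation:

```python
import numpy as np

def badpix(pred_disp, gt_disp, threshold=0.01):
    """Percentage of pixels whose absolute disparity error exceeds `threshold`."""
    err = np.abs(np.asarray(pred_disp, dtype=np.float64)
                 - np.asarray(gt_disp, dtype=np.float64))
    return 100.0 * np.mean(err > threshold)

# Toy example: one of four pixels is off by more than 0.01 disparity.
gt = np.array([0.10, 0.20, 0.30, 0.40])
pred = np.array([0.10, 0.25, 0.305, 0.40])
print(badpix(pred, gt))  # -> 25.0
```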