Depth Estimation
Depth Estimation is the task of predicting, for each pixel, the distance from the camera to the corresponding point in the scene. Depth can be estimated from either monocular (single-image) or stereo (multiple views of a scene) input. Traditional methods use multi-view geometry to recover depth from correspondences between images. Newer learning-based methods estimate depth directly by minimizing a regression loss against ground-truth depth, or by learning to synthesize a novel view from an image sequence as a self-supervised signal. The most popular benchmarks are KITTI and NYUv2. Models are typically evaluated using the root-mean-square error (RMSE).
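As a concrete illustration of the evaluation described above, the sketch below computes RMSE between a predicted and a ground-truth depth map with NumPy. The masking convention (treating non-positive ground truth as missing and capping the evaluation range with a `max_depth` parameter) is an assumption modeled on common KITTI/NYUv2-style protocols, not a definitive benchmark implementation.

```python
import numpy as np

def rmse_depth(pred, gt, max_depth=10.0):
    """RMSE between predicted and ground-truth depth maps (same shape).

    Pixels with no valid ground truth (gt <= 0) are masked out, and depths
    beyond `max_depth` are excluded -- an assumed convention; real benchmarks
    use dataset-specific caps (e.g. 80 m on KITTI, 10 m on NYUv2).
    """
    mask = (gt > 0) & (gt <= max_depth)
    diff = pred[mask] - gt[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 2x2 depth maps; 0.0 marks a pixel with no depth measurement.
gt = np.array([[2.0, 4.0],
               [0.0, 6.0]])
pred = np.array([[2.5, 3.5],
                 [1.0, 6.0]])
rmse = rmse_depth(pred, gt)  # averages over the three valid pixels only
```

Note that the invalid pixel contributes nothing: only the three valid errors (0.5, -0.5, 0.0) enter the mean, so sparse LiDAR-style ground truth is handled naturally.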
Papers

Showing 1–10 of 2,454 papers.

Datasets include: Stanford2D3D Panoramic, NYU-Depth V2, DCM, eBDtheque, ScanNetV2, Cityscapes test, DIODE, KITTI 2015, Mars DTM Estimation, ScanNet, 4D Light Field Dataset, and KITTI Eigen split.
Benchmark Results
| # | Model | Metric | Claimed value | Verified value | Status |
|---|---|---|---|---|---|
| 1 | OmniDepth | RMSE | 0.62 | — | Unverified |
| 2 | SphereDepth | RMSE | 0.45 | — | Unverified |
| 3 | Jin et al. | RMSE | 0.42 | — | Unverified |
| 4 | BiFuse with fusion | RMSE | 0.41 | — | Unverified |
| 5 | HoHoNet (ResNet-101) | RMSE | 0.38 | — | Unverified |
| 6 | PanoDepth | RMSE | 0.37 | — | Unverified |
| 7 | BiFuse++ | RMSE | 0.37 | — | Unverified |
| 8 | UniFuse with fusion | RMSE | 0.37 | — | Unverified |
| 9 | DisConv | RMSE | 0.37 | — | Unverified |
| 10 | SliceNet | RMSE | 0.37 | — | Unverified |