SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the underlying properties of a scene, such as shape, materials, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene and to synthesize new images or videos, for example under novel viewpoints or lighting.
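The recovery is typically posed as analysis-by-synthesis: render the current scene estimate, compare it to the observation, and update the scene parameters to reduce the difference. A deliberately minimal sketch of this loop, assuming a toy Lambertian model with known geometry and lighting and only a per-pixel albedo to recover (real systems jointly optimize shape, materials, and lighting through a differentiable renderer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: flat Lambertian surface under one known light.
normal = np.array([0.0, 0.0, 1.0])            # surface normal (faces camera)
light = np.array([0.0, 0.6, 0.8])             # known unit light direction
shading = max(0.0, float(normal @ light))     # Lambertian n.l term

true_albedo = rng.uniform(0.2, 0.9, size=(8, 8))
observed = true_albedo * shading              # the "captured" image

# Analysis-by-synthesis: gradient descent on the albedo estimate.
albedo = np.full((8, 8), 0.5)                 # initial guess
lr = 0.5
for _ in range(200):
    rendered = albedo * shading               # synthesis step
    grad = 2.0 * (rendered - observed) * shading  # d(MSE)/d(albedo)
    albedo -= lr * grad                       # analysis (update) step

print(np.max(np.abs(albedo - true_albedo)))  # residual error, near zero
```

With shading known and fixed, the loss is quadratic in the albedo, so the loop converges to the true values; the hard part of real inverse rendering is that shape, materials, and lighting are entangled and must be disambiguated jointly.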

Papers

Showing 131–140 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mesh Density Adaptation for Template-based Shape Reconstruction | Code | 1 |
| Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation | | 0 |
| UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video | | 0 |
| Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects | | 0 |
| Robust Category-Level 3D Pose Estimation from Synthetic Data | | 0 |
| Eclipse: Disambiguating Illumination and Materials using Unintended Shadows | | 0 |
| NOVUM: Neural Object Volumes for Robust Object Classification | Code | 0 |
| Inverse Rendering of Translucent Objects using Physical and Neural Renderers | Code | 1 |
| Inverse Global Illumination using a Neural Radiometric Prior | | 0 |
| ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering | | 0 |
Page 14 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
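All entries above report HDR-PSNR, a peak signal-to-noise ratio computed on high-dynamic-range images. A minimal sketch of the computation, assuming PSNR over linear radiance values with the reference image's maximum used as the peak (benchmarks may instead fix the peak or tone-map first, so treat the `peak` default as an assumption):

```python
import numpy as np

def hdr_psnr(pred, target, peak=None):
    """PSNR in dB between a predicted and a reference HDR image.

    For HDR content there is no canonical peak of 1.0, so by default we
    take the reference image's maximum value (an assumption; individual
    benchmarks may define the peak differently).
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    if peak is None:
        peak = target.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform image reconstructed with a constant 10% error
# gives MSE = 0.01 against a peak of 1.0, i.e. PSNR = 20 dB.
ref = np.ones((4, 4))
print(hdr_psnr(0.9 * ref, ref))  # ~20.0
```

Higher is better; the roughly 4 dB spread between Neural-PBIR and PhySG in the table corresponds to a noticeably lower reconstruction error at the top of the leaderboard.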