SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once these properties have been estimated, they can be used to re-render the scene, for example to synthesize new images under novel viewpoints or illumination.
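At its core, inverse rendering is an analysis-by-synthesis problem: render the scene from a current estimate of its properties, compare against the observed image, and update the estimate to reduce the error. The toy sketch below (not any listed method's actual implementation) recovers a single Lambertian albedo by gradient descent, with the surface normal and light direction assumed known:

```python
import numpy as np

def render(albedo, normal, light_dir):
    # Lambertian shading: albedo * max(0, n . l)
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

normal = np.array([0.0, 0.0, 1.0])   # known surface normal
light = np.array([0.0, 0.0, 1.0])    # known light direction
observed = 0.6                       # observed pixel intensity

albedo = 0.1                         # initial guess
lr = 0.5
for _ in range(100):
    pred = render(albedo, normal, light)
    # gradient of the squared error (pred - observed)^2 w.r.t. albedo
    grad = 2.0 * (pred - observed) * max(0.0, float(np.dot(normal, light)))
    albedo -= lr * grad

print(round(albedo, 3))  # converges to the true albedo 0.6
```

Real methods in the list below replace this scalar model with a differentiable renderer over full geometry, spatially varying BRDFs, and environment lighting, but the optimization loop has the same shape.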

Papers

Showing 241–250 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Weakly-supervised Single-view Image Relighting | | 0 |
| 3D Gaussian Inverse Rendering with Approximated Global Illumination | | 0 |
| SfSNet: Learning Shape, Reflectance and Illuminance of Faces `in the Wild' | | 0 |
| Shading Annotations in the Wild | | 0 |
| Shading Meets Motion: Self-supervised Indoor 3D Reconstruction Via Simultaneous Shape-from-Shading and Structure-from-Motion | | 0 |
| SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild | | 0 |
| Simultaneous Estimation of Near IR BRDF and Fine-Scale Surface Geometry | | 0 |
| Single-Image 3D Human Digitization with Shape-Guided Diffusion | | 0 |
| Single-Shot Neural Relighting and SVBRDF Estimation | | 0 |
| SIR: Multi-view Inverse Rendering with Decomposable Shadow for Indoor Scenes | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
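The HDR-PSNR metric in the table is peak signal-to-noise ratio computed on HDR (linear radiance) images rather than tone-mapped LDR ones. A minimal sketch of plain PSNR against a chosen peak value (the exact HDR convention, e.g. clipping or tone-mapping before comparison, varies by benchmark and is assumed here to be plain MSE):

```python
import numpy as np

def hdr_psnr(pred, target, peak=1.0):
    """PSNR in dB between two radiance images; peak is the reference maximum."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# toy example: a uniform error of 0.01 gives exactly 40 dB at peak 1.0
img = np.random.rand(8, 8, 3)
print(round(hdr_psnr(img + 0.01, img), 2))  # 40.0
```

Higher is better, which matches the ranking above: a ~1 dB gap between neighboring rows corresponds to a roughly 26% reduction in mean squared error.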