SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the underlying properties of a scene, such as shape, material, and lighting, from an image or a video. Given an observation of a scene, the goal is to estimate these properties so that new images or videos of the scene can be generated from them.
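The common recipe behind many of the methods listed below is analysis by synthesis: guess scene parameters, render them with a differentiable forward model, and descend the gradient of the error against the observation. The following is a minimal sketch of that loop on a hypothetical toy model (a single Lambertian pixel, `pixel = albedo * light`); the function names and constants are illustrative, not any paper's method.

```python
def render(albedo, light):
    """Toy differentiable forward renderer: predicted pixel intensity."""
    return albedo * light

def inverse_render(observed, steps=2000, lr=0.01):
    """Recover (albedo, light) by gradient descent on the squared
    difference between the rendering and the observed pixel."""
    albedo, light = 0.5, 0.5  # initial guesses
    for _ in range(steps):
        err = render(albedo, light) - observed
        # Analytic gradients of 0.5 * err**2 w.r.t. each parameter.
        albedo -= lr * err * light
        light -= lr * err * albedo
    return albedo, light

albedo, light = inverse_render(0.24)
```

Note the ambiguity this toy makes explicit: any pair with `albedo * light == 0.24` reproduces the observation equally well, which is why real inverse-rendering systems need priors or multi-view and multi-illumination constraints (and why papers such as the ambiguity-aware entries below exist).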

Papers

Showing 121–130 of 271 papers

Title | Status | Hype
DeepShaRM: Multi-View Shape and Reflectance Map Recovery Under Unknown Lighting | | 0
Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark | Code | 1
SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes | Code | 1
Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering | Code | 1
Joint Sampling and Optimisation for Inverse Rendering | | 0
Self-Calibrating, Fully Differentiable NLOS Inverse Rendering | Code | 1
OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects | | 0
A Theory of Topological Derivatives for Inverse Rendering of Geometry | | 0
Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method | | 0
Relightable and Animatable Neural Avatar from Sparse-View Video | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified
2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified
3 | InvRender | HDR-PSNR | 23.76 | | Unverified
4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified
5 | NeRD | HDR-PSNR | 23.29 | | Unverified
6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified
7 | PhySG | HDR-PSNR | 21.81 | | Unverified