SOTAVerified

Inverse Rendering

Inverse Rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to synthesize new images or videos of the scene, for example under novel lighting or viewpoints.
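At its core, inverse rendering is analysis-by-synthesis: fit scene parameters so that a differentiable forward renderer reproduces the observation. The toy sketch below recovers a single Lambertian albedo from one observed pixel by gradient descent on a photometric loss; all scene values (normal, light, true albedo) are illustrative assumptions, not drawn from any paper or benchmark listed here.

```python
# Toy inverse rendering: recover a Lambertian albedo from one observed pixel
# by gradient descent on a differentiable forward model (analysis-by-synthesis).
# Scene values below are illustrative assumptions.

def render(albedo, normal, light_dir, light_intensity):
    """Forward model: Lambertian shading of a single pixel."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return albedo * light_intensity * n_dot_l

# "Ground truth" scene and the observation it produces.
normal = (0.0, 0.0, 1.0)
light_dir = (0.0, 0.0, 1.0)           # head-on light, so n.l = 1
light_intensity = 2.0
observed = render(0.6, normal, light_dir, light_intensity)

# Inverse step: start from a wrong albedo and descend the
# photometric loss L(a) = (render(a) - observed)^2.
albedo = 0.1
lr = 0.1
for _ in range(200):
    shading = light_intensity * max(
        0.0, sum(n * l for n, l in zip(normal, light_dir))
    )
    residual = albedo * shading - observed
    grad = 2.0 * residual * shading   # dL/da; render is linear in albedo
    albedo -= lr * grad

print(round(albedo, 4))  # converges to 0.6, the true albedo
```

Real systems follow the same loop, but the parameters are full SDFs, SVBRDFs, and environment maps, and the gradient comes from a differentiable ray tracer or rasterizer rather than a hand-derived formula.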

Papers

Showing 261–270 of 271 papers

Title | Status | Hype
IRGS: Inter-Reflective Gaussian Splatting with 2D Gaussian Ray Tracing | — | 0
IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images | — | 0
IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images | — | 0
Joint Learning of Portrait Intrinsic Decomposition and Relighting | — | 0
Joint Sampling and Optimisation for Inverse Rendering | — | 0
Learning 3D-Gaussian Simulators from RGB Videos | — | 0
Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing | — | 0
Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting | — | 0
Learning Object-Centric Neural Scattering Functions for Free-Viewpoint Relighting and Scene Composition | — | 0
LightLab: Controlling Light Sources in Images with Diffusion Models | — | 0
Page 27 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified
2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified
3 | InvRender | HDR-PSNR | 23.76 | — | Unverified
4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified
5 | NeRD | HDR-PSNR | 23.29 | — | Unverified
6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified
7 | PhySG | HDR-PSNR | 21.81 | — | Unverified