SOTA Verified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, materials, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene under novel viewpoints, lighting, or materials.
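At its core, inverse rendering is usually framed as analysis by synthesis: fit scene parameters so that a forward rendering model reproduces the observation. The sketch below is a deliberately minimal, hypothetical illustration (a single Lambertian pixel with known geometry and lighting, recovering only albedo by gradient descent); real methods in the papers listed here optimize full shape, material, and lighting fields with differentiable renderers.

```python
import numpy as np

# Toy forward model (an assumption for illustration): one Lambertian pixel
# rendered as albedo * max(0, dot(normal, light_dir)).
def render(albedo, normal, light_dir):
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

normal = np.array([0.0, 0.0, 1.0])
light_dir = np.array([0.0, 0.6, 0.8])  # known unit lighting direction

# Ground-truth scene property we want to recover from the observation.
true_albedo = 0.7
observed = render(true_albedo, normal, light_dir)

# Inverse rendering as optimization: fit albedo to match the observed pixel.
albedo = 0.1  # initial guess
lr = 0.5
shading = max(0.0, float(np.dot(normal, light_dir)))
for _ in range(200):
    residual = render(albedo, normal, light_dir) - observed
    grad = residual * shading  # gradient of 0.5 * residual**2 w.r.t. albedo
    albedo -= lr * grad

print(round(albedo, 3))  # converges to the true albedo, ~0.7
```

The loop is an analytic stand-in for what autodiff frameworks do at scale: differentiate the rendering loss with respect to every scene parameter and descend.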

Papers

Showing 1–10 of 271 papers

Title | Status | Hype
HiNeuS: High-fidelity Neural Surface Mitigating Low-texture and Reflective Ambiguity | — | 0
TextureSplat: Per-Primitive Texture Mapping for Reflective Gaussian Splatting | Code | 0
Efficient multi-view training for 3D Gaussian Splatting | — | 0
Neural Inverse Rendering from Propagating Light | — | 0
LightLab: Controlling Light Sources in Images with Diffusion Models | — | 0
TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians | — | 0
RGS-DR: Reflective Gaussian Surfels with Deferred Rendering for Shiny Objects | — | 0
Digital Twin Catalog: A Large-Scale Photorealistic 3D Object Digital Twin Dataset | — | 0
SVG-IR: Spatially-Varying Gaussian Splatting for Inverse Rendering | Code | 1
3D Gaussian Inverse Rendering with Approximated Global Illumination | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified
2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified
3 | InvRender | HDR-PSNR | 23.76 | — | Unverified
4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified
5 | NeRD | HDR-PSNR | 23.29 | — | Unverified
6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified
7 | PhySG | HDR-PSNR | 21.81 | — | Unverified
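The benchmark metric above, HDR-PSNR, is peak signal-to-noise ratio computed on linear HDR radiance images rather than tone-mapped LDR ones. The exact normalization convention varies between benchmarks, so the sketch below is only a minimal illustration: standard PSNR with the peak taken from the reference image's maximum value, applied to synthetic HDR data (the image contents and noise level are made up for the example).

```python
import numpy as np

def psnr(pred, ref, peak=None):
    # PSNR = 10 * log10(peak^2 / MSE). For 8-bit LDR images the peak is
    # typically 255 (or 1.0 after normalization); for linear HDR images
    # there is no fixed maximum, so the peak convention varies by
    # benchmark. Here we default to the reference image's maximum.
    mse = np.mean((pred - ref) ** 2)
    if peak is None:
        peak = ref.max()
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 4.0, size=(64, 64, 3))       # synthetic HDR radiance
pred = ref + rng.normal(0.0, 0.05, size=ref.shape)  # noisy "re-rendering"
print(round(psnr(pred, ref), 2))
```

A higher score means the re-rendered image matches the reference more closely; the spread in the table (21.81 to 26.01 dB) reflects how faithfully each method's recovered shape, materials, and lighting reproduce the held-out views.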