SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene under novel conditions, for example new viewpoints or new illumination.
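As a minimal illustration of the idea, the toy sketch below recovers one scene property (per-pixel Lambertian albedo) from an observed image by gradient descent through a differentiable shading model, with the normals and light direction assumed known. Real inverse rendering systems jointly estimate shape, material, and lighting with a full differentiable renderer; everything here (the shading model, the fixed normals, the learning rate) is an illustrative assumption, not any particular method from the list below.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8

# Ground-truth scene properties (hypothetical toy scene).
albedo_true = rng.uniform(0.2, 0.9, size=(H, W))
normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
light_dir = np.array([0.3, 0.2, 0.93])
light_dir /= np.linalg.norm(light_dir)

def render(albedo):
    # Lambertian shading: albedo * max(0, n . l)
    ndotl = np.clip(normals @ light_dir, 0.0, None)
    return albedo * ndotl

observed = render(albedo_true)

# Inverse step: gradient descent on albedo to match the observation.
albedo = np.full((H, W), 0.5)
lr = 0.5
for _ in range(200):
    residual = render(albedo) - observed           # d(MSE)/d(rendered pixel)
    ndotl = np.clip(normals @ light_dir, 0.0, None)
    grad = 2.0 * residual * ndotl                  # chain rule through shading
    albedo -= lr * grad

print(np.abs(albedo - albedo_true).max())  # near zero after convergence
```

Because the shading model here is linear in albedo, this least-squares problem converges quickly; the ambiguity analyzed in papers such as "Unveiling the Ambiguity in Neural Inverse Rendering" arises precisely when material and lighting must be disentangled jointly.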

Papers

Showing 81–90 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| IllumiNeRF: 3D Relighting Without Inverse Rendering | | 0 |
| 3D Reconstruction with Fast Dipole Sums | | 0 |
| Don't Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media | | 0 |
| GS-ROR^2: Bidirectional-guided 3DGS and SDF for Reflective Object Relighting and Reconstruction | | 0 |
| A Simple Approach to Differentiable Rendering of SDFs | | 0 |
| RGBX: Image decomposition and synthesis using material- and lighting-aware diffusion models | | 0 |
| ESR-NeRF: Emissive Source Reconstruction Using LDR Multi-view Images | | 0 |
| Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis | | 0 |
| Inverse Neural Rendering for Explainable Multi-Object Tracking | | 0 |
| IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
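For reference, PSNR on linear (HDR) images can be computed as below. Note this is a minimal sketch: the exact HDR-PSNR protocols used by the benchmarked papers may additionally tone-map or scale-align predictions before comparison, and the `peak` value is an assumption.

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    # PSNR = 10 * log10(peak^2 / MSE), computed on linear pixel values.
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: a uniform image with a small constant error.
target = np.ones((4, 4, 3))
pred = target + 0.01
print(round(psnr(pred, target), 2))  # 40.0
```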