SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, materials, and lighting, from an image or a video. Once these properties are estimated, they can be used to generate new images or videos of the scene, for example under novel lighting or from novel viewpoints.
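In practice, inverse rendering is often posed as an optimization problem: choose scene parameters so that a differentiable forward renderer reproduces the observed image. The sketch below is a deliberately minimal, illustrative example (not taken from any of the papers listed here): a toy Lambertian shading model with known surface normals, where an unknown albedo and light intensity are recovered by gradient descent on the pixel reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": known unit surface normals and a known directional light.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])
shading = np.clip(normals @ light_dir, 0.0, None)  # Lambertian n.l term

# Hidden scene properties that generated the "observed" pixels.
true_albedo, true_intensity = 0.7, 1.5
observed = true_albedo * true_intensity * shading

# Inverse rendering as optimization: fit (albedo, intensity) so the
# forward model reproduces the observation.
albedo, intensity = 0.1, 0.1
lr = 0.5
for _ in range(500):
    pred = albedo * intensity * shading
    residual = pred - observed
    # Analytic gradients of the mean-squared pixel error.
    grad_albedo = 2.0 * np.mean(residual * intensity * shading)
    grad_intensity = 2.0 * np.mean(residual * albedo * shading)
    albedo -= lr * grad_albedo
    intensity -= lr * grad_intensity
```

Note that only the product `albedo * intensity` is constrained by the observation; the individual factors are ambiguous. This kind of material/lighting ambiguity is exactly what the priors and physical constraints in the papers below are designed to resolve.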

Papers

Showing 61–70 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization | Code | 1 |
| The Sky's the Limit: Re-lightable Outdoor Scenes via a Sky-pixel Constrained Illumination Prior and Outside-In Visibility | Code | 1 |
| High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering | Code | 1 |
| IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing | Code | 1 |
| Invertible Neural BRDF for Object Inverse Rendering | Code | 1 |
| Modeling Indirect Illumination for Inverse Rendering | Code | 1 |
| PhyIR: Physics-Based Inverse Rendering for Panoramic Indoor Images | Code | 1 |
| Epi-NAF: Enhancing Neural Attenuation Fields for Limited-Angle CT with Epipolar Consistency Conditions | — | 0 |
| Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions | — | 0 |
| A Theory of Topological Derivatives for Inverse Rendering of Geometry | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | — | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | — | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | — | Unverified |
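The benchmark metric, HDR-PSNR, is peak signal-to-noise ratio computed on high-dynamic-range renderings. The sketch below shows the standard PSNR formula, 10·log10(peak² / MSE); since HDR pixel values are unbounded floats with no fixed peak, one common convention, assumed here (exact definitions vary between papers, and some tone-map first), is to take the peak from the reference image.

```python
import numpy as np

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB.

    HDR images have no fixed peak value (pixels are unbounded floats),
    so by default this uses the maximum of the reference image -- an
    assumed convention, not a universal standard.
    """
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    if peak is None:
        peak = reference.max()
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

For example, a reconstruction with a uniform error of 0.1 against a reference whose peak is 1.0 has MSE 0.01 and therefore a PSNR of 20 dB.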