SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to generate new images or videos of the scene, for example under novel lighting or viewpoints.
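The core idea can be illustrated with a toy analysis-by-synthesis loop: define a differentiable forward rendering model, compare its output to the observed image, and update the unknown scene property by gradient descent. The sketch below is a hypothetical minimal example (per-pixel Lambertian shading with known lighting, unknown albedo), not the method of any paper listed here.

```python
import numpy as np

# Toy inverse rendering sketch (hypothetical setup, not from any listed paper):
# forward model is per-pixel Lambertian shading, image = albedo * shading.
# We recover the unknown albedo by gradient descent on an L2 pixel loss.

rng = np.random.default_rng(0)
true_albedo = rng.uniform(0.2, 0.9, size=(8, 8))   # unknown scene property
shading = rng.uniform(0.5, 1.0, size=(8, 8))       # assumed-known lighting term
observed = true_albedo * shading                   # the "photograph"

albedo = np.full((8, 8), 0.5)                      # initial guess
lr = 0.5
for _ in range(500):
    rendered = albedo * shading                    # differentiable forward render
    residual = rendered - observed
    grad = residual * shading                      # d(0.5*||r||^2)/d(albedo)
    albedo -= lr * grad

print(np.max(np.abs(albedo - true_albedo)))        # near zero after convergence
```

Real inverse rendering replaces this linear forward model with a full (often neural or physically based) renderer, where shape, material, and lighting are all unknowns and the optimization is far more ambiguous.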

Papers

Showing 171–180 of 271 papers

| Title | Status | Hype |
|---|---|---|
| TileGen: Tileable, Controllable Material Generation and Capture | | 0 |
| TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians | | 0 |
| TurboSL: Dense Accurate and Fast 3D by Neural Inverse Structured Light | | 0 |
| Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces | | 0 |
| Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream Diffusion | | 0 |
| Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis | | 0 |
| URAvatar: Universal Relightable Gaussian Codec Avatars | | 0 |
| UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video | | 0 |
| Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal Prior | | 0 |
| VMINer: Versatile Multi-view Inverse Rendering with Near- and Far-field Light Sources | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
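For context on the metric, PSNR is 10·log10(peak² / MSE) between a predicted and a reference image. The sketch below shows the generic formula; note that "HDR-PSNR" as used by a given benchmark may apply it to tone-mapped or log-space values rather than raw HDR radiance, so treat this as an assumed formulation.

```python
import numpy as np

# Generic PSNR sketch (assumed formulation; the exact "HDR-PSNR" variant in
# the table may tone-map or log-transform HDR values before comparison).
def psnr(pred, target, peak):
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

gt = np.array([1.0, 2.0, 4.0])   # toy HDR radiance values
est = gt + 0.1                   # prediction with uniform 0.1 error
print(round(psnr(est, gt, peak=gt.max()), 2))  # → 32.04
```

Higher is better: a uniform error of 0.1 against a peak of 4.0 gives roughly 32 dB, comfortably above the low-to-mid 20s reported in the table.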