SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Given an observation of a scene, the goal is to estimate these underlying properties and then use them to synthesize new images or videos, for example under novel lighting or from novel viewpoints.
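Inverse rendering is commonly cast as analysis by synthesis: render a guess of the scene, compare it to the observation, and update the guess by gradient descent on the error. A minimal sketch with a toy one-pixel Lambertian renderer follows; the `render` function, scene values, and learning rate are all illustrative assumptions, not taken from any paper listed below.

```python
def render(albedo, light_dir, normal):
    """Toy Lambertian renderer: pixel value = albedo * max(0, n . l)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return albedo * ndotl

# Ground-truth scene (unknown to the optimizer).
true_albedo = 0.8
normal = (0.0, 0.0, 1.0)
light = (0.0, 0.6, 0.8)           # unit-length light direction
observed = render(true_albedo, light, normal)

# Inverse rendering: recover the albedo by gradient descent on the
# squared difference between the rendered and observed pixel.
albedo = 0.1                      # initial guess
lr = 0.5
for _ in range(200):
    pred = render(albedo, light, normal)
    # d/d_albedo of (pred - observed)^2, with pred = albedo * ndotl
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light)))
    grad = 2.0 * (pred - observed) * ndotl
    albedo -= lr * grad

print(round(albedo, 3))           # → 0.8
```

Real methods replace the toy renderer with a differentiable renderer over full images and optimize shape, spatially varying reflectance, and lighting jointly, but the optimization loop has the same structure.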

Papers

Showing 31–40 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| BG-Triangle: Bezier Gaussian Triangle for 3D Vectorization and Rendering | | 0 |
| IRGS: Inter-Reflective Gaussian Splatting with 2D Gaussian Ray Tracing | | 0 |
| Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream Diffusion | | 0 |
| GenLit: Reformulating Single-Image Relighting as Video Generation | | 0 |
| Acquisition of Spatially-Varying Reflectance and Surface Normals via Polarized Reflectance Fields | | 0 |
| PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields | Code | 1 |
| MaterialPicker: Multi-Modal Material Generation with Diffusion Transformers | | 0 |
| Differentiable Inverse Rendering with Interpretable Basis BRDFs | | 0 |
| NeISF++: Neural Incident Stokes Field for Polarized Inverse Rendering of Conductors and Dielectrics | | 0 |
| RenderBender: A Survey on Adversarial Attacks Using Differentiable Rendering | | 0 |
Page 4 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
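All rows above report HDR-PSNR. PSNR in general is 10 * log10(peak^2 / MSE); how the HDR variant chooses the peak value, and whether tone mapping is applied first, varies by benchmark, so the sketch below shows only the generic PSNR computation (the function name and sample values are illustrative assumptions).

```python
import math

def psnr(pred, target, peak):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return 10.0 * math.log10(peak ** 2 / mse)

# Two-pixel example with a peak value of 1.0:
# MSE = ((0.5-0.5)^2 + (0.5-0.6)^2) / 2 = 0.005, so PSNR ≈ 23.01 dB.
print(round(psnr([0.5, 0.5], [0.5, 0.6], 1.0), 2))
```

Higher PSNR means the re-rendered image is closer to the reference, so the table is sorted best-first.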