SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene under new conditions or to synthesize new images and videos.
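A common way to frame this task is analysis-by-synthesis: fit scene parameters by gradient descent through a differentiable forward renderer until the rendered image matches the observation. Below is a hypothetical minimal sketch of that idea; the single-albedo Lambertian model, the function names, and all numbers are illustrative, not taken from any paper listed here.

```python
import numpy as np

def render(albedo, light, normals, light_dir):
    """Forward model: Lambertian shading, I_i = albedo * light * max(0, n_i . l)."""
    return albedo * light * np.clip(normals @ light_dir, 0.0, None)

def recover_albedo(observed, light, normals, light_dir, lr=0.5, steps=200):
    """Minimize 0.5 * ||render(albedo) - observed||^2 over a scalar albedo."""
    shading = light * np.clip(normals @ light_dir, 0.0, None)
    albedo = 0.1  # arbitrary initial guess
    for _ in range(steps):
        residual = albedo * shading - observed
        grad = np.mean(residual * shading)  # d(loss)/d(albedo), averaged over pixels
        albedo -= lr * grad
    return albedo

# Synthesize an "observation" with a known albedo, then recover it.
normals = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
light_dir = np.array([0.0, 0.0, 1.0])
observed = render(0.7, 1.0, normals, light_dir)
estimate = recover_albedo(observed, 1.0, normals, light_dir)
```

Real inverse rendering systems optimize millions of parameters (geometry, spatially varying BRDFs, environment lighting) through far more elaborate differentiable renderers, but the optimization loop has the same shape.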

Papers

Showing 141-150 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Neural-PBIR Reconstruction of Shape, Material, and Illumination | | 0 |
| Pointersect: Neural Rendering with Cloud-Ray Intersection | | 0 |
| TensoIR: Tensorial Inverse Rendering | Code | 2 |
| Segment Anything in 3D with Radiance Fields | Code | 3 |
| Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation | Code | 1 |
| Light Sampling Field and BRDF Representation for Physically-based Neural Rendering | Code | 0 |
| Inferring Fluid Dynamics via Inverse Rendering | | 0 |
| Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes | | 0 |
| Neural Microfacet Fields for Inverse Rendering | | 0 |
| NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
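For reference, HDR-PSNR is a peak-signal-to-noise ratio computed on linear high-dynamic-range values rather than tone-mapped pixels. Exact conventions vary between papers; the sketch below is one assumed variant that uses the reference image's peak value as the MAX term, not a verified definition from any listed method.

```python
import numpy as np

def hdr_psnr(reference, estimate):
    """PSNR in dB: 10 * log10(MAX^2 / MSE), with MAX taken as the reference peak
    (an assumed convention for unbounded HDR data)."""
    ref = np.asarray(reference, dtype=float)
    mse = np.mean((ref - np.asarray(estimate, dtype=float)) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

ref = np.full((4, 4), 2.0)  # flat HDR "image" with peak value 2.0
est = ref + 0.2             # uniform error of 0.2
score = hdr_psnr(ref, est)
```

Here MSE is 0.04 against a peak of 2.0, giving 10 * log10(4 / 0.04) = 20 dB.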