SOTAVerified

Inverse Rendering

Inverse Rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to synthesize new images or videos of the scene, for example under novel viewpoints or illumination.
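The recover-then-resynthesize loop can be illustrated with a deliberately tiny example: a hypothetical one-pixel Lambertian forward model (pixel = albedo × light intensity), inverted by gradient descent on a squared-error loss. Real inverse-rendering systems differentiate through a full renderer; all names and the forward model here are illustrative, not taken from any listed paper.

```python
# Toy inverse rendering: recover a diffuse albedo from an observed pixel
# intensity by gradient descent through a differentiable forward model.
# Assumed (hypothetical) forward model: pixel = albedo * light.

def render(albedo, light):
    """Forward model: single Lambertian pixel lit by one light."""
    return albedo * light

def recover_albedo(observed, light, steps=500, lr=0.1):
    """Minimize (render(albedo) - observed)^2 by hand-written gradient descent."""
    albedo = 0.0  # initial guess
    for _ in range(steps):
        pred = render(albedo, light)
        grad = 2.0 * (pred - observed) * light  # d(loss)/d(albedo)
        albedo -= lr * grad
    return albedo

# Observed pixel generated with albedo 0.7 under light 2.0;
# the loop should converge back to ~0.7.
est = recover_albedo(observed=1.4, light=2.0)
```

Once `est` is recovered, calling `render(est, new_light)` re-renders the "scene" under novel lighting, which is the second half of the task described above.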

Papers

Showing 41–50 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Differentiable Programming for Hyperspectral Unmixing using a Physics-based Dispersion Model | Code | 1 |
| IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields | Code | 1 |
| Efficient Meshy Neural Fields for Animatable Human Avatars | Code | 1 |
| Inverse Rendering of Translucent Objects using Physical and Neural Renderers | Code | 1 |
| Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering | Code | 1 |
| InverseRenderNet: Learning single image inverse rendering | Code | 1 |
| Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering | Code | 1 |
| DANI-Net: Uncalibrated Photometric Stereo by Differentiable Shadow Handling, Anisotropic Reflectance Modeling, and Neural Inverse Rendering | Code | 1 |
| Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation | Code | 1 |
| Learning Inverse Rendering of Faces from Real-world Videos | Code | 1 |
Page 5 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |