SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once estimated, these properties can be used to re-render the scene, for example under new viewpoints or lighting.
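In practice, inverse rendering is often posed as an optimization problem: given a differentiable forward renderer, fit the scene parameters by gradient descent on a photometric loss against the observed image. The sketch below is a hypothetical toy example (not from any of the listed papers), assuming a Lambertian surface with known normals and light direction, where only per-pixel albedo is recovered.

```python
import numpy as np

# Toy "scene": three Lambertian surface points with known normals,
# lit by a known directional light. (Hypothetical illustration only.)
normals = np.array([[0.0, 0.0, 1.0],
                    [0.6, 0.0, 0.8],
                    [0.0, 0.6, 0.8]])
light_dir = np.array([0.0, 0.0, 1.0])
true_albedo = np.array([0.8, 0.5, 0.3])

def render(albedo):
    # Differentiable forward model: Lambertian shading.
    shading = np.clip(normals @ light_dir, 0.0, None)
    return albedo * shading

observed = render(true_albedo)

# Inverse rendering: gradient descent on the photometric L2 loss.
albedo = np.full(3, 0.5)
lr = 0.5
shading = np.clip(normals @ light_dir, 0.0, None)
for _ in range(500):
    pred = render(albedo)
    grad = 2.0 * (pred - observed) * shading  # d(loss)/d(albedo)
    albedo -= lr * grad
```

After optimization, `albedo` converges to `true_albedo`; real systems replace this toy shading model with a full differentiable renderer and jointly optimize geometry, materials, and lighting.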

Papers

Showing 31–40 of 271 papers

| Title | Status | Hype |
|-------|--------|------|
| IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields | Code | 1 |
| A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering | Code | 1 |
| GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization | Code | 1 |
| Multiview Textured Mesh Recovery by Differentiable Rendering | Code | 1 |
| Efficient Meshy Neural Fields for Animatable Human Avatars | Code | 1 |
| Uncertainty for SVBRDF Acquisition using Frequency Analysis | Code | 1 |
| Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering | Code | 1 |
| High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering | Code | 1 |
| Differentiable Programming for Hyperspectral Unmixing using a Physics-based Dispersion Model | Code | 1 |
| Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering | Code | 1 |
Page 4 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | n/a | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | n/a | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | n/a | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | n/a | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | n/a | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | n/a | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | n/a | Unverified |